John Collins - Making AI Happen
Engineering AI Solutions to Tomorrow's Challenges
AI platforms are quickly evolving into fundamental infrastructure. Moving forward, I believe AI assistants will be central to everyone's engagement with the digital realm.
Through my ongoing project, project-manager.ai, I aim to show how tailored AI project-management frameworks can be generated by a range of AI assistants. The project serves as a working example of why building on multiple AI models may currently be the most sensible strategy for AI initiatives.
Model Integration: project-manager.ai combines multiple models at its core, illustrating how open-source and closed-source strategies can be blended. This mitigates the risk of depending on a single model and allows for a richer interactive experience.
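The routing idea behind a blended open/closed-source setup can be sketched as follows. This is a minimal illustration, not the actual project-manager.ai implementation; the model names and the sensitivity rule are assumptions.

```python
# Hypothetical model router: sensitive requests stay on a self-hosted
# open-source model, everything else defaults to the strongest hosted model.
OPEN_SOURCE = {"llama-2-7b", "llama-2-13b"}
CLOSED_SOURCE = {"gpt-4", "gpt-3.5-turbo"}

def route_model(task_sensitivity: str, preferred: str = "") -> str:
    """Pick a model for a request. An explicit user preference wins;
    otherwise confidential work is kept inside the network perimeter."""
    if preferred in OPEN_SOURCE | CLOSED_SOURCE:
        return preferred
    if task_sensitivity == "confidential":
        return "llama-2-7b"   # self-hosted, data never leaves the network
    return "gpt-4"            # hosted default for non-sensitive tasks
```

A router like this keeps single-model dependency risk low: if one provider degrades or changes terms, only one branch of the routing table needs to change.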
Flexibility and Context-Awareness: The system uses engineered prompts that adapt to different models and project types, providing context-specific guidance, which underlines the advantages of a combined model strategy.
Robustness: Security and performance optimizations, such as efficient API usage, response caching, and rate limiting, form the backend mechanisms, underscoring the reliability of a combined open-source and closed-source approach.
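Caching and rate limiting of model calls can be sketched in a few lines. This is an illustrative pattern, not the project's actual backend code; the limits and the placeholder completion function are assumptions.

```python
import time
from functools import lru_cache

class RateLimiter:
    """Sliding-window limiter: at most `max_calls` per `period` seconds."""
    def __init__(self, max_calls: int, period: float):
        self.max_calls, self.period = max_calls, period
        self.calls: list[float] = []

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.period]
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

@lru_cache(maxsize=256)
def cached_completion(prompt: str) -> str:
    # Placeholder for an expensive model call; repeated identical
    # prompts are served from the cache instead of hitting the API.
    return f"response to: {prompt}"
```

In practice the cache cuts cost and latency for repeated prompts, while the limiter protects both the upstream API quota and the service itself.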
You may explore the functionalities of project-manager.ai by following the steps below. First, choose an AI-Analyst model from the available options. Next, select the type of project you are interested in. Finally, input a brief description of your project as a prompt. To access the online version, please click on the appropriate links or icons located in the navigation bar.
The recent unveiling of GPT Builder marks another significant milestone in the usability of GPTs, introducing an emerging layer of abstraction. I think this inevitably means an expansion in the number and diversity of developers engaging with GPTs, and paves the way for a broad spectrum of applications. I expect that soon advanced custom GPTs will possess an array of capabilities, including reading, writing, auditory processing, verbal communication, visual recognition, artistic creation, and cognitive processing.
GPT Builder enables these models to leverage existing digital tools, specialize in specific domains, access and utilize custom datasets, and perform various actions within the digital landscape. The ability for these models to exhibit tailored communication styles, perform distinct actions, and collaborate with one another, opens up new frontiers in AI.
This advancement is not just a step forward in AI technology; it represents a profound shift in how we interact with and utilize computational resources. As we accelerate into this journey, it's important for stakeholders across industries to prepare for the transformative changes that these technologies are poised to bring.
Here is my Custom GPT, which is called AI Project Navigator. Building and publishing it took less than 30 minutes. Please give it a try by clicking the link, which requires an active ChatGPT Plus subscription for access.
Many enterprises I engage with express hesitancy about transmitting their data through public APIs when leveraging AI models like ChatGPT or PaLM-2. The primary concerns are safeguarding intellectual property (IP) and maintaining stringent data privacy. While ChatGPT Enterprise does offer robust security measures, including end-to-end encryption and explicit assurances that user data won't contribute to the ongoing training of OpenAI's models, some institutions still have specialized requirements that necessitate an isolated environment for model deployment.
In such scenarios, implementing and perhaps fine-tuning an on-premises AI model like Llama-2 can offer a tailored solution.
By adopting this approach, data flow remains confined within the organization's secure network perimeter, and the unique advantages of customizing the model for domain-specific tasks are retained exclusively by the enterprise.
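The on-premises pattern can be sketched as a thin wrapper around a locally loaded model. The `LocalLlama` class below is a stand-in: in production it would load llama-2-7b via a library such as llama-cpp-python, but the point here is that the prompt and response never leave local infrastructure. The class, paths, and prompt wording are illustrative assumptions.

```python
# Hedged sketch of an on-premises inference wrapper. No network calls:
# weights sit on local disk and generation happens inside the perimeter.
class LocalLlama:
    def __init__(self, model_path: str):
        self.model_path = model_path  # local weights file, e.g. a .gguf

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # Placeholder generation; a real implementation would invoke the
        # loaded llama-2-7b model here.
        return f"[local completion for: {prompt[:60]}...]"

def due_diligence_report(company: str, llm: LocalLlama) -> str:
    """Build a due-diligence prompt and answer it with the local model."""
    prompt = (f"Produce a brief due diligence report on {company}, "
              "covering ownership, litigation, and financial health.")
    return llm.generate(prompt)
```

Because the wrapper owns both the prompt template and the model call, domain-specific fine-tuning or prompt changes stay entirely within the enterprise.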
Click continue reading below to see how I have implemented llama-2-7b alongside gpt-4-0613 so that either can be used to produce a basic due diligence report for a given company name, depending on the circumstances.
In this deep dive into fine-tuning language models, I show that fine-tuning is a strong approach for time-series prediction. Training a model on specific data, such as price sequences, enhances its understanding and predictive power for those unique use cases.
I fine-tuned a GPT-3.5 Turbo model on 100,000 lines of Apple stock OHLC data to predict future price movements. The fine-tuning job summary from OpenAI's dashboard for the GPT-3.5 Turbo model can be seen to the left.
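GPT-3.5 Turbo fine-tuning expects training data as JSONL chat examples. The sketch below shows one plausible way to turn OHLC rows into that format; the column layout, window size, and prompt wording are assumptions, not the exact dataset I used.

```python
import json

def ohlc_to_example(window, next_close):
    """Turn a window of (open, high, low, close) rows into one chat
    example in the JSONL format OpenAI fine-tuning expects."""
    history = "; ".join(f"O:{o} H:{h} L:{l} C:{c}" for o, h, l, c in window)
    return {
        "messages": [
            {"role": "system", "content": "You predict the next closing price."},
            {"role": "user", "content": f"Recent OHLC bars: {history}"},
            {"role": "assistant", "content": str(next_close)},
        ]
    }

def write_jsonl(examples, path):
    """Write one JSON object per line, as the fine-tuning API requires."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
```

The resulting file is uploaded to OpenAI and referenced when creating the fine-tuning job; the assistant message is the label the model learns to produce.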
This project demonstrates Prompt Evolution (Fernando et al., 2023) by generating project management prompts tailored to AI projects.
Prompt Evolution uses an iterative process to evolve better prompts for a domain, in this case project management of AI initiatives. As described in "Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution" (Fernando et al., 2023), it leverages a language model to generate variations of prompts, evaluates their utility, and selects the most useful prompts to propagate to the next generation.
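The evolutionary loop can be sketched in miniature. In Promptbreeder both mutation and scoring are performed by a language model; here they are toy stand-ins (a random suffix mutation and a caller-supplied score function) so the selection mechanics are visible.

```python
import random

def mutate(prompt: str, rng: random.Random) -> str:
    """Toy mutation operator: append a random instruction fragment.
    The real method uses an LLM to rewrite the prompt."""
    additions = ["Be concise.", "List risks explicitly.", "Think step by step."]
    return prompt + " " + rng.choice(additions)

def evolve(seed_prompt, score, generations=5, population=4, seed=0):
    """Iteratively mutate a pool of prompts and keep the highest-scoring
    ones, propagating them to the next generation."""
    rng = random.Random(seed)
    pool = [seed_prompt]
    for _ in range(generations):
        candidates = pool + [mutate(p, rng) for p in pool for _ in range(population)]
        pool = sorted(candidates, key=score, reverse=True)[:population]
    return pool[0]
```

With an LLM-based score function (e.g. task accuracy on held-out project-management questions), the same loop selects prompts that measurably improve downstream answers.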
Prompt engineering involves crafting and optimizing input prompts to guide language models toward desired outputs. Beyond simply iterating on prompt structures, methods include prefix-tuning, few-shot and many-shot prompting, templating, and meta-prompts, as well as leveraging external knowledge or combining prompts with model fine-tuning to achieve nuanced responses.
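Few-shot prompting, one of the techniques above, can be sketched as simple template assembly: worked examples precede the real task so the model imitates their format. The examples and template here are illustrative, not the prompts used by the due diligence assistant.

```python
# Hypothetical few-shot examples steering the model toward a fixed
# "Status | Impact" output format.
FEW_SHOT = [
    ("Summarise: The project is delayed by two weeks.",
     "Status: delayed | Impact: schedule slip of 2 weeks"),
    ("Summarise: Budget increased 10% after scope change.",
     "Status: over budget | Impact: +10% cost"),
]

def build_prompt(task: str) -> str:
    """Prepend worked input/output pairs, then leave the final
    'Output:' open for the model to complete."""
    shots = "\n\n".join(f"Input: {q}\nOutput: {a}" for q, a in FEW_SHOT)
    return f"{shots}\n\nInput: {task}\nOutput:"
```

Sending `build_prompt(...)` as the user message reliably nudges most chat models to answer in the demonstrated format without any fine-tuning.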
I built the AI chatbot, or 'AI due diligence assistant', below to show how prompt engineering is used to obtain a business-specific response, in this case a due diligence report on a specified company. To use it, enter a company name, select 'Due Diligence' as your AI-Analyst (this is the default for this demo and the button should be green), wait up to 2 minutes while the model does its work, then copy and paste the response into a report and/or email it to yourself. The other buttons and response modalities are active, but I haven't tested them extensively yet.
© 2023 johncollins