John Collins - Making AI Happen

Engineering AI Solutions to Tomorrow's Challenges


Working with multiple AI assistants.

AI platforms are quickly evolving into fundamental infrastructure. Moving forward, I believe AI assistants will be central to everyone's engagement with the digital realm.

Through my ongoing project, I aim to showcase how tailored AI project management frameworks can be obtained from a range of AI assistants. The project serves as a real-life example demonstrating that working with a range of AI models may currently be the most sensible strategy for AI initiatives.

Model Integration: At the core of the project, multiple models are combined, illustrating how open-source and closed-source strategies can be blended. This approach mitigates the risks associated with single-model dependencies and allows for a richer interactive experience.

Flexibility and Context-Awareness: The system uses engineered prompts that adapt to different models and project types, providing context-specific guidance, which underlines the advantages of a combined model strategy.

Robustness: Backend mechanisms such as careful API management, caching, and rate limiting provide security and performance, underscoring the reliability of a combined open-source and closed-source approach.
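The dispatch, caching, and rate-limiting ideas above can be sketched as a small Python class. This is a minimal illustration, not the project's actual backend: the backend callables, the 60-second window, and the per-minute limit are all assumptions for demonstration.

```python
import time
from collections import defaultdict

class ModelDispatcher:
    """Route prompts to named model backends, with a response cache
    and a simple per-model rate limiter (illustrative sketch)."""

    def __init__(self, backends, max_calls_per_minute=60):
        self.backends = backends            # e.g. {"gpt-4": call_fn, "llama-2-7b": call_fn}
        self.cache = {}                     # (model, prompt) -> response
        self.max_calls = max_calls_per_minute
        self.calls = defaultdict(list)      # model -> recent call timestamps

    def _allowed(self, model):
        # Keep only calls from the last 60 seconds, then check the budget.
        now = time.time()
        self.calls[model] = [t for t in self.calls[model] if now - t < 60]
        return len(self.calls[model]) < self.max_calls

    def ask(self, model, prompt):
        key = (model, prompt)
        if key in self.cache:               # repeated prompts are served from cache
            return self.cache[key]
        if not self._allowed(model):
            raise RuntimeError(f"rate limit exceeded for {model}")
        self.calls[model].append(time.time())
        response = self.backends[model](prompt)
        self.cache[key] = response
        return response
```

Because every backend is just a callable, open-source and closed-source models sit behind the same interface, which is what makes the blended strategy practical.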

You may explore the project's functionality by following the steps below. First, choose an AI-Analyst model from the available options. Next, select the type of project you are interested in. Finally, input a brief description of your project as a prompt. To access the online version, please click on the appropriate links or icons located in the navigation bar.

Custom GPTs

Building a Custom GPT with OpenAI's new GPT Builder

The recent unveiling of GPT Builder marks another significant milestone in the usability of GPTs, introducing an emerging layer of abstraction. I think this inevitably means an expansion in the number and diversity of developers engaging with GPTs, and paves the way for a broad spectrum of applications. I expect that soon advanced custom GPTs will possess an array of capabilities, including reading, writing, auditory processing, verbal communication, visual recognition, artistic creation, and cognitive processing.

GPT Builder enables these models to leverage existing digital tools, specialize in specific domains, access and utilize custom datasets, and perform various actions within the digital landscape. The ability for these models to exhibit tailored communication styles, perform distinct actions, and collaborate with one another, opens up new frontiers in AI.

This advancement is not just a step forward in AI technology; it represents a profound shift in how we interact with and utilize computational resources. As we accelerate into this journey, it's important for stakeholders across industries to prepare for the transformative changes that these technologies are poised to bring.

Here is my Custom GPT, which is called AI Project Navigator. Building and publishing it took less than 30 minutes. Please give it a try by clicking the link, which requires an active ChatGPT Plus subscription for access.

Continue reading ...


Llama-2-7B can be used as an 'internal' AI assistant when, for example, client data and IP must be protected, alongside 'external' AI assistants such as GPT-4.

Many enterprises I engage with express hesitancy about transmitting their data through public APIs when leveraging AI models like ChatGPT or PaLM-2. The primary apprehensions are rooted in safeguarding intellectual property (IP) and ensuring stringent data privacy protocols. While the latest version of ChatGPT Enterprise does offer robust security measures, including end-to-end encryption and explicit assurances that the user data won't contribute to the ongoing training of OpenAI's models, some institutions may still have specialized requirements that necessitate an isolated environment for model deployment.

In such scenarios, implementing and perhaps fine-tuning an on-premises AI model like Llama-2 can offer a tailored solution.

By adopting this approach, data flow remains confined within the organization's secure network perimeter, and the unique advantages of customizing the model for domain-specific tasks are retained exclusively by the enterprise.
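A hedged sketch of the routing idea, not my actual implementation: prompts flagged as containing sensitive material stay on the internal model, everything else may use an external one. The marker keywords and the backend callables are illustrative assumptions.

```python
# Illustrative markers only; a real deployment would use a proper
# data-classification policy, not keyword matching.
SENSITIVE_MARKERS = ("client", "confidential", "internal only")

def is_sensitive(prompt):
    """Flag prompts that appear to contain protected material."""
    p = prompt.lower()
    return any(marker in p for marker in SENSITIVE_MARKERS)

def route(prompt, internal_model, external_model):
    """Keep sensitive prompts inside the network perimeter (e.g. an
    on-premises Llama-2); send the rest to an external model (e.g. GPT-4)."""
    model = internal_model if is_sensitive(prompt) else external_model
    return model(prompt)
```

The point of the sketch is the shape of the decision: the classification happens before any data leaves the organization's perimeter.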

Click continue reading below to see how I have implemented llama-2-7b alongside gpt-4-0613, so that either can be used to produce a basic due diligence report for a given company name, depending on the circumstances.

Continue reading ...


Leveraging LLMs for time series forecasting of stock prices and volatility.

I take a deep dive into the fine-tuning of language models and show that fine-tuning is a strong approach for time series prediction. The ability to train a model on specific data, such as price sequences, enhances its understanding and predictive power for those unique use cases.

I fine-tuned a GPT-3.5 Turbo model on 100,000 lines of Apple stock OHLC data to predict future price movements. A summary of the fine-tuning job from OpenAI's dashboard can be seen to the left.
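Fine-tuning GPT-3.5 Turbo requires training examples in OpenAI's chat-format JSONL. A minimal sketch of how OHLC rows might be packaged into that format is shown below; the system message, window shape, and prediction target are my illustrative assumptions, not the exact preprocessing used for the job above.

```python
import json

def ohlc_to_example(window, next_close):
    """Turn a window of (open, high, low, close) rows into one
    chat-format fine-tuning example (format per OpenAI's JSONL spec)."""
    lines = "\n".join(f"{o},{h},{l},{c}" for o, h, l, c in window)
    return {
        "messages": [
            {"role": "system", "content": "You are a price-sequence forecaster."},
            {"role": "user",
             "content": f"OHLC history:\n{lines}\nPredict the next close."},
            {"role": "assistant", "content": f"{next_close}"},
        ]
    }

def write_jsonl(examples, path):
    """Write examples one-per-line, as the fine-tuning endpoint expects."""
    with open(path, "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
```

The resulting file is uploaded and referenced when creating the fine-tuning job from OpenAI's dashboard or API.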

Prompt Evolution

Evolving Project Management Prompts for AI Projects.

This project demonstrates Prompt Evolution (Fernando et al., 2023) by generating project management prompts tailored to AI projects.

Prompt Evolution uses an iterative process to evolve better prompts for a domain, in this case project management of AI initiatives. As described in "Promptbreeder: Self-Referential Self-Improvement via Prompt Evolution" (Fernando et al., 2023), it leverages a language model to generate variations of prompts, evaluates their utility, and selects the most useful prompts to propagate to the next generation.
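The outer loop of that evolve-evaluate-select cycle can be sketched in a few lines. This is a simplified caricature of Promptbreeder's process: in the paper, `mutate` is itself a language model rewriting prompts (guided by evolving mutation prompts) and `fitness` is measured on task performance; here both are just callables supplied by the caller.

```python
def evolve_prompts(seed_prompts, mutate, fitness, generations=5, population=8):
    """Iteratively mutate a pool of prompts and keep the fittest,
    in the spirit of Promptbreeder's outer evolutionary loop."""
    pool = list(seed_prompts)
    for _ in range(generations):
        # Variation: generate one child per surviving prompt.
        children = [mutate(p) for p in pool]
        # Selection: rank parents and children together, truncate the pool.
        pool = sorted(pool + children, key=fitness, reverse=True)[:population]
    return pool[0]  # best prompt found
```

For project management prompts, `fitness` would score how useful a generated plan is for a given AI initiative, and the winning prompt propagates its traits to the next generation.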

Prompt Engineering

Prompt engineering involves a deep understanding of the model's capabilities, idiosyncrasies, and limitations, making it a blend of both science and art.

Prompt engineering involves crafting and optimizing input prompts to guide language models to produce desired outputs. Beyond simply iterating on prompt structures, methods include prefix-tuning, few-shot and many-shot prompting, templating, and employing meta-prompts, as well as leveraging external knowledge or combining prompts with model fine-tuning to achieve nuanced responses.
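Few-shot prompting, one of the methods listed above, can be sketched as a simple template: an instruction, a handful of worked examples, then the new query. The `Input:`/`Output:` labelling is an illustrative convention, not a requirement of any particular model.

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the
    query left open so the model completes the final 'Output:'."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"
```

The worked examples condition the model on the desired mapping, so the completion after the final `Output:` tends to follow the demonstrated pattern.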

Prompt engineering to obtain business-specific responses

I built the AI chatbot or 'AI due diligence assistant' below to show how prompt engineering is used to obtain a business-specific response, in this case a due diligence report on a specified company. To use it, enter a company name, select 'Due Diligence' as your AI-Analyst (this is the default for this demo and the button should be green), wait for up to 2 minutes while the model does its work, copy and paste the response into a report and/or send it to yourself via email. The other buttons or response modalities are active, but I haven't worked them extensively yet.
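To give a flavour of the kind of engineered prompt behind such an assistant, here is a hedged sketch of how a due diligence request might be composed. The section headings and wording are illustrative assumptions, not the prompt the demo actually uses.

```python
def due_diligence_prompt(company):
    """Compose a structured due diligence request for a named company
    (section headings are illustrative, not the production prompt)."""
    sections = [
        "Corporate overview",
        "Ownership and management",
        "Financial position",
        "Litigation and regulatory history",
        "Key risks",
    ]
    body = "\n".join(f"- {s}" for s in sections)
    return (f"You are an AI due diligence analyst. Produce a report on "
            f"{company} covering:\n{body}\n"
            f"Flag any claim you cannot verify.")
```

Fixing the structure in the prompt is what makes the response business-specific and report-ready, rather than a free-form summary.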

Continue reading ...

Insights & Projects

More examples of my work in AI


GPT Builder

Building a Custom GPT with OpenAI's GPT Builder.

Prompt Engineering vs Fine-Tuning

Fine tuning

I take a deep dive into the fine-tuning of language models and show that fine-tuning is a strong approach for time series prediction. The ability to train a model on specific data, such as price sequences, enhances its understanding and predictive power for those unique use cases.

ChatGPT Enterprise

Is Enterprise AI compatible with the information security and data protection standards set by our organization?

Before the advent of ChatGPT Enterprise, securing approval from Information Security departments for systems accessing LLMs presented a considerable challenge.


Showing how Llama-2 can be used as an 'internal' AI assistant when, for example, client data must be protected, alongside 'external' AI assistants such as GPT-4

Code interpreter

Code interpreter is a game changer

Why code interpreter signifies a pivotal shift, one that has been taking place quietly for some time but is now more clearly visible.

Option pricing with ChatGPT

Option pricing with ChatGPT

A key role for LLMs is as intelligent API gateways. Using ChatGPT plugins to obtain option prices illustrates this. LLMs can be seen as a dialogue interface rather than a storage for knowledge, serving as the interactive layer between a human user and a knowledge graph with various sources.

AI assistant for due diligence

AI assistant for due diligence

In 1-2 years (if not much earlier), AI assistants will be ubiquitous. Imho, this will lead to profound change in work behaviours.

Prompt engineering

Prompt engineering

Prompt engineering involves a deep understanding of the model's capabilities, idiosyncrasies, and limitations, making it a blend of both science and art. Implementing apps that use prompt engineering can be technically challenging, so it requires coding and deployment skills too.

Vol prediction with GPT

Predicting Volatility with a GPT

I extended Karpathy's nanoGPT, a compact GPT built up from a bigram language model to a full transformer, to predict stock price volatility using tick-by-tick OHLC data.

Build and train a GPT

Build and train a GPT

I built and trained a Generative Pretrained Transformer (GPT). This post is a hands-on exploration of GPTs and the transformer architecture, following Karpathy's approach.


Predicting Volatility with the LSTM

This post includes a detailed mathematical derivation of the LSTM network, something generally skipped in the finance deep-learning literature that features the LSTM.

High Frequency Data

High Frequency Data

In this post I discuss the preparation of high frequency datasets for Apple (Apple Inc., NASDAQ, ticker: AAPL), JPM (JPMorgan Chase & Co., NYSE, ticker: JPM), and the EURUSD currency pair, using volatility prediction as my problem setting.

Deep Learning in Finance

Deep Learning in Finance

In this post I discuss some of the papers I have read and found useful when considering deep learning in finance. I discuss academic literature here because a rigorous consideration of deep learning in finance is important, imho.

© 2023 johncollins