Applying GPT, ChatGPT, and other AI: a journey that started with learning tools and evolved into an entirely new business.
There is no bigger topic in tech (and possibly in pop culture) at the moment than ChatGPT and its parent model, GPT-3 (and its successors, GPT-3.5 and GPT-4).
GPT, or Generative Pre-trained Transformer, is a family of state-of-the-art language models developed by OpenAI. It can generate human-like text and has been used for a wide range of applications such as language translation, summarization, and even creative fiction. ChatGPT, the current public face of the series, has set a new standard for language models with its impressive performance and capabilities.
Our machine learning team’s journey with GPT began when OpenAI opened public access to the GPT-3 Playground on June 11, 2020. The Playground is a web-based interface that allows users to interact with the GPT-3 model, inputting their own prompts and receiving generated text in response. The release of the Playground was a significant step in making GPT-3 more accessible, allowing anyone with an internet connection to test the model’s capabilities and explore its possibilities.
Most of our experiments were what our friends and colleagues described as “hand wavy” parlor tricks.
We abandoned that effort months later with the release of the Codex model, a platform that uses GPT-3 to assist developers with coding and documentation tasks, and which later became the basis for the product Copilot.
In November 2020, we received our beta key to OpenAI. We were excited to test out the capabilities of the model and see how it could be used in our own projects.
This came just in time, as our Director of Machine Learning and AI was scheduled to head out on a 10-day hiatus in the middle of nowhere in Big Sur, CA, to focus on emerging technologies in complete isolation.
The product of that 10-day isolation was an integration of OpenAI’s GPT-3 into FileMaker, built mostly around experiments in training the model inline within the text itself, similar to how we had worked with it in the OpenAI Playground.
But it became an important education in settings, prompts, and completions, which would become the building blocks for our later GPT-3 integrations.
From the moment we started experimenting with GPT-3, we were blown away by its capabilities and the ease of use of the API. We were able to generate high-quality text with just a few lines of code, and we quickly realized the potential for this model to revolutionize the applications we create for our customers.
In 2021, we evolved this work into a proof of concept we called the Claris Learning Companion where we integrated what we learned about prompt engineering and the power of GPT-3.
We felt strongly that this could be the future of learning. What we found most interesting was that this DID NOT use the Codex model, but instead the TEXT model, and since FileMaker has been around since before the internet, there was rich, public content available from which the model had learned how to create FileMaker scripts, calculations, and functions.
The training data for GPT-3 comes primarily from the internet: a diverse range of text including books, news articles, scientific papers, and websites, gathered from sources such as Wikipedia and Common Crawl. The text was cleaned and preprocessed to remove irrelevant information and ensure the data was of high quality before being used to train the model.
It’s worth noting that OpenAI gathered much of this data through web scraping, an automated method of extracting large amounts of data from websites quickly and efficiently. It’s important to note, however, that scraping data from certain websites may be illegal or violate their terms of service.
Since FileMaker has been discussed on the internet since its inception, there was TONS of training data available to make something like the FileMaker Learning Companion come to life.
Next came our fascination with not just requests and responses, but rather the idea of how to use this technology within a CHAT CONVERSATION.
Chat conversations, as opposed to isolated requests and responses, include the previous requests and responses in each subsequent request.
This meant that requests later in the conversation would retain the context of the earlier exchanges.
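Mechanically, this can be sketched as keeping a running message history and appending each new turn to it before sending. Here is a minimal sketch in Python; the role/content message format mirrors the structure the later ChatGPT API would use, and the function name and example content are our own, not from any specific product:

```python
def build_chat_request(history, new_user_message):
    """Append the new user message to the running history so the
    model sees every previous request and response as context."""
    return history + [{"role": "user", "content": new_user_message}]

# Start a conversation with a system instruction.
history = [{"role": "system", "content": "You are a FileMaker learning companion."}]

# First exchange: the request carries the system prompt plus the question.
request = build_chat_request(history, "What does the Loop script step do?")
history = request + [{"role": "assistant", "content": "Loop repeats the enclosed steps until Exit Loop If is true."}]

# Second exchange: "it" is only resolvable because the prior turns are included.
request = build_chat_request(history, "How do I exit it early?")
```

Without the accumulated history, the second question ("How do I exit it early?") would be meaningless to the model; with it, the pronoun resolves naturally.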
This seemed like a game-changer to us, so we modified the FileMaker Learning Companion back in April of 2022 to use a chat format instead.
This later became the same construct that ChatGPT would use in the fall of 2022. But we felt strongly this was just the format needed for a learning companion.
This led to various chatbot-type projects for our customers that proved to be a massive evolution of the chatbots of the past.
We even started to experiment with AI Voice Models and created a custom voice that we added to the Learning Companion, just for fun.
Fun, maybe even creepy.
This has evolved into a custom API of that same voice that may or may not be behind various videos you have already watched. Or maybe even some zoom calls you’ve been on.
Our experiments led us to try to create our own interfaces for these models outside of the FileMaker platform.
Then our experiments led us to incorporate the GPT-3 API into Siri.
First on iPhone:
Then on Desktop:
and finally to the Apple Watch:
Once we realized this was much bigger than FileMaker, we created our own Bot called “Summon The Bot” and did the same integrations for other business use cases.
This later became the basis of many of our customer integrations.
We also started to experiment with using GPT-3 in some of our education applications, both as an essay reader and as a tool for determining the reading level of essays to gauge student learning progress.
We realized that some of the “pre-trained” machine learning models like Summary Extraction, Keyword Extraction, Sentiment Analysis could be emulated with GPT-3 but with a “twist”.
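The “twist” is that each of those classic pre-trained tasks collapses into nothing more than a prompt template around the same completion API. A minimal sketch of the idea in Python; the template wording and function names are illustrative, not the actual prompts we shipped:

```python
# Hypothetical prompt templates: classic "pre-trained" NLP tasks
# re-expressed as GPT completion prompts.
TASK_PROMPTS = {
    "summary": "Summarize the following text in one sentence:\n\n{text}\n\nSummary:",
    "keywords": "Extract the five most important keywords from the text below, comma-separated:\n\n{text}\n\nKeywords:",
    "sentiment": "Classify the sentiment of the text below as Positive, Negative, or Neutral:\n\n{text}\n\nSentiment:",
}

def build_task_prompt(task, text):
    """Fill in the template for one emulated NLP task; the resulting
    string would be sent to the completion endpoint as the prompt."""
    return TASK_PROMPTS[task].format(text=text)
```

One model and three templates replace three separate special-purpose models, and changing the task is just a matter of editing a string.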
Our custom bot led to Sales Companion CRM integration and more Custom Chatbots.
Our Sales Companion application was also the basis for our comparison of the text-davinci-002 model, which we had been using both internally and within our customer projects, against the new text-davinci-003 model. We demonstrated the differences between the two models in the internal CRM integration we use here at iSolutions:
Little did we know, the davinci-003 update was just one part of OpenAI’s “preview” of GPT-3.5. The other arrived shortly after.
On November 30, 2022, the world changed. OpenAI released ChatGPT, a public version of GPT-3.5. It was a preview of things to come in GPT-4, and also an example of GPT-3 refined with a combination of supervised and reinforcement learning techniques.
This was the culmination of the chat-conversation ideas and fine-tuning discoveries our team had been making, as well as validation of the reinforcement learning techniques we were using in our fine-tuning projects for customers.
Oh, and it introduced millions to GPT-3.5.
Of course we tested out our FileMaker Learning Companion against ChatGPT:
Then even as a tool for answering questions in the Claris Community:
and even experimented with ChatGPT as a way to onboard new customers to the Claris platform to create their own custom learning journeys.
Just a word of caution: for something like this to be useful in production, the ChatGPT model would need hundreds of hours of fine-tuning and reinforcement learning, and that would come at a huge expense. So it would likely need to be created by Claris themselves and built directly into the product. Claris, however, has no plans for an integration like that, and will instead release an OpenAI Claris Connect connector to allow the public to interact with the untrained ChatGPT. At the very least, this can help users understand the need for fine-tuning GPT and how important proper prompt engineering can be. Proceed with caution with this tool, but enjoy your own parlor tricks!
However, rumor has it that the world’s most popular FileMaker trainer has spent hundreds of hours fine-tuning ChatGPT with every video, session presentation, and classroom transcript he has used over the last 20 years, building a custom, fine-tuned Claris Learning Companion “AMA” product that could be released to the public soon. This could revolutionize how we learn in our community, well beyond the “hand wavy” parlor tricks that ChatGPT and GPT-3/4 provide on their own. Want to learn more about the Claris Learning Companion? CLICK HERE
Our most recent research on ChatGPT has focused on evaluating embedded text, with the potential to address the issue of the model’s training data only being up to date as of its last training.
For example, one way to access current data is to include it within the prompt itself, or to make an API call to gather and embed this data.
We used data from an array of homes and asked ChatGPT to summarize the data and draw conclusions.
Then asked ChatGPT to provide a summary list based on the data set. Embedding data into prompts could potentially revolutionize the way we interact with data and create new dashboard experiences.
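The technique itself is simple: serialize the current data and place it directly in the prompt ahead of the question. A minimal sketch in Python; the function name, field names, and listing values are invented for illustration, not our customer data:

```python
import json

def build_data_prompt(homes, question):
    """Embed a current dataset directly in the prompt so the model can
    reason over data it was never trained on."""
    return (
        "You are given the following real-estate listings as JSON:\n"
        + json.dumps(homes, indent=2)
        + "\n\n"
        + question
    )

# Hypothetical listings standing in for live data pulled via an API call.
homes = [
    {"address": "12 Oak St", "beds": 3, "price": 450000},
    {"address": "98 Pine Ave", "beds": 4, "price": 610000},
]
prompt = build_data_prompt(
    homes,
    "Summarize these listings and note which offers the best value per bedroom.",
)
```

The trade-off is that everything embedded this way counts against the model’s token limit, so larger datasets need to be filtered or summarized before embedding.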
While there is still much work to be done in this area of research, the potential of using embedded text to improve the timeliness of ChatGPT’s training data is exciting.
When GPT-3.5 was released in February 2023 in the form of the ChatGPT API (also known as “Turbo”), we immediately ran a side-by-side test to see how the ChatGPT API improved on the Davinci-003 experience:
We found that in addition to one-tenth the cost per 1,000 tokens, the speed increase was significant.
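The cost difference is easy to see with a back-of-the-envelope calculation using the published per-token prices at the time (USD 0.02 per 1,000 tokens for text-davinci-003 versus USD 0.002 for the Turbo/ChatGPT API); the traffic figure below is an invented example:

```python
# Published prices at the time (USD per 1,000 tokens).
DAVINCI_003 = 0.02   # text-davinci-003
TURBO = 0.002        # gpt-3.5-turbo (the ChatGPT API)

tokens = 250_000  # hypothetical monthly chatbot traffic

davinci_cost = DAVINCI_003 * tokens / 1000
turbo_cost = TURBO * tokens / 1000

print(f"davinci-003: ${davinci_cost:.2f}")            # $5.00
print(f"turbo:       ${turbo_cost:.2f}")              # $0.50
print(f"ratio: {DAVINCI_003 / TURBO:.0f}x cheaper")   # 10x cheaper
```

At real production volumes, that 10x factor is often the difference between a feature that is economical to ship and one that is not.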
Throughout our journey with GPT, we have been constantly impressed by its capabilities. We have been able to use it for a wide range of projects, from language summarization to creative writing and chatbots. We have also been able to train it on our own custom data to fine-tune its abilities for specific tasks.
We launched an entirely new division of iSolutions in 2022 called “iSolutionsai,” where we built a team of machine learning experts, data scientists, model programmers, and prompt engineers, and started building custom models as a service for our customers.
This turned into platform agnostic API-driven custom models and integrations with GPT-3/4. Our portfolio continues to grow with some of the most amazing work we have ever done. We are excited to see where this technology will go in the future and how it will continue to shape the way we work with our projects.
If you want to learn more about how GPT-3/4 or custom machine learning models could benefit your business, feel free to reach out to us for a free consultation or even a demo of some of the amazing things these technologies can do.
Our experienced team can help you create:
- Custom Machine Learning Models
- GPT-3/4 Language Models
- Computer Vision Models
- AI Chatbots
- Learning Companions
- Community Companions
- Sales Companions
- Marketing Companions
Visit us at iSolutionsAI.com
If you enjoyed this article, please check out:
How ChatGPT and Embedded, Proprietary Data will Replace Traditional Business Applications as We Know Them