What FileMaker Developers Should Know about ChatGPT (mostly, GPT)

9 min read · Feb 15, 2023


What FileMaker folks should know about the biggest thing in tech.

You CAN NOW Integrate FileMaker and ChatGPT

YES! As of March 1, 2023!

ChatGPT is powered by gpt-3.5-turbo, which has just been released as OpenAI’s most advanced language model.

Using the OpenAI API, you can build your own applications with gpt-3.5-turbo to do things like:

  • Draft an email or other piece of writing
  • Write Python code
  • Answer questions about a set of documents
  • Create conversational agents
  • Give your software a natural language interface
  • Tutor in a range of subjects
  • Translate languages
  • Simulate characters for video games and much more

However, you CAN also integrate GPT-3. This API has been available since 2020 and is well documented and easy to understand. See below for many examples of what you can do by integrating the GPT API into FileMaker. All you need is knowledge of integrating APIs and parsing JSON.
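As a minimal sketch of what that integration involves (shown in Python, since FileMaker script steps aren't runnable code here): build the JSON request body, then parse the text out of the JSON response. The endpoint, model name, and response fields follow OpenAI's published API; the helper function names are my own.

```python
import json

# Hypothetical helpers mirroring what a FileMaker script would do around an
# Insert from URL step: assemble the request, then parse the JSON reply.

API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, api_key, model="text-davinci-003", max_tokens=256):
    """Return the headers and JSON body for a completion request."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "prompt": prompt, "max_tokens": max_tokens})
    return headers, body

def parse_response(response_text):
    """Pull the generated text out of the API's JSON response."""
    data = json.loads(response_text)
    return data["choices"][0]["text"].strip()
```

In FileMaker, the same request is a single Insert from URL step with cURL options for the headers and body, and `parse_response` becomes a `JSONGetElement` call on `choices[0].text`.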

Here is a side-by-side test of davinci-003 vs gpt-3.5-turbo:

Ten side-by-side tests of Davinci vs ChatGPT

Keep in mind that GPT-3 and ChatGPT are not the same thing. There are very important distinctions.

ChatGPT is not GPT-3

ChatGPT is actually based on GPT-3.5 (only just made available via API as gpt-3.5-turbo). ChatGPT is more an evolution of InstructGPT; call it a “second try” at creating a conversational version of GPT.

ChatGPT is a fine-tuned version of GPT-3.5 that combines two different models: GPT and Codex. Codex is the model that powers GitHub Copilot, the revolutionary coding tool that writes JavaScript, Python, etc. However, the role of Codex in ChatGPT is to bring logical structure into the conversational experience, not to reinforce coding. Yet you can still get ChatGPT to write code for you. You should absolutely be trying this out.

But ChatGPT is very different from GPT-3 even beyond the integration of Codex. What makes ChatGPT so amazing at conversation is that its developers utilized a blend of Supervised Learning and Reinforcement Learning to enhance its performance. However, it’s the Reinforcement Learning aspect that sets ChatGPT apart. OpenAI used a specific technique referred to as Reinforcement Learning from Human Feedback (RLHF), which incorporates human feedback into the training process to reduce the occurrence of harmful, false, and biased outputs.

This is the part that makes it so incredible at sounding human. It is because humans actually influenced this version of GPT! Hundreds, possibly thousands (the total is not publicly known) of labelers interacted with GPT and provided feedback for both reinforcement learning and reward-model training. In this process, human AI trainers acted as both users and AI assistants in conversation simulations. The trainers were given access to model-generated suggestions to help them in composing their responses. The resulting dialogue data was combined with a transformed InstructGPT dataset to create a new dialogue format.

This process makes ChatGPT significantly more conversational than GPT. It also makes it less offensive, harmful, and untruthful. Those flaws are all residue of GPT-3, which until the recent release of gpt-3.5-turbo was the only model you could integrate via an API.

But the other thing that is critically important to understand about ChatGPT is that it is a CHAT. ChatGPT is able to maintain context and understand the relationship between previous and current responses within a conversation, allowing it to generate more coherent and meaningful responses in a conversational setting. GPT-3 alone needs to be integrated into a chat to do so.

So instead of isolated request/response calls to an API, you can embed those calls as a sequence of related API calls, and this provides some incredible results.

This also allows for ChatGPT to generate personal and tailored responses by taking into account the user’s preferences, previous interactions, and other contextual information. These can all be added into prompts in the chat format, thus influencing the results.
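The chat format described above can be sketched as a running message list that is resent in full with every call, so the model always sees the prior turns (and any user preferences you prepend). The message/role format matches OpenAI's chat API; the `ChatSession` class itself is just an illustration.

```python
# A minimal sketch of the "chat" part: keep the whole message history and
# send all of it with each API call, so earlier turns act as context.

class ChatSession:
    def __init__(self, system_prompt):
        # The system message is where user preferences and context can go.
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user_message(self, text):
        self.messages.append({"role": "user", "content": text})
        # This full list is the payload for the next API call.
        return self.messages

    def add_assistant_reply(self, text):
        # Store the model's answer so the next request includes it as context.
        self.messages.append({"role": "assistant", "content": text})
```

Each new request therefore grows by two messages (user turn plus assistant reply), which is what lets the model relate current responses to previous ones.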

Overall, it is just as much the “Chat” as it is the “GPT” that provides this more engaging and natural user experience compared to traditional single request-response systems, making it ideal for conversational AI applications such as chatbots.

How You Can Use Language Models

Common modeling tasks like summarization, keyword extraction, classification, and sentiment analysis can all be done with pre-trained and publicly available machine learning model services that have been around for years.

These tasks do not need Large Language Models (LLMs) like GPT.

GPT can do all of these tasks, but so can literally hundreds of services currently available via API online. Large language models, such as ChatGPT, are best suited for several natural language processing (NLP) tasks, including:

  • Text generation: These models can generate coherent and contextually relevant text, such as chatbot responses, summaries, and translations.
  • Question answering: Large language models can understand and answer questions based on a given context, making them suitable for knowledge-based applications.
  • Text completion: Large language models can generate missing words or phrases to complete sentences, paragraphs, or even entire documents.
  • Sooooo much more!

These models have also been used for various other NLP tasks, such as named entity recognition, text-to-speech synthesis, and text-to-image generation, among others.

If you want to integrate copywriting, chatbots, virtual assistants, content generation, question answering, etc into your application, GPT is the right choice for you.

You can get paid access to their APIs today through either OpenAI or Microsoft Azure Cognitive Services. This API was released in 2020.

Keep in mind that even with ChatGPT available via API, you will still need to interact with it in a chat format, making an API with direct access to GPT-3.5 (or GPT-4) more interesting for integrating into business application workflows.

Using ChatGPT (or GPT) with FileMaker

The corpus of data that was used to create the GPT models is essentially a product of a web crawl of all public data ever shared on the internet. This data was collected by web crawlers, which are automated programs that scan the web, following links and collecting data. The web crawl data typically consists of a wide variety of text sources, including websites, articles, forums, and social media posts.

For some perspective, Wikipedia alone represents only 3% of the data in this corpus.

But we have all been talking about FileMaker on the internet for decades in forums, chats, emails, and communities. Even the help systems have been on public web pages for several versions.

All of this was used to train GPT on how to write FileMaker. To be clear, it learned all of this using that data. It did NOT index and clip this data. It learned. Let that sink in.

However, if you have ever been on a forum, you know there is a ton of noise. As a result, much of that crawl is misinformation. Worse, as GPT learned FileMaker, it never had any feedback on whether it was learning it correctly.

So, keep in mind that it learned FileMaker from all of us.

Not Ready to “Code”…Yet

Creating scripts and calculations from scratch with ideal error reduction and scoring will take significant fine-tuning and reinforcement learning. This is a job for our community: getting future versions of the GPT models to the point where this is flawless IS POSSIBLE, but it might take our help as a community. (See below)

Instead, try these cool tips:

Use GPT to “Explain” calculations

This is an excellent way to learn and also a great way to get “unstuck” in your development efforts. If you want to troubleshoot or try to understand how another developer did something, this tool is ideal. Simply copy your calculations into ChatGPT and ask it to “explain” and you will get amazing results.

Or paste in your calculations and ask GPT to “give me three versions with the same result” to learn ways to make your calculations more dynamic or discover new ways to accomplish similar outcomes.
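The “explain” trick is just your calculation wrapped in an instruction before it goes to the model. A sketch, with an illustrative helper and a generic sample calculation (neither is from the article's actual testing):

```python
# Wrap a FileMaker calculation in an "explain" instruction, ready to paste
# into ChatGPT or send through the API.

def explain_prompt(calculation):
    return (
        "Explain what this FileMaker calculation does, step by step:\n\n"
        + calculation
    )

# A generic example calculation, purely for illustration.
calc = "Let ( [ n = Length ( fullName ) ] ; Left ( fullName ; n - 1 ) )"
prompt = explain_prompt(calc)
```

Swapping the instruction for “give me three versions with the same result” turns the same wrapper into the variation trick described above.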

Use it to Suggest Relationships

Relationships are notoriously difficult in FileMaker. New learners spend the most time here, and many get frustrated and leave the platform because they cannot get past this topic. However, GPT and ChatGPT make excellent relational database helpers. You can simply prompt ChatGPT with the tables and types of information you need to manage, and it will write out an entire plan of how to relate this information, with guidance on the tables you want, the keys, and any joins needed. This is a huge help to new learners on the platform.

Use It to Debug or Refactor Scripts

We all know learning FileMaker scripting is an exercise both in learning the steps and in making your scripts efficient. ChatGPT does an excellent job of explaining existing scripts, making it a perfect tool for new learners. Use plug-ins to copy text versions of your scripts into ChatGPT (or via the GPT-3 API) and you’ll be amazed with the results.

For experienced developers, try the same but ask ChatGPT to “refactor” your scripts. You’ll be blown away with the suggestions and will certainly learn new ways to make your scripts more efficient.

Claris Learning Companion Project

In December of 2020, iSolutions began an internal research project called “Claris Learning Companion”. Our goal was to focus on the impact that GPT could have on the learning experience, specifically for FileMaker learners.

If you are interested in learning more about the Claris Learning Companion research project and possibly being part of reinforcement learning in the future, please sign up at www.ClarisLearningCompanion.com. If you want to see videos of the Claris Learning Companion in action over the years, check out the embedded videos in this article: https://isolutions.medium.com/our-journey-with-gpt-3-and-chatgpt-bc7ab0c6cf82

Scoring Accuracy

Large language models like GPT-3.5, which ChatGPT is based upon, have ways to measure the accuracy of their output.

The most popular is something called “semantic scoring”. Semantic scoring refers to the process of evaluating the semantic meaning and coherence of generated text. The score provides an estimate of how well the generated text captures the intended meaning and how well it aligns with human-written text.

This is generally one of the first steps in preparation for fine-tuning as you have to establish a baseline to see whether fine-tuning improves the accuracy of output.

In our research project, we pulled a series of sample functions directly from Claris’ help system and formatted them as Prompt / Completion pairs.
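Those pairs follow the JSONL shape OpenAI's fine-tuning tooling expects: one JSON object per line with `prompt` and `completion` fields. A sketch; the sample pair below is illustrative, not taken from the project's actual data set.

```python
import json

# Illustrative prompt/completion pair in OpenAI's fine-tuning JSONL format.
# The "\n\n###\n\n" separator and trailing newline follow OpenAI's
# data-preparation conventions for the completions-style models.
pairs = [
    {
        "prompt": "Explain the FileMaker function Left ( text ; numberOfCharacters )\n\n###\n\n",
        "completion": " Returns the specified number of characters in text, counting from the left.\n",
    },
]

def to_jsonl(records):
    """Serialize a list of prompt/completion dicts as JSONL (one object per line)."""
    return "\n".join(json.dumps(r) for r in records)
```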

Before semantic scoring, the data sets must be converted into vectors. This process, known as vectorization, involves converting text into numerical representations that can be processed by machine learning algorithms. Vectors can help identify close responses by computing the similarity between the vectors of two texts. By using vectorization and other similarity measures, you can identify the most similar responses among a set of generated text and evaluate their semantic relevance to the input prompt or reference text.
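A toy illustration of that vectorize-then-compare step: turn two texts into word-count vectors and measure their cosine similarity. Real semantic scoring uses learned embeddings rather than raw word counts, but the comparison math is the same, and the score lands in the same kind of range as the results discussed below.

```python
import math
from collections import Counter

def vectorize(text):
    """Naive vectorization: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two count vectors: 1.0 = identical direction."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Identical texts score 1.0 and texts with no overlap score 0.0; with signed embedding vectors the score can also go negative, which is why the project's range runs from -1 to 1.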

These scores are widely used to determine “before” and “after” accuracy of language models for fine-tuning and reinforcement efforts.

While, admittedly, our sample data sets were fewer than 50 samples, we did extrapolate multiples of those.

Our scores range from -1 to 1, so they should not be read as a simple “percentage correct.” The small sample set scored 0.6438, which is considerably higher than -1 (which would indicate 100% inaccuracy) but no doubt lower than 1 (which would indicate 100% accuracy).

The conclusion was easy: this model is a candidate for fine-tuning and reinforcement learning. Which is exactly what this post recommends, and which will be the next step in the research project for any who are interested in participating.

A couple of caveats: the model we used was davinci-002 (OpenAI has since added davinci-003), and ChatGPT is based on the davinci-003 lineage, so accuracy could have improved; we just haven’t tested it. And reinforcement learning is a critical part of training models like this for them to be useful to learners at all levels. Regardless, the next step is reinforcement and fine-tuning…or testing again with GPT-4.

It might take some time until this is useful for experienced programmers in a one-shot manner, but for new learners who are coming into the process at a “-1”, this tool can certainly be useful in their learning journeys.

Learn about ChatGPT: www.WhatISChatGPT.training

No, this post was not written using ChatGPT 😂




Multiple award-winning experts in custom applications, machine learning models and artificial intelligence for business.