The AI PLAYBOOK: Trusting Foundation Models for Business

iSolutions
14 min read · Mar 4, 2023

--

How business leaders can easily overcome raw language model deficiencies to create world-class integrations for business differentiation

(*article not written with “generative AI”)

TL;DR: In this article, we explore the potential of large language models, the risks associated with using raw language models for business purposes, and the techniques employed by iSolutionsAI to eliminate these risks and easily create safe, trustworthy business applications.

Generative Artwork by Dall-E

As businesses search for ways to “do more with less”, the recent rise in popularity of #ChatGPT has thrust language models into the technology conversation.

Smart business leaders are looking to AI for a way to “do more with less”

Large language models have become increasingly popular over the past few years due to their ability to answer questions, write stories, and engage in conversations.

Yet, the true value for CEOs comes from how they can leverage the foundation models that these language models are based upon along with their own data and systems. These foundation models can easily become the magic bullet for any CEO in 2023 and beyond.

The key distinction between language models and foundation models is that foundation models are intended to serve as a base layer that can be adapted for a wide variety of tasks, such as data analysis or generating insights that require logic, reasoning, and complex thinking, whereas language models are specifically designed for tasks involving the generation or understanding of human language.

We have created this AI Playbook for Business Leaders to highlight the tools and techniques they need to make intelligent decisions on how to effectively and safely integrate foundation models into their systems and workflows.

To understand the value of these methods, it is important to understand the problems they solve. The potential benefits of foundation language models are huge, and the opportunity to leverage this technology to create innovative applications and products is extremely promising.

Problems caused by using raw large language models for business

To address concerns about the use of raw language models for business purposes, iSolutionsAI has leveraged years of experience in AI to create processes that eliminate these risks and create safe, trustworthy applications for business differentiation.

With the right tools and processes, businesses can easily overcome raw language model deficiencies and leverage foundation models to create world-class integrations for market differentiation.

Raw Language Model Deficiencies

Let’s examine the potential risks associated with using raw language models that all CEOs should be aware of, then explore the various tools and methods currently available to mitigate these risks.

Accuracy

Large language models, such as GPT-3.5 (the model behind #ChatGPT) and GPT-4, are trained on massive amounts of data which includes a diverse range of texts and information available on the internet.

While these language models are impressive in their ability to generate human-like responses, they may not always be truthful or accurate.

While GPT-4 introduced advancements that address these issues, there are still truth gaps in raw language model responses.

These models’ accuracy issues are caused by limitations in contextual understanding, biases in their training data, limited knowledge and experience, over-reliance on statistical patterns, and potential for intentional misinformation.

At iSolutionsAI, we make it easy to pull in factual content and to systematically increase the accuracy of foundation model outputs, allowing for trustworthy business integrations.

We accomplish this by using various techniques including:

Fine-tuning: Fine-tuning involves training the new, private version of the foundation model on a specific task or domain to improve its accuracy in that area. We work with our clients to provide additional training data that is specific to the objective of their model integrations which results in a dedicated, secure version that is accessible only to their organization. Fine-tuning efforts are surprisingly easy to implement and have significant returns in accuracy.

Human-in-the-Loop: We facilitate “Human-in-the-Loop” reinforcement learning, which involves incorporating human feedback into the model training process. This can be done by having human reviewers check the model’s output and provide feedback, which can then be used to improve the model’s accuracy. This feedback provides additional input for further fine-tuning. Human reinforcement is a low-cost option for CEOs since they are able to leverage their own internal resources to provide the feedback.
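To make the fine-tuning step concrete, here is a minimal sketch of preparing training examples in the JSON Lines format that fine-tuning APIs commonly expect (one JSON object per line); the Q&A pairs below are hypothetical stand-ins for client-specific training data:

```python
import json

# Hypothetical client Q&A pairs used as fine-tuning examples.
examples = [
    {"prompt": "What is our standard warranty period?",
     "completion": " Two years from the date of purchase."},
    {"prompt": "Which regions does our support team cover?",
     "completion": " North America and Western Europe."},
]

def to_jsonl(records):
    """Serialize records as JSON Lines, the upload format commonly
    expected by fine-tuning APIs (one JSON object per line)."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
```

In practice, hundreds of such examples drawn from the client's domain are collected, validated, and uploaded to train the dedicated model version.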

Scoring Accuracy Improvements

At iSolutionsAI, we use scoring processes to provide a standardized and objective way to evaluate the performance of fine-tuned foundation models “before and after” these techniques are applied.

These scores help ensure that the new models we create for our clients are accurate, reliable, and able to generalize to new data, increasing accuracy prior to implementation within business applications and workflows.

Safeguarding

Whether businesses intend to integrate raw language models or foundation models into their internal or customer-facing systems, it is critical that the output does not contain harmful content and upholds ethical standards.

Moderation is an essential component of large language model output to ensure that the content generated by the model is appropriate, safe, and free from harmful or offensive language.

OpenAI is extremely serious about safe and responsible AI. As a result, they provide developers with a Moderation endpoint. This endpoint provides OpenAI API developers with free access to GPT-based classifiers that detect undesired content — an instance of using AI systems to assist with human supervision of these systems.

This reduces the chances of products “saying” the wrong thing, even when deployed to users at-scale.

As a consequence, AI can unlock benefits in sensitive settings, like education, where with these tools in place, foundation models can be used with confidence.

Content moderation is critical for business AI implementations

You can read more about OpenAI’s policies and enforcement in their technical paper, which describes their methodology and the dataset used for evaluation.

iSolutionsAI integrates the moderation endpoint into all outgoing requests and incoming responses in our client’s application integrations that involve generating content.
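As an illustration, a minimal gate around model output might look like the sketch below. The response shape mirrors OpenAI's Moderation endpoint (a "results" list with a per-item "flagged" boolean), but any classifier that flags content per result would work the same way:

```python
def passes_moderation(moderation_response: dict) -> bool:
    """Return True when no result in a moderation-style response is
    flagged. The shape assumed here mirrors OpenAI's Moderation
    endpoint: {"results": [{"flagged": bool, ...}]}."""
    return not any(r.get("flagged", False)
                   for r in moderation_response.get("results", []))

def gate(text: str, moderation_response: dict) -> str:
    # Replace flagged content with a safe fallback before it reaches users.
    if passes_moderation(moderation_response):
        return text
    return "[content withheld by moderation]"
```

In a real integration, the moderation response comes from an API call on both the outgoing request and the incoming model response, so unsafe content is caught in either direction.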

Tone

Foundation models are trained on massive amounts of data and use statistical patterns to generate responses to given prompts. They do not have their own inherent “tone” or objectives, but rather reflect the patterns and biases present in the data set they were trained on.

This means that if a business wants to use a large language model like ChatGPT to communicate with customers or clients, there is a risk that the model’s responses may not match the tone or objectives that the business desires.

For example, if an organization wants to project a professional and formal tone, but the data used to train the model includes more informal or colloquial language, the model’s responses may not be appropriate for their needs.

To close this gap, iSolutionsAI creates sophisticated custom personas. These personas are developed before our systems are deployed, so they can be integrated as the focused lens through which our clients gain the insights they intend from their data using foundation models.

These personas define the identity, purpose and direction information that make up what an entity (organization, group, user) is trying to achieve.

This unique process allows us to work with our clients to craft a “voice” for their language model output that is consistent with their brand identity, and desired outcomes.

We use a partnership with an experienced marketing firm to create custom personas, brand guidelines and product messaging statements that we insert into prompt templates to adjust the model’s responses, thus ensuring that they reflect the desired tone and objectives of the organization.
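As an illustration, a persona can be injected as a system message in a chat-style prompt template so every response is filtered through the brand voice; the persona values below are hypothetical:

```python
# Hypothetical persona values; in practice these come from the brand
# guidelines and messaging statements developed with the marketing firm.
PERSONA = {
    "identity": "Acme Outfitters, a century-old outdoor retailer",
    "tone": "professional, warm, and concise",
    "objective": "help customers choose the right gear",
}

def build_messages(user_prompt: str) -> list:
    """Assemble a chat-style message list with the persona as a
    system message, so the model's response reflects the desired
    voice and objectives."""
    system = (
        f"You are the voice of {PERSONA['identity']}. "
        f"Always write in a {PERSONA['tone']} tone. "
        f"Your objective: {PERSONA['objective']}."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]
```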

This is a critically important process; when it is absent, deployments can produce undesired results.

Context Window

To truly harness the power of foundation models, businesses will want to leverage as much of their proprietary data as possible.

However, using interfaces to these models like the ChatGPT web interface or even API calls to back-end foundation models carry a restriction defined by the “context window”.

A context window constraint refers to the limit on the number of words or tokens the model can consider when generating its output.

At iSolutionsAI, many of our strategies that leverage AI involve creative ways to add context into these windows to provide customized outputs.

While GPT-4 introduced much larger context windows (8k and 32k tokens) and other models have introduced even larger ones (100k tokens), these windows come with computational and memory costs, as the model needs to process and store more data.
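To stay inside a context window, long documents must be split into chunks before they are sent to the model. Here is a minimal sketch using a naive one-word-per-token estimate; real tokenizers (such as BPE) count differently, so production code should leave headroom:

```python
def chunk_text(text: str, max_tokens: int = 2000) -> list:
    """Split text into chunks that fit a context-window budget.
    Uses a naive one-word-per-token estimate; real tokenizers
    count tokens differently, so leave headroom."""
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]
```

Each chunk can then be summarized, embedded, or processed separately, with the results combined downstream.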

While inexperienced developers with access only to ChatGPT may quickly hit a wall with these constraints, we use proven techniques to optimize the available context window while still using very large sets of our clients’ data.

Several strategies for dealing with large data sets and context window constraints

For example, we use embeddings, intelligent splitting (a technique invented by iSolutionsAI), chains, agents, and memory techniques to harness the important context within our clients’ proprietary data, making language model integrations significantly more robust than single-shot approaches.
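As a simplified illustration of the embeddings technique: each chunk of proprietary data is stored as a vector, and at query time the most similar chunks are retrieved (by cosine similarity) and placed into the context window. The toy 3-dimensional vectors below stand in for real embedding-model output, which has hundreds or thousands of dimensions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_match(query_vec, doc_vecs):
    """Return the index of the stored chunk whose embedding is most
    similar to the query embedding."""
    return max(range(len(doc_vecs)), key=lambda i: cosine(query_vec, doc_vecs[i]))

# Toy vectors standing in for embeddings of three document chunks.
doc_vecs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
```

Only the best-matching chunks are sent to the model, so even very large data sets can be used without exceeding the window.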

Additionally, here at iSolutionsAI, we are betting on context windows expanding to 1 million tokens within 12 months. This will remove these concerns for many foundation model implementations.

Yet this will completely change the way you approach foundation model use cases. As a result, even today we continue to experiment and build applications that require these larger windows, so our strategies are in place on day one of the eventual release of highly expanded context windows.

Relevance Gap

The relevance gap is a concept in information retrieval that refers to the difference between the information needs of a user and the relevance of the information retrieved by a model. The relevance gap can arise when the model generates outputs that are technically correct and fluent, but may not fully meet the user’s information needs or preferences.

The gap can also arise when a language model was trained before the information needed to answer an inquiry existed.

To mitigate the relevance gap, iSolutionsAI will fine-tune on specific datasets or use cases to better align with the user’s preferences and needs. We also use feedback mechanisms that can be integrated into a foundation model to allow users to provide input and refine the generated outputs.

We use sophisticated agents and tools that we introduce into API call chains that allow us to pull data from APIs or proprietary data sources to perform specialized functions.

These tools allow us to solve complicated business problems using proprietary data combined with the power of foundation models, in ways that weren’t possible just a couple of years ago.

Math

Large language models are not optimized for mathematical tasks in the same way that they are for language tasks. Language models are typically trained to predict the probability of the next word in a sequence of words, based on the preceding words.

Using tools and agents to solve math accuracy issues in language models

In contrast, math involves working with numbers, equations, and formulas, and often requires more precise and structured reasoning.

While some large language models may be able to perform simple math calculations, such as addition and subtraction, they may not be able to handle more complex math problems with the same accuracy and efficiency as a dedicated mathematical computation engine.

Since business applications require accurate, reliable mathematical outputs, we use several tools and techniques at iSolutionsAI to resolve these issues.

One method is to use API calls to dedicated mathematical computation engines, such as Wolfram Alpha, that are specifically designed to perform complex mathematical computations with a high degree of accuracy and precision.

Solving for math accuracy in language models

Additionally, whenever our language models identify a math problem, we make calls to a calculator tool to perform the calculation. The user inputs the problem in natural language, and the foundation model converts it to the format required by the calculator (numbers and math operations). The calculation is performed, and the answer is converted back into free language, ensuring a seamless user experience.
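A minimal sketch of such a calculator tool: the model is assumed to have already converted the natural-language problem into an arithmetic expression string, which is then evaluated by deterministic code (only basic operators are allowed) rather than by the language model itself:

```python
import ast
import operator

# Whitelist of permitted arithmetic operations.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate an arithmetic expression string without using eval(),
    accepting only numbers and whitelisted operators."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expr, mode="eval"))
```

The exact value returned by the calculator is then handed back to the model to be phrased in natural language, so the arithmetic itself is never left to statistical prediction.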

Legal Compliance

When it comes to using foundation models in business systems, organizations need to take steps to ensure that the use of these models is compliant with legal requirements and mitigates potential risks. There are techniques that make this process easy and comprehensive.

Incorporating a legal statement as context within the language model can help organizations mitigate risks and ensure compliance with legal requirements.

Our approach at iSolutionsAI is to incorporate a company legal statement as context within the language model API calls

A legal statement can provide clarity on the scope and limitations of the model’s use and ensure that it is compliant with relevant regulations and laws.

For example, it can specify that the foundation model is only to be used for certain business purposes and that it is not to be used for any illegal or unethical activities.

We can work closely with your legal staff to craft legal statements that provide clarity on the scope and limitations of the language model’s use and ensure that it is compliant with relevant regulations and laws.

We then incorporate these as context in our foundation model calls. Ultimately helping to protect our client’s reputation and reduce the likelihood of legal or regulatory penalties that interfacing with raw language models might expose.
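As a minimal sketch of this technique, a legal statement (the wording below is hypothetical; in practice it is drafted with the client's legal staff) is prepended as a system message to every foundation model call:

```python
# Hypothetical legal statement; real wording is drafted with the
# client's legal staff.
LEGAL_STATEMENT = (
    "This assistant may be used only for approved business purposes. "
    "It must not provide legal, medical, or financial advice, and it "
    "must not assist with illegal or unethical activities."
)

def with_legal_context(messages: list) -> list:
    """Prepend the legal statement as a system message so every
    model call carries the compliance constraints."""
    return [{"role": "system", "content": LEGAL_STATEMENT}] + messages
```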

Access to Proprietary Data

Language models are like locked boxes that you can use to understand text data, but you can’t change how they work.

However, you will want to leverage the power of a foundation model to analyze data you have on hand, such as the inventory in your warehouse, the sales figures for your online store, or the performance metrics for your marketing campaigns.

As a company with years of experience in building custom software applications, iSolutionsAI believes it is critical that our AI solutions can interface with your databases, allowing you to interact with your data and uncover the insights you need in previously unimaginable ways.

In addition to extensive custom software experience, iSolutionsAI also brings a team of data scientists and custom machine learning model builders to your AI deployments to give your data super-powers before it is integrated with language models. The ML+AI approach is a critical key to optimizing your data for effective language model integration.

This philosophy is anchored in the premise that your business’ data is what makes your organization unique and can lead to differentiation within your market. So your AI implementations should use as much of your secret sauce as possible.

Our philosophy on how every business can leverage their own data using AI in ways not previously possible.

We describe that philosophy in detail here:

For example, you could ask the system to “Show me the most popular product in my store” or “Find all the customers who spent over $500 in the last month.”

Just Ask Your Data with help from language models
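Under the hood, this "ask your data" pattern typically has the model translate the natural-language question into SQL, which is then executed against the database. Here is a minimal sketch using an in-memory SQLite table; the generated query string below stands in for what a model might return, and only read-only SELECT statements are allowed as a basic safety check:

```python
import sqlite3

def run_generated_sql(conn, sql: str):
    """Execute a (hypothetical) model-generated query, allowing
    only read-only SELECT statements as a basic safety check."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are allowed")
    return conn.execute(sql).fetchall()

# Toy store data; in practice the schema is supplied to the model
# as context so it can translate natural language into SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, qty INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("tent", 12), ("stove", 30), ("lantern", 7)])

# Assumed model translation of "Show me the most popular product
# in my store":
generated = "SELECT product FROM sales ORDER BY qty DESC LIMIT 1"
```

The query result is then passed back to the model to be summarized in plain language for the user.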

Moreover, our approach also enables you to combine multiple data sources and repositories that were previously incompatible. This means you can create a more comprehensive dataset that incorporates information from different departments, third-party vendors, and other sources.

For instance, you could merge your customer database with your social media analytics tool to gain a more holistic view of your brand’s online reputation.

We do a full inventory of your data before creating models in order to strategize ways to incorporate them all into your language model deployments.

Latent Intelligence

While “asking” your data or documents questions has become popular in the AI community, at iSolutionsAI we focus primarily on the intelligence that lives within your business’s data, without users having to ask.

Our approach begins with creating a “single source of truth” for all your proprietary business data.

Then, we create environments where the latent intelligence within this data can be constantly evaluated to provide insights that are specifically useful for the personas interacting with the data.

We call this “Latent Intelligence”: the hidden patterns, trends, and insights that are embedded within a business’s proprietary data. It goes beyond surface-level information and uncovers valuable knowledge that can drive strategic decision-making.

We create a custom environment to dynamically extract these insights without users explicitly asking for them.

We use processes like Efficient Data Processing, Exploratory Data Analysis, Feature Engineering, Anomaly Detection, and Unsupervised Learning, coupled with our years of experience creating custom Machine Learning models, to provide predictive analytics and advanced insights.
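As one small example of what such processes look like in practice, here is a minimal z-score anomaly check on a hypothetical daily-orders metric; production anomaly detection uses far richer models, but the idea of surfacing outliers without anyone asking is the same:

```python
import statistics

def anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from
    the mean: a simple z-score check, one of many anomaly
    detection methods."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

daily_orders = [100, 98, 103, 101, 97, 102, 300]  # hypothetical metric
```

A system running checks like this continuously can surface the spike to the relevant persona the moment it appears, rather than waiting for someone to pull a report.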

The days of looking backwards at your data using dashboards and reports are gone; the era of looking forward with insights and predictive analysis is now upon businesses.

The combination of latent intelligence and dynamic analytics tools allows businesses to extract valuable insights from their proprietary data without explicitly asking for them. By leveraging these environments, organizations can uncover hidden knowledge, make data-driven decisions, improve operational efficiency, enhance customer experiences, and gain a competitive edge in the market.

The Right Model for the Right Job

In the fascinating world of natural language processing, there are a number of different models that serve various purposes, each with their own unique attributes and capabilities.

Two prominent types include large language models (LLMs) like OpenAI’s GPT, and open-source models such as LLaMA or MosaicML’s MPT.

LLMs like GPT are pre-trained on a vast array of internet text, and they excel at generating human-like text based on the input they receive. Their main advantage lies in their ability to handle a wide range of tasks without needing task-specific training data.

On the other hand, open-source models like LLaMA or MosaicML’s MPT provide a different value proposition. These models, often created and maintained by a community of researchers and developers, offer transparency, customizability, and control.

The main advantage is that you can run them in your own secure environment, which can be particularly beneficial for businesses dealing with proprietary data. That’s right, a foundation model that you can integrate with your proprietary data that does not expose your data, keeping your AI strategies secure and safe.

At iSolutionsAI, we constantly monitor and sandbox open-source LLM offerings in order to find the right model for the job and to help our clients keep all of their data secure while still taking advantage of modern LLM capabilities.

Turning Deficiencies into Differentiators

At iSolutionsAI, we understand the limitations of connecting directly to raw language models and our Playbook for AI turns them into advantages for our clients.

We leverage tools and APIs to help our clients pull in safe, factual content, moderate conversations, and ensure scoring accuracy.

Our agents, chains, and embedding techniques help improve context and eliminate the relevance gap.

We also offer fine-tuning and reinforcement learning to help our clients include proprietary data into AI in a truthful and consistent manner.

Additionally, we embed legal terms to ensure compliance and mitigate risks.

We also ensure the security of our clients’ data when interacting with LLMs through our techniques, or even by deploying systems locally with open-source models.

With our proven approach, our clients can harness the full potential of foundation models while minimizing risks and ensuring compliance.

Opportunities for Integrating Foundation Models for Business

The range of use cases for foundation models is limited only by imagination.

We have created AI solutions that help our clients differentiate themselves from their competition and “do more with less”, some examples include:

  • “Asking” docs
  • “Asking” data
  • Latent Intelligence
  • Business brains
  • Chat interfaces
  • Companions
  • Copywriting
  • Summarizing
  • Semantic search
  • Knowledge base improvements
  • so much more!

Overall, foundation models can help CEOs improve efficiency, reduce costs, and improve customer satisfaction.

By choosing iSolutionsAI to build AI solutions, businesses can harness the power of these tools and achieve their business objectives more effectively.

CONTACT US NOW TO START YOUR JOURNEY

Visit iSolutionsAI.com to start a conversation about how AI can help your organization do more with less


Written by iSolutions

Multiple award-winning experts in custom applications, machine learning models and artificial intelligence for business.
