Controlling Hallucinating LLMs with Truth

iSolutions
10 min read · Nov 1, 2023


How Businesses can Transform AI Hurdles into Advantages

TL;DR:

  • Language model hallucinations produce convincing but incorrect text, a challenge for businesses, but there are techniques and strategies to mitigate these issues and leverage AI effectively.
  • Providing comprehensive prompts with context to language models helps control hallucinations, ensuring more accurate and reliable output in various applications.

One of the more common objections to leveraging language models for business is the concern over hallucinations in language model responses.

Business decision makers should be aware of techniques available today that not only control these hallucinations, but also allow their business to leverage its own truth in previously impossible ways.

Intelligent business decision makers can turn these concerns into advantages when evaluating their AI strategies.

What are hallucinations?

Language model hallucinations refer to situations where large language models generate convincing but false or nonsensical text. This can happen when the models do not have enough context or knowledge about a particular topic, so the model attempts to “fill in the gaps” by generating plausible-sounding but incorrect text.

This phenomenon happens because language models are neither databases nor intended to be information sources; rather, they are trained to generate text probabilistically.

During training, a transformer (the specific type of neural network architecture used in language models) is shown many sentences and texts. The model learns to predict the next word in a sequence given the previous words and context. To do this, it learns statistical associations between words, noting how often word A is followed by word B, word C, or word D in its training data.

For example, after training on many sentences, the model may note these example transition frequencies:

- “The” is followed by “cat” 50 times

- “The” is followed by “dog” 40 times

- “The” is followed by “car” 30 times

- “The” is followed by other nouns 500 times

So the probability distribution for the next word after seeing “The” becomes:

- P(“cat” | “The”) = 50/620 = 0.08

- P(“dog” | “The”) = 40/620 = 0.06

- P(“car” | “The”) = 30/620 = 0.05

- P(other noun | “The”) = 500/620 = 0.81

Then, if generating text given the prompt “The”, it will randomly sample from this probability distribution to pick the next word. Words like “cat” and “dog” will have higher chances than “car”, but generic nouns are most likely.

The model builds up similar probability tables for many common word pairs and sequences based on counts in the training data. These allow the model to generate words probabilistically during completion.
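
For intuition only, here is a minimal Python sketch of that idea, using the toy counts from the example above. Real models learn these relationships with neural networks over long contexts rather than simple count tables, but the sampling step works the same way.

```python
import random

# Toy next-word counts after the word "The", mirroring the example above.
# Real language models learn these relationships with neural networks,
# not count tables, but sampling from a probability distribution is the same idea.
next_word_counts = {"cat": 50, "dog": 40, "car": 30, "<other noun>": 500}

total = sum(next_word_counts.values())  # 620
probabilities = {word: count / total for word, count in next_word_counts.items()}
print(probabilities)  # cat ~0.08, dog ~0.06, car ~0.05, other nouns ~0.81

# Pick the next word in proportion to its probability.
words = list(probabilities)
weights = list(probabilities.values())
next_word = random.choices(words, weights=weights, k=1)[0]
print("The", next_word)
```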

So in this way, the model learns the statistical associations between words that allow it to generate new text that conforms to the patterns in the training data, even completing specific sentences it hasn’t seen before. The probabilities quantify the word relationships.

When a human poses a “question” (referred to as a “prompt”) to the model, it responds by generating a continuation of the prompt text, not by actually understanding or answering the question. The words the model predicts are based on the statistical probabilities learned from the training data, not any true comprehension or reasoning about the question.

For instance, if prompted with

What is the capital of France?

the model might respond

The capital of France is Paris.

But this merely continues the question text pattern statistically, not because the model actually knows Paris is the capital.

So while the model may seem to provide plausible answers to questions, it is an illusion of intelligence. The model uses the probabilities derived from its training data to complete text in a statistically coherent way, not to convey any real knowledge or understanding about the topic. It does not truly comprehend the question asked nor the response it generates.

However, when prompted with unusual word combinations or topics not well represented in its training data, the transformer does not have strong statistical guidance on what words should follow plausibly. So it ends up sampling words based on weaker statistical patterns learned from different contexts.

This can result in coherent but false or nonsensical continuations, because the statistical associations being used to generate the text probabilistically do not accurately represent facts about the specific prompt topic. The transformer is just predicting words that tend to follow previous words generally, not words that are necessarily true.

Therefore, the large language model should not be mistaken as having the capabilities of a knowledgeable search engine or database. It cannot retrieve or understand information, only textually continue prompts probabilistically.

When hallucinations don’t matter

Hallucinations in language models can actually be quite beneficial in creative and artistic domains, where the goal is not to convey factual information but to ignite imagination and innovation.

In storytelling and fiction writing, probabilistic completions can infuse unpredictability and fantastical elements into narratives, introducing unique plot twists or magical realms. Similarly, poetry and artistic expression benefit from hallucinations as they evoke metaphorical language and abstract ideas, enhancing the depth and impact of artistic works.

In business use cases, they can help generate catchy product slogans, drive innovation workshops, act as stand-in focus groups, and help decision makers brainstorm ideas.

Providing Truth to control hallucinations

Another example of how a business can use a language model is to automate the summarization of meeting transcripts: extracting key points, highlighting important discussions, and even isolating next steps.

In this example, hallucinations pose minimal risk because the model’s role is to condense and rephrase existing information rather than generate new, potentially inaccurate content.

All of the truth, in this case, is provided to the language model as additional context included in the prompt itself.

The prompt is not just a question but rather a set of instructions or initial input given to the language model, guiding it on how to perform the task. In the case of meeting transcript summarization, the prompt includes the instruction to summarize along with the transcript itself, making all the necessary information available to the model.

This ensures that the model doesn’t need to generate hallucinations or fabricate information because it can extract and rephrase the facts presented in the provided transcript.

In business use cases involving language models, a prompt should be viewed as a comprehensive set of instructions along with the context required to perform the task accurately. This combined prompt serves as the guiding framework for the model to understand the task and generate a meaningful response.

A proper prompt typically consists of two essential components:

1. Question or Instruction: The first part of the prompt provides explicit instructions to the language model, specifying the desired task or action it should perform. For example, in the case of meeting transcript summarization, the instruction could be something like “Summarize the key points from the following meeting transcript.”

2. Context and Ground Truth: The second part of the prompt contains the context or reference material that serves as the source of truth or information for the model. This context is crucial as it contains the factual details, data, or information that the model needs to refer to when generating its response. In the context of summarizing meeting transcripts, this section includes the actual transcript itself, which is the authoritative source of information.

For instance, the prompt could explicitly state,

Generate a summary based solely on the information presented in the provided transcript. [TRANSCRIPT TEXT]

This instruction reinforces the idea that the model should not create fictional details or make assumptions beyond what is presented in the provided context.
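
As a rough sketch of how the two parts fit together in code, the example below assembles the instruction and the transcript into a single request. The model name, the file name, and the use of the OpenAI Python SDK are illustrative assumptions; any chat-style API accepts the same pattern of instruction plus context.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat-style API follows the same pattern

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Part 1: the explicit instruction.
instruction = (
    "Summarize the key points from the following meeting transcript. "
    "Generate the summary based solely on the information presented in the transcript."
)

# Part 2: the context / ground truth -- the transcript itself (hypothetical file name).
with open("meeting_transcript.txt") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": instruction},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```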

Ground Truth

The concept of providing truth to models is quite common. In various fields, including data analysis, machine learning, and research, “ground truth” refers to the absolute and verified truth or reality against which other measurements or data are compared or evaluated.

For example, in machine learning tasks such as image classification, the ground truth consists of the correct labels or categories for each image in a dataset. Algorithms are trained and tested against this ground truth to measure their accuracy in making predictions.
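
As a tiny, purely illustrative example of that comparison, accuracy is simply the fraction of predictions that match the verified labels:

```python
# Hypothetical predictions vs. verified ground-truth labels for five images.
ground_truth = ["cat", "dog", "cat", "bird", "dog"]
predictions  = ["cat", "dog", "dog", "bird", "dog"]

correct = sum(p == g for p, g in zip(predictions, ground_truth))
accuracy = correct / len(ground_truth)
print(f"Accuracy: {accuracy:.0%}")  # Accuracy: 80%
```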

Providing context in prompts to a language model is a way of offering ground truth to the language model, especially in tasks that require generating accurate and contextually relevant responses.

By embedding context within prompts, you essentially provide the model with the necessary background information, guidelines, or factual references required to generate responses that adhere to the ground truth.

This practice enhances the model’s ability to generate accurate, relevant, and contextually appropriate content, making it a valuable tool in various applications that rely on the provision of ground truth to ensure reliable results.

Truth Injection Techniques

Retrieval Augmented Generation (RAG) is a mechanism designed to enhance the performance and reliability of large language models by dynamically incorporating external information as context within prompts.

In business scenarios where nuanced performance is crucial and raw language models may fall short, controlling truth by providing proprietary data to the models is the best way to combat hallucinations.

RAG significantly reduces the likelihood of hallucination by relying on actual documents, data, APIs, and other sources in the system to generate responses, ensuring that the generated information is grounded in real, verifiable data.

Integrating RAG techniques like converting documents to vectors, using semantic search, and connecting databases and APIs to prompts can dynamically add truth to your interactions with language models, enabling businesses to maintain control over the accuracy and relevance of the information generated by these models.

Here’s how this process can work:

Converting Documents to Vectors:

Businesses can convert documents, reports, or other information into numerical representations known as vectors. These vectors capture not only what words and documents mean but also how they relate to each other. This makes it possible to find the content most relevant to a given task and send it along with the prompt, helping the model generate responses that align with the meaning and intent of the task.
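
A minimal sketch of this step, assuming the open-source sentence-transformers library and an illustrative model name (hosted embedding APIs work similarly), might look like this:

```python
from sentence_transformers import SentenceTransformer  # assumed embedding library

# Illustrative model choice; any text-embedding model produces comparable vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Q3 revenue grew 12% driven by the new subscription tier.",
    "The support team resolved 95% of tickets within 24 hours.",
    "Our refund policy allows returns within 30 days of purchase.",
]

# Each document becomes a fixed-length vector that captures its meaning.
doc_vectors = model.encode(documents)
print(doc_vectors.shape)  # e.g. (3, 384): three documents, 384 dimensions each
```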

Semantic Search:

Semantic searching is a way for businesses to leverage vectorized documents, passages, or data points that closely match the meaning and context of a given query or prompt. Unlike basic keyword matching, semantic search takes into account the actual meaning and context behind words and phrases, making it easier to pinpoint the most relevant information. This process creates a robust foundation for extracting contextually rich insights from data and providing them as context within prompts to language models.
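
Building on the same assumptions as the sketch above, a bare-bones semantic search can rank document vectors against the vector of a question using cosine similarity; the best match is found by meaning rather than shared keywords.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding library

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

documents = [
    "Q3 revenue grew 12% driven by the new subscription tier.",
    "The support team resolved 95% of tickets within 24 hours.",
    "Our refund policy allows returns within 30 days of purchase.",
]
doc_vectors = model.encode(documents)

# Embed the question with the same model, then rank documents by meaning.
query_vector = model.encode("How long do customers have to send a product back?")

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine_similarity(query_vector, vec) for vec in doc_vectors]
best = int(np.argmax(scores))
print(documents[best])  # -> the refund-policy document, despite little keyword overlap
```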

Connecting Databases and APIs:

Businesses can further strengthen their information retrieval capabilities by integrating query responses from their databases and external APIs into prompts as context. This provides the model with dynamic access to up-to-date and precise data directly from proprietary sources and external APIs, guaranteeing that the responses remain firmly rooted in real-time information that is unique to the organization.
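
Here is one hedged sketch of that pattern, with a hypothetical orders table and the same illustrative chat API as before; the fresh query results are simply folded into the context portion of the prompt:

```python
import sqlite3
from openai import OpenAI  # assumed SDK; any chat-style API works similarly

# Hypothetical orders database; in practice this is the business's own system or API.
conn = sqlite3.connect("orders.db")
rows = conn.execute(
    "SELECT order_id, status, expected_delivery FROM orders WHERE customer_id = ?",
    ("C-1042",),
).fetchall()

# Fold the fresh query results into the context portion of the prompt.
context = "\n".join(
    f"Order {order_id}: status={status}, expected delivery={expected}"
    for order_id, status, expected in rows
)

prompt = (
    "Answer the customer's question using only the order records below.\n\n"
    f"Order records:\n{context}\n\n"
    "Question: When will my most recent order arrive?"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```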

By connecting proprietary data sources and employing semantic search, businesses maintain control over the “truth” presented in their prompts, ensuring that responses generated by the language model align with the most accurate and relevant data from their databases and APIs.

These techniques empower businesses to leverage language models while retaining control over accuracy and relevance, which is particularly valuable in industries where data precision and proprietary information compliance are vital, such as finance, healthcare, and legal sectors. This approach enables businesses to harness language model capabilities while ensuring that the generated responses remain trustworthy and closely aligned with their proprietary data sources and truth.

A New Interface Layer to Business Data

Using these techniques to incorporate truth into prompts for language models serves a dual purpose. Firstly, it effectively reduces the potential for hallucinations by supplying the model with the information it needs to evaluate and generate responses as part of the prompt. By doing so, it ensures that the generated content aligns with verified data, mitigating the risk of inaccurate or speculative information.

But a hidden benefit is that this approach introduces a transformative dimension to business information management. It eliminates the need for traditional paradigms like the presentation layers of databases or web front-ends as the sole means of data interaction.

Instead, it enables businesses to consolidate their diverse data sources into a unified “source of truth.” This centralized data repository then becomes the foundation for a dynamic and efficient modern interface layer, allowing users to interact with proprietary data seamlessly and in real-time.

Users gain the ability to access, query, and derive insights from this unified data source through intuitive input layers, creating a more agile and responsive environment for informed decision-making and operational efficiency.

Ultimately, this innovative approach revolutionizes how businesses harness their data, making it a valuable asset in achieving their goals and staying competitive in rapidly evolving markets.

How to Get Started

Businesses should view these new techniques as a reason to take inventory of all their business data. By leveraging retrieval techniques, businesses can consolidate these disparate data sources into a single “source of truth,” which serves as a centralized repository of verified information.

Before even looking at how to integrate language models, businesses can start taking inventory of their proprietary data immediately. For example, they can:

1. Identify all data sources and document data types.

2. Classify data by sensitivity and importance.

3. Map data flows and dependencies.

4. Assess and improve data quality.

5. Establish data governance, ownership, and retention policies.

This consolidation of data into a single source of truth with newly defined governance, ownership, and retention policies not only streamlines data management but also paves the way for exploring a new interface layer — a modern mechanism for interacting with data that was previously unattainable.

With language models at the core, businesses can now provide users with a dynamic and intuitive means of engaging with their proprietary data. This interface layer offers unparalleled flexibility and efficiency, enabling users to access, query, and gain insights from the unified data source in ways never before possible.

By embracing language models along with these techniques, businesses not only enhance data governance and accuracy but also unlock a transformative potential for how they leverage and interact with their data assets. This holistic approach empowers organizations to extract more value from their unique data and make data-driven decisions with newfound agility and precision, ultimately driving innovation and competitiveness in their respective industries.

By choosing iSolutionsAI to build AI solutions, businesses can harness the power of these techniques and achieve their business objectives more effectively.

CONTACT US NOW TO START YOUR JOURNEY

Visit iSolutionsAI.com to start a conversation about how AI can help your organization do more with less

