Generative AI hallucinations: What can IT do?

BrandPost By Sharon Maher, Dell Technologies
Nov 07, 2023 | 5 mins
Artificial Intelligence

IT can reduce the risk of generative AI hallucinations by building more robust systems or training users to more effectively use existing tools.

Generative AI adoption is growing in the workplace, and for good reason. Studies indicate the potential for significant productivity gains: workers completed some writing projects 40% faster in a study published in Science, and developers were able to complete certain tasks up to 30% faster according to McKinsey research. But these productivity gains come with one of generative AI’s known Achilles’ heels: its tendency to occasionally “hallucinate,” or present incorrect information as fact.

Hallucinations can be problematic for organizations racing to adopt generative AI. In a perfect world, generative AI outputs would not need to be rigorously scrutinized. But in the instances where erroneous information from a GenAI hallucination makes it out to the public, the results can be embarrassing and can erode brand trust and credibility.

What IT can do about generative AI hallucinations

Fortunately, there are actions IT organizations can take to reduce the risk of generative AI hallucinations, either through decisions they make within their own environments or through how they train internal users to use existing tools. Here are several options IT can use to get started.

Use retrieval-augmented generation (RAG)

Retrieval-augmented generation (RAG) is a technique that allows a model to retrieve information from a specified dataset or knowledge base. This approach lets a large language model generate answers grounded in relevant documents drawn from your own data source, which can result in more relevant and accurate outputs. What’s valuable about RAG is that it can be reasonably easy to stand up: it can run on existing infrastructure, and code snippets are readily available online.
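To make the pattern concrete, here is a minimal RAG sketch in Python. The embed and generate functions are toy stand-ins (not any particular library’s API) for the embedding model and LLM endpoint your environment provides, and the documents and question are invented for illustration; the point is the retrieve-then-augment flow.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-characters embedding; swap in a real embedding model."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec

def generate(prompt: str) -> str:
    """Placeholder for your LLM endpoint; echoes the prompt for inspection."""
    return f"[LLM response to]\n{prompt}"

# The knowledge base: your own vetted documents.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm ET, Monday through Friday.",
]
doc_vectors = np.array([embed(d) for d in documents])

def answer_with_rag(question: str, top_k: int = 2) -> str:
    # Retrieve: rank documents by cosine similarity to the question.
    q = embed(question)
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    # Augment: instruct the model to answer only from the retrieved context.
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
    return generate(prompt)

print(answer_with_rag("What is the refund policy?"))
```

Because the model is told to answer only from the retrieved context, it has less room to invent facts, which is where the hallucination reduction comes from.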

Consider fine-tuning a large language model

Retrieval-augmented generation can be a useful technique for getting more accurate outputs, but it doesn’t change the underlying large language model you’re working with. For that, you’d need to move on to fine-tuning: a supervised process that retrains a large language model on additional data so that it generates more accurate content in that domain. RAG and fine-tuning do not need to be an either/or proposition; in fact, a fine-tuned model paired with RAG has been shown to significantly reduce hallucinations.
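For illustration, here is a minimal fine-tuning sketch using the Hugging Face transformers Trainer API. The small open model (gpt2) and the two-example dataset are placeholders; a production fine-tune would use your own vetted, domain-specific data at much larger scale, plus held-out evaluation.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder base model; substitute the model your organization has approved.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy supervised examples; real fine-tuning needs far more curated data.
examples = [
    {"text": "Q: What is our return window? A: 30 days from purchase."},
    {"text": "Q: Who handles support requests? A: The internal help desk."},
]

def tokenize(batch):
    tokens = tokenizer(batch["text"], truncation=True,
                       padding="max_length", max_length=64)
    # Causal LM objective: the model learns to predict its own input.
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

dataset = Dataset.from_list(examples).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # updates the base model's weights on your data
```

Unlike RAG, this changes the model’s weights, which is why it demands more data, compute, and care, and why the two techniques complement each other.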

Employ prompt engineering

Prompt engineering is the practice of carefully crafting the inputs you give a large language model to steer its responses. Certain prompt engineering techniques can get models to respond in more predictable ways and can increase the accuracy of problem-solving. However, prompt engineering is limited in that it cannot increase the knowledge of the base model; in many ways, it comes down to trial and error, learning which prompts deliver good results and then using them reliably.
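As a small illustration (the ticket categories and examples are invented), a few-shot prompt embeds worked examples directly in the input, steering the model toward a predictable output format without changing the model itself:

```python
# Few-shot prompt: in-prompt examples steer the model toward a consistent,
# predictable output format; the base model itself is unchanged.
FEW_SHOT_PROMPT = """\
Classify each support ticket as HARDWARE, SOFTWARE, or BILLING.

Ticket: "My laptop won't power on."
Category: HARDWARE

Ticket: "I was charged twice this month."
Category: BILLING

Ticket: "{ticket}"
Category:"""

def build_prompt(ticket: str) -> str:
    return FEW_SHOT_PROMPT.format(ticket=ticket)

# Send build_prompt(...) to your LLM of choice; it will tend to answer
# with one of the three labels rather than free-form text.
print(build_prompt("The app crashes when I export a report."))
```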

Teach generative AI best practices to everyday users

This last step cannot be neglected: ensure users have adequate training in getting the most from large language models and are following best practices like peer review and fact-checking. Teach rank-and-file users how to author prompts in ways that are more likely to produce high-quality outcomes. For example, are they using clear language and providing adequate context within their prompts? Likewise, once they have an output, are they reviewing the content with internal subject matter experts and peers? These commonsense practices can reduce errors and ensure content is up to snuff before it is seen publicly.
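To make the point concrete, here is an invented before-and-after pair showing how clear language and context change a prompt (the product facts are placeholders):

```python
# Vague prompt: invites the model to fill gaps with invented details.
weak_prompt = "Write a blog post about our new product."

# Clearer prompt: explicit task, audience, constraints, and grounded facts.
strong_prompt = (
    "Write a three-paragraph blog post for IT decision-makers announcing "
    "our new product. Use ONLY the facts below; if a detail is missing, "
    "omit it rather than guessing.\n"
    "Facts:\n"
    "- Available in Q1\n"
    "- 20% faster than the previous generation\n"
)
```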

The antidote to hallucinations: Where IT goes from here

As organizations consider their generative AI journeys, the risk of AI hallucinations may be a cause for concern, but with the right strategies in place, IT can reduce that risk and realize generative AI’s promise. Many IT organizations will likely employ several of these approaches, for example pairing model training or augmentation with user education for the broadest possible coverage. It’s also worth noting that these strategies are not exhaustive; what works for each organization will depend on its specific use cases and available resources. IT organizations will also want to consider which deployment options will give them the right mix of security and customization to meet their needs.

No matter where you are in your GenAI journey, the steps above can help. And if you need more guidance, enlisting the support of partners can get you there faster. At Dell, we work with organizations every day to help them identify use cases, put solutions in place, increase adoption, and even train internal users to speed up innovation.

To learn more, visit dell.com/ai.