AI hallucinations are not going anywhere. Case in point: Google’s generative AI model, Gemini, made headlines earlier this year for generating images that avoided depicting white men. The results were strange – Asian Nazis, black Vikings, and female Popes – and caused serious concerns about the future of AI and its potential impact on the world. Why couldn’t an AI model built by Google – a company renowned for organizing information – get basic facts right?
These errors are not unique to Gemini. Hallucinations occur when generative AI produces incorrect or misleading outputs but presents them as factual. Recently, an attorney using OpenAI’s ChatGPT presented fabricated case law in court, and Microsoft’s AI falsely claimed a ceasefire in Israel. The implications of these hallucinations are far-reaching, risking the spread of misinformation, amplifying biases, and causing costly errors in critical systems.
But here’s the reality: AI hallucinations aren’t bugs in the system; they’re features of it. Generative models produce the most statistically plausible continuation of their input, not statements checked against ground truth, so no matter how well we build them, they will hallucinate. Instead of chasing the impossible dream of eliminating hallucinations, our focus should be on rethinking model development to reduce their frequency and on adding safeguards that mitigate the risks they pose.
Reducing AI Hallucinations
Most hallucinations come down to one thing: poor data quality. We can reduce their frequency and severity by refining the datasets used to train these models and by improving their architecture. By cleaning and curating datasets, researchers can minimize the errors and biases that lead to hallucinations. For instance, an AI model trained on a dataset dominated by urban environments might, when asked to analyze rural infrastructure, incorrectly recommend high-rise buildings rather than low-density housing. It’s like handing someone the wrong map and expecting them to find the right destination. Refining the dataset to include a more balanced mix of urban and rural examples helps the model understand and generate accurate solutions for diverse geographical areas.
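To make that curation step concrete, here is a minimal Python sketch of one way to rebalance a labeled dataset before training. The "environment" field, the downsampling strategy, and the example counts are illustrative assumptions, not a prescribed pipeline.

```python
import random
from collections import defaultdict

def rebalance(records, key="environment", seed=0):
    """Downsample over-represented categories so each category
    contributes roughly the same number of training examples."""
    random.seed(seed)
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[key]].append(rec)
    target = min(len(items) for items in buckets.values())
    balanced = []
    for category, items in buckets.items():
        balanced.extend(random.sample(items, target))
    random.shuffle(balanced)
    return balanced

# Illustrative skew: 90% urban vs. 10% rural examples before curation
data = [{"environment": "urban", "text": "..."}] * 900 + \
       [{"environment": "rural", "text": "..."}] * 100
print(len(rebalance(data)))  # 200 examples, 100 per category
```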
Data augmentation is another defense. It expands a dataset by generating new samples, which helps prevent overfitting, a common cause of hallucinations. One common technique is the generative adversarial network (GAN), in which two neural networks, one generating synthetic data and the other evaluating it, are trained together. This adversarial setup allows GANs to create realistic, diverse data samples that resemble real-world scenarios. Imagine training a medical AI system with additional synthetic images of rare diseases: because such cases are uncommon in real-world data, augmenting the dataset with synthetic examples can significantly improve the system’s performance.
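For readers who want to see the mechanics, below is a minimal GAN training loop in PyTorch, stripped to its essentials: one network generates synthetic samples, the other scores them. The layer sizes, learning rate, and the random stand-in for real data are placeholder assumptions; a real augmentation pipeline would train on actual domain data and validate the synthetic samples before using them.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes

# Generator maps random noise to synthetic samples
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim))
# Discriminator scores how "real" a sample looks
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real_batch = torch.randn(32, data_dim)  # stand-in for real training data
    noise = torch.randn(32, latent_dim)
    fake_batch = G(noise)

    # Train the discriminator to separate real from synthetic samples
    d_loss = loss_fn(D(real_batch), torch.ones(32, 1)) + \
             loss_fn(D(fake_batch.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator
    g_loss = loss_fn(D(fake_batch), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G(torch.randn(n, latent_dim)) yields synthetic samples
# that can be mixed into the dataset as augmentation.
```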
Then there’s the model architecture. Hybrid models, which combine transformers with reasoning engines or knowledge graphs, are showing promise in keeping hallucinations in check by grounding generation in factual knowledge. Continual learning, where models are updated with new information over time, is another promising approach.
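As a rough illustration of the grounding idea, the sketch below prepends facts retrieved from a small in-memory "knowledge graph" to the prompt before generation, so the model works from verified context rather than parametric memory alone. The graph contents, prompt wording, and the grounded_prompt helper are invented for the example; a production system would query a real graph or retrieval index and pass the result to its language model.

```python
# Toy knowledge graph: entity -> list of (relation, value) facts
KNOWLEDGE_GRAPH = {
    "Encord": [("founded", "2020"),
               ("focus", "data infrastructure for AI development")],
}

def grounded_prompt(question, entity):
    """Build a prompt that injects verified facts about the entity,
    instructing the model to answer only from that grounded context."""
    facts = KNOWLEDGE_GRAPH.get(entity, [])
    fact_lines = "\n".join(f"- {rel}: {val}" for rel, val in facts)
    return (f"Known facts about {entity}:\n{fact_lines}\n\n"
            "Answer using only the facts above. "
            "If they are insufficient, say so.\n\n"
            f"Question: {question}")

print(grounded_prompt("When was Encord founded?", "Encord"))
# The resulting string would then be sent to the language model of your choice.
```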
Human Involvement in AI Training
Since hallucinations are inevitable, incorporating human-in-the-loop approaches is essential. We can’t deploy AI and expect it to be perfect. In industries where the stakes are high, such as healthcare, legal, or financial services, having human experts actively review AI outputs can drastically reduce the risk of harmful outcomes. Active learning lets those experts guide model training by correcting errors or labeling uncertain predictions. Techniques like red teaming, where experts deliberately try to break the system, help expose vulnerabilities before deployment, making the model more reliable in real-world use.
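A sketch of the routing logic behind such a human-in-the-loop setup might look like the following. The confidence threshold, the stand-in model, and the route_predictions helper are illustrative assumptions rather than any specific product’s API; the point is simply that low-confidence outputs go to experts, whose corrections can feed back into training.

```python
def predict_with_confidence(model, example):
    """Placeholder: return (label, confidence) from whatever model you use."""
    return model(example)

def route_predictions(model, examples, threshold=0.8):
    """Auto-accept confident predictions; queue uncertain ones for experts,
    whose corrected labels can later be used to retrain the model."""
    auto_accepted, needs_review = [], []
    for example in examples:
        label, confidence = predict_with_confidence(model, example)
        if confidence >= threshold:
            auto_accepted.append((example, label))
        else:
            needs_review.append((example, label, confidence))
    return auto_accepted, needs_review

# Illustrative stand-in model: always predicts "benign" with varying confidence
fake_model = lambda x: ("benign", x["score"])
batch = [{"id": 1, "score": 0.95}, {"id": 2, "score": 0.55}]
accepted, review_queue = route_predictions(fake_model, batch)
print(len(accepted), "auto-accepted;", len(review_queue), "routed to experts")
```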
Another key area of focus is automated fact-checking. AI systems can be integrated with external knowledge bases to verify claims in real time, flagging potential hallucinations for human review. This hybrid human-AI approach means that even when a model goes off the rails, we can catch the error and correct it before it causes significant harm.
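The sketch below shows that flag-for-review flow in its simplest possible form, using exact matching against a tiny in-memory knowledge base. Real fact-checkers would use retrieval and entailment models rather than string comparison, and the knowledge base entries and example answer here are invented for illustration.

```python
import re

# Stand-in knowledge base of verified statements (illustrative only)
KNOWLEDGE_BASE = {
    "the eiffel tower is in paris",
    "water boils at 100 degrees celsius at sea level",
}

def fact_check(answer):
    """Split a model answer into sentences and flag any sentence that is
    not supported by the knowledge base, so a human can review it."""
    sentences = [s.strip().lower()
                 for s in re.split(r"[.!?]", answer) if s.strip()]
    return [s for s in sentences if s not in KNOWLEDGE_BASE]

answer = "The Eiffel Tower is in Paris. A ceasefire was declared yesterday."
for sentence in fact_check(answer):
    print("FLAG for human review:", sentence)
```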
Finally, building transparency into AI systems is essential for managing hallucinations. Encouraging models to provide citations for their outputs, or implementing mechanisms through which a model explains its reasoning, helps users identify when a hallucination has occurred. If a system produces a questionable output, the ability to trace the information back to its source, or to see how the model arrived at its conclusion, gives users a way to validate or challenge it. This transparency not only helps catch hallucinations early but also builds trust in the AI by giving users the tools to understand its decision-making process.
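One lightweight way to make citations checkable is to verify that every source a model cites was actually among the documents it was given. The [doc-N] citation convention, the RETRIEVED_SOURCES set, and the example answer below are assumptions made purely for illustration; the underlying idea is that a citation pointing at nothing the system retrieved is a strong signal of a fabricated reference.

```python
import re

# Documents the system actually retrieved and showed to the model (illustrative)
RETRIEVED_SOURCES = {"doc-12", "doc-31"}

def unsupported_citations(answer):
    """Find [doc-N] citation markers in the answer and return any that do
    not correspond to a retrieved source, a common sign of fabrication."""
    cited = set(re.findall(r"\[(doc-\d+)\]", answer))
    return sorted(cited - RETRIEVED_SOURCES)

answer = "Revenue grew 12% [doc-12], driven by new contracts [doc-99]."
print(unsupported_citations(answer))  # ['doc-99'] -> surface to the user for review
```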
No matter how advanced AI gets, systems will never be completely immune to hallucinations. The future of AI depends on how well we manage them. As AI becomes increasingly influential, ensuring that hallucinations don’t spiral out of control is crucial. The future isn’t about eliminating AI hallucinations; it’s about mastering them.
About the Author
Ulrik Stig Hansen is the President and Co-Founder of Encord, a pioneering company specializing in building data infrastructure for artificial intelligence development. Since its inception in 2020, Encord has raised over $50 million and gained recognition for its innovative approach to data management, curation, annotation, and model evaluation. Ulrik holds a Master of Science in Computer Science from Imperial College London, providing him with the technical expertise and strategic vision essential for driving Encord’s growth. Outside of his professional life, he is passionate about developing ultra-low latency software applications in C++ and enjoys experimenting with innovative culinary techniques, particularly sushi making.