AI Hallucinations Are Inevitable—Here’s How We Can Reduce Them

In this contributed article, Ulrik Stig Hansen, President and Co-Founder of Encord, discusses the reality that AI hallucinations aren't bugs in the system but features of it. No matter how well we build these models, they will hallucinate. Instead of chasing the impossible dream of eliminating hallucinations, our focus should be on rethinking model development to reduce their frequency and on implementing additional steps to mitigate the risks they pose.
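
To make "additional steps to mitigate the risks" concrete, here is a minimal sketch of one such step in Python: a post-generation guardrail that scores how well an answer is supported by retrieved source text and routes weakly grounded answers to review instead of returning them directly. The `retrieve` and `generate` callables, the token-overlap scoring, and the 0.6 threshold are all illustrative assumptions, not any specific product's API or the author's method.

```python
# A minimal sketch of one possible mitigation step: flagging answers that are
# weakly supported by retrieved source text. The `retrieve` and `generate`
# callables are hypothetical stand-ins, not any specific library's API.

def support_score(answer: str, sources: list[str]) -> float:
    """Fraction of answer tokens that also appear in the retrieved sources.
    A crude proxy for groundedness; a real system would use an entailment
    or fact-verification model instead of token overlap."""
    answer_tokens = set(answer.lower().split())
    source_tokens = set(" ".join(sources).lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & source_tokens) / len(answer_tokens)

def answer_with_guardrail(question: str, generate, retrieve, threshold: float = 0.6):
    """Generate an answer, then route weakly grounded answers to human
    review instead of returning them directly to the user."""
    sources = retrieve(question)           # hypothetical retriever
    answer = generate(question, sources)   # hypothetical generator
    if support_score(answer, sources) < threshold:
        return {"answer": answer, "status": "needs_review"}
    return {"answer": answer, "status": "grounded"}
```

The threshold here is an arbitrary illustrative value; in practice it would be tuned against a labeled set of grounded and hallucinated answers.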

Generative AI Models Are Built to Hallucinate: The Question is How to Control Them

In this contributed article, Stefano Soatto, Professor of Computer Science at the University of California, Los Angeles and a Vice President at Amazon Web Services, discusses how generative AI models are designed and trained to hallucinate, making hallucinations a natural product of any generative model. Instead of trying to prevent generative AI models from hallucinating, we should be designing AI systems that can control them. Hallucinations are indeed a problem, a big problem, but one that an AI system that includes a generative model as a component can control.
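
As a rough illustration of what a surrounding system can do to control a generative component, the sketch below samples several independent answers and abstains unless they agree, leaving fallback behavior (retrieval, re-asking, or deferring to a human) to the rest of the system. The `sample_answer` callable and the agreement threshold are hypothetical assumptions for the sketch, not the architecture the author proposes.

```python
# A minimal sketch of an AI system controlling a generative component:
# sample several candidate answers and only surface one when the samples
# agree, abstaining otherwise. `sample_answer` is a hypothetical stand-in
# for any stochastic call to a generative model.

from collections import Counter

def controlled_answer(question: str, sample_answer, n_samples: int = 5,
                      min_agreement: float = 0.6):
    """Return the majority answer if enough independent samples agree;
    return None (abstain) otherwise. Agreement across samples is a common,
    though imperfect, signal that an answer is not a one-off confabulation."""
    samples = [sample_answer(question) for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    if count / n_samples >= min_agreement:
        return answer
    return None  # the surrounding system falls back: retrieve, re-ask, or defer
```

Exact string matching only works for short, canonical answers; real systems typically cluster semantically equivalent outputs before measuring agreement.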