In this contributed article, Ulrik Stig Hansen, President and Co-Founder of Encord, argues that AI hallucinations are not bugs in the system but features of it. No matter how well we build these models, they will hallucinate. Instead of chasing the impossible goal of eliminating hallucinations, we should rethink model development to reduce how often they occur and add safeguards to mitigate the risks they pose.