Generative AI (GenAI) is transforming industries across the board, but its remarkable ability to create text, visuals, audio and video is only valuable to businesses when that output can be trusted. When GenAI fails to generate accurate, reliable outputs, it becomes at best an expensive misinvestment and at worst a serious risk to the business.
Instances of AI presenting misleading or fabricated information are becoming increasingly common. Recent legal proceedings, for example, revealed how two attorneys relied on AI-generated citations that appeared credible, complete with case references and judicial quotations. This underscores the risk of relying on AI outputs without verification.
Why Generative AI invents what it cannot know
As predictive engines, generative AI models are inherently prone to fabricating information. In crude terms, their core purpose is to synthesise vast amounts of data, identify the patterns within that data, and use them to determine the most probable data output that follows a given prompt.
Clearly, this is different from ‘knowing the right answer’ – when AI predicts that a certain phrase is the most probable response to a prompt, it states that phrase as though it were true. Sometimes, that phrase may refer to details that are not, in fact, true. The AI model ‘hallucinates’ these facts or figures because they seem probable, not because they’re correct.
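This "most probable continuation" behaviour can be illustrated with a toy sketch (a made-up word-frequency counter, not a description of any real model): the program below simply counts which word most often follows another in some training text, then confidently emits the most frequent follower – with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
training_text = (
    "the court cited the case the court cited the ruling "
    "the court dismissed the case"
)
words = training_text.split()
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely next word -- frequency, not truth."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# The "model" continues a prompt purely on frequency grounds.
print(most_probable_next("court"))  # prints "cited" -- the most frequent follower
```

Real LLMs work over tokens and billions of parameters rather than word counts, but the underlying objective is the same: emit the statistically likely continuation, whether or not it corresponds to a fact.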
An engineering approach with AI at its heart
This is a systemic industry challenge, not just a model failure. Because large language models (LLMs) are fundamentally predictive engines, hallucination is not a minor bug to be patched but a design characteristic that demands a new engineering approach. In the absence of a verified knowledge base, the model relies purely on probabilistic prediction rather than factual validation.
While concerns around hallucinations remain, it is important to recognise the significant progress already made across the industry to reduce them. Today, they are no longer considered among the top-tier concerns for AI reliability, as improvements in model architecture, context grounding, and validation layers have meaningfully lowered error rates.
For example, newer AI models are increasingly trained with structured reinforcement signals that penalise incorrect outputs, while enterprise deployments now routinely integrate verifiable data sources to anchor responses.
Organisations are also increasingly focusing on context-aware AI, where models are supported by verifiable information sources rather than relying solely on prediction. Retrieval-augmented generation (RAG) has emerged as one method of providing these contextual anchors, but it is only one facet of a broader shift. A new wave of verifiable, validation-driven frameworks is now being developed to ensure AI systems not only generate content, but also verify it, validate it, and then act responsibly on it.
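As a rough illustration of the retrieval-augmented pattern (the snippet store, scoring function, and prompt template below are invented for this example, not taken from any specific framework): candidate passages from a verified knowledge base are scored against the question, and the best match is prepended to the prompt, so the model answers from supplied evidence rather than prediction alone.

```python
# Minimal RAG-style sketch: keyword-overlap retrieval over a verified store,
# then a grounded prompt assembled for whatever model would be called next.
knowledge_base = [
    "Invoice INV-102 was paid on 1 March 2024.",
    "The refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm on weekdays.",
]

def retrieve(question, docs, top_k=1):
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question):
    """Anchor the prompt to retrieved evidence instead of bare prediction."""
    context = "\n".join(retrieve(question, knowledge_base))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("What does the refund policy allow?"))
```

Production systems replace the keyword overlap with vector embeddings and add a validation layer that checks the model's answer against the retrieved passages, but the principle is the same: ground the generation step in verifiable sources.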
Beating AI hallucinations
This is partly a technical concern, requiring organisations to build a robust, scalable data management system that ensures the model has access to all relevant information. But it’s also about people and creating a responsible approach to this new technology that will help organisations scale their AI adoption at speed.
Upskilling is critical – employees need both the skills to get the best out of GenAI and the transparency required to trust its outputs. Together, these enable confident use of AI and responsible deployment. Companies need to broaden their training efforts beyond pure technical know-how, giving employees a grounding in AI ethics and in how to identify and reduce errors and bias.
Over the next six months, further enhancements are expected as models become more context-aware, more tightly coupled with real-time knowledge bases, and increasingly governed by transparent validation frameworks. These advancements will continue to improve factual accuracy and help organisations deploy AI with greater confidence, reducing hallucinations even further.
Harshul Asnani is the President and Head of UKI and Europe at Tech Mahindra