When AI hallucinates: Risks for insurers and risk managers

11 November, 2025 | Current General
When AI hallucinates, it can quickly become the source of bad decisions.

Generative artificial intelligence promises efficiency, automation and new insights from data. However, it can also “hallucinate”, i.e. generate convincing but false information. For the insurance and risk industry, which relies on facts, traceability and trust, this creates new operational, liability-related and regulatory risks.

Generative models such as ChatGPT or Copilot are increasingly being used in the insurance industry for everything from claims processing and customer communication to risk assessment. Their ability to respond in natural language creates proximity and efficiency, but it also conceals an insidious danger: when a model “invents” something, the output often sounds plausible even though it is wrong. Deloitte describes this phenomenon as “malicious hallucinations” – erroneous but convincing outputs that can have serious consequences in the insurance environment.

One example: an AI-based chatbot falsely tells a customer that their household contents insurance also covers flood damage, even though this is excluded. If a claim is made, there is a risk of financial losses, recourse claims and massive reputational damage. Such cases are not classic software errors, but an expression of the way language models work: they generate language, not truth.

Hallucinating AI in practice

The insurance industry has long been using AI in key processes. In underwriting and risk assessment in particular, models can latch onto spurious patterns or invent fictitious correlations. A study by InsuranceIndustry.ai warns that this can result in incorrect risk profiles, for example if the AI “invents” a claims history that never existed.

Hallucinations also pose dangers in claims processing and fraud detection. Systems that prepare decisions without human control can misclassify cases, misinterpret documents or apply “creative” rules. This leads to distortions and wrong decisions that are difficult to trace. An incorrect automated response to a customer inquiry can also trigger legal risks, especially in highly regulated markets such as Switzerland and the EU.

The market is beginning to respond to these uncertainties. The start-up Armilla, for example, has worked with insurers to develop a policy that insures companies against damage caused by faulty AI outputs, including hallucinations. It is one of the first insurance policies against algorithmic errors of this kind. This shows that hallucinations are not a marginal phenomenon, but a recognized risk category.

The new operational risk

In essence, AI hallucinations give rise to three closely interlinked risk dimensions. Firstly, there is a risk of incorrect decisions in underwriting, pricing or claims assessment if AI outputs are adopted uncritically. Secondly, incorrect information provided to customers can lead to liability and reputational risks. Thirdly, regulators are focusing on the issue of transparency and traceability.

The EU AI Act requires insurers to ensure that AI-supported systems are explainable and verifiable. This is a challenge, as language models are usually unable to cite sources. In this context, Deloitte refers to ‘source traceability’ as one of the biggest hurdles for the industry.

Governance and data quality thus become key levers. Hallucinations are less a question of poor technology than a consequence of insufficient control, unclear responsibilities and an inadequate data foundation. Where human oversight is lacking, risks arise that are almost impossible to calculate.

How insurers take countermeasures

Many companies are now responding with so-called ‘human-in-the-loop’ approaches, in which AI results are checked by employees before they take effect. Others are relying on ‘retrieval-augmented generation’ (RAG), a technique that grounds generated responses in verified internal knowledge bases rather than open internet sources, as sketched below. One study suggests that this can significantly reduce the error rate.
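
To illustrate the principle, here is a minimal, purely illustrative Python sketch of a RAG pipeline with a human-in-the-loop escalation. The knowledge base, the keyword retrieval and the function names are hypothetical simplifications, not any insurer's actual system; real deployments use vector search over embeddings and pass the retrieved passages to a language model.

# Minimal, illustrative sketch of retrieval-augmented generation (RAG)
# with a human-in-the-loop gate. All names and data are hypothetical.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # internal document the text comes from
    text: str     # verified policy wording

# Hypothetical internal knowledge base of verified policy clauses.
KNOWLEDGE_BASE = [
    Passage("household_policy_v3.pdf",
            "Flood damage is excluded from household contents cover."),
    Passage("household_policy_v3.pdf",
            "Fire and theft are covered up to the insured sum."),
]

def retrieve(question: str, top_k: int = 2) -> list[Passage]:
    """Naive keyword overlap; real systems use vector search over embeddings."""
    words = set(question.lower().split())
    scored = [(len(words & set(p.text.lower().split())), p) for p in KNOWLEDGE_BASE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

def answer_with_rag(question: str) -> str:
    passages = retrieve(question)
    if not passages:
        # Human-in-the-loop: no verified source found, so escalate instead of guessing.
        return "No verified source found - routed to a human adviser."
    context = "\n".join(f"[{p.source}] {p.text}" for p in passages)
    # A real deployment would pass `context` plus the question to a language model
    # and instruct it to answer only from the cited passages.
    return f"Answer drafted from verified sources:\n{context}"

print(answer_with_rag("Does my household contents insurance cover flood damage?"))

The decisive element is the escalation branch: if no verified passage supports an answer, the question goes to a human adviser instead of being answered by the model alone.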

At the same time, a new governance framework is emerging: companies are establishing AI audits, defining responsibilities and documenting the origin of training data. Insuring AI risks, whether internally via provisions or externally via specialized policies, is becoming part of this strategy. And last but not least, external communication is becoming increasingly important: those who transparently explain when and how AI is used will strengthen customers’ trust in an increasingly automated world.

AI hallucinations – a structural risk

AI hallucinations are not a technical curiosity, but a structural risk. In an industry based on data, trust and precision, the “creative” machine can quickly become a source of bad decisions. Insurers must learn to manage the unpredictable, just as they have always done with human behavior. Governance, data quality, human control and, where appropriate, insurance cover against algorithmic errors are no longer optional, but a prerequisite for trust in a new, AI-supported risk world.

Binci Heeb

Read also: AI is becoming a partner to humans – not just a tool


Tags: #AI #AI audits #Algorithmic errors #Damage assessment #Data quality #Data recognition #Efficiency #Error rate #Governance #Hallucination #Human #Insurer #Practice #Pricing #Risk manager #Risks #Traceability