When Artificial Intelligence Becomes Insurance Liability
21 October 2025
Criminals are increasingly using generative AI systems such as Claude, ChatGPT or Gemini to carry out fraud, extortion and targeted attacks. The challenge is also growing in Switzerland: for insurers and reinsurers, this means new liability, claims and premium risks.
The US AI provider Anthropic reported in a security report from the end of August 2025 that its language model Claude had been misused several times for cyber attacks. Criminals had used the chatbot to write phishing messages, identify vulnerabilities in networks and automate entire attack processes. In one case, Claude is even said to have helped negotiate ransom demands. According to Anthropic, 17 organizations from various industries – including healthcare, government and religious institutions – were attacked in a single month.
AI as a tool of the attackers
Generative AI dramatically lowers the barrier to entry for cybercrime. What used to require a coordinated team of hackers can now be carried out by a single person with the help of AI agents. The effort, knowledge and risk for attackers are significantly reduced. For insurance companies, this means that attacks can be faster, more targeted and have a greater reach, which significantly increases aggregation and accumulation risks.
Swiss context: focus on the financial and insurance sector
In Switzerland, security authorities are observing this development with concern. The National Cyber Security Centre (NCSC) reports a significant increase in cases in which fraudsters use AI-generated content. Deepfakes of financial or government representatives are used to gain trust, obtain authorizations or manipulate claims reports. Social engineering attacks, in which AI creates deceptively realistic messages, voices or videos, are particularly sensitive.
In the second half of 2023, the NCSC recorded over 30,000 reported cyber incidents, a significant proportion of which involved generative AI elements. The financial and insurance sectors are also affected: AI-supported invoice and payment fraud is on the rise. There is also the regulatory dimension. Swiss financial and insurance companies have to meet stricter requirements for data protection and know-your-customer procedures. These rules become more complex when AI tools are involved that process or generate data autonomously.
New challenges for insurers
This significantly changes the risk profile for the insurance industry. The use of artificial intelligence by attackers is leading to a new dynamic: damage no longer occurs in isolation, but can spread quickly across industries, countries and systems. An attack can simultaneously affect dozens of companies that use similar technologies or supply chains. This increases the probability of damage aggregation and simultaneous major events.
Insurers also need to rethink their premium structures. Traditional models based on historical data fall short here. AI-related risks are developing faster than they can be statistically recorded. Underwriters must therefore establish new valuation methods and explicitly take AI scenarios into account. Some insurers are already considering introducing special exclusions or additional clauses for the use of generative AI in order to avoid misunderstandings with regard to cover.
Another aspect concerns liability. If a company uses AI that unintentionally causes damage or is misused, who is responsible? This question arises not only for companies, but also for insurers who cover such risks. A new area of tension is emerging between technical innovation, regulatory pressure and insurance reality, which has barely been clarified in legal terms.
Need for action in the insurance industry
Insurers and reinsurers in Switzerland are now faced with the task of adapting their strategies to this new threat situation. This includes analyzing internally which AI systems are used and how they are secured. It is equally important to review the insurance conditions: Are AI-based attacks covered or explicitly excluded? Many policies still define cyber risks according to older models and do not take sufficient account of the new role of generative systems.
In addition, insurers should give their customers clear guidance: training on the safe use of AI tools, requirements for incident response plans, and governance rules for chatbots and automation systems. Monitoring for AI-supported attacks is also becoming increasingly important. Early warning systems that detect anomalies in communication or data behavior can help to limit damage.
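The anomaly monitoring mentioned above can take very simple forms. As a minimal illustration, the sketch below flags days whose traffic volume deviates sharply from the historical baseline using a z-score; the function name, data and threshold are hypothetical choices for this example, not taken from any specific product.

```python
import statistics

def flag_anomalies(daily_counts, threshold=2.5):
    """Flag days whose volume deviates more than `threshold`
    standard deviations from the historical mean.

    daily_counts: list of message/traffic counts, oldest first.
    Returns the indices of anomalous days.
    """
    mean = statistics.mean(daily_counts)
    stdev = statistics.pstdev(daily_counts)
    if stdev == 0:
        return []  # no variation in the baseline, nothing to flag
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mean) / stdev > threshold]

# Example: a sudden spike in outbound messages on the last day
history = [100, 98, 103, 97, 101, 99, 100, 450]
print(flag_anomalies(history))  # → [7]
```

Real early-warning systems combine many such signals (login patterns, payment flows, message content), but the principle is the same: establish a baseline, then alert on statistically unusual deviations.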
A customer's exposure will, in turn, be a decisive factor in determining premiums. A company that uses generative AI productively but implements no clear guidelines or security checks will probably have to pay higher risk premiums in future. Prevention and transparency will become a competitive advantage for both customers and insurers.
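The link between missing controls and higher premiums can be made concrete with a toy calculation. The surcharge factors and control names below are purely illustrative assumptions, not actual underwriting figures.

```python
def risk_premium(base_premium, controls):
    """Illustrative premium loading: start from a base cyber premium
    and apply a surcharge for each missing control.

    controls: dict mapping control name -> True if implemented.
    All loading factors are hypothetical, for illustration only.
    """
    loadings = {
        "ai_usage_policy": 0.15,        # no guidelines for generative-AI use
        "incident_response_plan": 0.10,  # no tested response plan
        "security_audits": 0.20,         # no regular security checks
    }
    factor = 1.0
    for control, surcharge in loadings.items():
        if not controls.get(control, False):
            factor += surcharge
    return round(base_premium * factor, 2)

# Company using generative AI with a response plan but no AI policy or audits:
print(risk_premium(10_000, {"incident_response_plan": True}))
# factor = 1.0 + 0.15 + 0.20 = 1.35 → 13500.0
```

In practice underwriters would calibrate such loadings against claims data and questionnaires, but the mechanism is the same: documented prevention directly reduces the premium.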
AI is an opportunity and a risk
The use of generative AI by cyber criminals marks a turning point in the digital risk landscape. What used to require specialist knowledge is now accessible to everyone. For the insurance industry, this means a double challenge: on the one hand, insurers have to strengthen their own defenses and, on the other, reassess their customers' risks. Artificial intelligence is therefore both an opportunity and a risk, a tool of progress that can become a liability trap in the wrong context.
Binci Heeb
Read also: AI Is Becoming a Partner to Humans – Not Just a Tool