When insurers let AI do the talking
3 October 2025
Health insurers such as Helsana and CSS are increasingly relying on AI, whether as automated invoice filters, mandatory symptom checkers or digital assistants. But with the promise of savings come questions about liability, transparency and personal responsibility: who pays if AI gets it wrong?
Insurers are under pressure: premiums are rising and budgets are tighter. AI promises efficiency: according to CSS, it prevents around CHF 800 million in benefit costs every year that are deemed uneconomical. But trouble lurks in misinterpretations: incorrect filtering, an inadequate database or unrecognized bias can lead to benefits being wrongly denied or paid out late.
Patient data: gold mine and stumbling block in one
AI at scale barely works without masses of data, yet health insurers handle highly sensitive health data. How this data is collected, processed and protected is therefore crucial. Biased samples and a lack of data diversity (e.g. by age, gender, origin) increase the risk that AI recommendations will systematically discriminate. In addition, patients need to know who owns their data, who has access to it, how it is anonymized and whether they can object.
Concrete cases: Helsana & CSS are already using AI
| Actor | Application | How it works / notable features |
| --- | --- | --- |
| CSS and Helsana | Invoice auditing with AI | CSS checks over 85% of all invoices automatically and uses AI to identify anomalies in billing (e.g. excessive amounts, accident allocations). In this way, the system prevents around CHF 800 million in benefit costs each year that bring no benefit to the insured person or the system. At Helsana, more than 80% of the approximately 26 million invoices received each year arrive electronically. Of these, a good 95% are checked without human intervention, partly with the help of AI. |
| CSS | Customer service / chatbot pilots & advanced analytics | CSS is piloting AI models to process customer concerns more efficiently and make administrative processes faster and cheaper. Prevention programs are also supported by data analytics in order to provide tailored offers. |
| Helsana | Symptom checker via app for the «BeneFit PLUS Flexmed» product | A digital symptom checker, classified as a medical device, provides an AI-based initial assessment via the Compassana app and is mandatory for adults under this tariff. The actual diagnosis is still made by the family doctor; the symptom checker only gives initial indications of what could be causing the symptoms. |
| Helsana | Acquisition of a software service provider (Adcubum) | The Helsana Group is acquiring Adcubum, a company that provides software solutions for insurers. This underlines its interest in operating in a more digital, data-driven way, including AI and automation options. |
Responsibility & vulnerability – what follows from this
These examples show how insurers in Switzerland are already using AI. This results in specific risks and potential in terms of responsibility:
- Transparency towards insured persons: If an insurer like Helsana demands that an AI symptom checker classified as a medical device be used first, it must be clear: How does it work? What data does it use to make recommendations? What happens in the event of incorrect assessments? Without transparency, the risk of recourse claims or loss of trust increases.
- Automation bias & wrong decisions: When policyholders rely too heavily on an AI recommendation, or when claims rejections or delays occur because of automated AI filters, insurers can be held liable – legally or politically.
- Allocation of liability: The EU and Switzerland are already discussing what the legal framework should look like: Who is liable if an AI tool makes incorrect recommendations? Is it the insurer who uses the system, the developer, the operator or the patient?
- Data protection & data quality: Insurers work with sensitive health data. Incorrect, unbalanced or outdated data can lead to distortions (bias). In addition, the handling of data must be legally compliant – e.g. with regard to the Data Protection Act (DSG in Switzerland) and cantonal requirements.
- Patient rights & freedom of choice: Even if insurers introduce AI tools, patients and insured persons must not be forced to forego human assessment or traditional channels.
Outlook: Between innovation and control
Swiss insurers show how AI can offer benefits, but such systems must be embedded in a controlled framework. Some starting points:
- Introduction of review bodies for AI systems, analogous to Swissmedic for medicines.
- Legal clarity: who is liable if AI tools in insurance products influence decisions?
- Promotion of independent data pools and research data, so that data quality is not determined by any single insurer or provider.
- Mandatory information for insured persons on the use of AI, as well as mechanisms for contesting or complaining about incorrect decisions.
In conclusion, AI can be a powerful tool for Swiss insurers to reduce costs and speed up processes. However, the more algorithms intervene in healthcare, the more important transparency, data quality and clear responsibilities become. AI will strengthen trust rather than jeopardize it only if insurers disclose how their systems work, patient data is effectively protected and the final decision remains with people.
Binci Heeb
Read also: Data, diagnoses, breakthroughs – How AI is revolutionizing healthcare