The call came shortly after nine. A customer’s voice, calm but tense underneath. And a question that cuts deeper than any rejected claim: whether anyone in this industry still truly stands behind their decisions. A question that stays.
The call came shortly after nine.
A customer in her mid-fifties, insured for twenty years, her voice calm, but you could hear the tension underneath. Her claim had been rejected automatically; no one had ever looked at it.
She didn’t ask: “Why did the algorithm calculate it that way?”
She asked, “Is anyone else here going to help me?”
I have not forgotten this sentence, not because of the customer, but because of the question it triggered in me: Who would have answered her?
The silent temptation of efficiency
We are living in a moment in which the insurance industry faces a silent temptation. Not the temptation to take on too much risk or to settle too few claims, but an older, more subtle one: the temptation to relinquish responsibility without having to call it that.
It’s called efficiency. It’s called scaling. It’s called digitization, and sometimes there’s something else underneath.
When decisions are delegated
I have been observing this pattern for years, not just in insurance, but wherever decisions are increasingly delegated to systems. And I notice how relieved managers often seem when an algorithm makes a difficult decision for them: no discussion, no follow-up questions, no unpleasant conversation. The discomfort comes later, usually when someone calls.
The question behind this is not a question of technology; it is a question of character: What am I prepared to take responsibility for, personally?
Why trust is human
Here is what happens psychologically in these moments. Neurobiologically, trust does not arise because someone can understand a decision, but because someone senses that another person will stand behind it. Our brains are evolutionarily wired to trust social actors: beings who have intentions, can make mistakes, and can be held accountable for the consequences.
Algorithms do not fall into this category, not because they make poorer decisions, but because they cannot take responsibility.
Algorithm aversion
This means that even if a system makes the objectively better decision, statistically sounder and fairer than a person ever would, it cannot build trust. Not because it is wrong, but because the other person’s brain simply receives no trust signal.
In behavioral research, this is called algorithm aversion: the deep-seated rejection of algorithmic judgments as soon as something goes wrong. Documented, reproducible, and completely human. For the insurance industry, this means something concrete: customers who receive an automated rejection do not question the system. They question the relationship.
When people are allowed to make mistakes
If a person makes a wrong decision, they can explain themselves. They can say: I saw it that way, and I was wrong. They can apologize, reassess the case, and win back trust step by step. These moments are unpleasant, but they are the foundation on which lost trust is repaired.
An algorithm cannot do this. It keeps optimizing according to the same parameters until someone changes them, and for the customer this means she can call no one and convince no one; she can only accept or complain.
Regulation as reality
The regulatory authorities have already understood this, even if the industry is still hesitant. FINMA and European regulators are increasingly demanding explainability for automated decisions, not out of technological scepticism, but because they know what psychologists have known for some time: a decision that no one can justify is not a decision. It is a risk.
In conversations with managers who have to make decisions under pressure, I regularly observe how automated decision-making systems quietly establish a phenomenon that psychologists call diffusion of responsibility. Responsibility is spread so widely across system, data, model, and supervision that, in the end, nobody is really responsible anymore. Everyone was involved, but no one decided.
The broker in a dilemma
This is tempting for managers in efficiency-driven organizations, but for brokers it is existentially dangerous, because the broker stands in the middle. They have advised the customer, recommended the product, and built up the relationship. If the insurer declines automatically and the broker has to explain a decision they themselves cannot see into, they are left without an answer.
That does not damage the system. It damages the broker.
Who we want to be
I am not writing about technology here, but about who we want to be when the system decides for us. This becomes apparent every day, in concrete moments: Who signs off on a model’s decision without questioning it? Who explains to a customer why her claim was rejected and remains genuinely present instead of pointing to the system? Who takes on the conversation that nobody wants to have?
Leadership shows itself in uncomfortable moments
Leadership is not demonstrated by how you introduce systems. It shows in how you deal with what systems cannot do.
And systems cannot stand behind anything. Only people can do that.
The question was never: “How much AI do we use?” The question was always: “What am I prepared to take responsibility for, personally?”
A promise that holds
Efficiency is important, scaling is important, and models that make better decisions than a human under time pressure are important. But insurance, reduced to its essentials, is a promise: You pay today so that someone will be there tomorrow if something goes wrong. This promise doesn’t work because you can calculate it. It works because you believe it.
And that belief needs someone who stands behind it.
Trust is not a feature that can be built into a model. It is an attitude that you choose every day, or not.
In the end, someone picks up the phone
The customer from the other day got her claim reimbursed after a person looked at it. She called a friend and told her she was well insured.
That doesn’t show up in any KPI, but it is the only reason this industry will still exist tomorrow. Behind it is someone who picked up the phone.
Marcus Selzer
Read also: When the head beats the AI