AI Is Becoming a Partner to Humans – Not Just a Tool

13 October, 2025 | Current General Interviews
AI is becoming a partner to humans, not just a tool, says Prof. Dr. Kathrin Kind.

Prof. Dr. Kathrin Kind is optimistic but realistic about the future of artificial intelligence. In this interview, the expert talks about ground-breaking developments in medicine and science, the shift from “human-in-the-loop” to “human-on-the-loop”, the need for global governance and why trust is the decisive currency in the age of AI.

Prof. Dr. Kathrin Kind is Chief Data Scientist and AI Director Global Growth Markets at Cognizant. She also serves as Responsible AI Governor for Switzerland at the Global Council for Responsible AI and as a Member of the Global Future Council on Data Frontiers at the World Economic Forum, making her one of the world’s most recognised experts on the future of AI.

Where do you see artificial intelligence making the most profound impact in the next 10 years in science, business, or society?

I believe the most profound impact will arise from the synergy between scientific discovery and its societal application, particularly in medicine and materials science. In healthcare, we are moving beyond simply using AI for diagnostics. We’re on the cusp of AI-driven drug discovery, where new therapeutics are designed in silico at a fraction of the traditional time and cost. This will fundamentally alter our approach to diseases like cancer and Alzheimer’s. Simultaneously, in materials science, AI will discover novel materials with properties we can barely imagine today, which will be essential for sustainable energy solutions and next-generation electronics. The business world will then rapidly translate these breakthroughs into tangible products, creating a virtuous cycle of innovation.

Will AI remain a supportive tool, or do you expect it to become a true collaborator in research and decision-making?

We are decisively moving beyond the ‘supportive tool’ paradigm. A tool is passive; a calculator, for instance, waits for instruction. AI is becoming a true collaborator. In research, an AI can now not only analyse data but also formulate novel hypotheses and even design the experiments to test them. This is a qualitative shift. The relationship is becoming one of partnership, where the human researcher sets the strategic direction, curiosity, and ethical boundaries, while the AI collaborator explores vast, complex solution spaces, revealing patterns and possibilities that would elude human cognition. We are moving from a ‘human-in-the-loop’ to a ‘human-on-the-loop’ model.

How can we ensure that AI systems are not only technically reliable but also trusted by the public?

Trust is the currency of AI adoption. It rests on two pillars: technical robustness and social legitimacy.

  • Technical Robustness: We must advance the field of Explainable AI (XAI). A decision from an AI system, especially in a high-stakes field like medicine or law, cannot be a “black box.” We need systems that can articulate the rationale behind their outputs in a human-understandable way. Rigorous, adversarial testing must also become standard practice to ensure systems are safe and reliable under real-world conditions.
  • Social Legitimacy: This is achieved through transparency, clear lines of accountability, and public engagement. People must understand how these systems are governed and have clear avenues for redress when things go wrong. Crucially, ethicists, social scientists, and domain experts must be involved from the very beginning of the design process, not as an afterthought.
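The explainability idea can be shown in miniature: for a linear model, a prediction decomposes exactly into per-feature contributions that a clinician or lawyer can inspect. The following sketch is purely illustrative (the model, feature names, and values are invented); real XAI methods such as SHAP or LIME generalise this additive decomposition to genuinely black-box models.

```python
# Illustrative sketch: a linear model's output explained as additive
# per-feature contributions. Feature names and weights are invented.

def explain_linear(weights, bias, features, names):
    """Return the prediction and each feature's additive contribution."""
    contributions = {n: w * x for n, w, x in zip(names, weights, features)}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Toy risk score with two hypothetical clinical features.
pred, contrib = explain_linear(
    weights=[0.8, -0.5], bias=0.1,
    features=[2.0, 1.0],
    names=["biomarker_level", "years_since_screening"],
)
print(pred)     # 0.1 + 1.6 - 0.5, i.e. roughly 1.2
print(contrib)  # shows which feature drove the score up or down
```

A reader of such an explanation can see that `biomarker_level` raised the score while `years_since_screening` lowered it, which is the kind of human-understandable rationale the interview calls for.
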

Do you believe we need global AI governance frameworks, similar to climate accords, or is national regulation sufficient?

I firmly believe we need global AI governance frameworks. AI is a borderless technology. Its models, data, and effects flow seamlessly across the globe. Disparate national regulations risk creating a fragmented “patchwork,” leading to regulatory arbitrage where companies exploit loopholes in jurisdictions with weaker rules. Much like climate accords or nuclear non-proliferation treaties, we need a global consensus on fundamental principles—safety, fairness, and accountability. This would establish a foundational standard, allowing individual nations to build upon it with more specific regulations tailored to their own cultural and legal contexts.

Despite technical advances, bias in AI remains a challenge. What promising approaches do you see to mitigate this problem?

Bias is one of the most stubborn challenges, as it often reflects and amplifies existing societal inequalities present in the data. There are several promising approaches to mitigation:

  • Data-Centric AI: A significant focus is now on meticulously curating and augmenting training datasets to ensure they are balanced and representative. This includes sophisticated techniques for generating synthetic data to fill in gaps for underrepresented groups.
  • Algorithmic Fairness: We are developing algorithms with mathematically defined fairness constraints built directly into their optimisation process. This forces the model to balance accuracy with equity metrics, such as ensuring that its error rates are comparable across different demographic groups.
  • Continuous Auditing: Bias is not a problem to be “solved” once, but a risk to be managed continuously. This involves deploying independent, diverse teams to audit AI systems throughout their lifecycle to detect and correct emergent biases.
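One concrete audit check mentioned above, comparing error rates across demographic groups, can be sketched in a few lines. This is a minimal illustration, not a specific auditing framework; the tolerance threshold and group labels are invented.

```python
# Minimal audit sketch: flag a model whose error rates differ too much
# across groups. Records and the 0.05 tolerance are illustrative.

def error_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    totals, errors = {}, {}
    for group, y_true, y_pred in records:
        totals[group] = totals.get(group, 0) + 1
        if y_true != y_pred:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

def audit(records, max_gap=0.05):
    """Pass only if the largest error-rate gap stays within max_gap."""
    rates = error_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= max_gap

records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
           ("B", 1, 1), ("B", 0, 1), ("B", 1, 0), ("B", 0, 1)]
rates, gap, passed = audit(records)
print(rates)   # group A errs on 1 of 4 cases, group B on 3 of 4
print(passed)  # the 0.5 gap exceeds the tolerance, so the audit fails
```

Run periodically on fresh production data, a check like this is what turns bias mitigation from a one-off fix into the continuous risk management described above.
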

How do you think AI will change the way we conduct and publish scientific research in the future?

AI is poised to fundamentally reshape the scientific method itself. The process of hypothesis, experimentation, and discovery will be dramatically accelerated. AI will empower researchers to analyse datasets of immense scale and complexity, generating insights that were previously impossible.

In publishing, the traditional, static journal article may evolve. We could see the rise of “living papers”—dynamic, interactive documents that are continuously updated by AI as new data becomes available. AI will also augment the peer review process, helping to check for statistical validity, reproducibility, and even potential plagiarism, thereby increasing the rigour and speed of scientific dissemination.

How should universities and schools adapt to prepare the next generation for a world where AI is ubiquitous?

Our educational philosophy must undergo a paradigm shift. Rote memorisation of facts, a task at which AI excels, must give way to nurturing uniquely human skills. The focus should be on fostering critical thinking, creativity, complex problem-solving, and emotional intelligence.

Schools and universities must integrate AI literacy across all disciplines, not just in computer science. Every student should graduate with a foundational understanding of how AI works, its ethical implications, and how to collaborate with it effectively. The goal is to prepare the next generation not to compete with AI, but to leverage it as a powerful tool for thought and creation.

The U.S. and China dominate much of the AI landscape. What unique role can Europe, and perhaps Switzerland in particular, play in shaping AI’s future?

While the U.S. and China may lead in terms of sheer scale and investment, Europe, and Switzerland in particular, are uniquely positioned to pioneer a “third way.” This path is defined by a commitment to developing human-centric, trustworthy, and ethical AI. By championing robust regulatory frameworks like the EU’s AI Act, Europe can set a global gold standard for responsible innovation.

Furthermore, leveraging world-class academic institutions (such as ETH Zurich and EPFL) and a strong industrial base in high-value sectors like pharmaceuticals, robotics, and finance, the region can excel in creating specialised, high-quality AI solutions where trust and precision are paramount. Europe’s role is not necessarily to win the race for scale, but to lead the world in responsible and beneficial AI.

Generative AI tools have already transformed creative industries. Do you see risks of over-reliance, or is this the start of a new human–machine creativity?

I view this as the dawn of a new era of human-machine creativity. History offers a useful parallel: the invention of the camera did not end painting. On the contrary, it liberated painters from the need for pure realism, catalysing movements like Impressionism and Cubism.

Similarly, generative AI is a tool that can augment human ingenuity. The risk of over-reliance leading to creative homogeneity is real, but it is not inevitable. The true potential lies in using these tools as a creative partner—a tireless brainstorming assistant that can help artists, musicians, and writers explore and iterate on ideas at an unprecedented speed. It is a multiplier for human creativity, not a substitute.

Training large AI models consumes enormous amounts of energy. How can we reconcile AI innovation with the urgent need for sustainability?

The environmental cost of training large-scale AI models is a serious and legitimate concern. Addressing this requires a multi-pronged approach:

  • Algorithmic Efficiency: A great deal of research is focused on creating more efficient algorithms and model architectures—what we often call “Green AI.” Techniques like model pruning, quantisation, and knowledge distillation can drastically reduce computational requirements.
  • Hardware Innovation: The development of new, energy-efficient hardware, such as neuromorphic chips that mimic the brain’s structure, will be crucial.
  • Sustainable Computing: This includes supplying data centres with renewable energy sources and optimising their physical location and cooling systems.
  • A Shift in Mindset: We must challenge the notion that “bigger is always better.” There is a growing movement towards developing smaller, more specialised models that are highly effective for specific tasks without the enormous energy footprint.
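Quantisation, one of the “Green AI” techniques listed above, has a simple core idea: store weights as 8-bit integers instead of 32-bit floats, trading a small rounding error for a fourfold memory reduction. A hedged sketch of post-training quantisation with a single scale factor (real toolchains use per-channel scales and calibration, omitted here):

```python
# Sketch of post-training 8-bit quantisation: floats are mapped onto
# int8 values in [-127, 127] via one shared scale factor.

def quantize(weights):
    """Return int8-range codes and the scale needed to decode them."""
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

weights = [0.42, -1.27, 0.05, 0.9]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(codes)    # small integers: 1 byte each instead of 4
print(max_err)  # rounding error bounded by half the scale
```

The same trade-off, slightly less precision for drastically less computation and memory, underlies pruning and knowledge distillation as well.
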

What do you see as the most realistic breakthroughs of AI in healthcare, and what obstacles still stand in the way?

In the near future, the most realistic and impactful breakthroughs will be:

  • Radiology and Pathology: AI will become the standard of care in analysing medical images (MRIs, CT scans, biopsies). It will detect diseases like cancer earlier and with greater accuracy than the human eye, acting as an indispensable assistant to clinicians.
  • Personalised Medicine: By analysing a patient’s genomic data, lifestyle factors, and clinical history, AI will help predict which treatment protocols will be most effective for that specific individual, moving us away from a one-size-fits-all approach.
  • Operational Efficiency: AI will optimise hospital operations, from predicting patient admissions to managing surgical schedules, reducing wait times and improving the quality of care.

The primary obstacles are not purely technical. They are data governance (ensuring patient data is private and secure), regulatory approval (creating clear, efficient pathways for validating medical AI), and clinical integration (seamlessly embedding these tools into doctors’ workflows).

On a personal level: what excites you most about the next stage of AI, and what keeps you up at night?

What excites me most is the potential of AI to function as a universal amplifier of human intellect. I am profoundly optimistic about its capacity to help us solve humanity’s most complex and enduring problems—from developing cures for neurodegenerative diseases to designing fusion reactors and understanding the fundamental nature of consciousness. It’s the ultimate tool for scientific discovery.

What keeps me up at night is the asymmetry between the speed of technological development and the pace of our social and ethical adaptation. My principal concern is the misuse of powerful AI systems, whether in autonomous weaponry, pervasive surveillance, or the creation of sophisticated disinformation that could destabilise societies. We are building something incredibly powerful, and ensuring it remains aligned with humanity’s best interests is the single most important challenge of our time. It is a profound responsibility that we must all share.

Conclusion

The next decade of AI will bring changes in research, business and society. Prof. Dr. Kathrin Kind sees this less as a threat than as an opportunity, provided we succeed in combining technological innovation with responsibility and foresight. “We are building something incredibly powerful,” she says. “The question is whether we can manage to shape it in the best interests of humanity.”

The questions were asked by Binci Heeb.

Read also: Agentic AI, satellites & start-ups: how innovation is changing the world of insurance


Tags: #Acceleration #AI #Diagnostics #Disinformation #Emotional Intelligence #Ethics #Future #Hardware Innovation #Humans #Legitimacy #Partner #Pathology #Personalised Medicine #Pharmaceuticals #Publishing #Radiology #Robustness #Tool