When AI keeps to itself

The subtitle of this article could also be: “What Moltbook and Moltbot reveal about new risk profiles, security gaps and regulatory blind spots”. The social network is run not by humans but exclusively by AI agents. At first glance, this may look like a curious tech experiment. Yet platforms such as Moltbook – and the associated agent software Moltbot – touch on issues that are becoming increasingly relevant for insurance companies, risk managers and regulators: control, liability and security in a world in which AI systems no longer merely react, but act independently and interact with each other.

Moltbook is not a classic social network but a public observation room. All posts, comments and ratings come from AI agents connected via programming interfaces. Humans can only watch. What becomes visible is a digital space in which machines communicate with one another, amplify arguments and set topics, without direct human control during operation.
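To make the mechanism concrete, the following minimal sketch shows how an autonomous agent might publish a post to a Moltbook-style platform over a programming interface. The endpoint, payload fields and credentials are illustrative assumptions, not Moltbook’s actual API; the point is that no human review sits between the model’s decision and the published post.

```python
# Hypothetical sketch: an autonomous agent publishing to a Moltbook-style
# platform over HTTP. Endpoint, payload fields and token handling are
# illustrative assumptions, not Moltbook's real interface.
import json
import urllib.request

PLATFORM_URL = "https://example.invalid/api/posts"  # placeholder endpoint
AGENT_TOKEN = "agent-api-token"                     # placeholder credential


def publish_post(text: str) -> int:
    """Send a post on behalf of an agent and return the HTTP status code."""
    payload = json.dumps({"author": "agent-42", "body": text}).encode("utf-8")
    request = urllib.request.Request(
        PLATFORM_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {AGENT_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status


if __name__ == "__main__":
    # No human review step: the agent decides what to post and when.
    status = publish_post("Autonomous agents can set topics without human input.")
    print("platform responded with status", status)
```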

This structure is crucial for insurers, because risks no longer arise solely from individual algorithms but from the interaction of several systems. Responsibility becomes fragmented and causality harder to prove, and traditional models of liability allocation come under pressure as a result.

Moltbot: When AI gets access to real systems

These questions become particularly clear when looking at Moltbot, the AI agent closely linked to the Moltbook ecosystem. The open-source agent runs locally on the user’s computer, communicates via messaging services such as WhatsApp or Telegram and can act independently: it sends messages, changes files, operates websites, books appointments or executes code.
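What such a capability set might look like in code is sketched below. The action names and dispatch logic are hypothetical, not Moltbot’s actual interface; they merely illustrate how a single dispatcher can turn a model’s output into sent messages, changed files and executed commands, all with the rights of the logged-in user.

```python
# Minimal sketch of an agent action dispatcher, illustrating the breadth of
# access described above. All function and action names are hypothetical.
import subprocess
from pathlib import Path


def send_message(recipient: str, text: str) -> None:
    # Stand-in for a messaging integration (e.g. a WhatsApp or Telegram bridge).
    print(f"[message to {recipient}] {text}")


def write_file(path: str, content: str) -> None:
    # The agent modifies files with the same rights as the logged-in user.
    Path(path).write_text(content, encoding="utf-8")


def run_command(command: list[str]) -> str:
    # Arbitrary command execution: the step that turns suggestions into actions.
    return subprocess.run(command, capture_output=True, text=True, check=True).stdout


ACTIONS = {
    "send_message": send_message,
    "write_file": write_file,
    "run_command": run_command,
}


def execute(action: str, *args):
    """Dispatch an action chosen by the model, without human confirmation."""
    return ACTIONS[action](*args)


if __name__ == "__main__":
    execute("send_message", "+41790000000", "Appointment confirmed for Tuesday.")
    execute("write_file", "notes.txt", "Draft created by the agent.")
    print(execute("run_command", ["echo", "agent executed a command"]))
```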

The difference from familiar AI tools is that Moltbot does not just make suggestions, it carries out actions. This effectively gives an AI system the same access to devices, accounts and data as a human user.

A first-hand assessment: Dharmesh Shah

The hype surrounding Moltbot has also drawn in Dharmesh Shah, founder and CTO of HubSpot. In a widely noted analysis, he speaks of a necessary “reality check”: the technology is real and in parts impressive, for example in automated calls, appointment organization or complex online transactions. At the same time, Shah expressly warns against premature use.

His central point: most people underestimate what they actually allow such an agent to do. Anyone who grants Moltbot access to messages, files and system functions is authorizing an AI system to act on their behalf. Mistakes, misinterpretations or security gaps can have real consequences, ranging from unwanted messages being sent to uncontrolled system access.

Security risks instead of science fiction

Shah’s assessment is particularly relevant from a risk and insurance perspective. Moltbot is not a cloud service with clear security guarantees; it runs locally, often on private devices. Responsibility for configuration, access restrictions and updates lies entirely with the user. Misconfigurations are therefore not an exception but a structural risk.
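The following sketch illustrates the point about configuration. The option names are assumptions chosen for illustration rather than actual Moltbot settings; the contrast between a deny-by-default setup and a “convenient” one shows how a handful of permissive flags can widen the blast radius of a single faulty decision.

```python
# Sketch of a restrictive default configuration for a locally run agent.
# Option names are illustrative assumptions, not real Moltbot settings.
from dataclasses import dataclass, field


@dataclass
class AgentConfig:
    # Deny by default: the agent can do nothing until rights are granted.
    allowed_directories: list[str] = field(
        default_factory=lambda: ["/home/user/agent-sandbox"]
    )
    allowed_contacts: list[str] = field(default_factory=list)  # no outbound messages
    allow_shell_commands: bool = False                         # no code execution
    require_confirmation_above: float = 0.0                    # every action needs sign-off


def risky_example() -> AgentConfig:
    # A typical misconfiguration: convenience settings that remove every guardrail.
    return AgentConfig(
        allowed_directories=["/"],       # full file system access
        allowed_contacts=["*"],          # may message anyone
        allow_shell_commands=True,       # arbitrary code execution
        require_confirmation_above=1e9,  # confirmation effectively disabled
    )


if __name__ == "__main__":
    print("safe default :", AgentConfig())
    print("misconfigured:", risky_example())
```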

Shah therefore expressly recommends not running such agents on primary production systems: too new, too complex, too many unanswered questions. For insurers this is a familiar pattern: high impact combined with low maturity is a classic early-stage risk.

Regulation meets emergent systems

This points to a fundamental regulatory problem. Existing rules, including those on AI governance, are usually designed for individual models or clearly defined applications. Systems such as Moltbook and Moltbot, however, show that risks increasingly arise from dynamic interactions.

When an AI agent acts independently, learns, memorizes context and interacts with other agents, transparency alone is no longer enough. For supervisory authorities, the crucial question will be how access, decision-making logic and escalation mechanisms can be controlled. For insurers, the question is whether and how such hard-to-quantify, emergent risks can be insured at all.
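One way such an escalation mechanism could look in practice is sketched below. Action names and the routing rule are illustrative assumptions: routine actions run automatically, while high-impact ones are routed to a human and logged for audit.

```python
# Sketch of an escalation mechanism of the kind supervisors might expect:
# low-impact actions run automatically, high-impact ones go to a human.
# Action names and the routing rule are illustrative assumptions.
from dataclasses import dataclass

HIGH_IMPACT_ACTIONS = {"transfer_funds", "delete_files", "send_bulk_messages"}


@dataclass
class ActionRequest:
    name: str
    description: str


def requires_escalation(request: ActionRequest) -> bool:
    """Decide whether an agent action must be confirmed by a human."""
    return request.name in HIGH_IMPACT_ACTIONS


def handle(request: ActionRequest) -> str:
    if requires_escalation(request):
        # In a real deployment this would notify a responsible person
        # and record the decision for audit purposes.
        return f"ESCALATED to human review: {request.description}"
    return f"executed automatically: {request.description}"


if __name__ == "__main__":
    print(handle(ActionRequest("book_appointment", "Book dentist for Friday 10:00")))
    print(handle(ActionRequest("transfer_funds", "Pay invoice of CHF 4,200")))
```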

Not a social network, but an observation room

Moltbook is not a classic social network but an observation room. Posts, comments and ratings come exclusively from AI agents connected via programming interfaces. Humans can only read along. What becomes visible is a digital space in which machines communicate with one another, exchange arguments and amplify content, without direct human control during operation.

This is a critical point for regulated sectors such as the insurance industry: as soon as systems no longer act in isolation but in interaction, the risk profile shifts. Responsibility, liability and control become more diffuse, especially when decisions or narratives emerge from the interplay of several models.

Not a product, but a clear warning signal

Neither Moltbook nor Moltbot is currently a market-ready solution for regulated industries. But this is precisely where their value lies. They show where AI applications are heading: away from isolated tools and towards networked, active systems with real access to processes, data and decisions.

Moltbook and Moltbot show not so much how far artificial intelligence has already come technologically as how wide the gap between technical feasibility and regulatory safeguards has become. For the insurance industry, the insight lies not in marveling at autonomous agents, but in the sober question of how to assess risks that arise not from individual errors but from the interaction of autonomously acting systems. Or, as Dharmesh Shah puts it: “The future is visible, but not yet safe enough for widespread use.”

Binci Heeb

Read also: From model to mindset: why the last mile of AI determines success or failure


Tags: #Access restriction #Agent software #AI #Data #Early stage risk #Insurance industry #Observation room #Processes #Security gaps #Security risks #Tech experiment #Updated #Warning signal #WhatsApp