No more Shadow AI: How SMEs can Use Artificial Intelligence in Compliance with the Law
24 July 2025
The use of artificial intelligence (AI) has long been part of everyday life in small and medium-sized companies as well. Clear legal guidelines and internal company rules are essential to ensure that AI innovation does not become a risk. Here is what matters when selecting, using and governing AI tools.
Just over a year ago, thebrokernews interviewed David Rosenthal, one of Switzerland’s leading experts in data and technology law and Team Head/Partner at VISCHER AG, about the use of AI from a legal perspective. Here, the lawyer offers further guidance on how small and medium-sized enterprises (SMEs) can use AI in a legally compliant manner.
Why companies need to actively manage AI
Whether ChatGPT, Copilot or industry-specific solutions: employees are increasingly using AI applications on their own initiative, often through private accounts if the company does not provide compliant tools. This creates a risk of losing both control and data. At the same time, AI skills are becoming a key qualification. Companies benefit if they give their teams legally compliant, efficient tools and create suitable internal rules.
Step 1: The right tools – and contracts
The starting point for AI in SMEs is usually the procurement of proven tools, with a particular focus on generative AI such as ChatGPT or Gemini. For greater legal certainty, broader functionality and cost benefits, it is often advisable to use the cloud APIs of large providers instead of individual accounts. Running your own AI models on your own servers, by contrast, requires specialist knowledge and larger investments that are rarely worthwhile.
The following points in the contract with the provider are essential for legally compliant use:
- Data processing agreement (DPA): This should cover all functions and take country-specific requirements into account.
- Confidentiality obligation: Customer data must be protected.
- No training on your data: The company’s own data may not be used to train the provider’s AI.
- License clarity: Companies should be allowed to work freely with AI outputs, without further license costs or restrictions.
Usually, only enterprise versions meet these requirements sufficiently. Appointing an internal “tool owner” helps keep contracts and compliance issues in view at all times.
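To illustrate what “cloud APIs from large providers instead of individual accounts” can look like in practice, here is a minimal Python sketch. It is purely illustrative and rests on assumptions not made in the article: that the company has an enterprise/API contract with OpenAI, that access runs through a single key managed by the tool owner, and that the model name and environment variable are placeholders, not recommendations.

```python
# Minimal sketch: employees reach a generative AI model through one company-managed
# cloud API key instead of private accounts. Model name and env variable are examples.
import os
from openai import OpenAI

# The API key is provisioned and monitored centrally by the internal "tool owner".
client = OpenAI(api_key=os.environ["COMPANY_OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; use whatever the provider contract actually covers
    messages=[
        {"role": "system", "content": "You are an assistant for internal drafting tasks."},
        {"role": "user", "content": "Summarise these meeting notes in three bullet points: ..."},
    ],
)
print(response.choices[0].message.content)
```

Centralising access this way also makes the contractual points listed above easier to enforce, because there is a single agreement and a single technical channel to monitor.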
Step 2: Create internal rules
Companies should issue precise instructions on which AI tools may be used and how. Templates and sample instructions provide a practicable basis for individual adaptations. The most important contents:
- Clear designation of approved tools and the person responsible for them
- Definition of permitted data categories for the use of AI (a minimal sketch of such a check follows after this list)
- Transparency and review obligations: Regular human review of results
- Reporting obligations for critical incidents or new AI applications
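To make the “permitted data categories” rule more concrete, the following sketch shows one way such a rule could be checked automatically before a prompt leaves the company. Everything in it is an assumption for illustration: the function, the two patterns (email addresses and Swiss AHV numbers) and the blocking logic are simplistic examples, not a reliable filter or a recommendation.

```python
# Hypothetical pre-check: forward a prompt to the approved AI tool only if it contains
# no obviously disallowed personal data. The patterns are simplistic illustrations.
import re

DISALLOWED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "Swiss AHV number": re.compile(r"\b756\.\d{4}\.\d{4}\.\d{2}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of disallowed data categories found in the prompt."""
    return [name for name, pattern in DISALLOWED_PATTERNS.items() if pattern.search(prompt)]

prompt = "Please draft a reply to kunde@example.ch about the policy renewal."
violations = check_prompt(prompt)
if violations:
    print("Blocked, prompt contains:", ", ".join(violations))
else:
    print("OK, prompt may be sent to the approved AI tool.")
```

In practice such a check would sit in an internal gateway or plugin rather than with individual employees, and the permitted categories would mirror the written instructions above.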
Under the EU AI Act, AI applications with increased risk are particularly relevant from a regulatory perspective, for example in recruiting or when sensitive personnel data is involved. Such applications are often subject to additional testing and verification obligations.
Step 3: Training and transparency
Only informed employees use AI responsibly. The basis for this is regular training, not only on opportunities and operation, but also on risks such as false information or deepfakes. Power users from within the team can pass on AI knowledge and promote acceptance.
A brief note in the privacy policy that personal data may be processed by AI service providers is often sufficient. Important: The use of AI must be transparent and comprehensible for data subjects as soon as personal data is involved or statutory transparency obligations apply.
Control and updating: AI deployment is a matter for the boss
The dynamic nature of AI technology requires processes and tools to be reviewed at least once a year. Checklists, risk forms and continuous feedback from the team help with this. This ensures that the use of AI remains not only compliant but also economically sensible and efficient.
For SMEs, the legally compliant use of AI is not rocket science, but it is a management task: clear responsibilities, tried-and-tested tools and regular practical checks ensure security and successful use in equal measure.
Binci Heeb
Read also: https://www.thebrokernews.ch/achtung-ki-nutzung-aus-rechtlicher-sicht/