Using AI agents in a legally compliant manner: From risk to competitive advantage

AI agents represent a significant milestone in the evolution of artificial intelligence. Unlike traditional AI systems, which typically perform individual tasks reactively, AI agents can independently set and achieve goals, develop strategies, and actively implement them within complex processes. While this opens up new opportunities for companies, it also entails considerable legal risks. In this article, we explain how you can harness the potential of AI agents responsibly while complying with the requirements of the AI Act and the general civil and liability law framework.


Understanding the basics of AI agents

AI agents differ from deterministic, rule-based systems and classic AI applications, which are typically confined to specific, individual tasks. While chatbots and rule-based systems merely respond to inputs, AI agents can take the initiative, skip intermediate steps and make and implement decisions independently (e.g. by clicking in a browser or via APIs).

Technically, AI agents are typically based on large language models (LLMs) embedded in a software environment that gives them access to external tools, data sources and interfaces. This enables them not only to generate text, but also to develop strategies independently and execute actions.
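
To make this architecture more concrete, the following is a minimal, simplified sketch in Python of such an agent loop: a language model proposes the next action, the surrounding software executes it via a tool interface, and the result is fed back into the next decision. All function and tool names (llm_decide_next_step, book_appointment, etc.) are illustrative placeholders and do not refer to any specific product or framework.

from dataclasses import dataclass


@dataclass
class AgentStep:
    tool: str        # which tool the model wants to call, or "finish"
    arguments: dict  # arguments the model proposes for that tool


def llm_decide_next_step(goal: str, history: list) -> AgentStep:
    """Placeholder for a call to a large language model.

    A real system would send the goal and the tool-call history to an LLM
    and parse its structured answer; this stub finishes after one call.
    """
    if not history:
        return AgentStep(tool="book_appointment", arguments={"date": "2025-07-01"})
    return AgentStep(tool="finish", arguments={})


def book_appointment(date: str) -> str:
    """Placeholder for an external tool, e.g. a booking API."""
    return f"Appointment booked for {date}"


TOOLS = {"book_appointment": book_appointment}


def run_agent(goal: str, max_steps: int = 5) -> list:
    """Plan-act loop: ask the model for the next step, execute it, feed the result back."""
    history = []
    for _ in range(max_steps):
        step = llm_decide_next_step(goal, history)
        if step.tool == "finish":
            break
        result = TOOLS[step.tool](**step.arguments)
        history.append((step.tool, step.arguments, result))
    return history


print(run_agent("Book a dental appointment for next week"))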

Consequently, the distinction between a mere tool and a system capable of acting semi-autonomously becomes blurred.

For business practice, this means that questions of responsibility must be clarified at an early stage. Who is responsible for the decisions made when a system selects its own courses of action? How can it be ensured that these decisions are legally permissible?

Practical introduction:

  • Gain an overview of the systems currently on the market.
  • Learn to distinguish between applications that are merely 'tools' and those that can act with genuine autonomy.
  • Bring decision-makers and specialist departments up to the same level of knowledge.


Determining suitable areas of application for AI agents

AI agents have many potential applications, ranging from customer service and booking simple services to contract review and negotiations. They can handle more complex tasks than conventional AI systems. However, new risks also arise, such as undesirable actions and sources of error that are more difficult to control and trace.

Nevertheless, AI agents are not automatically considered high-risk AI systems under the AI Act. As with conventional AI systems, their legal classification under the AI Act largely depends on their specific area of application. For example:

  • A candidate selection tool used in recruitment is typically classified as high-risk AI, as employment-related systems are expressly listed in Annex III of the AI Act.
  • A pure booking assistant, on the other hand, is generally not considered a high-risk system.

For companies, this means that opportunities primarily arise where processes are standardised and errors remain manageable. However, significantly stricter regulatory requirements apply in sensitive areas such as human resources or decisions on contract conclusion.

Recommendation:

  • Conduct a company-wide inventory of potential applications.
  • Prioritise according to risk, clearly distinguishing between high- and low-risk applications.
  • Launch pilot projects in manageable and controllable areas.

Define governance and responsibilities for AI agents

The greatest danger does not lie in the technology itself, but in unclear organisational structures within which AI agents are used. The AI Act therefore requires clear roles and responsibilities. Companies must understand whether they act as providers or as deployers (operators) within the meaning of the AI Act, or whether they fulfil several roles in a specific case, as each role entails different obligations.

Key steps:

  • Appoint responsible persons, such as AI, data protection or IT officers.
  • Document who uses the agent and for what purpose.
  • Establish binding risk analyses and contingency plans.

Compliance and risk management

Misjudgements in the use of AI agents can have serious consequences. The AI Act provides for fines of up to €35 million or 7% of global annual turnover, whichever is higher. Companies must therefore carefully assess whether an agent should be classified as a high-risk system and take appropriate protective measures.

Action required:

  • Classify and document each system carefully.
  • Incorporate approval checkpoints for critical decisions.
  • Define technical limits that the agent must never exceed (one way to combine both safeguards is sketched after this list).
  • Establish regular audits and continuous monitoring.
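
To make the two technical safeguards more tangible, the following Python sketch wraps every action an agent proposes in a checking layer: hard limits are enforced unconditionally, and critical actions above a threshold are routed to a human reviewer. The thresholds, action names and approval mechanism are purely illustrative assumptions; appropriate limits follow from the individual risk assessment, not from this sketch.

# Illustrative safeguards: hard technical limits the agent can never exceed,
# plus a human approval checkpoint for critical actions. All values and names
# are assumptions for demonstration purposes only.

APPROVAL_THRESHOLD_EUR = 1_000    # above this value a human must approve
HARD_LIMIT_EUR = 10_000           # above this value the action is always blocked
BLOCKED_ACTIONS = {"terminate_contract", "delete_customer_data"}


class ActionBlocked(Exception):
    """Raised when a proposed action violates a hard limit."""


def request_human_approval(action: str, value_eur: float) -> bool:
    """Placeholder for a real approval workflow (ticket, e-mail, four-eyes check)."""
    answer = input(f"Approve '{action}' worth {value_eur:.2f} EUR? [y/N] ")
    return answer.strip().lower() == "y"


def execute_with_safeguards(action: str, value_eur: float, execute) -> str:
    """Run an agent-proposed action only if limits and checkpoints allow it."""
    if action in BLOCKED_ACTIONS or value_eur > HARD_LIMIT_EUR:
        raise ActionBlocked(f"'{action}' ({value_eur} EUR) exceeds hard limits")
    if value_eur > APPROVAL_THRESHOLD_EUR and not request_human_approval(action, value_eur):
        return "rejected by human reviewer"
    return execute()


# Example: the agent proposes a 2,500 EUR purchase order.
print(execute_with_safeguards("create_purchase_order", 2_500, lambda: "purchase order created"))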

Train and sensitise employees

The reliability of an AI agent depends on the people who use it. The AI Act requires companies to ensure sufficient AI literacy among their staff. This involves more than technical knowledge; it also requires an understanding of risks such as discrimination, black-box decisions and automation bias, i.e. blind trust in machine results.

Checklist:

  • Conduct training for all relevant user groups (e.g. lawyers, IT and management).
  • Ensure awareness of risks and possible manipulation.
  • Establish interdisciplinary training that brings together law, technology and management.


Contracts and liability

The question of whether AI agents can legally enter into contracts is a particularly sensitive one. Under German law, a declaration of intent always requires human will. As an agent does not have its own will, classifying it as a representative (Section 164 of the German Civil Code) is not legally convincing.

However, an agent is more than a mere messenger of declarations, because it does not act purely deterministically: it makes its own decisions and has its own scope for decision-making.

In practice, this means that contracts concluded by AI agents are legally uncertain, at least under German and European law. Companies should therefore avoid having contracts with significant business impact or implications for customers concluded fully automatically by an AI agent. A final manual check by a responsible person remains indispensable in most cases.

Caution is also advised with regard to liability: errors made by an AI agent are attributed to the company. An unfavourable contract or a discriminatory decision in the application process can cause significant economic losses. It is therefore crucial to define clear responsibilities and liability rules in advance, both in internal governance and in external contracts.

Balancing the opportunities and risks of AI agents

AI agents offer companies a wide range of advantages. They can significantly increase the efficiency of organisational processes by taking on routine tasks and freeing up employees to focus on more important activities. This increases productivity and facilitates the scaling of business processes.

AI agents are also available around the clock, acting as reliable points of contact in customer service, which is increasingly crucial for the competitiveness of many companies.

However, these advantages should not obscure the fact that AI agents also pose considerable risks. The lack of transparency in decision-making, known as the black box phenomenon, is particularly problematic. Companies risk discriminatory patterns or faulty logic going undetected.

Added to this is the danger of the automation trap, whereby employees accept the machine's decisions without checking them and become dependent on a system whose weaknesses they can no longer recognise. External manipulation also poses a serious risk; for instance, a compromised agent could deliver incorrect results without being noticed.

Against this backdrop, it is crucial to view the deployment of AI agents as an ongoing process rather than a one-off decision. Opportunities and risks must be regularly reassessed and technical and organisational measures must be continuously adapted. Only in this way can companies ensure that the benefits outweigh the risks.

AI agents are not just a future consideration, but an immediate challenge for business practice. Those who address this issue early on will lay the foundation for a competitive advantage. At the same time, it is important to implement the legal and organisational requirements consistently.

A balance must be struck between care and speed: companies that blindly trust the technology risk substantial fines and liability consequences. Conversely, those who check, train and document in good time can take advantage of the opportunities while remaining aware of the risks.

Schedule your initial consultation

Describe your situation to us in a no-obligation phone call, and our lawyers will work with you to find the best solution.

Schedule consultation