20.01.2025

Artificial intelligence (AI) - who is liable if a robot fails?

Imagine that an AI-based robot assists doctors during an operation and makes the wrong decision. As a result, the operation goes wrong and the patient can claim damages or compensation for pain and suffering. A self-driving car decides to veer onto the pavement instead of into a hedge and injures people. Again, the resulting damage must be compensated. But by whom? Who is liable? Doctors or drivers who use artificial intelligence for their own purposes? The manufacturers? Or can the artificial intelligence itself be held liable?


Liability gap for AI-powered machines: Who is responsible?

These questions are difficult to answer from a legal perspective. The German liability system generally ties liability to misconduct that results in damage. However, legally relevant misconduct cannot be committed by just anyone: under German law, only those who are recognised as having legal personality can be held liable.

This is not yet the case for robots and machines, so the misconduct of a human behind the artificial intelligence must be taken into account. But can that human be held liable at all if the AI-based machine makes decisions independently? Or are we facing a major liability gap that will legally block the use and development of robots and artificial intelligence?

In light of these challenging questions, the use of robots and artificial intelligence can give rise to a variety of problems, including issues relating to data processing and privacy, the security and reliability of systems, and compliance with ethical principles. In this article, we look at possible solutions to these issues and at how you can reap the full benefits of robots and artificial intelligence without incurring legal risks.

Manufacturer liability for AI robots: a closer look

First, it is worth taking a closer look at the liability of the manufacturer of the AI-based robot. It stands to reason that they could be liable if the machine misbehaves. After all, it is the manufacturer who is closest to the decision-making process by developing, programming and training the artificial intelligence. German law has long embraced this basic idea that manufacturers are "close to the product", regardless of the type of product. It may sound confusing at first, but manufacturers' liability for product defects must always be assessed according to the same rules, regardless of whether the product is a water bottle or a highly complex self-driving car.

Product Liability Act (ProdHaftG): Basis of liability and requirements for manufacturers

On the one hand, manufacturer liability can arise from the product liability provisions of the Product Liability Act (ProdHaftG). The following requirements must be met (§ 1 (1) ProdHaftG):

  • injury to a protected legal interest (death of a person, injury to body or health, damage to property)
  • due to a defective product
  • resulting in (financial) loss
  • and no statutory exemption under § 1 (2), (3) ProdHaftG applies.

Manufacturer liability for AI products: Difficulties in determining defects and product liability

Given these conditions, manufacturers can be held liable for infringements of legal interests caused by the artificial intelligence they have developed even without fault of their own. This claim is based on the principle of strict liability: it does not matter whether the injuring party is responsible for the violation of legal interests.

In concrete terms, this means that it does not matter whether he or she acted negligently or intentionally. The only decisive factor for liability is that he or she has created a source of danger by placing the defective product on the market. Furthermore, the infringement of legal interests and the damage must be attributable to the product defect (causality).


In practice, however, it is difficult to establish whether a defect exists, because the decision-making processes of an artificial intelligence are hard to trace (keyword: black box), so a programming error is not always easy to prove. This is particularly problematic because it is up to the injured party to prove the defect. In cases of doubt, injured parties are lay people who have no insight into artificial intelligence, the technology behind it, or the production process. In most cases, a claim under product liability law will therefore fail at this hurdle.

Producer liability under § 823 BGB: Differences, reversal of the burden of proof and the development status of AI

However, product liability must be distinguished from so-called producer liability. Producer liability is governed not by the ProdHaftG but by the general claim for damages under § 823 (1) BGB. This claim is not tailored specifically to the producers of a product; it can apply to anyone who infringes one of the legal interests listed there. Where a producer is concerned, however, special criteria are applied within this claim.

Fault-based liability vs. product liability: Requirements for manufacturers of defective products

In contrast to strict product liability under the German Product Liability Act (ProdHaftG), liability under Section 823 (1) of the German Civil Code (BGB) requires fault.

Producer liability rests on the producer having culpably, i.e. negligently or intentionally, placed a defective product on the market. Negligence means failing to exercise the care required in the circumstances (§ 276 (2) BGB). Anyone who knowingly and deliberately places a defective product on the market acts intentionally.

The accusation is usually a breach of the manufacturer's so-called duty to maintain safety (Verkehrssicherungspflicht). Under this duty, anyone who creates or maintains a source of danger of any kind is obliged to take the precautions reasonably necessary to prevent, as far as possible, damage to third parties.

In particular, four groups of cases have emerged for producer liability:

Organisational duties: The manufacturer must organise the business in such a way that errors in production (manufacture), design and instruction are eliminated as far as possible or are detected by inspections.

Duty to instruct: As part of this duty, the manufacturer is obliged to provide information and warnings, for example on how to handle the product, in order to prevent damage.

Duty to monitor: Unlike the above duties, which must be fulfilled before the product is placed on the market, the duty to monitor begins once the product is on the market. If the product subsequently turns out to present a risk, the manufacturer must take all reasonable steps to avert that risk. This applies both to products already on the market and to future production.

Duty to prevent risks: If the manufacturer has identified risks that may arise from the use of the product, it must ensure that they are averted. This may take the form of a warning, a recall, removal from the market or similar measures.

Special features of producer liability: reversal of the burden of proof and challenges with AI development status

However, injured parties face the same problem in producer liability as in product liability: how to prove that the product is defective.

In principle, the injured party must also prove in the context of producer liability that the producer culpably put a defective product on the market. However, this is where a special feature of producer liability comes into play: in contrast to product liability, the burden of proof is relaxed or even reversed.

The burden of proof shifts to producers: it is they who must prove that they did not culpably place a defective product on the market, i.e. that they complied with their safety obligations. The reason is that the injured party rarely has any insight into the production process and would otherwise have little chance of holding the manufacturer accountable for the damage.

Producer liability for AI manufacturers will raise a number of difficult (albeit exciting) questions in the future, particularly because of the state of development of artificial intelligence. Will AI manufacturers be able to argue that a defect is no longer their responsibility because it arose much later, through the self-learning properties of the AI? How does this relate to the duties of care mentioned above? And how can AI producers fulfil their duties of care once the AI is on the market and in use?

It is not yet clear what the answers to these questions will be.

Liability of the user of AI technology: the question of fault and challenges in practice

The user or operator of AI-based technology is not subject to product and manufacturer liability. Depending on the specific case, liability may arise from both contract law principles and tort law.

The following example illustrates the difficulties involved.

A physician relies primarily on an AI-based diagnostic system to assess a suspicious skin lesion. The AI mistakenly classifies the malignant melanoma as benign - its diagnosis is therefore incorrect. The physician relies on the accuracy of the diagnosis and, as a result, the patient suffers harm during treatment.

The doctor's contractual liability is governed by the provisions of the treatment contract (§§ 630a et seq. BGB). Liability in tort is governed by the aforementioned Section 823 BGB.

Liability for the use of AI in medical diagnosis: difficulties in assessing fault

Fault will often be the sticking point. The injured party would therefore have to prove that the person using the AI failed to exercise due care. In practice, this is usually difficult. In principle, this problem exists in the context of both contractual and tort liability.

How is one to assess whether the doctor exercised the required care? Even the doctor cannot look into the AI's decision-making process and determine whether and where something went wrong.

The decisive factor for the question of liability is likely to be whether and to what extent the doctor could or was allowed to rely on the decisions and whether a critical review of the AI-supported diagnosis is necessary.

With regard to contractual liability, the physician should not be able to use this circumstance to escape liability. According to Section 630a (2) of the German Civil Code (BGB), treatment must be carried out in accordance with existing, generally recognised professional standards, unless otherwise agreed.

AI applications are not yet part of recognised medical standards. In this respect, medical staff cannot blindly rely on the results of AI, but are still obliged to (critically) review the results and make their own diagnosis.

These circumstances must also be taken into account in the context of tortious liability.

However, the key difference between liability in contract and liability in tort is who has to prove the breach, causation and fault.

Liability in tort follows the principles described above: the injured party must prove the culpable breach. As mentioned above, this is usually an insurmountable hurdle for the injured party.

On the contractual side, however, the situation is different. In medical malpractice law, § 630h (1) BGB provides for a reversal of the burden of proof: an error on the part of the treating physician is presumed if a general treatment risk that was fully controllable by the physician has materialised and led to the patient's injury. This allocation of the burden of proof is much more favourable to the injured party.

In sum, the liability of AI users can arise from a number of different angles. While tort liability is likely to fail regularly due to the difficulty of providing evidence, contractual liability is more likely to be established.

Liability issues in autonomous driving: Peculiarities of the Road Traffic Act (StVG)

Autonomous driving is probably the best-known example of artificial intelligence and liability. This is mainly due to the fact that more and more automatic systems are taking over control of the car, and it is only a matter of time before AI can take over the car completely - at least from a technical point of view. Legally, the same liability issues will arise as with any other autonomous system. The difference is that liability in road traffic is governed by special legislation in the Road Traffic Act (StVG).

Liability in autonomous driving: Differences between the liability of the driver and the vehicle owner

In 2017, the legislator adapted the Road Traffic Act to the future of autonomous driving. The StVG liability regime distinguishes between the liability of the driver of the vehicle and the liability of the owner (keeper) of the vehicle. The driver is liable under § 18 StVG only where he or she is at fault. The owner, on the other hand, is liable regardless of fault (§ 7 (1) StVG), i.e. in principle whenever a person is killed, a person's body or health is injured or property is damaged in the operation of the motor vehicle.

Liability rules for autonomous driving: Who is legally regarded as the driver?

The 2017 amendment expressly declared highly and fully automated driving functions permissible (§ 1a (1) StVG). The change most relevant to liability, however, is the clarification that a driver within the meaning of the StVG is also the person who activates a highly or fully automated driving function and uses it to control the vehicle. He or she is to be regarded as the driver even when not steering the vehicle with his or her own hands.

Liability consequences for owners, drivers and manufacturers in the case of autonomous driving

What are the liability consequences of this clarification? First, the basic structure of owner and driver liability remains unchanged. In addition, the person who "lets" his or her vehicle drive is also regarded as a driver within the meaning of § 18 (1) StVG and is therefore liable in the same way as the driver of a non-autonomous vehicle: if he or she is responsible for the accident, he or she is liable.

The amendment has therefore clarified that the driver is liable, regardless of the level of automation, if he or she is at fault. This raises the question of whether liability can be excluded where the accident is due to an error by the AI. As a rule, it cannot. As § 1b StVG shows, the driver cannot and may not always rely on the autonomous driving system. Rather, as is usual in road traffic, the driver must remain alert and take back control of the vehicle immediately if the system requests it or if the driver recognises that the conditions for the intended use of the automated driving function no longer exist.

In the future, much will depend on whether and to what extent the error was, or should have been, detectable by the driver. Driver liability will only cease once vehicles are fully autonomous, no human driver is needed at all, and the people on board are merely passengers.

The owner's strict liability is excluded only in cases of force majeure (§ 7 (2) StVG). The term "force majeure" refers to an external, extraordinary and unavoidable event. For autonomous driving systems, the decisive point is that an error by the AI does not constitute an external event, because the AI is directly linked to the vehicle's automatic system. The owner therefore remains liable.

The manufacturer of the autonomous car is not off the hook either: it is liable not under the Road Traffic Act, but under the Product Liability Act and the Civil Code.

In summary, a comprehensive liability regime already exists for autonomous driving systems.

Reforming AI liability law: what the EU is planning

There will be a number of changes to AI liability law in the foreseeable future. This is because there are currently several legislative efforts at EU level that deal with the regulation of AI.

To a large extent, the provisions of the AI Regulation are likely to constitute a protective law within the meaning of Section 823(2) of the German Civil Code (BGB), which means that a claim for damages under Section 823(2) of the BGB may arise in the event of a breach of the provisions of the AI Regulation. However, the AI Regulation itself does not contain any liability provisions.

However, the EU is also working on two directives that deal with liability issues in relation to AI. These are primarily intended to address the issues described above. They also aim to promote the development and deployment of AI by creating legal certainty. In particular, they will enable companies to better assess and insure against their liability risks.

Adapting the EU Product Liability Directive to Artificial Intelligence

On the one hand, the European Product Liability Directive, which also forms the basis of the German Product Liability Act, is to be revised and replaced. The revision of the Directive, which dates from 1985, is intended to modernise strict liability in the context of advancing digitalisation. For example, the new directive extends the scope of liability to artificial intelligence. It will also make it easier for injured parties to provide evidence.

The new directive entered into force in December 2024. Member States have two years to transpose its provisions into national law.

The above-mentioned problems for the injured party in proving that the product was defective will be addressed by the Directive, but not completely eliminated. There will be no complete reversal of the burden of proof in favour of the injured party, so that the injured party will still have to prove the defect in the software and causality. The Directive does, however, provide for some simplification of the burden of proof.

For example, a rebuttable presumption of causation will be introduced: if the injured party can prove that the defendant breached a duty relevant to the damage, and a causal link between the breach and the damage appears reasonably probable, causation between the two is presumed.

The "black box" problem mentioned above is also addressed by the new EU Product Liability Directive. Access to relevant evidence is to be made easier for victims by imposing a duty of disclosure on defendants. Courts will then be able to order the disclosure of certain information on request. However, disclosure will be subject to the condition that claimants have submitted sufficient facts and evidence to make a claim for damages appear likely. To ensure that manufacturers cannot evade this obligation unchallenged, the Directive provides that the product is presumed to be defective if the manufacturer fails to comply with his disclosure obligation.

If this information concerns business secrets, the court must take measures to ensure confidentiality.

The new Product Liability Directive addresses many of the existing problems and provides a remedy in this respect. However, it remains to be seen whether this will lead to far-reaching changes in the area of AI liability. This is because the Directive does not cover precisely those types of damage that are regularly of particular importance for AI. For example, purely financial losses or immaterial damage resulting from manipulation or discrimination remain outside the scope of product liability.

The new AI liability directive

In addition, a completely new directive will regulate AI-specific liability. However, unlike the Product Liability Directive, it will not deal with strict liability, but will regulate cases where damage is caused by AI intentionally or negligently. It is intended to cover a wide range of harms, including invasions of privacy caused by security problems in AI.

The AI Liability Directive does not provide a separate basis for claims by victims. Rather, the Directive provides for rules on the enforcement of a claim for damages.

Similar to the new Product Liability Directive, the AI Liability Directive will include a presumption of causality and a duty of disclosure to make it easier for injured parties to assert their claims. It covers so-called high-risk AI, which is regulated by the AI Regulation. To some extent, it can therefore be seen as a sanction regime for damages resulting from non-compliance with the requirements of the Regulation.

The date of entry into force of the AI Liability Directive is not yet foreseeable. It is also uncertain whether the current draft will be the final version.

In September 2024, the research service of the European Parliament published a study on the AI Liability Directive containing proposals for amendments and improvements. Among other things, the study suggests introducing strict liability for certain AI systems, aligning the directive more closely with the AI Regulation, and defining the AI systems covered more precisely.

The study also raises the question of whether a directive is the "right" legal instrument for AI liability, or whether an EU regulation would be preferable. Owing to its direct applicability, a regulation would harmonise AI liability to a greater extent and create a largely uniform framework across Europe.

It remains to be seen whether and to what extent the European legislator is willing to adapt the AI Liability Directive.

Discussions on the legal personality of AI: obstacles and practical challenges


As the development of AI progresses, more and more questions are being raised about the liability regime that will apply when AI acts or can act fully autonomously in the future.

For example, how will it be assessed and who will be liable if an AI enters into a fully automated contract with another company or even another AI? According to the current state of the law, an artificial intelligence does not have the legal personality of a human being. An AI is not the bearer of rights and obligations. It is not yet possible to have an AI as a contractual partner, nor can the problem be solved by means of the regulations on representation (§§ 164 et seq. BGB) or the liability of a vicarious agent (§ 831 BGB). The reason for this is that they require a person with legal capacity.

Furthermore, an AI does not have a bank account or insurance, which is problematic if damage is caused.

Against this background, the introduction of an "e-person" is being discussed as a solution to this problem. Creating a legal personality for AI systems and applications could eliminate these difficulties. On its own, however, it would not be enough: to create legal certainty, the legislator would have to make extensive changes to the law in almost every conceivable area, with profound consequences for the entire legal system. It is also objected that an e-person would have to enjoy fundamental rights like a natural person, which would be difficult to reconcile with the ECHR and the EU Charter of Fundamental Rights.

An end to the discussion is not yet in sight. For the time being, however, there is no need for such an e-person: the existing liability regime, which looks to the persons behind the AI, is tried and tested and still provides sufficient protection.

Artificial intelligence and liability: a theoretically seamless liability regime

In summary, the German liability system is, at least in theory, seamless. There are sufficient potential bases for claims for damage caused by AI or by the use of AI, and even the new challenges posed by artificial intelligence and robots can be met with a suitable legal basis.

However, the existing liability regime in relation to AI is not without its problems. In particular, injured parties sometimes face major hurdles in the context of provability. Although the EU is attempting to address these through its legislative processes, they have not been eliminated.

Here are the key facts at a glance:

Manufacturer liability: The manufacturer can be held liable under product liability or producer liability. In practice, a claim based on product liability will generally fail because the injured party cannot prove the defect, whereas a claim based on producer liability, with its reversed burden of proof, has realistic prospects.

Liability of the system operator: The system operator will only be liable if he can be accused of misconduct. The standards to be applied will depend heavily on the individual case and the specific use of the AI. The question of whether the user is liable on the basis of contractual provisions should not be underestimated.

Liability of the vehicle owner: For the special case of autonomous vehicles, the StVG offers a comprehensive and seamless liability system. As a rule, both the owner and the driver are liable. Exculpation for AI errors will only be possible in the rarest of cases.

There are several ways to sue different persons and to obtain compensation for material or immaterial damage. Nevertheless, questions that are difficult to answer in practice, such as the distinction between negligent and non-negligent behaviour, make the liability system seem incomplete in places. Many of these ambiguities and loopholes are likely to be removed in the foreseeable future by reform at EU level. This will make it easier for injured parties to pursue their claims and enable companies to better assess and insure against their liability risks.

We can assist you in solving the legal issues and problems associated with artificial intelligence and robotics, helping you to minimise liability risks when using AI-based systems and creating the legal framework for your products and services.

Our lawyers have many years of experience in technology law and keep abreast of the latest developments in the field of artificial intelligence. We offer individual and practical advice tailored to your specific needs and requirements. We are here for you!
