The use of “artificial intelligence” (AI) raises new legal questions. This prompts the question of whether, and to what extent, these questions can already be answered under existing laws, or whether new regulations are necessary for AI. In the following article, we examine what answers are possible and where legal gaps and risks may exist.

1. Legal Requirements for the Use of AI

In EU member states, the legal requirements for the use of AI are primarily governed by the Regulation on Artificial Intelligence, known as the AI Act. Most of the AI Act’s provisions became applicable on February 2, 2025, and August 2, 2025; additional provisions will apply starting August 2, 2026. For high-risk AI systems in certain products, the AI Act was scheduled to take effect on August 2, 2027. However, due to the European Commission’s measures to reduce bureaucracy, this date may still be postponed. The requirements of the AI Act may also be relaxed, particularly for small and medium-sized enterprises, especially with regard to documentation and monitoring.

As an EU regulation, the AI Act applies directly throughout the entire EU. The German implementing law for the AI Act, which contains further organizational details, is nevertheless not expected to enter into force until spring 2026 (the Federal Cabinet approved a draft bill on February 11, 2026, which is now being debated by the German Bundestag; the law was actually supposed to have been in place by August 2, 2025).

The AI Act follows a risk-based approach, meaning that the requirements for AI vary depending on the risk posed by the AI. In this context, “risk” is to be understood not only in a technical sense, but in a broader sense; for example, creditworthiness checks performed by an AI system are classified as high-risk AI systems. There are four levels in total:

  • Prohibited AI systems are those that pose an unacceptable risk to health, safety, or the fundamental rights set forth in the EU Charter of Fundamental Rights. For example, the use of AI to subliminally manipulate human behavior in a way that causes significant harm is prohibited, as are certain surveillance systems. A violation of the ban can be punished with a fine of up to €35 million, and even more – namely, 7% of the company’s global annual turnover if this amount exceeds €35 million.
  • High-risk AI systems are those posing a high risk to health, safety, or fundamental rights as defined in the Charter of Fundamental Rights of the European Union. In addition to the aforementioned creditworthiness checks, border controls and AI-controlled autonomous vehicles, for example, are also considered high-risk AI. A wide range of obligations apply to such AI systems, in particular comprehensive regulations regarding risk management. A violation of these obligations can be punished with a fine of up to €15 million, and even more – namely, 3% of the company’s global annual turnover if this amount exceeds €15 million.
  • AI systems intended for direct interaction with humans, those that generate or manipulate artificial media content (including text), or those that recognize emotions or biometric features must generally meet certain transparency requirements. For example, chatbots must disclose that they are AI systems if this is not obvious from the circumstances (exceptions apply primarily to law enforcement and crime prevention).
  • All other AI systems are unregulated.

As outlined above, fines may be calculated based on percentages of the company’s global revenue. It is expected that the total revenue of affiliated companies will be taken into account if the parent company controls the subsidiary both legally and de facto. If the parent company holds more than approximately 85% of the subsidiary, the European Commission has, in comparable cases, presumed – subject to rebuttal – that de facto control is also exercised.
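The interplay between the fixed caps and the turnover-based percentages described above can be illustrated with a short calculation. This is a simplified sketch: the ceilings are the figures quoted in this article, the turnover values are invented examples, and actual fines are set by the authorities within these ceilings rather than computed by a formula.

```python
# Simplified sketch of the AI Act fine ceilings quoted above: the higher of
# a fixed cap and a percentage of the company's global annual turnover.
# Turnover figures are invented examples; actual fines are set by the
# authorities within these ceilings, not computed by a formula.

def fine_ceiling(global_turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Upper limit of the fine: the fixed cap, or pct of turnover if that is higher."""
    return max(cap_eur, pct * global_turnover_eur)

# Prohibited practices: EUR 35 million or 7% of turnover, whichever is higher.
# 7% of EUR 400m is EUR 28m, so the EUR 35m cap governs:
print(fine_ceiling(400_000_000, 35_000_000, 0.07))
# 7% of EUR 1bn is EUR 70m, which exceeds the cap:
print(fine_ceiling(1_000_000_000, 35_000_000, 0.07))

# High-risk obligations: EUR 15 million or 3% of turnover:
print(fine_ceiling(1_000_000_000, 15_000_000, 0.03))
```

Note that for a corporate group, the turnover figure entered here would, as described, likely be the consolidated group revenue rather than that of the individual entity.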

When acquiring companies, it is therefore essential to carefully assess whether and how the target company uses AI, as there is a risk of assuming significant liability risks; as noted, the acquirer’s revenue is likely to be included in the calculation of the fine.

2. Liability Issues Regarding AI

A very important aspect of AI use is the question of who is liable for damages caused by AI errors – that is, to whom the action or erroneous decision of an AI is attributed. This question is not addressed by the AI Act.

A distinction must be made between two types of liability: contractual liability and so-called statutory liability, in particular product liability and manufacturer liability. Contractual liability applies only between the contracting parties, e.g., seller and buyer. Statutory liability, in particular product liability, applies even in the absence of a contractual relationship. For example, a car manufacturer is liable to a pedestrian if the vehicle injures the pedestrian due to a product defect (e.g. brake system failure).

2.1. Contractual Liability for AI

Contractual liability applies between contracting parties. If one contracting party uses AI to fulfill its obligations, and if the AI causes damage to the other contracting party, liability depends on the specific contractual agreement.

In the absence of a contractual agreement regarding liability, one can often assume that the contracting party is liable for damages caused by the AI to the other party. However, the situation is different if it is explicitly stated or implied as part of the contract that the AI is permitted to make errors. This can currently be assumed, for example, when a contract for the use of an AI chatbot is concluded.

If one were to take a different view, any user of a paid AI chatbot who relies on its responses and acts accordingly could demand compensation for all damages resulting from such actions if the response – as is often the case today – is erroneous.

If, on the other hand, AI systems are sold for a specific purpose – such as controlling a vehicle – it will be reasonable to expect that the AI functions flawlessly in that regard, meaning the AI manufacturer would then be fully liable. However, the legal situation is likely to be somewhat different if a seller who is not the manufacturer sells an AI or an item containing AI – such as a machine – and if the AI then causes damage to the buyer.

In this case, the seller is likely liable only if they could have detected, by exercising due care, that the AI was defective. The standards for due care in the context of AI, however, will be very high.

The seller or manufacturer may be able to protect themselves somewhat against contractual liability by agreeing to a limitation of liability with their customer in the contract. The problem here is that, on the one hand, the customer may not be willing to accept this, and on the other hand, liability limitations in general terms and conditions are only possible to a very limited extent. However, this is an issue that applies not only to AI and which requires legal advice in general.

2.2. Legal Liability for AI

2.2.1. Liability under the AI Act

Statutory liability for AI may arise, on the one hand, from the AI Act. However, the AI Act itself does not generally provide a basis for claims for damages but only regulates the fines that the authorities may impose. Nevertheless, a violation of the AI Act may play a role, particularly in the context of product liability and manufacturer liability.

2.2.2. Product Liability and Manufacturer Liability for AI

Product liability and manufacturer liability concern the liability of the manufacturer (including suppliers and, where applicable, importers) toward anyone who suffers harm. There is currently no separate statutory regulation governing product liability/manufacturer liability for AI. A planned EU directive on liability for AI has been abandoned.

Admittedly, the new general (i.e. not limited to AI) EU Product Liability Directive does address this issue, as it also applies to software. However, on the one hand, this applies only to a subset of possible scenarios, and on the other hand, an EU directive – unlike a regulation such as the AI Act – serves merely as a framework within which EU member states can then enact their own rules.

Germany, however, has not yet enacted any such rules. The relevant law could potentially come into force at the end of 2026 (the implementation deadline is December 9, 2026, though Germany frequently fails to meet deadlines for EU laws; the implementing law for the AI Act will also be delayed by approximately one year), but all products placed on the market before that time remain subject to the existing law.

Under the applicable general product liability laws, the manufacturer is generally liable if the product has a defect. A defect exists if the product does not provide the level of safety that can be expected under all circumstances. This is where the provisions of the AI Act come into play, as it can at least be expected that an AI system complies with the requirements of the AI Act.

However, as described, product and manufacturer liability applies only to manufacturers, including suppliers and, where applicable, importers. Often, however, it will be the case that not the manufacturer, but a third party deploys the AI (in the AI Act, the manufacturer is referred to as the “provider” and the party deploying it as the “operator”). In that case, the following applies:

2.2.3. Liability of the AI Operator

Unlike the manufacturer, the AI operator is generally liable to third parties (toward whom no contractual obligation exists) only in cases of fault, i.e., intent or negligence.

The use of AI is considered negligent in any case if it does not meet at least the requirements of the AI Act. However, even if the requirements of the AI Act are met, the specific use of AI may still be negligent depending on the circumstances of the case, including the measures taken. In this respect, it depends on the individual case, but a prerequisite for liability is always at least negligent conduct on the part of the operator.

The situation is different, however, in the case of operating motor vehicles: In this event, one is liable even without fault, for example, if a car causes harm to a third party. Here, so-called strict liability applies, see § 7 StVG (similar principles apply to animal owner liability).

This strict liability was introduced due to the dangerous nature of motor vehicles (or animals), i.e., because the liable party has “released” something potentially dangerous “onto the general public,” e.g. by registering a car for road traffic (or keeping animals). When AI controls a motor vehicle, this liability applies already today; it equally applies to “normal”, i.e. non-autonomous motor vehicles (without AI).

However, this special liability applies only to motor vehicles. The same applies in other areas where there is a specific strict liability (e.g. air travel). It is conceivable that the legislature could introduce corresponding strict liability (possibly similar to that for motor vehicles, with mandatory liability insurance) for all products containing AI; however, this is not currently the case.

Another question is whether the operator of AI, if required to pay damages to an injured party due to strict liability, can seek recourse against the seller or manufacturer, e.g., based on contractual liability; however, recourse claims may also arise for other reasons.

In other scenarios as well – such as so-called “liability for interference” – the use of AI that causes harm can lead to liability toward third parties even in the absence of fault.

For example, a lower court ruled that an information service using AI must remedy the consequences of incorrect information provided by the AI (in that case, the false statement that a company was insolvent), because the information service had knowingly used the AI.

In principle, this is likely correct: Just as with other tools, the party using the tool must be liable if the tool malfunctions. Another question is whether this can apply generally and in all cases. To date, there is hardly any further case law regarding the liability of AI operators.

3. Contract Conclusion by AI

Another issue in the context of AI is whether an AI can make declarations of intent to conclude contracts. AI lacks the capacity to enter into (legal) transactions. The Federal Court of Justice (BGH) has ruled on declarations of intent made by software, stating that the declaration of intent is ultimately made by the natural person using a computer system as a means of communication.

With regard to AI, the problem arises that the AI makes decisions largely autonomously, and the AI’s declaration of intent is linked to an input made long beforehand (e.g., “Order when stock is running low in a quantity based on the expected order volume”), the specific effects of which are not known to the user in detail. This has consequences in the case of erroneous declarations, as such declarations can indeed be contested, but the prerequisite for this is that there is a discrepancy between what was intended and what was declared (slips of the tongue, spelling errors). This also applies to incorrect transmission by couriers or due to technical errors. Therefore, contestations are in principle also possible in the case of declarations made by AI. However, so-called “error of motive,” such as a mistake regarding one’s own needs, is legally irrelevant.

This raises the question of whether the AI’s “error” should be classified as a technical error, i.e., as an incorrect transmission, or whether one can argue that the AI had a “motive” for its actions, since it (erroneously) assumed that a significant order volume was to be expected, even though, for example, the item manufactured from the ordered product is no longer being sold. If the AI erroneously assumes that an order is necessary, this is ultimately because the AI’s programming regarding order-triggering factors was flawed. That, however, is not a technical error in the strict sense, but rather an error of motive in legal terms: the AI’s “will”, namely to trigger an order, was declared correctly. Therefore, in such cases, contesting the declaration is likely not possible.

4. Copyright and Patent Law Aspects of AI

From a copyright perspective, two aspects of AI must be distinguished. On the one hand, the question arises as to whether results created with or by AI are protected by copyright. On the other hand, there is debate over whether the training (required for the AI to function) using copyright-protected data constitutes a copyright infringement.

4.1. Copyright Protection of Results Created by AI

Not every work product created by a human is protected by copyright. In Germany, and similarly throughout the EU, work products are only protected by copyright if, broadly speaking, they express the author’s individual creative effort. However, if the (human) creator has merely provided the AI with the inspiration for the creation (e.g., “Paint a picture in the style of van Gogh”), the question arises as to whether the result is protected by copyright. As far as can be seen, there is no supreme court case law on this issue yet. The lower courts and, it appears, the legal literature assume that the task description (“prompt”) given to the AI must be so detailed and dominant that the result can ultimately be attributed to the creator in order for copyright protection to arise.

Software is also protected by copyright, albeit through certain special provisions; however, the law expressly requires an individual creative contribution. While any software that is not entirely trivial is protected, the requirement for an independent intellectual – that is, human – creation still applies to software as well, so that ultimately the same principles apply as in “classical” copyright law. Similarly, the ECJ (European Court of Justice) has ruled regarding databases, which are also copyright-protected by specific special provisions, that the requirements for individual creation are the same as in “classical” copyright law.

4.2. Patent Protection for Results Created by AI

The work products of a human being may also be protected by patent law. Unlike copyright, however, a patent application must be filed in order to obtain protection.

Broadly speaking, only inventions that act upon natural forces and go beyond the prior art are eligible for patent protection. This also raises the question of whether inventions created by AI can receive patent protection. The Federal Patent Court (BPatG) appears to hold the view that this is possible. However, the AI cannot be named as the inventor in the application; instead, the person who assigned the relevant task to the AI must be named. The Federal Court of Justice (BGH) has expressly left open the question of whether inventions created by AI can be patent protected, but it has pointed out that what is decisive for patentability is whether the invention goes beyond the prior art, not what specific considerations the inventor has made. Given that inventors typically make use of technical tools for an invention, and that the right formulation of the problem often constitutes the essence of an invention, it seems reasonable that an invention made by AI is patentable (unlike in copyright law, where the creator’s personality as expressed in the work is to be protected), and that the person who operated the AI accordingly is considered the inventor.

5. Training AI With Copyright-Protected Data

A topic currently under intense debate is whether training an AI with copyright-protected data constitutes a copyright infringement. This question is very interesting, but it is likely to play a less significant role in daily practice for most companies.

In principle, under German copyright law, “data mining” is permitted, unless the relevant website contains a machine-readable prohibition against automatic extraction (for data accessible only offline, the restriction need not be machine-readable). The question of whether AI training still falls under the term “data mining” has not been conclusively clarified, but the lower courts generally view it that way.
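One common machine-readable form of such a reservation is a crawler exclusion in a website’s robots.txt file. The following sketch shows how such an exclusion could be checked programmatically using Python’s standard library; the robots.txt content and the crawler names are invented examples, and whether a given robots.txt entry suffices as a legally effective reservation is a separate legal question.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt of a publisher's site. "ExampleAIBot" stands in
# for an AI training crawler; the entries are illustrative, not real.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

def may_mine(user_agent: str, url: str) -> bool:
    """Return True if the given crawler may fetch the URL under this robots.txt."""
    rp = RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())
    return rp.can_fetch(user_agent, url)

print(may_mine("ExampleAIBot", "https://example.com/article"))  # False
print(may_mine("SearchBot", "https://example.com/article"))     # True
```

A site operator who wishes to reserve rights against AI training would thus disallow the relevant crawlers, while a company performing data mining would need to check and honor such entries before extraction.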

However, “data mining” is only permitted if the extracted data that is no longer necessary for the “data mining” is deleted. Since in one case an AI was able to reproduce, in some instances, larger excerpts from data (specifically song lyrics) – meaning the data was likely “memorized” by the AI in that specific case – a German court has already ruled that deletion did not occur, rendering the use unlawful in that specific instance (a British court took a slightly different view regarding images).

Another provision in German copyright law permits data mining for research purposes even if the data explicitly contains a prohibition on automated extraction. In this regard, it has been ruled that a company seeking to use AI commercially cannot invoke this research privilege.

To the extent that software is used to train AI so that the AI “learns” to create software, this will generally only be possible with open-source software, as only the source code for such software is publicly available. Since open-source software may usually be reproduced without restriction under the terms of its license, the training – even if it constitutes reproduction – is then likely permitted. However, the question arises as to whether, in the case of so-called copyleft open-source software, the software produced by the AI must also be placed under a corresponding copyleft license, since copyleft licenses require that modifications also be placed under the copyleft license. In this regard, it will depend on whether the AI actually reproduces the software, as well as on the exact wording of the copyleft license.

6. Conclusion

In conclusion, most legal questions regarding AI can be answered with sufficient certainty under currently applicable laws. With regard to the liability of AI operators, it may be prudent to introduce a form of strict liability, similar to the strict liability already in place for road traffic accidents.

Dr. Wolf Günther / Dr. Meinhard Erben
KANZLEI DR. ERBEN ATTORNEYS

Version: 2026-05-05