Who Is Liable When Artificial Intelligence Makes Mistakes?
Lawmakers and insurers struggle with the risks of an innovative technology.
The risk of harm caused by artificial intelligence (AI) is growing. Accidents involving autonomous vehicles regularly make the news, but the dangers extend far beyond road traffic as AI spreads into virtually every area of life. In medicine, misdiagnoses can result from data biases or incorrect outputs generated by AI systems, and surgical errors may occur if operating robots malfunction. In the financial sector, faulty AI-based assessments by banks or consulting firms can lead to major losses. In industry, misdirected warehouse robots can damage stored goods. This raises the question: who bears responsibility for such damage?
Is existing liability law sufficient?
This question is becoming more urgent as more businesses adopt AI. By 2024, 55 percent of Swiss SMEs had already integrated AI into their workflows, according to a labour market study conducted by the research institute Sotomo on behalf of AXA Switzerland.
Clear liability rules are crucial for the development and application of new technologies like AI. A lack of such rules may prevent companies from adopting AI, stop developers from working on it, and deter insurers from offering appropriate liability insurance products.
Switzerland currently has no specific legislation governing liability for AI, but that does not mean there is a legal vacuum. Swiss liability law, with its general clauses, is fundamentally capable of addressing new technological developments and providing solutions. Liability is therefore determined under general contractual and non-contractual principles. In addition, there are specific liability rules that do not depend on fault – for example, the liability of motor vehicle owners, which extends to self-driving cars, or that of aircraft operators, which covers autonomous drones.
The Product Liability Act also offers protection. It governs the non-contractual, fault-independent liability of manufacturers for personal injury or property damage caused by defective products intended for private use. Software and AI systems are widely regarded as products under the Product Liability Act, although this interpretation is disputed and has not yet been confirmed by the Swiss Federal Supreme Court. Whether the law needs to be updated – especially in light of the new EU Product Liability Directive – remains an open question.
It is important to note that AI, unlike natural persons or corporations, cannot itself be held legally liable. It is a technical tool used by individuals or legal entities, who bear responsibility for the associated risks. Possible liable parties include the user of the AI, its manufacturer or developer, or the importer – depending on their role in the events that caused the damage.
Complex and opaque
Still, liability for damage caused by AI poses particular challenges, raising the question of whether non-contractual liability rules should be revised. The complexity and opacity of AI systems can make causation and fault difficult to prove: the injured party must demonstrate that unlawful conduct directly caused the damage.
The European Commission's proposed AI Liability Directive addresses these challenges. The draft requires providers of high-risk AI systems to disclose available evidence to the injured party in the event of damage. For all AI systems, it would introduce, under certain conditions, a rebuttable presumption of a causal link between the AI system and the damage.
Switzerland is also examining whether changes to liability law are necessary. In February 2025, the Federal Council decided to pursue a national regulatory approach for AI: Switzerland plans to ratify the Council of Europe's Convention on Artificial Intelligence and make the necessary legal adjustments. According to the federal office in charge, adopting rules along the lines of the proposed EU AI Liability Directive could make civil claims easier to enforce. This, however, requires further analysis, and the outcome of the EU's deliberations on the directive remains to be seen.
Legislators are not the only ones who may need to act. Companies, too, should review and, if necessary, adapt their contractual frameworks – especially regarding liability clauses.
Liability insurance can protect the assets of AI developers, importers, or users against claims by injured parties. Coverage is governed by the specific contractual terms and conditions.
Insurers search for solutions
Even where conventional liability insurance products do not explicitly mention "artificial intelligence", coverage must be assessed in each individual case. If the general terms and conditions exclude claims related to software delivery, damages involving AI may not be covered. The same applies to exclusions for cyber events, depending on how such events are defined in the policy.
Some insurers have begun offering specific liability products for algorithm and performance risks associated with AI. However, these products are still in their infancy. To avoid disputes, it is essential to define clear conditions, limitations, and exclusions for coverage.