News
November 29, 2024
5 min read

AI Liability in the EU: Business guide to Product (PLD) and AI Liability Directives (AILD)

The EU is establishing an AI liability framework through two key regulations: the Product Liability Directive (PLD), taking effect in 2024, and the proposed AI Liability Directive (AILD). The PLD introduces strict liability for defective AI systems and software, while the AILD addresses negligent use, though its final form remains under debate. This article covers the key points of both regulations and how they will impact businesses.

EU's AI liability directives
Stanislas Renondin

The EU is taking bold steps to manage AI risks through two key regulations. The Product Liability Directive (PLD), which takes effect in December 2024, makes companies legally responsible for harm caused by defective AI and software. Its companion regulation, the AI Liability Directive (AILD), would create additional rules around negligent use of AI systems, though its final form is still being debated.

These regulations put real teeth into AI oversight. Under the PLD, companies throughout the AI supply chain face strict liability, meaning they're responsible for damages regardless of fault. The AILD would add another layer of accountability, though ongoing political discussions may alter its scope. For EU businesses working with AI, understanding and preparing for these rules is crucial to avoid legal exposure.

The EU Product Liability Directive (PLD): Ready to roll

The updated PLD represents a major overhaul of EU liability law, addressing the complexities of digital products, including AI systems. Originally introduced in 1985, the directive has been modernized to reflect technological advancements and the risks associated with AI.

1. Key updates in the EU PLD: AI systems requirements

  • AI Systems as Products: For the first time, the PLD explicitly classifies software, AI systems, and digital components as "products." This ensures that developers and manufacturers are liable for defects, even when dealing with intangible technologies like algorithms.
  • Liability for Continuous Learning: AI systems that evolve through updates or continuous learning are treated as new products when significant modifications occur. This holds developers accountable for post-deployment changes.
  • Broader Compensation Rights: Victims can claim damages for death, personal injury, psychological harm, property damage, and data loss. However, business-only assets are excluded from compensation.
  • Joint and Several Liability: Liability extends to all economic operators, including manufacturers, importers, distributors, and third parties making substantial modifications. Victims can seek full compensation from any responsible party.

2. AI product defectiveness: Legal definition under EU PLD

A product is deemed defective if it fails to meet expected or legally required safety standards. Key factors include:

  • Inadequate design, labeling, or instructions.
  • Failure to address foreseeable risks, such as those posed by updates or recalls.
  • Non-compliance with safety requirements established by EU law.

3. Presumptions to support claimants

The PLD introduces mechanisms to ease the burden of proof for claimants:

  • Presumption of Defectiveness: Applied in cases of non-compliance with safety standards, obvious malfunctions, or failure by defendants to disclose evidence.
  • Presumption of Causation: Particularly relevant for AI-related claims, where proving the causal link between a defect and harm can be excessively difficult due to AI's complexity.

4. EU AI Act and PLD compliance: Regulatory alignment

Non-compliance with high-risk AI obligations under the AI Act can trigger liability claims under the PLD. This creates a strong incentive for businesses to ensure full adherence to EU safety standards.

5. Time limits and defenses under the Product Liability Directive

  • Claims must be filed within 3 years of the claimant becoming aware of the damage, and within 10 years of the product being placed on the market (extendable to 25 years in exceptional cases).
  • Defendants can invoke the development risk defense, arguing that risks were unknowable at the time of release, based on the state of scientific and technical knowledge.

The PLD provides immediate clarity for businesses and a pathway to compensation for victims, setting a high standard for AI accountability.

The AI Liability Directive (AILD): Challenges and opportunities for businesses

The AI Liability Directive (AILD) was designed to complement the PLD by addressing gaps in fault-based liability for software and AI systems. It targets cases where harm results from negligence or misconduct rather than product defects.

AILD latest updates: New EU AI compliance requirements

A recent study by the European Parliamentary Research Service (EPRS) recommended significant changes to the AILD:

  • Broader Scope: Expanding coverage to include general-purpose AI systems and transitioning the directive into a broader Software Liability Instrument.
  • Incorporating Strict Liability: Applying strict liability rules to high-impact and prohibited AI systems, as defined in the AI Act.
  • Harmonizing Standards: Aligning the AILD more closely with the updated PLD and the AI Act to ensure consistency and reduce market fragmentation.

Political hurdles

Despite these recommendations, the AILD faces resistance from several Member States:

  • Complexity and Overlap: Critics argue that the directive adds unnecessary complexity and overlaps with existing national laws and the PLD.
  • Lack of Case Law: Opponents claim there are too few AI-related legal cases to justify a dedicated liability directive.
  • Diverging Views Among Supporters: Even supportive countries, like the Netherlands, express concerns about the AILD’s practicality and alignment with the AI Act.

These challenges have fueled speculation that the directive may be significantly revised—or even abandoned.

PLD vs AILD: Key differences in EU AI Liability framework

EU AI Liability Framework

The PLD and AILD serve complementary but distinct purposes:

  • The PLD focuses on strict liability, ensuring compensation for damages caused by defective AI systems and software without requiring proof of fault.
  • The AILD targets fault-based liability, addressing cases of negligence or failure to meet obligations, particularly for high-risk or general-purpose AI systems.

The PLD is finalized and ready for implementation, while the AILD remains a work in progress, with its scope and future subject to ongoing debates.

Addressing the "Black Box" problem

Both directives tackle the inherent opacity of AI systems, often referred to as the "black box" problem:

  • The PLD introduces presumptions of defectiveness and causation to support claimants when evidence is inaccessible or technical complexity poses challenges.
  • The AILD proposes mandatory disclosure requirements, compelling AI providers to share relevant technical information about their systems.

These provisions reflect the EU’s commitment to balancing fairness for victims with the realities of AI development and deployment.

Business impact of AI Liability in the EU

The evolving liability framework presents both challenges and opportunities for businesses operating in the EU:

  1. Higher Compliance Costs: Companies must invest in documentation, risk management, and transparency systems to meet PLD requirements and prepare for potential AILD obligations.
  2. Navigating Uncertainty: While the PLD offers clear rules, the AILD’s uncertain future means businesses must remain agile and proactive in monitoring regulatory changes.
  3. Global Influence: The EU’s leadership in AI governance is expected to shape liability standards worldwide, prompting businesses to adopt harmonized practices across jurisdictions.

Giskard's solution for EU AI Compliance

As organizations prepare for the comprehensive EU AI regulatory framework, including the AI Act and related liability measures, Giskard's compliance platform offers a strategic solution for businesses navigating these new requirements. Our platform combines deep regulatory expertise with advanced automation to streamline compliance management across all aspects of the EU's AI legislative package. By automating compliance assessment, documentation, and risk evaluation processes, we enable organizations to confidently meet their regulatory obligations while optimizing resources. Our system continuously monitors regulatory developments and generates tailored recommendations, transforming AI compliance from a complex challenge into a manageable process. Through this innovative approach, we empower businesses to focus on growth while maintaining robust compliance with Europe's evolving AI requirements.

Conclusion: Preparing for a new era of AI accountability

The EU is reshaping how companies handle AI risks. The Product Liability Directive, coming in 2024, sets clear rules for holding businesses accountable when AI systems cause harm. The proposed AI Liability Directive aims to fill remaining gaps, though its final shape remains uncertain amid ongoing debate.

For businesses using AI in Europe, the clock is ticking. Meeting PLD requirements isn't optional, and staying ahead of potential AILD requirements will be key to avoiding legal trouble. The message is clear: organizations need to start preparing now for this new era of AI oversight.

At Giskard, we help organizations streamline compliance and ensure their AI systems meet regulatory standards. Contact us today to discover how we can help your company comply with AI regulations.


