EIOPA Lays Out AI Governance Expectations for Insurance Sector Amidst Growing EU Scrutiny

Key Takeaways
  • EIOPA’s Opinion Clarifies AI Oversight: The Opinion interprets how existing insurance-sector rules apply to AI systems, especially in the context of the AI Act.
  • No New Rules Introduced: Instead, the guidance reinforces existing legislation and emphasizes risk-based, proportionate governance expectations.
  • Focus on Core Principles: Supervisory expectations center around data governance, documentation, transparency, human oversight, and cybersecurity.
  • Promoting Convergence: EIOPA aims to align supervisory practices across the EU and will evaluate convergence two years post-publication.
  • More Guidance to Come: EIOPA plans to issue further thematic analyses and guidance on specific AI systems in insurance.
Deep Dive

The European Insurance and Occupational Pensions Authority (EIOPA) has published a sweeping Opinion on the governance and risk management of artificial intelligence (AI) systems in the insurance sector, offering fresh clarity to national supervisors navigating the intersection of sectoral regulation and the EU AI Act.

While the Opinion does not introduce new rules, it plays a critical interpretive role, bridging existing insurance legislation with the AI Act, which entered into force last year. AI is already reshaping insurance operations, from underwriting and pricing to fraud detection and claims management. EIOPA’s move aims to ensure this transformation proceeds responsibly, with proportionate, risk-based oversight.

“AI will continue to transform insurance, but transformation without trust is not sustainable,” said Petra Hielkema, Chairperson of EIOPA, in the signed Opinion.

A Framework for Responsible Innovation

EIOPA’s Opinion is addressed to National Competent Authorities (NCAs) and articulates supervisory expectations grounded in existing legislation, chiefly the Solvency II and Insurance Distribution Directives. The document emphasizes that AI systems are already subject to broad, technologically neutral governance and risk management principles under current sectoral law.

While the AI Act classifies certain insurance-related applications, such as AI used for risk assessment and pricing in life and health insurance, as high-risk, EIOPA’s Opinion carefully avoids overlap by excluding high-risk and prohibited systems under the AI Act from its scope. Instead, the Opinion offers guidance on how to interpret sectoral rules when overseeing or using lower-risk AI systems.

At its core, the framework outlined in the Opinion is risk-based and proportionate. It emphasizes data governance, fairness, explainability, cybersecurity, and human oversight, principles echoed across both AI and insurance regulatory landscapes.

What Supervisors and Insurers Should Expect

The Opinion outlines several supervisory expectations for insurers using AI systems:

  • Data Governance: Insurers should ensure AI training and testing data is complete, accurate, and appropriate for its intended purpose. Reasonable efforts should be made to remove bias, including potentially unlawful proxy discrimination.
  • Documentation and Record Keeping: Firms are expected to document AI-related processes, from data collection and algorithm selection to decision rationales and performance monitoring. This supports traceability, auditability, and regulatory transparency.
  • Explainability and Transparency: Insurers must be able to explain AI-driven decisions to customers, regulators, and auditors. Where full explainability isn’t feasible, particularly with complex systems, firms are expected to implement guardrails and human oversight.
  • Human Oversight: Internal control systems should cover the full AI lifecycle, with clear roles for governance bodies, compliance, audit, actuarial functions, and data protection officers. Some insurers may also appoint dedicated AI officers or committees.
  • Cybersecurity and Robustness: Insurers should ensure AI systems are resilient to adversarial threats and system vulnerabilities. Performance metrics should be used to detect issues like model drift or data degradation (a simple illustration follows this list).
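
The Opinion does not prescribe particular metrics or tooling, but the drift-monitoring expectation can be pictured with a simple statistical check. The sketch below compares a model’s score distribution between a reference window and recent production data using a population stability index (PSI); the function, thresholds, and synthetic data are illustrative assumptions, not requirements drawn from EIOPA’s guidance.

```python
# Minimal sketch of a drift check for a scoring model.
# All names, thresholds, and data are illustrative assumptions,
# not drawn from EIOPA's Opinion or any supervisory standard.

import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the score distribution in a reference window (e.g., at
    validation time) with the distribution observed in production."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    exp_pct = np.histogram(expected, cuts)[0] / len(expected)
    act_pct = np.histogram(actual, cuts)[0] / len(actual)
    # Floor each bucket share to avoid division by zero.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic example: reference scores vs. the latest production month.
rng = np.random.default_rng(0)
reference_scores = rng.beta(2, 5, size=10_000)
recent_scores = rng.beta(2.6, 5, size=2_000)

psi = population_stability_index(reference_scores, recent_scores)
if psi > 0.25:        # rule-of-thumb thresholds, assumed for illustration
    print(f"PSI={psi:.3f}: material drift, escalate for model review")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate drift, monitor closely")
else:
    print(f"PSI={psi:.3f}: distribution stable")
```

In practice, a check like this would sit alongside accuracy or calibration tracking and feed into the documentation, escalation, and human-oversight processes described above.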

The Opinion also reiterates the need for complaint mechanisms and redress procedures for customers affected by AI-driven decisions.

A Nudge Toward Convergence

One of EIOPA’s central aims is to foster supervisory convergence across the EU’s fragmented insurance market. To that end, it plans to assess national supervisory practices within two years of the Opinion’s publication. Further guidance and thematic analyses may follow based on market developments and supervisory feedback.

The Opinion also draws heavily from prior work by EIOPA’s Consultative Expert Group on Digital Ethics, including detailed examples of fairness metrics and documentation practices in its annex.

As the AI Act’s requirements phase in across Europe, EIOPA’s Opinion is a timely signal to insurers and supervisors alike that aligning innovation with sector-specific safeguards isn’t just encouraged; it’s expected.

