EU Lawmakers Reach Deal to Ease AI Act Compliance While Expanding Ban on Harmful AI Tools

Key Takeaways
  • High-Risk AI Rules Delayed: EU lawmakers agreed to postpone key obligations for high-risk AI systems, with some requirements now applying from December 2027 and August 2028 to allow more time for standards and implementation guidance.
  • Ban on “Nudifier” Apps: The provisional agreement prohibits AI systems used to create non-consensual explicit content and AI-generated child sexual abuse material, covering images, video, and audio.
  • Compliance Simplification Measures: Negotiators moved to reduce overlapping obligations by clarifying that certain AI-enabled machinery products only need to comply with sector-specific safety laws.
  • Expanded Support for Growing Firms: Certain exemptions previously aimed at SMEs will also apply to small mid-cap enterprises to help reduce compliance burdens.
Deep Dive

European Union lawmakers reached a provisional agreement early Thursday on a new package of amendments to the bloc’s sweeping AI Act, striking a compromise designed to ease compliance burdens for businesses while tightening restrictions on some of the most controversial uses of artificial intelligence.

The agreement, reached after overnight trilogue negotiations between the European Parliament and member states, is part of the EU’s broader “digital omnibus” initiative, an effort aimed at simplifying overlapping digital regulations without dismantling the core structure of the laws themselves. In this case, lawmakers sought to recalibrate parts of the AI Act as governments and companies continue grappling with how to implement one of the world’s most ambitious AI regulatory frameworks.

The deal is headlined by a decision to delay the application of several requirements tied to high-risk AI systems, giving regulators, standards bodies, and companies more time to prepare the technical guidance and support measures needed to operationalize the law.

Under the provisional agreement, obligations for high-risk AI systems used in areas such as biometrics, critical infrastructure, education, employment, law enforcement, and border management would now apply from 2 December 2027. Requirements covering AI systems used as safety components under existing EU product safety legislation would begin applying from 2 August 2028.

Lawmakers also agreed to delay requirements related to watermarking AI-generated content until 2 December 2026. Those measures are intended to help users identify and trace AI-generated material online.

The overnight negotiations also produced a ban targeting AI systems used to generate non-consensual sexually explicit content and AI-assisted child sexual abuse material.

The prohibition would apply to AI systems placed on the EU market for the purpose of creating such material, systems released without reasonable safeguards to prevent those outcomes, and deployers using the technology for those purposes. The restrictions cover images, video, and audio content. Companies would have until 2 December 2026 to comply.

The move reflects growing concern among European lawmakers over the rapid spread of so-called “nudifier” applications and other generative AI tools capable of producing exploitative or abusive synthetic media.

Beyond the headline-grabbing restrictions, negotiators also focused heavily on reducing regulatory overlap, a recurring criticism from businesses since the AI Act was first finalized.

The agreement clarifies that certain AI-enabled machinery products will only need to comply with sector-specific safety rules instead of facing duplicate obligations under both the AI Act and existing machinery legislation. Lawmakers also narrowed the definition of what qualifies as a “safety component,” meaning AI functions that merely assist users or optimize performance would not automatically fall into the AI Act’s high-risk category unless a malfunction could create a genuine health or safety risk.

Other changes included new provisions allowing the processing of personal data where strictly necessary to detect and correct bias in AI systems, with safeguards applying to both high-risk and non-high-risk systems. The agreement also extends certain exemptions previously reserved for SMEs to small mid-cap enterprises, a move intended to reduce compliance pressure on growing companies.

Enforcement provisions were adjusted as well, with lawmakers agreeing to streamline oversight of certain general-purpose AI systems through the EU’s AI Office.

Speaking after the negotiations, Arba Kokalari, co-rapporteur for Parliament’s Internal Market and Consumer Protection Committee, said the agreement showed that “politics can move just as quickly as technology,” adding that the deal makes the AI rules “more workable in practice” while helping support startups and scaleups building AI systems in Europe.

Michael McNamara, co-rapporteur for the Civil Liberties Committee, said the agreement balanced simplification measures with stronger safeguards against AI systems that threaten “fundamental rights or human dignity.” He specifically pointed to the inclusion of the ban on nudification apps and AI-generated child sexual abuse material as a key part of Parliament’s negotiating position.

The provisional deal still requires formal approval from both the European Parliament and the Council before it can enter into force. Lawmakers have indicated they intend to finalize the legislation before 2 August 2026, when current AI Act rules on high-risk systems are due to begin applying.
