Global Regulators Draw a Line on AI Deepfakes as Privacy Risks Escalate
Key Takeaways
- Unprecedented Global Alignment: 61 data protection authorities issued a joint statement, signaling a coordinated response to AI deepfake risks
- Deepfakes Treated as Personal Data: Regulators clarified that AI-generated images and videos of identifiable individuals fall under data protection laws
- Clear Risk to Individuals: Authorities highlighted harms including non-consensual intimate imagery, defamation, and exploitation, particularly involving children
- Stronger Expectations for Companies: Organizations must implement safeguards, ensure transparency, enable rapid content removal, and address child-specific risks
Deep Dive
In a rare show of coordination, 61 data protection authorities from across the globe have issued a joint statement warning that AI systems capable of generating realistic images and videos of real people are creating a fast-moving and deeply personal category of risk. The concern is not just about misinformation or synthetic media in the abstract. It is about harm that lands squarely on individuals, often without their knowledge and increasingly without meaningful recourse.
The statement, coordinated through the Global Privacy Assembly, reflects growing unease among regulators that tools once framed as creative or experimental are now being used to produce sexualized, violent, or otherwise harmful depictions of real people, including children.
When Fiction Becomes Personal Data
What makes this moment particularly consequential is how regulators are framing the issue.
Even when an image or video is entirely fabricated, if it depicts an identifiable person, it can still qualify as personal data. That position brings AI-generated deepfakes directly into the scope of data protection laws across multiple jurisdictions, transforming what might once have been dismissed as a content moderation issue into a compliance and enforcement concern.
Tobias Judin of Norway’s data protection authority captured the tone bluntly, calling the functionality behind some of these systems “unacceptable” and warning that the resulting content can be used for bullying and exploitation.
Recent attention around AI tools like Grok has only sharpened that concern. The ability to generate and publish synthetic images directly onto widely used social platforms has compressed the distance between creation and harm. What once required technical skill can now be done in seconds.
A Technology Moving Faster Than Its Guardrails
Regulators are careful not to dismiss AI outright. The statement acknowledges its potential benefits. But the subtext is clear. The pace of deployment, particularly when embedded into social platforms, has outstripped the safeguards needed to prevent misuse.
The risks outlined are concrete and familiar: non-consensual intimate imagery, defamatory portrayals, and content used to harass or coerce. For children and other vulnerable groups, the exposure is even more acute.
What is different now is the scale and accessibility. These systems are no longer confined to niche communities or specialized tools. They are increasingly available to anyone with an internet connection.
What Regulators Expect Now
The statement does not introduce new rules so much as it draws a sharper boundary around existing ones.
Organizations developing or deploying AI image and video generation tools are expected to build in protections that prevent misuse from the outset. That includes taking steps to stop users from creating harmful or intimate imagery of others without consent, particularly where children are involved.
Transparency is another focal point. Companies are expected to clearly communicate what their systems can do, what safeguards are in place, and what constitutes acceptable use. The implication is that ambiguity is no longer defensible.
Equally important is redress. Individuals must have accessible and effective ways to request the removal of harmful content that involves their personal data, and those requests must be handled promptly.
Regulators also draw attention to a point that carries legal weight. In many jurisdictions, the creation of non-consensual intimate imagery is already a criminal offense. The technology may be new, but the underlying conduct is not beyond the reach of existing law.
A More Coordinated Regulatory Posture
Perhaps the most telling element of the statement is not just what it says, but how it came together.
This is the first time such a large and diverse group of data protection authorities has aligned on a multilateral statement addressing AI-generated imagery. That level of coordination suggests a shift in how regulators are approaching emerging technology risks.
Rather than acting in isolation, authorities are signaling a willingness to share information, align enforcement approaches, and respond collectively where harms are global in nature.
For organizations, that changes the calculus. Regulatory expectations are no longer confined to a single jurisdiction. They are converging.
The GRC Report is your destination for the latest in governance, risk, and compliance news.