Global Privacy Regulators Rally Around New Principles for AI Image Generation Tools

Key Takeaways
  • Global Coalition of Regulators Speaks Out: Privacy authorities from more than 50 jurisdictions issued a joint statement warning about the risks of AI systems generating images and videos of real individuals without their consent.
  • Deepfakes and Non-Consensual Imagery Under Scrutiny: Regulators say generative AI tools integrated into widely used platforms are enabling the spread of defamatory content, non-consensual intimate imagery, and other harmful material.
  • Clear Expectations for AI Developers: Organizations developing or deploying AI content generation systems are expected to implement safeguards, ensure transparency, and respond quickly to requests to remove harmful content.
  • Special Protections for Children: The statement emphasizes stronger safeguards and clearer information for children, parents, and educators in response to growing risks of exploitation and cyber-bullying.
  • International Cooperation on Enforcement: Regulators say they will coordinate enforcement, policy, and education efforts to address what they describe as a rapidly emerging global privacy risk.
Deep Dive

Privacy regulators from around the world are stepping up scrutiny of generative AI tools capable of producing realistic images and videos of real people, warning that the technology is already being used in ways that threaten privacy, dignity, and safety.

In a recent joint statement, dozens of data protection authorities said they are increasingly concerned about artificial intelligence systems that generate imagery depicting identifiable individuals without their knowledge or consent. The statement was coordinated through the Global Privacy Assembly’s International Enforcement Cooperation Working Group and reflects a rare show of coordinated concern across the international privacy community.

The regulators noted that while AI technologies can offer meaningful benefits, recent developments in image and video generation tools (particularly those embedded in widely accessible online platforms) have made it far easier to create harmful content involving real individuals.

Among the risks highlighted are the creation of non-consensual intimate imagery, defamatory portrayals, and other forms of synthetic media that can cause reputational damage or harassment. Regulators said the risks are particularly acute when children or other vulnerable groups are involved.

Expectations for AI Developers and Platforms

Alongside the warning, regulators outlined a set of expectations for organizations building or deploying AI systems capable of generating images or other synthetic media.

First, organizations are expected to implement robust safeguards designed to prevent the misuse of personal information and the generation of harmful material, including non-consensual intimate imagery. Regulators stressed that such safeguards are especially important when content may involve children.

Transparency is another central requirement. Developers and operators of AI systems are expected to clearly communicate what their systems can do, what safeguards are in place, how the systems are intended to be used, and what consequences may arise from misuse.

Regulators also emphasized the need for practical remedies when harm occurs. Organizations should provide accessible mechanisms allowing individuals to request the removal of harmful content involving their personal information and respond to those requests quickly.

The statement also calls for enhanced protections for children, including stronger technical safeguards and clear, age-appropriate information that can help children, parents, guardians, and educators understand the risks associated with AI-generated content.

Privacy Law Still Applies

The signatories also reminded organizations that AI content generation systems must operate within existing legal frameworks. That includes compliance with data protection and privacy laws that already govern how personal information may be collected, used, or processed.

In many jurisdictions, regulators noted, the creation of non-consensual intimate imagery may also constitute a criminal offence.

By emphasizing this point, regulators appear to be pushing back against the notion that generative AI technologies operate in a regulatory gray zone. Existing privacy and data protection laws, they say, already apply to many of the risks emerging from these systems.

A Coordinated Global Response

The statement reflects a growing sense among regulators that the risks posed by AI-generated imagery are inherently global. Synthetic media can spread quickly across digital platforms and jurisdictions, making coordinated regulatory action increasingly important.

The joint statement was endorsed by dozens of data protection authorities from across Europe, Asia, the Americas, Africa, and the Middle East, including regulators from Canada, the United Kingdom, France, Singapore, Brazil, Ireland, and, most recently, South Korea.

By issuing the statement collectively, regulators say they intend to strengthen cooperation and share information on how different jurisdictions are addressing the issue. That could include enforcement actions, policy initiatives, and public education efforts.

The goal, according to the statement, is to ensure that innovation in AI continues without undermining privacy, dignity, safety, or other fundamental rights, particularly for the most vulnerable individuals.

