Global Regulators Rally Behind Trustworthy AI at the Global Privacy Assembly

Key Takeaways
  • Global Commitment: Twenty data protection authorities signed the Joint Statement on Trustworthy Data Governance for AI at the Global Privacy Assembly in Seoul.
  • Expanding Coalition: The initiative, first launched in Paris earlier this year, now includes regulators from Europe, Asia-Pacific, and North America.
  • Risks Acknowledged: Authorities flagged privacy violations, bias, disinformation, and “AI hallucinations” as key concerns.
  • Governance by Design: The statement calls for embedding data protection into AI systems, supported by risk management and adaptable frameworks.
  • Cooperative Oversight: Regulators pledged to clarify legal bases, share information, and work with competition, consumer, and IP authorities.
Deep Dive

The world’s top privacy watchdogs are closing ranks on artificial intelligence, signaling that innovation must not come at the expense of privacy. At the Global Privacy Assembly (GPA) in Seoul last week, twenty data protection authorities from across Europe, Asia-Pacific, and North America endorsed a joint statement designed to lay down governance guardrails for AI.

The move expands on an initiative first launched in Paris earlier this year, when France’s CNIL, Korea’s Personal Information Protection Commission (PIPC), Ireland’s Data Protection Commission (DPC), and the UK’s Information Commissioner’s Office (ICO), among others, introduced the Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-Protecting AI. The statement now counts regulators from countries including Australia, Canada, Germany, Italy, the Netherlands, and the United Kingdom among its backers.

A Delicate Balance

The joint pledge reflects a tension familiar to risk and compliance professionals: how to unlock the benefits of AI while keeping its darker sides in check. Regulators flagged risks such as privacy violations, bias, disinformation, and the problem of “AI hallucinations.” Their answer is to embed privacy and data protection principles directly into system design, backed by strong governance frameworks and forward-looking risk management.

The authorities also noted that the complexity of the AI ecosystem, in which the roles of developers, deployers, and data processors increasingly overlap, calls for adaptable regulatory frameworks that evolve alongside technological change.

From Principles to Action

By signing, the twenty DPAs committed to clarifying the legal bases for AI-related data processing, improving information-sharing and security measures, monitoring technical and social impacts, and working with competition, consumer, and intellectual property regulators. The intent is to reduce legal uncertainty for innovators while strengthening privacy protections.

The signing ceremony in Seoul was less about lofty promises and more about shaping how AI is governed globally. While the statement began as a relatively modest initiative among a handful of regulators in Paris, it has now grown into a coalition spanning multiple continents. The authorities hope others will join, creating a stronger international front as AI reshapes industries and societies.

