AI Without Borders, Rules Without Consensus
Key Takeaways
- No Global Rulebook: Despite the G7’s Hiroshima AI Process and the OECD’s new reporting framework, AI governance remains fragmented and largely voluntary.
- Europe Leads with Law: The EU’s AI Act is the world’s first binding framework, setting strict risk-based requirements that extend beyond Europe’s borders.
- China’s Command Model: Beijing’s measures center on state control, mandatory labeling, and alignment with political and social priorities rather than ethics-based principles.
- Compliance Collisions: Transparency rules in one region can conflict with secrecy or localization mandates in another, leaving GRC teams to manage cross-border contradictions.
- Governance as Architecture: The absence of a global accord is forcing companies to build modular compliance systems that can toggle regional rules on and off like product features and embed governance directly into code.
Deep Dive
It was supposed to be a step toward global unity. The G7’s Hiroshima AI Process was meant to signal the dawn of an international consensus on how to govern artificial intelligence. Instead, it’s become a reminder that the world’s biggest powers are not building one system of AI governance, but several. Each reflects a different philosophy of risk, control, and trust. And for compliance and risk leaders, that’s where the real work begins.
When G7 leaders gathered in Hiroshima in 2023, their message was cooperation over competition. The AI boom was accelerating faster than any rulebook could keep pace with, and governments wanted to set at least a few common guardrails.
The resulting Hiroshima AI Process laid out a set of voluntary principles for trustworthy AI development, later complemented by the OECD’s 2025 reporting framework, which invites companies to disclose how they manage AI risks and meet transparency obligations.
But “voluntary” is doing a lot of work here. The framework carries no legal force. It relies on market incentives and peer pressure, soft governance tools that only work if everyone agrees to play. And as 2025 has shown, not everyone does.
Europe Goes First and Goes Hard
The European Union is not waiting for global consensus. With the AI Act officially in force and its first wave of obligations kicking in this year, the EU has positioned itself as the world’s de facto rule-maker on AI.
The law bans certain uses outright (such as social scoring, manipulative design interfaces, and emotion recognition in workplaces) and imposes sweeping requirements on “high-risk” systems, from conformity assessments to post-market monitoring. It’s the EU’s most ambitious regulatory framework since the GDPR, and it’s already reshaping how multinational firms build, train, and deploy AI systems.
That ambition, however, comes at a cost. Some companies are calling for delays or revisions, arguing the Act could stall innovation. The European Commission disagrees, insisting that human oversight and accountability aren’t negotiable.
Whether Europe’s model becomes the world’s template or a regional experiment depends on what happens next. But its extraterritorial reach ensures no global player can ignore it.
China’s Mirror Image
On the other side of the world, China has built an entirely different kind of governance, one rooted less in ethics and more in authority.
The country’s Interim Measures for Generative AI Services, coupled with new labeling rules that took effect this year, weave AI oversight into the fabric of national security and social stability. Every AI output, from a chatbot response to a generated image, must carry explicit or implicit identification. Datasets and algorithms must be registered with regulators.
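To make that distinction concrete, here is a minimal sketch of the two label types: an explicit notice a reader can see, and an implicit, machine-readable marker carried as metadata. The `label_output` helper and its field names are hypothetical illustrations of the concept, not the actual Chinese technical specification.

```python
from dataclasses import dataclass

@dataclass
class LabeledOutput:
    content: str    # explicit label: visible notice appended to the text
    metadata: dict  # implicit label: machine-readable provenance record

def label_output(text: str, provider: str, model: str) -> LabeledOutput:
    """Attach both label types to a generated text.

    Hypothetical helper illustrating the explicit/implicit split;
    the real rules specify their own formats and placement.
    """
    explicit = f"{text}\n[AI-generated content | {provider}]"
    implicit = {"generator": provider, "model": model, "ai_generated": True}
    return LabeledOutput(content=explicit, metadata=implicit)

out = label_output("The quarterly outlook is...", "ExampleCorp", "demo-llm-1")
print(out.content)
print(out.metadata)
```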
It’s a governance model defined by state control and ideological alignment, not voluntary disclosure. And while foreign observers might call it restrictive, Beijing would argue it’s pragmatic. In a nation where information control is policy, not principle, AI regulation is just another layer of statecraft.
The Global Patchwork Problem
Put simply, there is no “global AI law.” There are overlapping regimes—some soft, some sharp—each with its own values baked in.
The EU focuses on human rights and safety; China on content control and social order; the G7 framework on trust, transparency, and cooperation.
And the U.S.? It’s still relying on a patchwork of executive orders, sectoral guidance, and agency memos.
For organizations operating across borders, this is more than a legal headache; it’s a structural risk. A transparency obligation in one jurisdiction might conflict with a secrecy rule in another. A disclosure requirement in Brussels could violate a data localization law in Beijing.
We’re no longer talking about one AI system with global compliance; we’re talking about twenty different ones sharing the same codebase.
Governance Without a Treaty
The Council of Europe’s Framework Convention on Artificial Intelligence offers a faint outline of what a global agreement might look like, anchored in human rights and rule-of-law principles, but it’s far from universal.
In the meantime, companies are taking matters into their own hands: publishing voluntary transparency reports, hiring “AI governance leads,” and building modular compliance architectures that can switch regional rules on or off like feature toggles.
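What such a modular architecture might look like in practice: a minimal Python sketch in which each jurisdiction’s obligations are a toggleable rule set, and a deployment’s duties are derived from whichever regions are switched on. The rule names and the `ComplianceProfile` class are hypothetical simplifications; real rule sets would map to actual legal obligations, such as specific AI Act articles.

```python
from dataclasses import dataclass, field

# Hypothetical rule sets; real ones would map to actual legal obligations
# (specific EU AI Act articles, Chinese labeling rules, and so on).
@dataclass
class JurisdictionRules:
    name: str
    require_output_labeling: bool = False
    require_conformity_assessment: bool = False
    require_data_localization: bool = False

RULE_SETS = {
    "EU": JurisdictionRules("EU", require_conformity_assessment=True),
    "CN": JurisdictionRules("CN", require_output_labeling=True,
                            require_data_localization=True),
}

@dataclass
class ComplianceProfile:
    """Modular profile: regions toggle on and off like product features."""
    active_regions: set = field(default_factory=set)

    def enable(self, region: str) -> None:
        self.active_regions.add(region)

    def obligations(self) -> list:
        duties = []
        for region in sorted(self.active_regions):
            rules = RULE_SETS[region]
            if rules.require_output_labeling:
                duties.append(f"{region}: label every generated output")
            if rules.require_conformity_assessment:
                duties.append(f"{region}: run pre-market conformity assessment")
            if rules.require_data_localization:
                duties.append(f"{region}: keep covered data in-region")
        return duties

profile = ComplianceProfile()
profile.enable("EU")   # toggle the EU rule set on
profile.enable("CN")   # toggle the Chinese rule set on
for duty in profile.obligations():
    print(duty)
```

The point is not the toy logic but the shape: regional obligations live in data, so adding a jurisdiction becomes a configuration change rather than a rewrite.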
The irony is that the absence of a global accord may actually be spurring innovation in governance itself. The smartest organizations aren’t waiting for alignment; they’re engineering it.
The New Role of GRC
AI governance is no longer just about ethics or policy; it’s about operational design. Managing risk now means mapping regulatory regimes, understanding where they clash, and architecting systems that can flex across jurisdictions. It means turning governance into code, baking accountability, auditability, and transparency into every layer of deployment.
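As one illustration of governance turned into code, the sketch below wraps a model call so that every invocation emits a structured audit record at the deployment layer rather than in an after-the-fact report. The decorator, its field names, and the `resume-screener-v2` system ID are all hypothetical.

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(system_id: str, jurisdiction: str):
    """Wrap a model call so every invocation emits an audit record.

    A sketch of governance-as-code: the audit trail is part of the
    deployment layer itself. Field names are illustrative, not drawn
    from any specific regulation.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "system_id": system_id,
                "jurisdiction": jurisdiction,
                "function": fn.__name__,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }
            result = fn(*args, **kwargs)
            record["completed"] = True
            audit_log.info(json.dumps(record))
            return result
        return wrapper
    return decorator

@audited(system_id="resume-screener-v2", jurisdiction="EU")
def score_candidate(features: dict) -> float:
    # Placeholder for the actual model inference.
    return 0.5

score_candidate({"years_experience": 4})
```

Because the audit trail is attached where the model runs, auditability travels with the system across jurisdictions instead of being bolted on region by region.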
There may never be a single rulebook for AI. But in a fragmented world, the organizations that thrive will be those that treat governance not as a burden, but as an engineering challenge. Because if the world can’t agree on how to govern AI, someone still has to.