AI Governance

AI Without Borders, Rules Without Consensus

It was supposed to be a step toward global unity. The G7’s Hiroshima AI Process was meant to signal the dawn of an international consensus on how to govern artificial intelligence. Instead, it’s become a reminder that the world’s biggest powers are not building one system of AI governance, but several. Each reflects a different philosophy of risk, control, and trust. And for compliance and risk leaders, that’s where the real work begins.

Mapping the Future of Risk & AI Governance

As we move further into the digital era, organizations face an increasingly complex landscape of risks, from brand reputation challenges to AI governance and cybersecurity concerns. To help professionals and executives navigate these evolving threats, I am publishing my research categories for 2025/2026, highlighting the areas that will demand attention, insight, and innovation over the next two years.

Global Regulators Rally Behind Trustworthy AI at the Global Privacy Assembly

The world’s top privacy watchdogs are closing ranks on artificial intelligence, signaling that innovation must not come at the expense of privacy. At the Global Privacy Assembly (GPA) in Seoul last week, twenty data protection authorities from across Europe, Asia-Pacific, and North America endorsed a joint statement designed to lay down governance guardrails for AI.

Regulating the Future: America’s AI Plan

In the past few months, AI has exploded into the market, transforming how businesses, organizations, and even everyday consumers operate. It has made its way into governments and the offices of CEOs, with many investing time and resources into advancing its capabilities while still trying to make sense of the rapidly evolving technology. Risk and compliance, barely part of the conversation at AI's debut, have now become a larger talking point, with officials taking notice.

Is the Digital Markets Act Ready for the Age of AI?

The European Commission has opened the floor to anyone with a stake in the digital economy (from startups to tech giants, academics to everyday consumers) to weigh in on how well the Digital Markets Act (DMA) is doing its job. The law, designed to keep the biggest platforms in check and give smaller players a fighting chance, is now under review. And this time, artificial intelligence is front and center.

FTC Sues Air AI Over Deceptive Business Claims, Seeks to Halt Scheme Targeting Small Businesses

The Federal Trade Commission (FTC) has taken legal action against Air AI Technologies, a Delaware-based company accused of deceiving small businesses and entrepreneurs with false promises of rapid earnings and ironclad refund guarantees. According to the complaint, many customers lost significant sums of money, some as much as $250,000, after buying into Air AI’s business coaching programs, access cards, and reseller licenses.

South Korea’s Privacy Regulator Steps In to Bring Order to the Generative AI Wild West

Generative AI may be the tech world's shiny new engine, but as it powers everything from government chatbots to healthcare diagnostics, it has become apparent that these models eat data for breakfast, and a lot of that data is personal. On August 6, 2025, South Korea's Personal Information Protection Commission (PIPC) decided it was time to lay down the law, or at least a roadmap, by releasing its first Guidelines on Personal Data Processing for Generative AI.