AI Governance

South Korea’s Privacy Regulator Steps In to Bring Order to the Generative AI Wild West

Generative AI may be the tech world’s shiny new engine, but as it powers everything from government chatbots to healthcare diagnostics, it has become apparent that these models eat data for breakfast, and a lot of that data is personal. On August 6, 2025, South Korea’s Personal Information Protection Commission (PIPC) decided it was time to lay down the law, or at least a roadmap, by releasing its first Guidelines on Personal Data Processing for Generative AI.

India’s Central Bank Unveils Framework for Responsible AI in Finance

The Reserve Bank of India (RBI) has thrown open the conversation on how artificial intelligence should shape the future of Indian finance and, importantly, how it shouldn’t.

Imagine an AI-Enabled World of Risk Management

In the latest piece from Norman Marks, the veteran governance, risk, and audit thought leader takes a bold leap into the near future, imagining how AI could fundamentally reshape decision-making, risk management, and the role of internal audit. Through a vivid crystal-ball scenario, Marks explores what happens when AI becomes a trusted partner for executives, operations, and assurance functions alike.

EIOPA Lays Out AI Governance Expectations for Insurance Sector Amidst Growing EU Scrutiny

The European Insurance and Occupational Pensions Authority (EIOPA) has published a sweeping Opinion on the governance and risk management of artificial intelligence (AI) systems in the insurance sector, offering fresh clarity to national supervisors navigating the intersection of sectoral regulation and the EU AI Act.

From Automation to Autonomy: Orchestrating GRC with Agentic AI at the Helm

The future of GRC is not simply digital; it’s decisively autonomous. It’s not just about processing power or clever dashboards. It’s about cognitive capability woven into the operational fabric of the organization—fluid, contextual, and self-directed. It’s orchestrated intelligence with agency.

This Risk Is Scary

In this article, Norman Marks breaks down the double-edged nature of AI adoption in corporate legal departments, highlighting both the remarkable opportunities for productivity and the underappreciated risks that could undermine sound judgment, legal integrity, and even corporate stability. Drawing on recent industry surveys and personal observations, Marks makes a compelling case for why risk and audit professionals must step up and get involved.

Bridging the AI Chasm with Governance that Thinks Ahead

Across boardrooms and back offices, the promise of AI is animating strategy sessions and shaping budgets. Everyone wants in on the productivity gains, the streamlined operations, the predictive insights. But behind the excitement lies a quietly growing tension: how do you govern a technology that can improvise, evolve, and sometimes go off-script?