AI Governance

Are Organizations Really Leveraging the Potential of AI?

In a recent article, Norman Marks asks a pointed question that is becoming increasingly urgent across boardrooms, risk teams, and C-suites alike: are organizations truly leveraging the potential of AI, or are they still circling the runway while competitors take off? Drawing on new insights from Google AI and McKinsey’s latest 2025 survey, Marks explores whether companies are moving fast enough, cautiously enough, and strategically enough to turn AI from hype into real enterprise value, and what that means for practitioners who risk being left behind.

State AI Rights in the Spotlight as California Responds to Trump EO Leak & NDAA Push

The California Privacy Protection Agency isn’t staying quiet as new federal proposals surface that could sideline state-level protections governing artificial intelligence and automated decisionmaking technology. This week, the agency came out firmly against two separate federal efforts, one in Congress and one inside the executive branch, each of which would make it harder for states to enforce their own guardrails on emerging technologies.

Singapore Sets Out New Guidelines to Strengthen AI Risk Management in Financial Sector

Singapore’s financial watchdog is moving to tighten oversight of artificial intelligence across the financial sector, issuing a new consultation paper that lays out supervisory expectations for how firms should manage the risks of increasingly powerful AI systems.

EY Finds Responsible AI Governance Is Paying Off for Business

As artificial intelligence races deeper into the enterprise, a new global survey from EY suggests the real winners aren’t just those investing the most in AI; they’re the ones governing it best.

Agentic AI Needs an Operational Firewall

For years, AI governance has been built around preventing bad decisions before they happen. Organizations assess training data, test accuracy, evaluate bias, write principles, and sign off on models before they go live. That approach made sense when AI produced insights and humans made the choices that followed.

Operational Risks in AI Lifecycle Management

AI adoption continues to accelerate across industries, promising efficiency gains, enhanced decision-making, and new revenue streams. However, organizations are increasingly exposed to operational risks that, if unmanaged, can result in financial losses, regulatory penalties, reputational damage, and ethical violations. These risks are not confined to deployment—they permeate every stage of the AI lifecycle, from data collection to continuous monitoring. Effective AI governance requires a holistic understanding of these risks and the implementation of proactive risk management strategies.

AI Without Borders, Rules Without Consensus

It was supposed to be a step toward global unity. The G7’s Hiroshima AI Process was meant to signal the dawn of an international consensus on how to govern artificial intelligence. Instead, it’s become a reminder that the world’s biggest powers are not building one system of AI governance, but several. Each reflects a different philosophy of risk, control, and trust. And for compliance and risk leaders, that’s where the real work begins.