Insights

Operational Risks in AI Lifecycle Management

AI adoption continues to accelerate across industries, promising efficiency gains, enhanced decision-making, and new revenue streams. However, organizations are increasingly exposed to operational risks that, if unmanaged, can result in financial losses, regulatory penalties, reputational damage, and ethical violations. These risks are not confined to deployment—they permeate every stage of the AI lifecycle, from data collection to continuous monitoring. Effective AI governance requires a holistic understanding of these risks and the implementation of proactive risk management strategies.

What is “Risk”, Really?

In this candid and thought-provoking piece, Norman Marks challenges conventional definitions of risk and risk management, arguing that most frameworks fail to resonate with how real-world decisions are made. Drawing from his decades of executive experience and referencing the ideas of Grant Purdy and Roger Estall, Marks reframes “risk” as simply “what might happen”, a practical, plain-English approach that bridges the gap between theory and management reality.

AI Without Borders, Rules Without Consensus

It was supposed to be a step toward global unity. The G7’s Hiroshima AI Process was meant to signal the dawn of an international consensus on how to govern artificial intelligence. Instead, it’s become a reminder that the world’s biggest powers are not building one system of AI governance, but several. Each reflects a different philosophy of risk, control, and trust. And for compliance and risk leaders, that’s where the real work begins.

The Orchestrated Enterprise: A Risk Leader’s Manifesto

Technology does not create good risk management. Strategy does. Risk, by its nature, is not the enemy. As I often remind listeners on the Risk Is Our Business podcast, the company that avoids risk altogether is already obsolete. The task isn't to eliminate uncertainty; it's to orchestrate it. To take the right risks, at the right time, with purpose, visibility, and confidence.

Boards Still Don’t Ask: The Governance Disease Behind “Mission Critical” Blind Spots

When Delaware’s Chancery Court reminds directors that they have a fiduciary duty to oversee mission-critical risks, it’s diagnosing a deeper governance disease, not just offering abstract legal theory.

From Silos to Systems: GRC Architecture

In his piece, Ayoub Fandi dives into the hidden cracks of modern GRC programs, where siloed tools, mismatched taxonomies, and broken information flows leave organizations vulnerable. Drawing on his engineering background and his work leading GitLab’s Security Assurance Automation team, Fandi makes the case for treating GRC like infrastructure, something that needs careful architecture before automation. Through practical insights and a clear-eyed critique of today’s compliance practices, he reframes GRC as a system that can scale with the speed of modern business.

Full Report: 2025 State of Risk & Compliance

NAVEX partnered with The Harris Poll to survey nearly 1,000 risk and compliance professionals globally about their R&C programs. The survey was conducted in April and May 2025, representing professionals from a range of industries and organization sizes.