EY Finds Responsible AI Governance Is Paying Off for Business
Key Takeaways
- Governance Pays Off: Companies with real-time monitoring and oversight committees are 34% more likely to see revenue growth and 65% more likely to see cost savings than peers with less mature AI controls.
- Losses Are Widespread: 99% reported AI-related financial losses; 64% lost over $1 million; average loss among impacted firms is $4.4 million.
- Top AI Risks: Non-compliance with AI regulations (57%), sustainability setbacks (55%), and biased outputs (53%).
- Leadership Gaps: Only 12% of C-suite respondents matched the right controls to AI risks; CROs scored 11%.
- Citizen Developers and Agentic AI: About two-thirds allow citizen developers, yet only 60% have organization-wide frameworks and only 50% report high visibility into that activity.
Deep Dive
As artificial intelligence races deeper into the enterprise, a new global survey from EY suggests the real winners aren’t just those investing the most in AI; they’re the ones governing it best.
EY’s Responsible AI Pulse Survey found that companies with strong oversight committees and real-time monitoring aren’t just avoiding risks; they’re outperforming. These firms were 34% more likely to report revenue growth and 65% more likely to report cost savings compared to peers with less mature AI controls. The survey, unveiled at the 2025 World Summit AI in Amsterdam, underscores that responsible AI isn’t slowing innovation; it’s fueling it.
Turning Principles into Performance
EY surveyed 975 C-suite leaders across 21 countries and 11 executive roles. The message was strikingly consistent: companies that treat responsible AI (RAI) as a business discipline, not a compliance chore, are reaping measurable rewards.
Four in five respondents said AI adoption has already improved innovation (81%) and boosted efficiency (79%), while roughly half credited it with revenue growth, cost savings, and higher employee satisfaction. Most organizations have implemented about seven of ten recommended RAI measures, and fewer than 2% said they have no plans to act, a sign that responsible AI has entered the corporate mainstream.
Raj Sharma, EY’s Global Managing Partner for Growth & Innovation, said the shift reflects a broader realization that good governance drives competitive advantage.
“The widespread and increasing costs of unmanaged AI underscore a critical need for organizations to embed practices deep within their operations,” Sharma said. “This is not simply a compliance exercise; it’s a driver of trust, innovation, and market differentiation.”
The Price of Neglect
But the data also exposes a darker side to rapid AI adoption. Nearly every company surveyed (99%) reported financial losses from AI-related risks. Nearly two-thirds (64%) said those losses exceeded $1 million, and the average reported hit was roughly $4.4 million.
The most common causes? Non-compliance with AI regulations (57%), sustainability setbacks (55%), and biased outputs (53%). These figures show that governance gaps are not just theoretical risks; they are already materializing as bottom-line losses.
Compounding the problem is what EY calls a “C-suite knowledge gap.” When asked to match the right controls to five AI risks, only 12% of executives answered correctly. Chief risk officers, the very leaders meant to own these risks, fared slightly worse at 11%. The findings suggest many leadership teams are embracing AI before truly understanding how to govern it.
Citizen Developers and Agentic AI: The New Governance Frontier
EY’s survey also highlights the growing challenge of “citizen developers”: employees building their own AI agents using low- or no-code tools. Two-thirds of companies permit some form of this experimentation, but only 60% have company-wide frameworks in place, and just half have high visibility into what these employees are actually doing.
As agentic AI systems, capable of acting independently, begin to appear in the workplace, the stakes get even higher.
“Continuous monitoring and rapid response capabilities are essential,” said Sinclair Schuller, EY’s Americas Responsible AI Leader. “The autonomous nature of agentic AI introduces new risks that can escalate quickly, making robust controls necessary to prevent costly disruptions.”
Despite the risks, EY’s findings carry an optimistic message. Responsible AI appears to be the missing link between investment and impact, the factor separating companies that see tangible returns from those that don’t.
Joe Depa, EY’s Global Chief Innovation Officer, summed it up neatly: “When we have the freedom to explore within a clear, ethical framework, that’s when real innovation happens. It’s not just about growth—it’s about growth that does good.”
That sentiment reframes the conversation. Responsible AI is no longer a philosophical debate or a regulatory checkbox; it’s operational resilience in motion. The organizations that get it right are already proving that governance, done well, doesn’t limit innovation. It sustains it.