Deloitte Survey Finds Enterprises Charging Ahead With AI Agents While Governance Struggles to Keep Up
Key Takeaways
- Governance Still Immature: Only 21% of organizations surveyed by Deloitte said they have mature governance frameworks in place for agentic AI.
- Adoption Is Accelerating Quickly: Nearly three-quarters of respondents expect their organizations to be using AI agents at least moderately by 2027.
- Operational Risks Are Mounting: Deloitte warned that poorly governed AI agents could create cybersecurity, compliance, reputational, and operational risks.
- Successful Organizations Are Moving Deliberately: Companies seeing stronger outcomes are scaling gradually while building governance capabilities alongside deployments.
- Cross-Functional Oversight Is Emerging: IT, legal, compliance, and business leaders are increasingly being brought together to oversee AI agent policies and escalation processes.
Deep Dive
For all the excitement surrounding AI agents and their potential to reshape enterprise operations, a new survey from Deloitte suggests many organizations are still building the guardrails long after the engines have already started running.
The firm’s latest 2026 State of AI in the Enterprise report found that companies across the globe are rapidly expanding their ambitions around agentic AI: autonomous or semi-autonomous systems capable of making decisions, taking actions, and carrying out tasks with limited human intervention. Yet even as adoption accelerates, governance frameworks appear to be lagging well behind.
According to the survey, which gathered responses from 3,235 business and IT leaders across 24 countries in the Americas, Europe, Asia Pacific, and the Middle East, only 21% of organizations said they currently have a mature governance model in place for agentic AI.
That leaves the overwhelming majority of enterprises navigating one of the fastest-moving technology shifts in recent memory without what Deloitte describes as foundational oversight capabilities.
Those missing capabilities include clearly defined boundaries that determine which decisions AI agents can make independently and which require human approval, systems capable of monitoring agent behavior in real time, and audit trails that allow organizations to trace the actions and decision paths taken by AI systems.
The governance gap comes as organizations signal plans for much broader deployment over the next two years. Deloitte’s survey found that by 2027, 74% of respondents expect their organizations to use AI agents at least “moderately.” Within that group, 23% expect extensive use of the technology, while 5% anticipate AI agents becoming fully integrated into core business operations.
The findings highlight a growing reality inside large organizations. The conversation around AI is no longer centered solely on experimentation or proof-of-concept projects. Increasingly, enterprises are preparing to embed AI agents into day-to-day operational workflows, customer interactions, and business decision-making processes.
That shift brings obvious opportunities for efficiency and scale. It also introduces new layers of operational, governance, cybersecurity, and reputational risk.
Deloitte warned that AI agents operating without proper oversight or centralized controls could make unseen mistakes, expose sensitive information, work against organizational objectives, or even create openings for cyberattacks. In customer-facing environments, poorly governed systems could also damage trust or trigger reputational fallout if agents behave unpredictably or inappropriately.
And those risks, the report suggests, become significantly harder to manage once organizations move from limited pilots into large-scale production deployments.
What stands out in the findings is that the organizations reporting stronger outcomes with agentic AI are not necessarily the ones moving the fastest. Instead, Deloitte said successful adopters tend to scale more deliberately, often beginning with lower-risk use cases while simultaneously building governance and oversight capabilities around the technology.
That approach frequently includes establishing cross-functional governance structures that bring together IT teams, legal departments, compliance leaders, and business executives to create policies, oversee performance, and manage escalation procedures when issues emerge.
The report arrives as organizations across industries face mounting pressure to operationalize AI while regulators, risk professionals, and security leaders continue debating how these systems should be governed and monitored in practice.
For many enterprises, the challenge is quickly becoming less about whether AI agents will play a significant role in future operations and more about whether organizations can build the oversight structures needed to safely support them at scale.
Deloitte’s findings suggest the answer to that question may ultimately determine which companies are able to turn AI agents into a long-term advantage and which discover too late that speed without governance comes with a cost.
The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.