APRA Warns AI Risk Controls Are Falling Behind as Financial Sector Accelerates Adoption
Key Takeaways
- Controls Not Keeping Pace: APRA warns that governance, risk management, assurance and operational resilience frameworks are lagging behind the speed and complexity of AI adoption.
- Cyber Risk Escalation: AI is expanding attack pathways and accelerating the speed and scale of cyber threats, while patching and security practices struggle to keep up.
- Board Oversight Gaps: Many boards are engaged on AI strategy but lack the technical literacy required to effectively challenge and oversee AI-related risks.
- Concentration and Visibility Risks: Heavy reliance on single AI providers and limited transparency into embedded AI systems are constraining firms’ ability to fully assess and manage risk.
- Assurance Models Under Pressure: Traditional, point-in-time assurance approaches are proving insufficient for dynamic AI systems, with continuous monitoring and specialist expertise still limited.
Deep Dive
The Australian Prudential Regulation Authority is urging banks, insurers and superannuation trustees to move faster, and think harder, about how they govern artificial intelligence, warning that risk controls are struggling to keep pace with the technology’s rapid expansion across the financial system.
In a letter to industry published Thursday, the regulator said the core disciplines firms rely on (governance, risk management, assurance and operational resilience) are not evolving quickly enough to match the scale, speed and complexity of AI adoption.
The warning follows a targeted supervisory review conducted late last year, examining how AI is being deployed and managed across APRA-regulated industries. What emerged, according to the regulator, is a sector moving decisively beyond experimentation into operational and customer-facing uses of AI, but without a corresponding lift in oversight and control.
That imbalance, APRA suggests, is beginning to matter.
A Technology Outpacing Its Guardrails
Across the institutions it reviewed, APRA found AI use accelerating in areas ranging from software development and fraud detection to loan processing and customer interaction. The technology is already delivering efficiency gains and new capabilities.
But the governance structures meant to keep those systems in check are still catching up.
Boards, for instance, are showing strong interest in AI’s potential, yet many lack the technical depth needed to challenge management on how those systems behave, where risks sit, and how they should be controlled. In practice, that can mean relying too heavily on vendor summaries without fully interrogating issues like unpredictable model behavior or downstream impacts on critical operations.
More broadly, APRA observed that some firms continue to treat AI as just another layer of technology, rather than something that introduces fundamentally different risks—adaptive models, probabilistic outputs, and new privacy and ethical considerations among them.
A Faster, More Complex Threat Landscape
The regulator’s concerns extend well beyond governance into cybersecurity, where AI is reshaping the threat environment.
AI systems, APRA noted, are opening up new attack pathways through techniques such as prompt injection, data leakage, insecure integrations and the manipulation of autonomous agents. At the same time, the technology is compressing the timeline of attacks, allowing malicious actors to move faster and with greater coordination.
Frontier models, including Anthropic’s Claude Mythos, are expected to intensify that dynamic further by accelerating the discovery of vulnerabilities.
Defenses, however, are not always keeping up. APRA pointed to gaps in areas such as identity and access management for nonhuman actors, the scope of security testing programs, and the speed at which vulnerabilities are patched and remediated.
Therese McCarthy Hockey, an APRA member, said the pace of change leaves little room for complacency.
“The AI revolution presents tremendous opportunities for banks, insurers and superannuation trustees to deliver improved efficiency and enhanced customer services,” she said. “But we cannot be blind to the risks of such powerful technology—whether in our own hands or the hands of those with malign intent.”
Hidden Dependencies and Concentration Risk
Another theme running through APRA’s findings is how dependent many firms are becoming on a small number of AI providers.
Some institutions were found to rely heavily on a single vendor across multiple use cases, often without well-developed contingency plans or tested exit strategies. That creates a concentration risk that may not be fully understood until something goes wrong.
At the same time, AI capabilities are increasingly embedded within broader platforms and developer tools, making it harder for firms to see, and therefore manage, what sits beneath the surface. Limited visibility into training data, model updates or upstream dependencies can restrict an organization’s ability to assess risks around bias, performance and security.
Assurance Models Under Strain
Traditional approaches to assurance are also showing their limits.
APRA found that many firms still rely on point-in-time or sample-based reviews, even as AI systems continue to learn, evolve and degrade over time. Continuous monitoring, capable of detecting issues such as model drift, bias or control breakdowns, remains the exception rather than the rule.
That challenge is compounded by a shortage of technical capability within risk and internal audit functions, particularly when dealing with more complex systems such as automated decision-making tools or AI-generated code.
The result is a growing lag between deployment and oversight, with assurance activities often trailing behind the technology they are meant to evaluate.
No New Rules, But Clear Expectations
For now, APRA is not introducing new regulatory requirements. Instead, it is pointing firms back to existing prudential standards and making clear that they apply just as firmly, if not more so, in an AI-driven environment.
“What we’ve observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren’t keeping up,” McCarthy Hockey said. “While we are not proposing to introduce additional requirements at this stage, we expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it.”
The regulator said it will continue working with government agencies and peer regulators, both in Australia and internationally, as it assesses the broader implications of AI for financial stability and resilience.
The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.

