AI in Audit Gets a Reality Check with FRC’s New Guidance

Key Takeaways
  • Guidance with Personality: The FRC’s new AI guidance is principles-based, practical, and grounded in real-world audit scenarios, not regulatory micromanagement.
  • No New Rules, Just Clarity: The document supports innovation while aligning with existing ISQM (UK) and ISA (UK) standards. It’s about how to document AI use, not whether you can.
  • Real-World Use Case: An illustrative example shows how AI could enhance fraud detection in journal entries using unsupervised machine learning or deep learning models.
  • Explainability Matters: Whether AI tools come from in-house teams or third parties, firms must be able to explain what the tool does and how its outputs inform audit work.
  • Supports the Government’s AI Principles: The guidance is aligned with the UK’s national AI framework and is designed to evolve alongside emerging technologies.
Deep Dive

The UK’s Financial Reporting Council (FRC) has recently published its first formal guidance on how artificial intelligence should be used, and documented, in audit. The guidance doesn’t lay down the law. Instead, it offers something arguably more valuable: clarity.

Clarity on how AI can actually be applied, clarity on what audit teams should be thinking about when they deploy it, and clarity on how to document all of it in a way that’s robust but not ridiculous. At a time when AI is hurtling forward and regulators everywhere are scrambling to catch up, the FRC’s approach is refreshingly measured. It’s not about locking the profession down. It’s about opening the door safely.

“AI tools are now moving beyond experimentation to becoming a reality in certain audit scenarios,” said Mark Babington, the FRC’s Executive Director of Regulatory Standards. “When deployed responsibly, they have significant potential to enhance audit quality, support market confidence, drive innovation and ultimately contribute to UK economic growth.”

A Practical Blueprint for a Complex Topic

The guidance includes a vivid hypothetical of an audit firm developing a tool to detect risky journal entries using machine learning or deep learning models. It doesn’t pretend to be the only way forward. Instead, it walks through key decisions, such as what kind of AI model to use, how to make it explainable, and how to document its outputs, giving firms a tangible reference point grounded in the reality of today’s audits.

In one version of the example, the firm chooses a classic unsupervised machine learning model that flags outliers without training on prior data. In the other, it builds a more complex neural network, supplemented with explainability tools like SHAP or LIME to help users understand why a transaction was flagged as suspicious.
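To make the first version concrete: an unsupervised outlier test needs no labelled fraud data at all. The toy Python sketch below (our illustration, not the FRC’s example, which does not specify an algorithm) flags journal-entry amounts using a modified z-score built on the median absolute deviation, a simple and explainable statistical approach.

```python
from statistics import median

def flag_outlier_entries(amounts, threshold=3.5):
    """Flag journal-entry amounts that deviate sharply from the rest.

    Uses a modified z-score based on the median absolute deviation (MAD).
    Unsupervised: no training on prior-period data is required.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        # All entries are (nearly) identical; nothing stands out.
        return [False] * len(amounts)
    # 0.6745 rescales the MAD so the score is comparable to a
    # standard z-score; 3.5 is a commonly used cut-off.
    return [abs(0.6745 * (a - med) / mad) > threshold for a in amounts]

# Typical postings plus one unusually large round-sum entry.
entries = [120.0, 95.5, 110.0, 130.25, 102.0, 50000.0, 99.0]
flags = flag_outlier_entries(entries)  # only the 50,000.00 entry is flagged
```

Because every flag traces back to a single arithmetic score, an auditor can explain exactly why an entry was selected, which is the documentation property the guidance emphasizes; the neural-network version in the second scenario needs add-on tools like SHAP or LIME to recover that same explainability.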

Either way, AI can add real value, but it needs to be deployed thoughtfully. That means understanding the tool, checking your data, combining machine learning with professional skepticism, and above all, documenting it well.

It’s Not About Policing, It’s About Enabling

This isn’t a new rulebook. The guidance doesn’t change any requirements under ISQM (UK) 1 or ISA (UK) 230. What it does is offer a practical framework for how firms can show they’re using AI responsibly, whether that tool was built in-house or brought in from a third-party vendor.

Documentation expectations are laid out across two levels. First, what firms should keep centrally (such as model architecture, data sources, and training materials) and then what needs to go on the audit file (like why the tool was used, what version was run, and how its outputs informed the audit).

For tools that come from outside vendors, the FRC acknowledges the real-world challenge of limited transparency. In those cases, it suggests seeking independent assurance or documenting how governance and performance were validated.

Crucially, the guidance also aligns with the UK government’s five AI principles (safety, transparency, fairness, accountability, and contestability) without dragging audit teams into a regulatory thicket.

Not Just for the Big Six

Although the accompanying thematic review focuses on the UK’s six largest audit firms, the guidance is intended for everyone, from global networks to mid-tier firms exploring how AI can improve efficiency and quality.

It’s also meant to be future-friendly. The FRC is clear that this is not a one-and-done exercise. The field is moving fast, and the guidance will evolve with it.

“We recognize that this field is moving quickly,” Babington said. “The FRC will continue to engage across the profession, both in the UK and internationally, to support innovation and the appropriate use of AI.”

By focusing on principles over prescription, the guidance gives firms the confidence to innovate without flying blind. It says yes, use AI, but do it with your eyes open. Know what your tools are doing, why they’re doing it, and how to explain it if anyone asks.

That may not sound groundbreaking, but for a profession built on documentation, transparency, and trust, it’s exactly what was needed.

The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.
