FRC Sets Guardrails for AI in Audit While Keeping Responsibility Firmly Human

Key Takeaways
  • First-Mover Guidance: The Financial Reporting Council has issued what it describes as the first guidance globally from an audit regulator focused on generative and agentic AI in audit.
  • Not a Response to Failures: The guidance is not tied to identified audit deficiencies but instead codifies emerging good practices and provides a forward-looking framework for AI adoption.
  • Audit Quality Risks Defined: The FRC highlights risks including misuse of AI outputs, deficient outputs due to system limitations, and potential misalignment with auditing standards.
  • Human Accountability Reinforced: Despite increasing automation, firms and Responsible Individuals remain fully accountable for audit quality under existing standards such as ISQM (UK) 1.
  • Practical Use Cases Emerging: The guidance includes real-world examples such as summarizing board minutes and reviewing contracts, signaling how AI is already being embedded into audit workflows.
Deep Dive

The Financial Reporting Council has published new guidance aimed at helping audit firms navigate the rapid adoption of generative and agentic artificial intelligence, marking what it describes as the first such guidance from any audit regulator globally.

The guidance is designed to support firms integrating AI tools into audit engagements while maintaining audit quality and regulatory confidence. It arrives amid accelerating uptake of these technologies across the profession, as firms experiment with tools capable of summarizing documents, reviewing contracts, and even automating elements of audit procedures.

Unlike enforcement-driven publications, the FRC emphasized that the guidance does not respond to identified deficiencies. Instead, it seeks to codify emerging good practices, provide a conceptual framework for evaluating AI outputs, and establish a foundation for future regulatory work in this area.

A Framework for Trust in AI Outputs

The guidance focuses on how auditors can develop appropriate confidence in AI-generated outputs. It acknowledges that the level of scrutiny required will vary depending on how a tool is used, reinforcing that professional judgment remains central to the process.

The FRC outlines several key risks tied specifically to audit quality. These include the risk of misinterpreting AI outputs, the possibility of flawed or incomplete outputs due to system limitations, and the danger that AI-enabled methodologies may fall short of auditing standards.

To address these risks, the guidance points to a combination of mitigations, including system design controls, staff training, governance frameworks, and maintaining a “human in the loop” approach to review and oversight.

The regulator also includes practical examples to ground the guidance in real-world use cases, such as using AI to summarize board minutes or review contracts for revenue recognition testing.

Adoption Accelerates, but Accountability Holds

The guidance makes clear that while AI may transform how audits are conducted, it does not alter who is responsible for the outcome.

Firms and Responsible Individuals remain fully accountable for audit quality, consistent with existing standards such as ISQM (UK) 1 and ISA (UK) 220. The FRC underscores that AI is a tool, not a substitute for professional judgment.

Mark Babington, Executive Director of Regulatory Standards at the FRC, framed the guidance as both an enabler and a guardrail.

He noted that AI adoption in audit is accelerating, with agentic AI expected to follow closely behind, and said the guidance is intended to help firms invest in these tools with confidence while managing risks effectively. At the same time, he emphasized that the fundamental principle of the regulatory framework remains unchanged, with human auditors ultimately accountable for audit quality.

Setting the Direction for AI in Audit

This is the FRC’s second publication addressing AI in audit, reflecting a sustained effort to stay ahead of technological change without stifling innovation.

AI offers tangible opportunities to enhance efficiency and potentially improve audit quality, but only if its use is carefully governed, understood, and aligned with existing regulatory expectations.

In that sense, the FRC’s latest move is less about introducing new rules and more about reinforcing an old one in a new context. Technology may evolve quickly, but in audit, accountability remains firmly human.

