The Financial Reporting Council has published the first regulator-led guidance globally on how audit firms must manage generative and agentic AI risks. The framework identifies three distinct risk mechanisms and establishes quality assurance obligations that audit firms must implement.

The FRC's generative and agentic AI guidance, published on 30 March 2026, represents a regulatory first: no other major audit regulator has yet published comparable standards. This positions the UK as the global leader in audit-specific AI governance and sets a template that other regulators—the IAASB, PCAOB, and EU audit oversight bodies—will almost certainly follow. The guidance identifies three discrete risk categories: deficient AI outputs (where the model produces factually incorrect analysis), misinterpretation of correct outputs (where auditors draw wrong conclusions from accurate AI-generated data), and non-compliance with audit methodology (where AI deployment circumvents required procedures). These are not theoretical risks. Audit firms have already deployed generative AI for audit sampling, materiality calculations, and compliance testing—tools like Trovix Aria that help audit teams synthesize evidence. The FRC's framework now establishes quality gates that those deployments must navigate.

The guidance's three-category structure is analytically sophisticated because each category requires a different mitigation. Deficient outputs require model validation and comparison testing. Misinterpretation risk requires auditor training and output-interpretation protocols. Methodology non-compliance requires governance frameworks that enforce ISA (UK) procedures even when AI accelerates testing. For audit firms, this translates into concrete compliance obligations: documenting which AI tools are deployed to which audit areas, establishing validation procedures before outputs can be relied upon, mandating partner review of AI-assisted conclusions, and maintaining evidence that ISA (UK) 240 (fraud) and ISA (UK) 260 (communication with those charged with governance) procedures have been followed even when AI supported those procedures. Firms deploying Trovix Watch for regulatory intelligence will recognize this pattern—each new regulator-issued framework creates new compliance surfaces that must be mapped against existing systems.
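The documentation obligations above can be sketched as a simple deployment register with a quality gate. This is a minimal illustration, not the FRC's prescribed format; the tool names, fields, and gating rule are hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AIDeploymentRecord:
    """One entry in a firm's AI deployment register (illustrative schema)."""
    tool: str                       # AI tool name (hypothetical)
    audit_area: str                 # audit area or engagement phase it supports
    validated: bool = False         # validation completed before reliance?
    partner_reviewed: bool = False  # partner review of AI-assisted conclusions?
    isa_refs: list = field(default_factory=list)  # ISA (UK) procedures evidenced

    def may_rely(self) -> bool:
        # Quality gate: outputs may be relied upon only after validation and
        # partner review, with methodology compliance documented.
        return self.validated and self.partner_reviewed and bool(self.isa_refs)

# Hypothetical example entries; names are illustrative, not real deployments.
register = [
    AIDeploymentRecord("evidence-synthesis-assistant", "substantive testing",
                       validated=True, partner_reviewed=True,
                       isa_refs=["ISA (UK) 240", "ISA (UK) 260"]),
    AIDeploymentRecord("sampling-helper", "audit sampling"),  # not yet validated
]

# Compliance gaps: deployments that cannot yet be relied upon.
gaps = [r for r in register if not r.may_rely()]
```

A register like this makes the FRC's implicit question—"which tools touch which audit areas, and on what evidence?"—answerable on demand rather than reconstructed at inspection time.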

What the FRC guidance does not do is prohibit agentic AI deployment. Instead, it establishes a quality assurance and governance framework that permits firms to capture AI benefits while maintaining audit integrity. This represents a mature regulatory approach: recognizing that AI competence is now a competitive requirement for audit firms, but demanding that the competence be demonstrated through documented quality controls. The guidance will likely influence the FCA's approach to audit conduct through COBS rules and the PRA's expectations for auditors of financial institutions. The implicit regulatory message is that audit quality—measured through ISA (UK) compliance, partner review, and evidence of methodology adherence—cannot be compromised by the speed of AI adoption.

For audit firms implementing the guidance, the practical challenge is translating principles into documented compliance. Firms must establish governance committees (or equivalent structures) with real authority to approve which AI tools can be deployed to which engagement phases. They must invest in training programs that help audit teams understand both AI capabilities and failure modes. They must validate models against known test cases before deploying them to client work. Trovix Watch monitoring of regulatory developments becomes essential because the FRC will almost certainly publish follow-up guidance as agentic AI capabilities evolve; firms that track those developments systematically will adapt faster than those relying on annual review cycles. The firms that succeed will be those treating AI governance as a continuous compliance obligation rather than a one-time implementation project.
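The "validate against known test cases before deployment" step can be sketched as a small pre-deployment harness. Everything here is an assumption for illustration: the model interface, the pass threshold, and the stub materiality calculation (5% of a benchmark) are hypothetical, not FRC-mandated values.

```python
def validate_model(model, test_cases, pass_threshold=0.95):
    """Run the model over cases with known answers; approve deployment only
    if the pass rate meets the threshold (threshold is illustrative)."""
    failures = []
    for case_input, expected in test_cases:
        actual = model(case_input)
        if actual != expected:
            failures.append((case_input, expected, actual))
    pass_rate = 1 - len(failures) / len(test_cases)
    return pass_rate >= pass_threshold, pass_rate, failures

# Illustrative stand-in for a model: a materiality calculation assumed to be
# 5% of a chosen benchmark. Real validation would target the actual AI tool.
stub_model = lambda benchmark: round(benchmark * 0.05, 2)
cases = [(1_000_000, 50_000.0), (250_000, 12_500.0)]

approved, rate, fails = validate_model(stub_model, cases, pass_threshold=1.0)
```

The design point is evidential: the harness produces a recorded pass rate and a list of failures, which is exactly the kind of documented quality control the guidance asks firms to retain before outputs are relied upon.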

Source: ICAEW Insights
