The Financial Reporting Council has published world-first guidance on the specific risks that generative and agentic AI systems pose to audit quality. UK audit firms must now implement documented risk mitigations to meet FRC supervisory expectations.
AI Governance · Accountancy & Audit

On 30 March 2026, the FRC published Generative and Agentic AI Guidance—the world's first audit regulator framework to explicitly address the distinct risks posed by these AI categories. The guidance identifies three core risk families: deficient outputs (where AI systems generate incorrect or incomplete audit evidence), misinterpretation (where auditors misread or over-rely on AI-generated findings), and methodology non-compliance (where AI-augmented audit procedures deviate from ISA UK standards or firm methodology). This specificity matters because both generative AI (like large language models used for document summarization) and agentic AI (autonomous systems that perform multi-step audit tasks) operate differently from traditional audit software, and their failure modes are not captured by legacy quality control frameworks. The FRC's guidance signals that auditors cannot treat these tools as 'plug-and-play' enhancements to existing processes—they require documented governance and evidence trails.

Audit firms are already integrating generative and agentic AI into core work streams: document review, analytical procedures, compliance testing, and evidence evaluation. The pressure to do so is competitive; audit efficiency directly impacts partner profitability and client fees. However, ISA UK 220 (revised) requires firms to establish and maintain a quality management system that includes detailed review procedures, and the FRC's new guidance translates that standard into specific AI-era obligations. Trovix Audit provides the governance dashboard firms need to document their AI risk assessments and control testing, but the real complexity lies in proving to FRC inspectors that auditors are actively validating AI outputs rather than passively accepting them. The guidance does not prescribe specific tools or methodologies—it is principles-based, consistent with FRC culture—but it does require firms to demonstrate that they have identified AI-specific risks and implemented commensurate controls.

The three risk categories identified in the FRC guidance map directly onto operational failures auditors have already encountered. Deficient outputs occur when AI summarizes a complex contract or lease incorrectly, causing auditors to miss a material accounting implication; misinterpretation arises when auditors assume AI's identification of a 'non-routine transaction' is correct without independent validation; methodology non-compliance happens when an agentic system executes a test in a way that deviates from ISA UK sampling standards. What Trovix Watch monitors is the regulatory expectation layer: the FRC will now inspect for evidence that firms have designed procedures to catch these failures before they propagate into audit opinions. This requires not just tool governance but also auditor training, quality review protocols, and documented exception handling.
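One way to picture the documentation the FRC will look for is a simple risk register keyed to the three categories named in the guidance. The sketch below is illustrative only: the category names come from the guidance, but the `RiskEntry` structure, control wording, and workpaper references are invented placeholders, not anything the FRC prescribes.

```python
from dataclasses import dataclass, field

# Category names follow the FRC guidance; everything else is a hypothetical
# structure a firm might use to evidence its AI-specific controls.
FRC_RISK_CATEGORIES = (
    "deficient_outputs",
    "misinterpretation",
    "methodology_non_compliance",
)

@dataclass
class RiskEntry:
    category: str
    controls: list = field(default_factory=list)       # e.g. "second reviewer validates AI summaries"
    evidence_refs: list = field(default_factory=list)  # workpaper references showing the control operated

def register_gaps(register: dict) -> list:
    """Return the FRC risk categories with no documented control or evidence."""
    gaps = []
    for cat in FRC_RISK_CATEGORIES:
        entry = register.get(cat)
        if entry is None or not entry.controls or not entry.evidence_refs:
            gaps.append(cat)
    return gaps
```

A register covering only deficient outputs would be flagged as incomplete for the other two categories, which is the kind of gap an inspection team could probe.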

For audit partners, the FRC guidance has immediate implications for engagement risk assessment and resource planning. If an engagement involves high-volume generative AI use (e.g., AI-assisted document review on a complex acquisition due diligence), the firm must now formally evaluate the three risk categories, document the controls in place, and allocate experienced reviewer time to validate AI outputs. Smaller firms may find this compliance overhead prohibitive unless they adopt platforms such as Trovix Watch that automatically track regulatory expectations and flag gaps in documented risk mitigation. The guidance also implies that FRC inspection teams will now ask detailed questions about AI use during audits of public-interest entities—questions that firms without clear governance will struggle to answer credibly. The competitive advantage will accrue to firms that treat generative and agentic AI not as cost-reduction tools but as audit-quality enhancers that require as much rigour as the substantive audit procedures themselves.
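Allocating experienced reviewer time to validate AI outputs implies some policy for how many outputs a human must check. A minimal sketch of such a policy is below; note that the FRC guidance is principles-based and sets no sampling rates, so the rates and risk tiers here are invented placeholders a firm would calibrate itself.

```python
import math

# Hypothetical review rates by engagement risk tier; not FRC-mandated figures.
REVIEW_RATES = {"low": 0.05, "medium": 0.15, "high": 0.30}

def human_review_sample(ai_output_count: int, engagement_risk: str) -> int:
    """Minimum number of AI-generated outputs an experienced reviewer validates.

    Always at least one item when AI was used at all, so that acceptance of
    AI output is never entirely passive.
    """
    if ai_output_count == 0:
        return 0
    rate = REVIEW_RATES[engagement_risk]
    return max(1, math.ceil(ai_output_count * rate))
```

On a high-risk engagement with 1,000 AI-assisted document summaries, this policy would require validating 300 of them; even a low-risk engagement with a handful of outputs still requires at least one human check.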

Source: ICAEW Insights
