On 30 March 2026, the Financial Reporting Council published targeted guidance on generative and agentic AI for audit firms—a regulatory first globally. The framework categorises AI risks into three concrete domains: deficient outputs (where AI generates inaccurate or incomplete content), misinterpretation of correct outputs (where users misunderstand valid AI results), and non-compliance with audit methodology (where AI application violates ISA UK or ICAEW standards). This structured taxonomy represents a maturation beyond the FRC's foundational 'AI in Audit' publication from June 2025, which established principles without prescriptive risk controls. Firms using Trovix Watch to track regulatory developments will recognise this as a watershed moment: the FRC has moved from observation to enforcement-ready guidance.
The guidance's three-category framework directly addresses audit quality vulnerabilities that have emerged as AI adoption has accelerated across the profession. Deficient output risk occurs when generative AI hallucinates or confabulates audit evidence, for example when a large language model invents sample transaction details or mischaracterises control effectiveness. Misinterpretation risk materialises when auditors receive valid AI-generated analysis but fail to grasp its limitations, perhaps over-relying on AI risk assessments without applying independent professional scepticism. Non-compliance risk surfaces when firms automate audit procedures using AI in ways that deviate from ISA UK 500 (Audit Evidence) or other core standards. Each category requires distinct mitigation: validation protocols for outputs, interpretive training for teams, and methodology governance frameworks. Firms deploying Trovix Watch gain real-time visibility into FRC guidance updates, enabling rapid policy adaptation.
The timing of this guidance reflects intensifying regulatory scrutiny of AI in professional services. The FRC has positioned it as a quality-assurance mechanism rather than a prohibition, signalling that auditors may use generative and agentic AI provided they implement auditable risk controls. The approach parallels the FCA's Consumer Duty (PS22/9) expectation that firms manage third-party risks transparently, here applied to internal audit operations. Audit firms must now embed AI governance into their systems of quality management under ISQM (UK) 1, documenting how AI tools are validated before deployment, how auditors are trained to interpret outputs, and how AI usage is monitored for compliance drift. Trovix Audit provides the governance dashboard firms need to centralise these control logs and demonstrate compliance during FRC inspections.
The FRC's approach creates a regulatory template that other accountancy bodies and professions are likely to adopt. By naming three discrete risk categories rather than issuing broad warnings, the FRC has given audit firms a compliance roadmap: firms can benchmark their AI policies against the framework, train teams using the three-category model, and report adoption progress in audit committee papers. The guidance's publication also signals that the FRC will monitor implementation, and firms that fail to embed mitigations for deficient outputs, misinterpretation, or non-compliance may face audit quality enforcement action. This makes AI governance a core audit file matter, equivalent in rigour to audit documentation under ISA UK 230. Platforms such as Trovix Watch already parse such guidance releases, enabling firms to update compliance policies within days rather than weeks.
Source: ICAEW Insights