The Financial Reporting Council has released guidance specifically addressing the risks and opportunities of generative and agentic AI in audit work. Published on 30 March 2026, it is the first prescriptive framework of its kind from an audit regulator.
AI Governance · Accountancy

The FRC's March 2026 guidance on generative and agentic AI marks a regulatory milestone: the first time a major audit regulator has issued specific, technically grounded direction on how firms should manage the audit quality implications of large language models and autonomous AI agents. Targeted at system managers within audit firms, the guidance addresses the dual reality now facing the profession—AI systems offer genuine potential to enhance audit quality, efficiency, and coverage breadth, but simultaneously introduce novel failure modes: hallucination in evidence analysis, bias in sampling algorithms, inappropriate delegation of professional judgment, and data contamination of audit trails. Unlike the FCA's broader Mills Review or the Lloyd's market-wide toolkit, this is auditor-specific, which makes it both more technically precise and more operationally demanding for implementation.

The guidance sits within the FRC's existing audit supervision framework under the ISAs (UK) (International Standards on Auditing (UK)) and their emphasis on audit quality. Firms using Trovix Watch to track regulatory developments will need to translate the FRC guidance into firm-specific AI governance policies without delay, particularly across three areas: risk assessment (where generative AI tools assist in identifying control gaps), evidence evaluation (where large language models may summarise audit samples), and reporting (where agentic systems might draft audit conclusions). The core obligation remains unchanged: auditors bear ultimate professional responsibility. But the guidance makes explicit that deploying generative AI without demonstrable risk controls breaches the quality standards implicit in ISA (UK) 220. This is not optional sophistication; it is foundational compliance.

Implementation demands rigour at the firm level. Audit firms must document:

1. which AI tools are used at which audit stages;
2. what quality checks operate before AI-generated analysis becomes part of the audit file;
3. how auditors remain 'in control' of AI-assisted decisions;
4. what training system managers and auditors have received on AI limitations;
5. how the firm tests for algorithmic bias in sampling or risk-assessment logic.

Trovix Audit provides the governance dashboard through which system managers can track these controls, document compliance testing, and create the evidence trail that inspectorates now demand. The FRC's inspection regime increasingly prioritises examination of AI governance, making that audit trail non-negotiable.
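As an illustration only, the five documentation points above could be captured as one structured record per AI tool, with a completeness check flagging any undocumented area. The class and field names below are invented for this sketch; they do not reflect any FRC-specified schema or Trovix data model.

```python
from dataclasses import dataclass

# Hypothetical sketch: names are invented for illustration and do not
# represent any FRC-mandated or Trovix schema.
@dataclass
class AIToolRecord:
    tool_name: str
    audit_stages: list[str]        # (1) stages where the tool is used
    quality_checks: list[str]      # (2) checks before output enters the audit file
    oversight_controls: list[str]  # (3) how auditors remain 'in control'
    training_completed: list[str]  # (4) AI-limitation training delivered
    bias_tests: list[str]          # (5) bias testing of sampling/risk logic

def documentation_gaps(record: AIToolRecord) -> list[str]:
    """Return the names of any of the five documentation areas left empty."""
    areas = {
        "audit_stages": record.audit_stages,
        "quality_checks": record.quality_checks,
        "oversight_controls": record.oversight_controls,
        "training_completed": record.training_completed,
        "bias_tests": record.bias_tests,
    }
    return [name for name, entries in areas.items() if not entries]
```

A record with, say, no bias testing logged would surface `["bias_tests"]`, giving the system manager a concrete gap to close before inspection.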

The guidance's emphasis on quality improvement alongside risk mitigation is important. The FRC is not saying 'avoid gen AI'; it is saying 'deploy it under defined controls that protect audit quality'. This pragmatic stance mirrors the FCA's approach in recent fintech guidance and the EU AI Act's emphasis on high-risk governance rather than prohibition. For audit firms navigating both FRC oversight and increasingly complex multinational audits, the framework provides welcome clarity. However, the bar for 'demonstrable risk control' is rising. Firms that treat gen AI deployment as a cost-reduction exercise without embedding independent review mechanisms will face inspection findings. The guidance effectively codifies the profession's expectation: AI augments auditor judgment; it does not displace it.

Source: ICAEW Insights
