The FRC's decision to undertake formal research into AI in corporate reporting marks an important methodological shift in UK financial regulation. Rather than issuing prescriptive rules modelled on international regimes (as the EU AI Act has done), the FRC is gathering evidence through in-depth interviews and anonymous online surveys of public interest entities (PIEs). This approach reflects the principles-based tradition embedded in the ISAs (UK) (the International Standards on Auditing as adopted in the UK) and the FRC's own existing framework for auditor independence and quality control. The research, conducted with Lancaster University, will examine how PIEs are adopting AI: where, for what purposes, and with what governance consequences. This evidence base will then inform the FRC's regulatory approach to AI disclosure, audit standards, and quality monitoring. For accountancy firms, it signals that current ad hoc AI governance may soon face codified expectations.
The FRC's research focus on PIEs is strategically important. PIEs (typically large listed companies, financial institutions, and other entities of systemic significance) operate under enhanced corporate reporting obligations. Under the ICAEW's existing frameworks and the FRC's ISA (UK) standards, auditors must assess the design and operating effectiveness of controls over financial reporting. When AI systems generate or materially influence financial data (forecasts, provisions, valuations), auditors must understand and test those systems. The FRC's research will inform whether the ISAs (UK) should require explicit AI audit procedures, documentation requirements, or independence considerations. This is why platforms such as Trovix Watch already monitor FRC guidance publications closely: auditors and accountancy firms need early warning of emerging standards. The research phase, now underway, is likely to produce draft guidance by 2027.
A second critical dimension is disclosure: what must companies tell investors about their reliance on AI in financial reporting? Current reporting standards (under the UK Corporate Governance Code and related frameworks) do not mandate AI-specific disclosure. The FRC's research will examine whether this is a gap. If CFOs are using AI for cash flow forecasting, bad debt provisioning, or lease valuations, investors arguably need to understand the models' inputs, validation, and failure modes. Under the Consumer Duty (PS22/9) and broader FCA expectations, consumer-facing firms already disclose AI use; listed companies may face similar expectations. The research will inform whether such disclosure should be mandatory, and at what level of detail. Accountancy firms deploying Trovix Audit to document their AI governance practices are effectively preparing for a future in which such documentation becomes externally reportable.
The timing and scope of the FRC's research also reflect international dynamics. The EU AI Act requires risk assessment and transparency for 'high-risk' AI systems, including those used in financial compliance. The UK, post-Brexit, has chosen not to transpose the EU AI Act but to remain principles-based. However, the FRC's research suggests the UK is not entirely passive: it is gathering evidence on which to base future principles-based guidance. This is a more rigorous approach than waiting for incidents to drive rule-making. For the accountancy profession, governed by ICAEW regulations, CCAB standards, and firm-specific compliance with the Money Laundering Regulations 2017 (MLR 2017), this research represents an opportunity to shape standards before they harden into rules. Firms that participate in the FRC's survey or interviews, and that document their current AI governance maturity, are positioning themselves as leaders in what will become a regulated domain.
Source: ICAEW Insights