The Financial Reporting Council has partnered with Lancaster University to investigate how AI is being adopted in corporate reporting by UK public interest entities. The research, launched in March 2026, will gather baseline evidence to guide the FRC's future regulatory approach to AI in auditing and corporate reporting.

The FRC's decision to commission structured research into AI use in corporate reporting by public interest entities signals a shift from reactive oversight to evidence-based regulatory design. The programme combines in-depth interviews with an anonymous online survey of UK PIEs and their audit firms, aiming to establish a baseline understanding of current AI adoption, use cases, governance frameworks and perceived risks. The approach mirrors the FCA's methodology in developing AI guidance for financial services and reflects a judgement that AI adoption in audit and financial reporting is now material enough to warrant systematic study before rules are written. The stakes are high: the International Standards on Auditing (UK) (ISAs (UK)), issued by the FRC, govern all UK audits, while FRS 102 and company law disclosure requirements set the financial reporting framework. If AI is being used in audit evidence gathering, analytical procedures or materiality assessments without governance frameworks aligned to the ISA (UK) requirements on risk and materiality, audit quality is at risk. Trovix Watch helps accounting firms and PIEs track FRC guidance updates and standards changes, but the research now underway will generate the evidence base for more prescriptive requirements.

The timing of this research is no coincidence. Accounting and audit firms have accelerated AI adoption over the past 18 months: generalist LLMs are being used for research and documentation; specialist audit analytics tools are automating journal entry testing and anomaly detection; OCR and document intelligence are being deployed in evidence gathering. Yet the FRC has published no formal guidance specifically addressing AI governance in audit or financial reporting. This regulatory gap mirrors the one the Treasury Committee has flagged in law and financial services. The FRC's research is designed to close that gap by establishing: (1) how many PIEs and audit firms are using AI, and in which processes; (2) what governance frameworks, if any, are in place; (3) whether AI is materially affecting audit quality, financial reporting accuracy or professional scepticism; (4) what regulatory safeguards are needed. The research is framed as exploratory, but the questions being asked point toward mandatory governance requirements aligned to ICAEW professional standards, ICAEW AML standards (where AI is used in client screening), and ISA (UK) expectations around risk assessment and evidence sufficiency.

The implications for audit firms and PIE finance teams are substantial. Once the FRC publishes guidance informed by this research, firms can expect requirements for: (1) documented AI impact assessments aligned to ISA (UK) 315 (understanding the entity and its environment, including IT systems); (2) policies governing AI use in audit procedures and their alignment with ISA (UK) 500 (audit evidence); (3) transparency on which audit procedures involved AI and how AI output was validated by the auditor, preserving professional judgement; (4) training and competence standards for auditors using AI tools, linked to ICAEW Continuing Professional Development requirements; (5) quality management frameworks under ISA (UK) 220 that specifically address AI risks such as model bias, data dependency and hallucination. These are not speculative requirements: they follow logically from the architecture of the ISAs (UK), which requires auditors to maintain professional scepticism and take responsibility for audit conclusions regardless of how evidence is generated. Trovix Aria can assist fee-earners in accessing audit standards and guidance during AI-assisted audit procedures, but the institutional requirement is for firms to govern AI use within the ISA (UK) framework itself.

For PIE boards and audit committees, the FRC research also carries implications. Listed companies, large private companies and financial institutions subject to PIE reporting regimes will need to satisfy themselves that their finance teams and external auditors are deploying AI responsibly in financial reporting and audit. FRC guidance will likely require audit committees to oversee AI governance in both external and internal audit; to understand the AI tools being used in financial reporting processes; and to assure themselves that controls around AI model accuracy, data quality and human validation are effective. This governance responsibility sits alongside existing audit committee duties under the UK Corporate Governance Code and the Companies Act 2006. The FRC's research, once published as guidance or standards updates, will become part of the regulatory baseline against which audit committees assess management and auditor performance. Given the Treasury Committee's emphasis on third-party risk and critical dependencies (January 2026), the FRC is likely to ask: are audit firms and PIEs dependent on third-party AI vendors, and what are the continuity and security implications? The research infrastructure now being built will inform these questions across the accountancy profession.

Source: ICAEW Insights
