The Digital Regulation Cooperation Forum (DRCF) has published a paper identifying seven key regulatory compliance risks that AI agents pose to accountancy firms, signalling that autonomous AI systems fall squarely within existing UK regulatory frameworks. The DRCF, comprising the ICO, FCA, Ofcom, and CMA, warns that fragmented accountability, data protection concerns, and inadequate human oversight become material risks when AI systems operate with limited human intervention, particularly in advisory and compliance functions where professional judgment has historically been mandatory.
The four regulators have jointly confirmed that AI agents deployed by accountancy firms remain subject to existing UK regimes, including Consumer Duty obligations, data protection law, and professional conduct standards set by the accounting bodies. Rather than creating carve-outs for autonomous systems, the DRCF recommends guardrails: robust data controls, human-in-the-loop checkpoints at critical decision points, and clear accountability chains that prevent responsibility from diffusing across technology vendors, compliance teams, and business functions. Automation, in other words, does not diminish regulatory obligations; it merely reshapes how accountability must be structured.
Accountancy firms deploying agentic AI should establish audit capabilities across the full AI lifecycle to demonstrate ongoing compliance with these guardrails. Solutions such as Trovix Watch enable practices to monitor regulatory change in real time as the DRCF's guidance evolves, so that governance frameworks adapt as the regulators publish additional standards and expectations throughout 2026. The DRCF's framework also requires firms to be able to explain how their AI agents maintain Consumer Duty compliance, treating clients fairly, even when decisions are made at machine speed.
The publication underscores that regulators view accountancy as a compliance-critical sector where AI deployment must be accompanied by demonstrable governance, not optimism. Firms that treat the DRCF guidance as a checklist to tick, rather than a governance philosophy to embed, are likely to face enforcement attention as regulators gain experience with AI agent failures and unintended consequences in the sector.
Source: ICAEW