The Digital Regulation Cooperation Forum—a working group spanning the FCA, ICO, CMA and Ofcom—has published critical guidance clarifying that agentic AI systems do not escape existing UK regulatory frameworks, regardless of their autonomy or sophistication. The DRCF report identifies seven key compliance risk areas, including fragmented accountability, data protection failures, and inadequate human oversight, signalling that regulators expect firms deploying AI agents to apply the same rigour demanded of traditional systems.
For UK regulated firms, this guidance closes a potential interpretive gap. Some organisations have questioned whether autonomous AI agents (systems that operate with limited human intervention across multiple tasks) fall within the scope of existing FCA, ICO and CMA regimes. The DRCF's answer is unambiguous: they do. The FCA's Consumer Duty, which requires firms to deliver fair value and avoid foreseeable harm, applies to AI agents used in pricing decisions, claims triage, and customer advice. The data protection requirements the ICO enforces under the UK GDPR and Data Protection Act 2018 extend to processing carried out by AI systems, including rights to meaningful information about automated decisions and to human review of solely automated decisions with significant effects. The consumer protection rules the CMA enforces prohibit unfair contract terms and aggressive commercial practices, constraints that apply equally to algorithmic decision-making.
The report's emphasis on fragmented accountability is particularly sharp. When an AI agent makes a decision that harms a customer, whether in pricing, underwriting, or claims handling, liability cannot be diffused across software vendors, cloud providers, and internal teams. The regulated firm remains accountable to the regulator and the consumer. This implies that firms must maintain comprehensive audit trails of AI agent decisions, implement human-in-the-loop checkpoints at material decision points, and establish clear escalation protocols for when systems behave unexpectedly. Trovix Audit provides the governance infrastructure to document these decision pathways and demonstrate compliance with regulatory expectations, whilst tools such as Trovix Watch can alert compliance teams to evolving guidance, such as the FCA's anticipated final rules on AI use in insurance and lending.
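To make that governance pattern concrete, here is a minimal sketch, in Python, of what a human-in-the-loop checkpoint with an audit trail might look like. Everything in it is an assumption for illustration: the AgentDecision fields, the £10,000 materiality threshold, and the JSONL log file are hypothetical, and none of it is drawn from the DRCF report or from any Trovix product.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative threshold: what counts as a "material" decision would in
# practice come from the firm's own risk appetite, not from the DRCF report.
MATERIALITY_THRESHOLD_GBP = 10_000


@dataclass
class AgentDecision:
    agent_id: str
    decision_type: str          # e.g. "pricing", "claims_triage", "advice"
    customer_id: str
    outcome: str
    monetary_impact_gbp: float
    rationale: str              # the agent's recorded reasoning, if available


def is_material(decision: AgentDecision) -> bool:
    """Decide whether a decision needs documented human review."""
    return decision.monetary_impact_gbp >= MATERIALITY_THRESHOLD_GBP


def audit_log(decision: AgentDecision, status: str, reviewer: str | None = None,
              path: str = "audit_trail.jsonl") -> None:
    """Append a timestamped record of the decision pathway to the audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "status": status,       # "auto_approved" or "pending_review"
        "reviewer": reviewer,
        **asdict(decision),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")


def execute_with_checkpoint(decision: AgentDecision) -> str:
    """Hold material decisions for a human reviewer; log every pathway."""
    if is_material(decision):
        audit_log(decision, status="pending_review")
        return "held_for_human_review"   # escalation protocol takes over here
    audit_log(decision, status="auto_approved")
    return "executed"
```

The design choice worth noting is that every pathway, automated or escalated, writes to the same append-only log, so the full decision history can be reconstructed for a regulator after the fact.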
The practical implication is that AI agents cannot be deployed as 'set and forget' systems. They require continuous monitoring, regular audits, and documented human review of high-impact decisions. For risk, compliance and legal teams already stretched by competing demands, this raises the question of how firms will resource AI governance at scale. The DRCF's message is clear: that resource burden is not optional; it is the price of deploying AI agents in regulated markets.
Source: ICAEW