The UK Treasury Committee has publicly criticised the FCA, PRA and Bank of England for avoiding AI-specific regulation, warning that their 'wait-and-see' approach risks consumer harm. Over 75% of UK financial services firms already use AI systems.

In January 2026, the Treasury Committee published a damning assessment of UK financial regulators' approach to AI oversight. The Committee noted that over 75% of UK financial services firms now deploy AI systems—ranging from chatbots and underwriting algorithms to portfolio analytics and fraud detection—yet the FCA, PRA and Bank of England maintain a principles-based, AI-agnostic regulatory framework. The criticism is sharp: while the regulators argue that existing frameworks (SYSC, COBS, ICOBS, and PRA Rulebook standards) already address AI risks, the Treasury Committee contends that this approach is insufficient precisely because AI systems operate at scale and speed that human-supervised controls cannot match. The Committee's January 2026 report explicitly recommends AI-specific stress testing and practical guidance by end-2026—a deadline that forces the regulators to move from abstract principles to concrete supervisory expectations.

The tension between principles-based and AI-specific regulation reflects a deeper question of regulatory philosophy. The FCA's existing SYSC framework requires firms to maintain 'systems and controls' commensurate with their risks, and technically this encompasses AI systems. In practice, however, most firms interpret this as requiring vendor due diligence and governance checklists, not real-time AI performance monitoring or threshold-based circuit breakers. Platforms such as Trovix Watch parse regulatory announcements and expectations daily, revealing a pattern: while the FCA publishes occasional AI-related feedback via Dear CEO letters and supervisory statements, there is no consolidated regulatory expectation on, for example, how financial services firms should govern large language model (LLM) outputs under the Consumer Duty (PS22/9). The Treasury Committee's pressure is a political signal that this gap is no longer acceptable.
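To make the distinction concrete, the kind of threshold-based circuit breaker mentioned above can be sketched in a few lines. This is a hypothetical illustration, not an FCA-prescribed control: the class name, window size, and error threshold are all assumptions chosen for the example.

```python
from collections import deque

class ModelCircuitBreaker:
    """Hypothetical threshold-based circuit breaker for automated decisions.

    Tracks a rolling window of model outcomes; if the observed error rate
    exceeds the configured threshold, the breaker trips and further
    automated decisions should be routed to human review.
    """

    def __init__(self, window: int = 100, max_error_rate: float = 0.05):
        self.window = window
        self.max_error_rate = max_error_rate
        self.outcomes = deque(maxlen=window)  # most recent decisions only
        self.tripped = False

    def record(self, is_error: bool) -> None:
        """Record one decision outcome and re-evaluate the error rate."""
        self.outcomes.append(is_error)
        if len(self.outcomes) == self.window:
            rate = sum(self.outcomes) / self.window
            if rate > self.max_error_rate:
                self.tripped = True  # escalate: suspend automated decisioning

    def allow_automated_decision(self) -> bool:
        return not self.tripped
```

The point of the sketch is the contrast with a governance checklist: a control like this reacts to live model behaviour within minutes, rather than at the next periodic vendor review.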

The Committee's call for AI-specific stress testing is particularly significant because it mirrors emerging EU AI Act expectations. Financial services firms operating in both jurisdictions now face fragmented guidance: the EU AI Act classifies certain AI systems as 'high-risk' and mandates testing and documentation; the FCA has published no equivalent classification. This creates a competitive disadvantage for UK firms and leaves consumers protected by different standards depending on whether they access services via UK or EU entities. Moreover, as generative AI systems become embedded in customer-facing functions—mortgage recommendations, investment advice, complaints handling—the risk of systematic consumer harm increases. Trovix Watch can monitor FCA supervisory statements and industry feedback, but firms need formal guidance on what 'adequate' AI governance looks like in practice.

By end-2026, the FCA and PRA must publish concrete AI guidance or risk losing credibility with Parliament and the financial services industry. The Treasury Committee's January report is not advisory—it reflects cross-party consensus that current regulatory passivity is untenable. The guidance will likely mirror elements already present in Trovix Brief's intake automation frameworks: documented AI decision-making, explainability standards, performance monitoring, and escalation protocols for anomalous outputs. Firms that have already invested in AI governance infrastructure—such as those using Trovix Watch to track emerging regulatory expectations—will transition more easily to formal guidance than firms that have treated AI as a peripheral operational matter. The regulatory timeline is now clear: principles-based regulation is ending, and AI-specific expectations are beginning.

Source: UK Parliament Treasury Committee
