Senior leaders in UK financial services are warning of a systemic governance gap as AI-enabled fraud reaches unprecedented scale. While the US and Singapore have published AI risk frameworks, UK firms lack comparable guidance despite facing accelerating threats.
AI Governance · Financial Services · Compliance

Research from Zango AI published 30 April 2026 exposes a critical vulnerability in UK financial services: a governance and operational standards gap precisely where AI risk is accelerating fastest. The data is stark: global fraud losses reached $579 billion in 2025, and 90% of financial professionals report a rise in AI-enabled attacks. These are not theoretical threats; they are active, measurable losses affecting UK financial institutions. The FCA's existing framework, built around COBS (Conduct of Business), ICOBS (Insurance: Conduct of Business), and SYSC (Senior Management Arrangements, Systems and Controls), addresses technology risk generically and financial crime specifically (under JMLSG and MLR 2017 guidance). However, none of these frameworks explicitly addresses AI-generated fraud, adversarial AI, or the governance of the models used in fraud detection itself. This creates a paradox: UK firms deploy AI to detect fraud, yet lack regulatory standards for validating that those AI systems are robust.

The gap between the UK and peer jurisdictions is material. The US (through the SEC, Federal Reserve, and OCC) has published AI risk management expectations for financial institutions. Singapore's Monetary Authority published AI governance guidance in 2024, and the EU AI Act imposes explicit obligations on high-risk AI systems in financial services. Yet the UK's approach remains decentralised and principles-based. The FCA and PRA expect firms to manage AI risk as a subset of operational risk, system risk, and third-party risk, all genuine requirements under their existing rulebooks. But principles-based governance creates discretion: two banks can both satisfy SYSC while operating at radically different levels of AI governance maturity. Firms deploying Trovix Watch to parse emerging guidance from multiple jurisdictions face a sobering reality: they can see what is expected elsewhere but remain uncertain about UK-specific requirements. That uncertainty drives conservative, and expensive, compliance choices.

The systemic risk dimension is equally pressing. If 90% of financial professionals report AI-enabled attacks, and fraud is rising globally, then UK financial institutions collectively face a shared threat. An AI system compromised at one bank, or a common weakness in a shared third-party AI tool, could cascade across the system. Under the PRA's operational resilience framework, firms must identify critical services and stress-test their continuity. But AI is not yet treated as a critical service in most firms' operational resilience policies, which means third-party AI vendors, LLM providers, and model hosting platforms are not subject to the same continuity and testing requirements as critical services. The FCA's expectations under SYSC 13R require firms to manage third-party operational risk, but 'manage' remains under-specified for AI. This is why organisations using Trovix Audit to map their third-party AI dependencies are taking a leadership position: they are documenting risk that regulators are likely to bring under mandatory oversight.
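As a minimal sketch of what mapping third-party AI dependencies could look like in practice, the fragment below models a simple dependency register and flags critical AI services that lack continuity testing. The field names, criticality tiers, and vendor names are entirely illustrative assumptions; they are not drawn from Trovix Audit, the FCA Handbook, or the PRA's operational resilience rules.

```python
from dataclasses import dataclass

@dataclass
class AIDependency:
    """One third-party AI dependency in a hypothetical firm-level register."""
    vendor: str
    service: str             # e.g. fraud scoring model, LLM API, model hosting
    criticality: str         # "critical" if failure would disrupt an important business service
    continuity_tested: bool  # has the firm stress-tested continuity of this service?

# Illustrative register entries (vendor names are invented).
register = [
    AIDependency("ExampleVendorA", "fraud scoring model", "critical", False),
    AIDependency("ExampleVendorB", "LLM document triage", "supporting", True),
    AIDependency("ExampleVendorC", "model hosting platform", "critical", False),
]

def untested_critical(deps):
    # Surface the gap the article describes: critical AI services that sit
    # outside the firm's continuity and testing regime.
    return [d.vendor for d in deps if d.criticality == "critical" and not d.continuity_tested]

print(untested_critical(register))  # ['ExampleVendorA', 'ExampleVendorC']
```

Even a register this simple makes the governance gap auditable: any non-empty result is a documented exposure a board can be asked to remediate.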

For senior leaders and compliance teams, the warning from Zango AI's research demands action. UK financial firms cannot wait for FCA guidance (expected by end-2026, per the Treasury Committee's demands) before hardening their own AI governance. The JMLSG's guidance on financial crime and the MLR 2017 already require firms to assess the risk of their systems being misused to facilitate money laundering or terrorist financing, and AI-enabled fraud detection systems carry that risk. Board-level oversight, an SM&CR requirement, must extend to explicit accountability for AI governance. Chief Technology Officers, Chief Risk Officers, and Compliance Officers need to work together to establish operational standards for AI model validation, monitoring, and refresh. Peer jurisdictions are ahead; UK firms that align with US or Singapore standards now will find it easier to align with FCA guidance when it arrives. The window for proactive, firm-led AI governance is narrowing rapidly as regulators prepare to make it mandatory.

Source: FinTech Global
