The scale of the compliance exposure is stark: 61% of UK financial services professionals now deploy generative AI daily for tasks ranging from research drafting to client communications, yet only 32% believe their surveillance systems can detect risks embedded in AI-generated content. This gap exposes firms to enforcement risk under COBS (Conduct of Business sourcebook) and ICOBS (Insurance: Conduct of Business sourcebook) requirements around communications monitoring, record-keeping, and consumer fairness. The FCA expects firms to maintain defensible audit trails of all communications that influence consumer decisions or constitute advice. AI-generated communications that bypass traditional surveillance represent a blind spot the regulator will not tolerate when enforcement cases emerge. Trovix Watch flags regulatory expectations, but firms need active surveillance of AI outputs flowing through their systems.
The 'shadow AI' problem sits at the intersection of three regulatory obligations: SM&CR accountability (senior managers must ensure controls are effective), SYSC governance (systems must be appropriate to the firm's business), and COBS/ICOBS communications rules. When a relationship manager uses a generative AI tool to draft a suitability report, and that report is never reviewed by traditional compliance surveillance, the firm has created an unmonitored communication chain. If the AI output misstates facts, overstates benefits, or downplays risks, the firm has both a COBS breach and an SM&CR failure: the individual signed off on advice they did not personally generate and may not fully understand. Trovix Sift extracts and categorises content from unstructured communications, helping firms identify where AI-generated material is entering regulated workflows without proper oversight.
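The unmonitored communication chain described above can be sketched as a simple filter over a firm's message store. This is a minimal illustration, not Trovix's actual API: the record shape, field names, and the list of known AI drafting tools are all hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class Communication:
    """Hypothetical record for one outbound client communication."""
    comm_id: str
    drafting_tool: str            # e.g. "outlook", "genai-assistant"
    reviewed_by_surveillance: bool
    is_regulated: bool            # advice or suitability content

# Assumed register of generative AI tools known to be in use at the firm.
AI_TOOLS = {"genai-assistant", "llm-draft"}

def unmonitored_ai_comms(comms):
    """Return IDs of regulated communications that were drafted with an
    AI tool but never passed through compliance surveillance."""
    return [
        c.comm_id
        for c in comms
        if c.is_regulated
        and c.drafting_tool in AI_TOOLS
        and not c.reviewed_by_surveillance
    ]
```

In practice the hard part is populating `drafting_tool` at all: shadow AI is invisible precisely because firms do not capture which tool produced a draft, which is why the article argues for bringing these tools inside existing surveillance infrastructure.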
Regulators have made their expectations clear through enforcement outcomes and thematic work. The FCA's recent focus on customer communications (see its thematic reviews on suitability and fact-finding) signals heightened scrutiny of how firms ensure communications meet regulatory standards. With AI generation now embedded in daily workflows, firms cannot claim ignorance about the content leaving their desks. The burden is shifting to the firm to demonstrate that its surveillance systems can detect AI-generated risk within communications, whether that risk is a factual error, a misleading comparison, or inappropriate complexity for the consumer. Firms deploying generative AI without concurrent upgrades to surveillance tooling are effectively admitting they cannot prove compliance with communications rules.
The audit trail problem is equally pressing. If a firm cannot explain how a particular client communication was generated, reviewed, and approved, it cannot demonstrate compliance with record-keeping obligations under MLR 2017 (Money Laundering Regulations) or COBS record-keeping rules. Many firms are still treating AI outputs as ephemeral drafts rather than regulated communications, meaning they are not captured in the audit trails the FCA expects. Trovix Watch should alert compliance teams to FCA guidance updates on AI and communications, but firms need to act now: audit your current surveillance systems, identify where generative AI is entering workflows without being captured, and either bring those tools within existing surveillance infrastructure or remove them from regulated communications chains. The firms facing enforcement action in 2026 and 2027 will be those that deployed AI without closing the surveillance gap.
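Treating AI outputs as regulated records rather than ephemeral drafts amounts to capturing each draft into an append-only log at the moment it is generated. A minimal sketch, assuming a simple in-memory log; the entry fields are illustrative only, and real retention would need to follow the firm's COBS and MLR 2017 record-keeping policies.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_ai_output(log, comm_id, content, author, tool):
    """Append an audit entry for an AI-generated draft.

    Hashing the content lets the firm later prove what text was
    generated, by whom, with which tool, and when it was captured.
    """
    entry = {
        "comm_id": comm_id,
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "author": author,
        "tool": tool,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry

def export_trail(log):
    """Serialise the audit trail for archival (one JSON line per entry)."""
    return "\n".join(json.dumps(e, sort_keys=True) for e in log)
```

The content hash, rather than the content itself, is stored here purely to keep the example short; a defensible audit trail would also retain the draft, its reviews, and the approval decision.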
Source: FinTech Futures