The Financial Conduct Authority's decision to expand its AI Live Testing initiative with a second cohort beginning in April 2026 represents a deliberate regulatory strategy: structured, real-time supervision of AI systems before they reach unsupervised production. The eight participating firms (Barclays, Experian, Lloyds Banking Group, UBS and four others) will deploy systems ranging from agentic AI to neurosymbolic models under FCA observation, creating a bridge between innovation and compliance. The approach aligns with SYSC 4 requirements for robust governance arrangements, which extend to firms' AI systems, but goes further: it embeds the regulator as an active participant in risk detection and mitigation.
What distinguishes this second cohort is the sophistication of the use cases under scrutiny. Agentic AI (systems that act autonomously within defined boundaries) and neurosymbolic models, which blend neural networks with symbolic reasoning, represent a material step beyond the large language model deployments tested in the first cohort. Firms using Trovix Watch for continuous regulatory change tracking will recognise the signal: through this programme, the FCA is indicating that AI governance frameworks must evolve in real time. The regulator's willingness to sponsor live testing reflects confidence in firms' risk management capabilities, but also an acknowledgement that traditional pre-deployment assurance cannot capture the emergent risks of agentic systems.
From a COBS and SYSC compliance perspective, participation in the FCA's Live Testing programme gives firms explicit regulatory cover during development, a critical advantage given the absence of prescriptive AI rules in primary legislation. Participants benefit from structured risk management guidance and early feedback on governance approaches before the FCA's permanent AI ruleset emerges. However, the transparency the programme requires means participating institutions must maintain comprehensive audit trails and decision logs, which Trovix Audit's governance dashboards are designed to track. Non-participating firms face a narrower path: full compliance with emerging AI governance expectations without the safety net of supervised testing.
The second cohort's emphasis on agentic and neurosymbolic architectures also signals where the FCA expects innovation to drive financial services over the next 18 months. These are not incremental improvements to existing AI deployments; they represent structural shifts in how firms might automate decision-making, client interaction and risk assessment. Regulators including the PRA and the Treasury will be observing outcomes from this cohort closely, as will insurers and compliance teams managing systemic risk across the market. The evidence trail that Trovix Watch captures around regulatory announcements like this one will be essential for firms planning their own AI governance roadmaps and for legal teams advising on SM&CR accountability for AI governance.
Source: FCA