The UK Treasury Committee has issued a stark warning that regulators are failing to keep pace with the rapid adoption of artificial intelligence (AI) in financial services, potentially exposing consumers and the financial system to "serious harm." In a report published on January 20, 2026, the committee criticized the Bank of England, the Financial Conduct Authority (FCA), and HM Treasury for over-relying on existing rules and a "wait-and-see" approach.
The committee found that approximately 75% of UK financial services firms are already using AI, with adoption heaviest among insurers and major global banks. While acknowledging AI's potential benefits for efficiency and cybersecurity, MPs concluded that unaddressed risks are compromising consumer protection and financial stability. Key concerns include opaque AI-driven decisions in credit and insurance, the risk that automated product tailoring will deepen financial exclusion, and the potential for unregulated AI-generated financial advice to mislead users.
The report emphasized that these issues are not hypothetical, stating that oversight has not kept pace with the scale or opacity of AI systems now embedded in core financial functions. The committee urged the FCA to publish comprehensive guidance by the end of 2026 on how consumer protection rules apply to AI and how accountability should be assigned to senior executives when AI systems cause harm.
Industry experts echoed the concerns. Dermot McGrath of ZenGen Labs noted that while the UK's pioneering fintech sandbox model worked because regulators could observe firms, AI "breaks that model completely." He warned that regulatory ambiguity could stifle responsible deployment, as many firms lack a clear understanding of the opaque systems they rely on, leaving them to infer how fairness rules apply.
The committee also highlighted systemic risks, including AI's potential to amplify cyber threats, concentrate operational dependence on a small number of US-based cloud providers (such as Amazon Web Services), and intensify herding behavior in markets. Despite these dangers, neither the FCA nor the Bank of England currently runs AI-specific stress tests. Furthermore, the delayed implementation of the Critical Third Parties Regime, created in 2023 to oversee essential service providers, leaves the system exposed, as no major AI or cloud provider has yet been designated under it.