Bank of England deep dives into AI’s financial stability risks


The Bank of England’s Financial Policy Committee (FPC) has warned of potential AI risks to financial stability amid increasing adoption of the technology across finance.

Banks and payment firms are increasingly using AI to improve security and efficiency, as well as to cut costs. However, the technology remains largely unregulated, with many governments wary that overregulation will deter investment and stifle growth.

One of the key concerns highlighted by the FPC is over-reliance on the small number of providers that have dominated development of the emerging technology over the past few years.

The FPC stated: “A reliance on a small number of providers for a given service could also generate systemic risks in the event of disruptions to them, especially if it is not feasible to migrate rapidly to alternative providers.”

Outages have been a significant issue for UK banks in recent years. The most prominent example was last July’s CrowdStrike incident, in which a flawed update from the cybersecurity firm caused major disruption worldwide.

Additionally, the Treasury Committee last month published data which showed that between January 2023 and February 2025, the top nine UK banks suffered at least 158 banking IT failures. These incidents accumulated at least 803 hours, the equivalent of more than 33 days, of unplanned tech and systems outages.

This is a key concern for the FPC, especially as the rising cost of developing cutting-edge AI is concentrating the market among an ever smaller number of providers.

Growing cyberthreat

In addition to noting the risks of banks using AI, the FPC said that in the 2024 AI Survey, cybersecurity ranked among the top perceived current AI-related risks, adding that respondents expected this risk to grow over the next three years.

“The use of AI by threat actors could increase their capability to carry out successful cyberattacks against the financial system, with potentially greater sophistication and scale than was previously possible”, the FPC added.

There are several ways in which AI poses a threat to cybersecurity, with the FPC suggesting that bad actors could exploit vulnerabilities in the software or hardware of third-party providers.

Additionally, the committee highlighted the potential for data poisoning, in which attackers manipulate the data used to train AI models, creating weaknesses that can be exploited at a later date.
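To make the idea concrete, here is a minimal, hypothetical sketch of a label-flipping poisoning attack. It uses an invented toy centroid classifier and made-up transaction coordinates (none of this reflects any real bank’s system): flipping labels on a few “fraud” training samples drags the “legit” class centroid toward the fraud cluster, so a fraudulent transaction later slips through.

```python
# Toy illustration of label-flipping data poisoning.
# All data, labels, and the classifier itself are hypothetical.

def centroid(points):
    """Mean position of a list of 2D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(samples):
    """samples: list of ((x, y), label) -> dict of per-class centroids."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the class whose centroid is nearest (squared distance)."""
    def sq_dist(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: sq_dist(model[label]))

# Clean training data: "legit" transactions cluster near (0, 0),
# "fraud" near (10, 10).
clean = [((0, 0), "legit"), ((1, 0), "legit"), ((0, 1), "legit"),
         ((10, 10), "fraud"), ((9, 10), "fraud"), ((10, 9), "fraud")]

# Poisoned copy: the attacker relabels two fraud samples as "legit",
# so the "legit" centroid drifts toward the fraud cluster.
poisoned = clean[:3] + [((10, 10), "legit"), ((9, 10), "legit"),
                        ((10, 9), "fraud")]

suspect = (6, 6)  # a fraudulent transaction seen after training
print(predict(train(clean), suspect))     # -> fraud (caught)
print(predict(train(poisoned), suspect))  # -> legit (slips through)
```

Real fraud models are far more complex, but the mechanism is the same: because the model faithfully learns whatever its training data says, corrupting that data corrupts every later decision, which is why the FPC flags it as a risk distinct from conventional intrusion.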

Despite these threats, the FPC acknowledged that AI can also be used to combat fraud and cyberattacks. With both attackers and defenders deploying AI against each other, however, collaboration across the payments landscape becomes all the more important.