The relentless march towards instant payments has created an uncomfortable paradox for financial institutions: the faster the money moves, the harder it becomes to catch criminals moving it.
Real-time settlement systems (RTSS), now processing trillions of dollars globally, have delivered undeniable benefits. Merchants receive funds within seconds rather than days, consumers transfer money instantly between accounts, and capital allocation across the economy has never been more efficient.
Yet speed comes at a price that extends far beyond transaction fees.
Each millisecond shaved from payment processing shrinks the window available to detect sophisticated fraud schemes. As criminal networks deploy increasingly complex tactics, from synthetic identity theft to AI-powered social engineering, legacy risk management systems struggle to keep pace with both the volume and velocity of modern transactions.
The result is a deepening conflict between two imperatives: the market’s demand for frictionless, instantaneous payments and the regulatory requirement for robust financial crime prevention. Traditional banks, constrained by their siloed view of customer activity, find themselves particularly vulnerable to this trade-off.
Payment processors, however, occupy a different position in this arms race. Their central role in the financial ecosystem gives them a panoramic view of transaction flows that individual institutions cannot match, a vantage point that may prove decisive in the battle against financial crime.
Project Hertha
This challenge is exactly what Project Hertha, a joint initiative of the Bank for International Settlements (BIS) Innovation Hub and the Bank of England, set out to examine. The project aimed to understand whether retail payment systems could play a more active role in detecting financial crime, rather than simply moving money from one point to another.
Working with payment service providers (PSPs) and banks, the project used real payment data to test machine learning techniques and network analysis. The goal was to spot suspicious activity patterns across the broader system.
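The kind of system-level network analysis described above can be illustrated with a small sketch. The pattern below, thresholds and account names invented for the example, flags accounts that receive funds from many distinct senders and rapidly forward almost all of it on, a classic money-mule fan-in/fan-out shape that no single institution would see in full:

```python
from collections import defaultdict

# Illustrative thresholds only; a real system would calibrate these.
FAN_IN_MIN = 3       # assumed: minimum distinct senders
FORWARD_RATIO = 0.9  # assumed: share of inflows passed straight on

def find_mule_candidates(transactions):
    """transactions: list of (sender, receiver, amount) tuples."""
    inflow = defaultdict(float)
    outflow = defaultdict(float)
    senders = defaultdict(set)
    for src, dst, amt in transactions:
        inflow[dst] += amt
        outflow[src] += amt
        senders[dst].add(src)
    # An account is suspicious if many parties pay in and nearly
    # everything flows back out again.
    return sorted(
        acct for acct in inflow
        if len(senders[acct]) >= FAN_IN_MIN
        and outflow[acct] >= FORWARD_RATIO * inflow[acct]
    )

txns = [
    ("a1", "mule", 100), ("a2", "mule", 150), ("a3", "mule", 250),
    ("mule", "offshore", 480),  # forwards 96% of inflows
    ("a1", "shop", 40),         # ordinary payment, not flagged
]
print(find_mule_candidates(txns))  # → ['mule']
```

The point is the vantage, not the algorithm: each of a1, a2 and a3 might bank with a different PSP, so only an operator seeing all three legs can connect them.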
Project Hertha found that payment systems can generate useful insights into potential financial crime. These insights, if shared responsibly, could help banks and PSPs improve their detection efforts. The project also revealed that detection rates improved when system-level alerts were combined with the institutions’ own fraud cases and labelled data.
One of the key lessons was the importance of a feedback loop. In the Hertha model, when the payment system flagged suspicious behaviour, this alert was passed to PSPs for investigation. The outcome was then sent back to the system. This six-stage cycle allowed the model to keep learning and improve over time.
Without regular feedback, the system struggled. False positives increased, and the model couldn’t adapt as criminal techniques evolved.
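The flag-investigate-feedback cycle described above can be sketched as a simple online learner. This is not the Hertha model; the feature names, weights and learning rate are invented purely to show how returning investigation outcomes lets false positives decay over time:

```python
class FeedbackScorer:
    """Toy online scorer illustrating a flag -> investigate -> learn loop."""

    def __init__(self, features, threshold=0.5, lr=0.2):
        self.w = {f: 0.5 for f in features}  # naive starting weights
        self.threshold = threshold
        self.lr = lr

    def score(self, txn):
        # txn: dict mapping feature name -> 0/1 indicator
        active = [f for f, v in txn.items() if v]
        return sum(self.w[f] for f in active) / max(len(active), 1)

    def flag(self, txn):
        return self.score(txn) >= self.threshold

    def feedback(self, txn, confirmed_fraud):
        # The PSP's investigation outcome closes the loop:
        # nudge weights toward the confirmed label.
        target = 1.0 if confirmed_fraud else 0.0
        err = target - self.score(txn)
        for f, v in txn.items():
            if v:
                self.w[f] += self.lr * err

scorer = FeedbackScorer(["new_payee", "round_amount", "night_time"])
txn = {"new_payee": 1, "round_amount": 1, "night_time": 0}
print(scorer.flag(txn))      # initially flagged
for _ in range(10):          # PSP repeatedly reports a false positive
    scorer.feedback(txn, confirmed_fraud=False)
print(scorer.flag(txn))      # the model stops re-raising the alert
```

Cut the `feedback` calls and the toy model behaves exactly as the project observed: it keeps raising the same false positive and never adapts.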
The project also underscored the need for explainable AI. Banks and PSPs were more likely to act on alerts when they could understand why the system raised them. This included what kind of behaviour it spotted, who was involved and which type of fraud it might match.
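One hypothetical shape for such an explainable alert is a record that carries the behaviour detected, the parties involved and the fraud typology it resembles, so an analyst can judge it at a glance. All field names and example values below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    behaviour: str   # what the system spotted
    parties: list    # accounts involved
    typology: str    # fraud pattern it resembles
    score: float     # model confidence

    def explain(self):
        return (f"Flagged: {self.behaviour} involving "
                f"{', '.join(self.parties)}; resembles {self.typology} "
                f"(confidence {self.score:.0%}).")

alert = Alert(
    behaviour="rapid fan-in followed by a single large outflow",
    parties=["acct-123", "acct-456"],
    typology="money-mule layering",
    score=0.87,
)
print(alert.explain())
```

A bare numeric score forces the PSP to re-investigate from scratch; a reason-coded alert like this lets the receiving institution route it to the right team immediately.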
“The results demonstrate promise but also show there are limits to the application and effectiveness of system analytics. It is just one piece of the puzzle. The introduction of a similar solution would also raise complex practical, legal and regulatory issues. Analysing these was beyond the scope of Project Hertha,” said BIS and the Bank of England in a joint statement.
Lessons from iGaming
The payments sector can also draw lessons from high-risk industries like iGaming, where fraud prevention is critical and speed is essential to user experience.
Payment Expert spoke to Armen Najarian, Chief Marketing Officer at Sift, who explained how their AI-powered platform evaluates over one trillion transactions annually, delivering real-time risk assessments in around 200 milliseconds.
“The Global Data Network learns from every transaction and event, so the feedback becomes important. So, in the example where there is a false positive, that data makes it back to Sift and it makes our model smarter. We don’t make the same mistake twice,” said Najarian.