Writing for Payment Expert, Daniel Holmes, Fraud SME at Feedzai, details the potential of AI in combating fraud during a period of heightened fraudulent activity. 

The summer period can often bring additional financial pressures as a result of busier social calendars and holidays. The summer of 2023 has been no exception. According to recent research from the FCA, over half of UK adults (55%) are more worried about their finances this summer than last year, as global macroeconomic conditions tighten.

During times of financial strain, some people may turn to a loan to help manage their money. While in many cases this can be a useful solution, it also opens the door for bad actors to scam vulnerable consumers looking to make ends meet. 

In August, the FCA reported a 26% increase in people falling victim to loan fee fraud – a type of fraud where consumers are conned into paying a small fee for a loan they never receive. While loan fee fraud isn’t a new method used by fraudsters, it’s no less traumatic for its victims. 

Social media and other online platforms have also exacerbated fraudulent behaviour, further enabling crimes such as loan fee fraud, as well as romance scams and account takeover attacks. Criminals all over the globe can hide behind fake profiles or create very realistic business accounts to lure in unsuspecting individuals. 

Similarly, the recent widespread adoption of generative AI has also empowered bad actors to target their victims with increased sophistication, making it even harder for people to spot what is genuine and what isn’t. 

AI as a solution

However, AI in and of itself is just a tool, much like the internet – it’s not inherently good or bad. So whilst fraudsters exploit AI for nefarious purposes, AI has also been a powerful tool in the fight to tackle financial crime. Adopted correctly, AI can analyse huge datasets, such as transaction trends, and learn genuine behaviour to build a full single view of a customer. Going further, AI can track the rhythm and cadence of how a person usually interacts with their financial accounts, including typing speed on their keyboard, the way they touch their phone screen, or even the way they move their mouse to navigate through online banking. 

Banks can also use these capabilities to establish a baseline of user behaviour right from the point of account opening. Monitoring the user with AI across the whole banking relationship, from onboarding to account logins and right through to payment, gives banks the best possible opportunity to understand their customer. This in turn makes it easier to spot behavioural anomalies, such as scams, when they occur. 
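As a minimal sketch of this idea (the feature names, session values and scoring method here are illustrative assumptions, not any vendor's actual model), a user's behavioural baseline can be summarised from historical sessions and each new session scored by how far it deviates:

```python
from statistics import mean, stdev

def build_baseline(sessions):
    """Summarise a user's historical sessions (e.g. typing speed in
    keys/sec, mouse speed in px/sec) as per-feature mean and std dev."""
    features = sessions[0].keys()
    return {
        f: (mean(s[f] for s in sessions), stdev(s[f] for s in sessions))
        for f in features
    }

def anomaly_score(baseline, session):
    """Average absolute z-score across features: higher means the
    session looks less like this user's established behaviour."""
    scores = []
    for f, (mu, sigma) in baseline.items():
        scores.append(abs(session[f] - mu) / sigma if sigma else 0.0)
    return sum(scores) / len(scores)

# Invented history: a user's normal typing and mouse behaviour.
history = [
    {"typing_speed": 5.1, "mouse_speed": 310},
    {"typing_speed": 4.8, "mouse_speed": 295},
    {"typing_speed": 5.3, "mouse_speed": 330},
    {"typing_speed": 5.0, "mouse_speed": 305},
]
baseline = build_baseline(history)

normal = anomaly_score(baseline, {"typing_speed": 5.2, "mouse_speed": 315})
odd = anomaly_score(baseline, {"typing_speed": 1.2, "mouse_speed": 900})
```

A session resembling the user's history scores low, while one with very different typing and mouse dynamics (perhaps a fraudster on a remote connection) scores far higher; production systems would use far richer features and learned models rather than simple z-scores.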

The reverse effect is also true; understanding your user at this granular level means that not only can fraud be detected, but unnecessary friction and false positives can also be kept to a minimum. False positives, where a genuine transaction is wrongly deemed suspicious, are a huge problem for banks, with false positive rates above 90% commonly treated as the norm. This causes huge inconvenience for people making genuine transactions when payments are blocked or friction is added to the payment process, particularly when resolution is not instant.

Whilst it is accepted that false positives will always be a natural consequence of a fraud strategy, the key for banks is to strike an effective balance between catching criminal activity and not disrupting genuine payments. This is where AI can be extremely effective. Again, by truly understanding their customers across a range of data points, financial institutions have a much better picture of the difference between a good and a bad actor. Banks no longer need to rely on binary rules, such as automatically flagging a payment because a customer is in a new location, as broader context beyond one or two data points can be applied to the decision-making process.
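The contrast can be illustrated with a toy comparison (the signal names, weights and threshold below are invented for the sketch; a real system would learn them from data): a binary rule fires on a single condition, whereas a contextual score weighs several weak signals together before flagging anything.

```python
def binary_rule(payment):
    """Old-style rule: flag any payment made from a new location."""
    return payment.get("new_location", False)

# Illustrative weights over contextual signals (assumed, not real).
WEIGHTS = {
    "new_location": 0.2,
    "new_payee": 0.3,
    "unusual_amount": 0.3,
    "unusual_time": 0.1,
    "device_mismatch": 0.4,
}

def contextual_flag(payment, threshold=0.5):
    """Combine several weak signals into one risk score and flag
    only when the overall context looks risky, not on one data point."""
    score = sum(w for signal, w in WEIGHTS.items() if payment.get(signal))
    return score >= threshold

# A customer on holiday paying a known payee from their usual phone:
holiday_payment = {
    "new_location": True,
    "new_payee": False,
    "unusual_amount": False,
    "device_mismatch": False,
}
```

Here the binary rule would block the holiday payment outright, while the contextual score (0.2, below the 0.5 threshold) lets it through, only flagging payments where multiple risk signals stack up.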

Why collaboration is king

There is no doubt that bank adoption of AI for fraud prevention is growing as the value becomes clear. However, the fraudsters are not standing still either, as they adapt and use sophisticated techniques to scam victims of all ages, education levels and demographics. This means that AI alone is not enough, and a bank’s fraud detection strategy must combine the latest fraud technology with consumer education. Done properly, AI can actually shape the bank’s education and awareness strategy. Not all scams are the same, and data shows that people of different age ranges are more susceptible to different scam typologies. 

Therefore, to achieve maximum impact, consumer education should be data-led, not simply delivered on an arbitrary basis. If we can show the right education to the right customer at the right time, we can actually turn well-informed customers into the first line of fraud defence, meaning the fraud is stopped before a payment is even attempted. 
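In its simplest form (the age bands and the scam typologies mapped to them below are invented for illustration, not drawn from any published susceptibility data), data-led targeting just means routing each customer segment to the warning most relevant to it:

```python
# Hypothetical mapping of age bands to the scam typologies each
# segment is assumed most exposed to (illustrative only).
EDUCATION_BY_SEGMENT = {
    "18-29": "purchase scams on social media marketplaces",
    "30-49": "investment and crypto scams",
    "50-64": "impersonation and loan fee fraud",
    "65+": "romance scams and courier fraud",
}

def pick_education(age):
    """Return the education message most relevant to a customer's
    age band, to be shown e.g. at login or before a risky payment."""
    if age < 30:
        band = "18-29"
    elif age < 50:
        band = "30-49"
    elif age < 65:
        band = "50-64"
    else:
        band = "65+"
    return EDUCATION_BY_SEGMENT[band]
```

In practice the mapping would be driven by the bank's own fraud data and refreshed as scam typologies shift, rather than hard-coded.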

More broadly, closer collaboration between banks, regulators, telecoms providers and technology companies is critical to reducing scam losses. We must recognise that the payment made at the bank is often the final step in an already complex chain of events. Recently published data shows that 80% of some scam types originate on social media, highlighting the opportunity that exists within this collaboration. 

Recent talking points in the industry have included empowering social media companies to block fraudulent accounts more easily, and regulators writing effective regulation that ensures AI can be used for good, with consumers protected while innovation remains unstifled. Only by working together to build effective prevention strategies can we win against financial fraud. 

Additionally, consumers increasingly want to know how banks are enhancing their defences to protect them. Our research found that over half of consumers feel safer knowing their bank uses AI to protect them, underlining the demand for advanced technology to help protect against criminals. 

Out with the summer, in with seasonal fraud

With the summer period almost behind us, this season has proved to be another problematic one for many. Whilst moving away from busier spending seasons may provide temporary respite, it doesn’t stop scammers lining up new ways to take advantage of vulnerable consumers. Fraudsters have a habit of making scams reflective of current events and seasons, making them all the more convincing in the eyes of a consumer.

There is some good news for consumers though; recent changes announced by the PSR aim to increase victim protection in the event that a scam takes place. Today, approximately 60% of scam losses are refunded to victims; the PSR proposal aims to boost that figure closer to 100%. Additionally, banks will be financially incentivised to monitor payments coming into their bank, as well as those going out of it. This increased focus on the end-to-end lifecycle of a fraudulent payment will improve the overall opportunity for scam detection.

Looking further ahead, the Christmas period will be another juncture of busier spending sprees and tougher financial constraints, and fraud losses and financial crime are likely to rise as a result. With scams maturing and growing in sophistication because of AI, consumers need to remain vigilant of fraudsters. 

While it’s positive to see AI being effectively adopted by banks and at scale, there’s clearly more to do. With the FCA increasingly warning consumers of new and old fraud techniques, banks need to enhance their fraud strategies and work together with social media companies to stop fraud in its tracks.