The mix of interest and trepidation that artificial intelligence (AI) inspires is by now well known, but pairing those fears with the ever-present threat of fraud heightens customer concerns to new levels.
In a bid to find innovative approaches to countering AI-driven fraud attacks, Roger Alexander, Key Advisor to Chargebacks911, tells Payment Expert how the company is harnessing the emerging technology to fight chargeback cases.
Alexander also outlines what more financial institutions can do to help mitigate the rise of AI-related fraud, and how the technology itself can be used to combat it.

Payment Expert: Firstly, Roger, what is the overall industry sentiment you are sensing right now around AI and its role in fraud, both good and bad?

Roger Alexander: It’s important to underline that AI has been used in anti-fraud applications for over a decade now. Virtually every transaction taking place – whether online or off – is checked for fraud, money laundering and even terrorist financing. 

This is done by systems that are, by definition, AI, and are consistently getting more sophisticated. 

PE: How is AI being used to handle the chargeback dispute process differently?

RA: One of the best things about AI and machine learning (ML) is their ability to find patterns in large amounts of data. That’s something that happens in anti-fraud applications in payments, but it’s also key to giving merchants the edge in the chargeback dispute process.

Our technology can gather thousands of pieces of information and analyse them for patterns that indicate fraud, which can be anything from whether a customer has made multiple chargeback claims to how quickly they type. 

The AI platform then provides this information to our clients to enable them to create a dispute claim in the exact format that card schemes require, with no human input needed. This means that major organisations that get hundreds of chargeback claims per day can effectively combat them and utilise their human staff for more important tasks.
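
To make that pattern-finding concrete, here is a minimal Python sketch of how behavioural signals like the ones Alexander mentions (prior chargeback claims, typing speed) might be combined into a risk score. Every feature name, weight and threshold below is invented for illustration; this is not Chargebacks911’s actual model.

```python
from dataclasses import dataclass

# Hypothetical sketch only: features, weights and thresholds are assumptions
# made for illustration, not a real production fraud model.

@dataclass
class ClaimSignals:
    prior_chargebacks_90d: int   # chargeback claims filed in the last 90 days
    typing_speed_cps: float      # characters per second while filling the form
    account_age_days: int        # how long the customer account has existed

def fraud_risk_score(s: ClaimSignals) -> float:
    """Combine behavioural signals into a 0..1 risk score (toy weighting)."""
    score = 0.0
    # Repeated chargebacks are the strongest signal in this toy model.
    score += min(s.prior_chargebacks_90d / 5.0, 1.0) * 0.5
    # Implausibly fast typing suggests scripted or automated form filling.
    if s.typing_speed_cps > 12.0:
        score += 0.3
    # Very new accounts carry a little extra risk.
    if s.account_age_days < 30:
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = ClaimSignals(prior_chargebacks_90d=4,
                              typing_speed_cps=18.0, account_age_days=7)
    ordinary = ClaimSignals(prior_chargebacks_90d=0,
                            typing_speed_cps=4.5, account_age_days=900)
    print(f"suspicious claim: {fraud_risk_score(suspicious):.2f}")  # 0.90
    print(f"ordinary claim:   {fraud_risk_score(ordinary):.2f}")    # 0.00
```

In practice, a system of the kind described would learn such weightings from labelled dispute data across thousands of signals rather than hard-coding a handful of rules.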

PE: Conversely, how harmful can AI be in this chargeback process? What are fraudsters doing differently than before now being able to access this technology? 

RA: The recent wave of AI is characterised by Large Language Models, or LLMs. Unlike the AI systems that we and much of the financial services industry use, LLMs don’t analyse data; they produce realistic-looking text by reprocessing existing text. This could be used to create equally realistic-looking chargeback claims, with all the information needed filled in perfectly.

This would be far quicker than fraudsters filling in this information by hand, meaning that they would be able to make more chargeback claims in a shorter time, increasing the strain on existing systems and increasing the chance that an illegitimate chargeback claim is successful.

PE: What more can financial institutions do, from a collaborative standpoint, to widen the scope of analysis around some of the more common chargeback fraud requests?

RA: Put simply, the bigger the pool of data we have, the greater our ability to help brands to spot the patterns of fraud. Similarly, having greater access to computational power means that AI models can chew through that huge amount of information. 

If financial institutions were to combine their data and pool their resources to invest in computational power, they could process more information and therefore make a greater dent in the chargeback and fraud problem.

PE: With the influx of AI investment from VCs, do you believe this is purely profit-driven, or is there genuine concern about the rise of AI-related fraud, and a belief that AI can also be the answer to combating it?

RA: The surge in investment we’ve seen in companies working in AI and their suppliers – NVIDIA for example – has been driven by interest in LLMs. 

As analysts are now saying, this technology has very limited use cases: it can create text and images, but little else. Unfortunately, one of the things that this technology is good at is creating convincing-looking text for fraud. 

AI can also be the answer to this: the kind of technology we use at present can look beyond whether text seems like it was written by a human being and make real judgements about what is and isn’t LLM-generated text being used to perpetrate fraud.
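
As an illustration of the kind of judgement Alexander describes, the toy Python sketch below flags text using two crude stylometric signals sometimes cited as weak indicators of machine-generated prose. Real detectors rely on trained models (for example, scoring perplexity under a reference language model); everything here is a simplified assumption, not Chargebacks911’s method.

```python
import re
import statistics

# Toy illustration only: real LLM-text detectors use trained models.
# These two stylometric heuristics are hypothetical and deliberately crude.

def looks_machine_generated(text: str) -> bool:
    # Split into sentences and measure word counts per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sentences) < 3:
        return False  # too little text to judge either way
    lengths = [len(s.split()) for s in sentences]

    # "Burstiness": human writing tends to vary sentence length more
    # than LLM output does (low variation reads as more machine-like).
    burstiness = statistics.pstdev(lengths) / max(statistics.mean(lengths), 1)

    # Lexical diversity: share of distinct words in the text.
    words = re.findall(r"[a-z']+", text.lower())
    diversity = len(set(words)) / max(len(words), 1)

    return burstiness < 0.25 and diversity < 0.5

if __name__ == "__main__":
    claim = ("The item never arrived. The item was never shipped. "
             "The item never came. The item never arrived here.")
    print(looks_machine_generated(claim))  # True: uniform, repetitive text
```

A production detector would combine many more signals, and weigh them against the transaction context, before judging a claim illegitimate.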

PE: Lastly Roger, and thank you for your time, what are your views on the new APP fraud rules set to take shape in the UK in October? Are there any recommendations you may have for them? 

RA: APP fraud is a major problem – perhaps the largest fraud problem today. There is a serious need to address this problem, but it’s too soon to understand the impact of these new rules. 

The implementation, as it stands, mandates that payment service providers (PSPs) are responsible for financial losses from fraud, meaning that smaller and more innovative PSPs are going to be disproportionately affected.

Investment in technology and education shouldn’t be overlooked. APP fraud happens because ordinary people don’t know what to look for and the financial ecosystem can’t identify whether a transaction is suspicious, especially at the scale at which fraud is being committed.

The technology to beat APP fraud isn’t beyond us, and educating the public is always a good thing, so I think that these should have been our first actions before putting financial burdens on PSPs.