Artificial Intelligence (AI) may be nothing new to the payments industry, but it is becoming a hot topic with global policymakers because its potential remains vast and largely unknown.

Payment companies will be the first to tell you that the emerging technology has the potential to facilitate new and threatening types of fraud, but also the capability to prevent it.

Philipp Pointner, Chief of Digital Identity at Jumio, writes for Payment Expert on how AI is being deployed as a fraud risk mitigation tool, the types of AI-backed fraud it enables, and how it can prevent them.

Cybercriminals can now easily create AI-powered deepfakes to infiltrate the payments journey, whether opening new accounts or taking over legitimate ones. While rapid advances in payment speed and biometric authorisation are convenient for consumers, they also make life much easier for cybercriminals, presenting payment providers with the challenging prospect of clawing back lost funds.

With the Payment Systems Regulator’s new rules making financial institutions on both sides of fraudulent payments liable for compensating victims, banks are facing significant financial risks. 

In this digital arms race between cybercriminals and payment providers, there’s a clear imperative: firms must adopt advanced, AI-powered fraud detection mechanisms to tackle AI-powered fraud and safeguard the digital payments ecosystem.

Evolving scale and sophistication 

Generative AI has lowered the barriers to entry for committing fraud. Where AI used to be the domain of only the more technically inclined cybercriminals, the accessibility of generative AI means deepfakes are now tools in every fraudster’s arsenal, facilitating identity theft and payment fraud by bypassing traditional verification processes. In short, fraudsters can commit fraud more effectively, and commit more of it.

For example, payment authorisation is increasingly dependent on biometric methods such as facial recognition. While this streamlines the user experience and can add a new layer of security, the shift has inadvertently widened the scope for deepfakes to stand in for a real person’s likeness, offering cybercriminals a new attack vector. As biometrics become increasingly prevalent in payments, the potential for deepfake insertion escalates, challenging the integrity of secure transactions.

Moreover, payments have become almost instant. Once money has been transferred through payment networks and reached the payee’s bank, payment providers risk significant losses from victims seeking reimbursement if the transaction was initiated by a fraudster using deepfake identities to open or take over accounts.

In this high-stakes environment, firms must deploy AI-powered fraud detection to fight AI-powered fraud. This works in two ways: liveness detection, which checks for deepfakes at the moment they are used, and predictive analytics, which uses AI’s analytical abilities to detect fraud proactively, before it occurs.


Liveness detection

Many organisations are using AI-driven solutions programmed to detect image tampering and analyse facial biometrics in order to verify and authenticate customers.

AI-powered liveness detection draws on masses of biometric markers to distinguish authentic humans from sophisticated forgeries. For example, a liveness detection system may ask a prospective user to take a selfie, collecting data from their retina and iris while analysing eyebrow, eyelash, and ear movements as they speak. These details are precise, and generative AI systems struggle to produce images with this level of fidelity.
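For illustration only, the sketch below shows how a liveness system might fuse per-signal confidence scores into a single decision. Every name, weight, and threshold here is a hypothetical assumption for the example, not Jumio’s implementation:

```python
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    """Hypothetical per-signal confidence scores in [0, 1] from upstream models."""
    iris_texture: float     # is fine retina/iris detail present?
    micro_movement: float   # natural eyebrow/eyelash/ear motion while speaking?
    tamper_score: float     # estimated likelihood of image tampering (lower is better)

def is_live(s: LivenessSignals, threshold: float = 0.85) -> bool:
    """Weighted fusion of signals into one liveness decision.

    Deepfakes tend to fail on fine detail and natural micro-movement,
    so those signals carry the most weight here (weights are illustrative).
    """
    score = 0.45 * s.iris_texture + 0.45 * s.micro_movement + 0.10 * (1.0 - s.tamper_score)
    return score >= threshold

# A clean selfie with strong detail passes; a suspect one does not.
print(is_live(LivenessSignals(iris_texture=0.95, micro_movement=0.92, tamper_score=0.05)))  # True
print(is_live(LivenessSignals(iris_texture=0.40, micro_movement=0.30, tamper_score=0.70)))  # False
```

The point of fusing many weak signals is that a forger must defeat all of them at once, which is far harder than defeating any single check.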

Given the increasing reliance on biometrics for transaction authorisation, liveness detection systems can help prevent fraud by ensuring the biometric data presented is not only real but also being presented by the rightful owner in real-time, countering the threat of deepfakes aimed at hijacking financial transactions.

Even when deepfakes are indistinguishable to the naked eye or ear, the volume and variety of biometric data available to AI-enabled detection models make them highly capable of accurate detection.

Predictive analytics

Complementing liveness detection, AI-enabled predictive analytics represent a step change from traditional Know Your Customer (KYC) checks in technological sophistication and impact. Traditional checks verify IDs for authenticity and validity but rarely consider how often an ID has been used to open accounts or the geographical spread of those accounts.

AI’s superior computing power enables providers to sift through large volumes of transaction data to flag patterns suggestive of complex AI-enabled fraud networks (for example, several accounts opened with the same ID in different locations) and to generate fraud risk scores for potential new users. Handling these volumes of data was impractical before AI.
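As a toy illustration of this kind of pattern-flagging, the sketch below checks for the same ID document being used to open accounts in many locations; the record format, allowance, and scoring are all assumptions made for the example, not drawn from any real system:

```python
from collections import defaultdict

# Hypothetical account-opening records: (ID document number, city of application)
openings = [
    ("P123", "London"), ("P123", "Madrid"), ("P123", "Lagos"),
    ("P456", "Berlin"),
    ("P789", "Paris"), ("P789", "Paris"),
]

def flag_reused_ids(records, max_locations=2):
    """Flag ID documents used to open accounts in suspiciously many locations."""
    locations = defaultdict(set)
    for doc_id, city in records:
        locations[doc_id].add(city)
    # Naive risk score: one point per distinct location beyond the allowance
    return {
        doc_id: len(cities) - max_locations
        for doc_id, cities in locations.items()
        if len(cities) > max_locations
    }

print(flag_reused_ids(openings))  # {'P123': 1} -- the same ID opened accounts in three cities
```

A production system would score far richer attributes (devices, addresses, payment counterparties) with learned models, but the principle is the same: patterns invisible in any single application become obvious across millions of them.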

Called graph database technology for its ability to link different attributes together to form a more complete picture of an individual’s identity, these techniques represent a real innovation and an addition to the fraud detection arsenal. Crucially, detection is predictive, pre-empting fraud and financial crime before a potential criminal begins their payment journey.

With graph database technology supplemented by layers of machine learning intelligence, payment providers will be able to compare new IDs with those used in billions of transactions across a wide payment network, creating a much richer, multi-dimensional picture to stop fraud at the front door.  
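A minimal sketch of the linking idea, assuming the networkx library and an invented attribute scheme purely for illustration; a production deployment would use a dedicated graph database and learned risk models rather than a hard-coded size threshold:

```python
import networkx as nx  # assumed available purely for this illustration

# Hypothetical identity graph: nodes are attributes (documents, phones, devices);
# an edge links two attributes that appeared together on the same application.
g = nx.Graph()
g.add_edges_from([
    ("passport:P123", "phone:+44700900123"),
    ("phone:+44700900123", "device:abc"),
    ("passport:P123", "device:xyz"),
    ("passport:P999", "device:abc"),         # a different ID on a known device
    ("passport:P555", "phone:+4930111222"),  # an unrelated, ordinary-looking pair
])

# A new application that reuses any attribute joins an existing cluster;
# unusually large clusters are a classic synthetic-identity signal.
for cluster in nx.connected_components(g):
    if len(cluster) > 3:
        print("Suspicious identity cluster:", sorted(cluster))
```

Here the second passport sharing a device with the first pulls both identities into one cluster, which is exactly the multi-dimensional picture a single-ID KYC check would miss.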


Implementation challenges

Businesses may face cost challenges when implementing these new AI systems. The initial outlay can be substantial, covering not only technology procurement but also adapting existing systems, training, and maintenance. However, this expense pales in comparison to the losses that accumulate over time from unchecked AI-powered fraud. The sophistication of fraud models, and their financial impact, are only set to escalate, so the cost of inaction is far greater than the cost of AI investment.

Privacy may be another concern — as AI systems need a breadth of biometric data and personal identifiers, businesses must ensure they respect customer rights. To uphold both security and privacy, companies must find partners that have, and enforce, an ethical data governance strategy that adheres to principles of secure handling and data minimisation. Transparency in how data is used is crucial for compliance and maintaining customer trust. 

Looking ahead 

As AI technology advances, so too will the methods employed by fraudsters. Although AI is currently making fraud more fruitful for criminals, advanced AI can also be the key not just to defending against deepfake threats to digital transactions, but to actively countering them.

As defenders continue to innovate and collaborate in refining their AI stacks to stay ahead of the curve, they contribute to a more secure, trustworthy payment environment for consumers worldwide.