Deepfakes are a widely feared, though not always fully understood, phenomenon that has entered public discourse in recent years, particularly in the realm of social media. The risks these Artificial Intelligence (AI) derived fakes present are felt across multiple sectors, however.

Financial services, banking, payments, cryptocurrency and gambling, among other financial sectors, should all take note of the risks posed by deepfakes. To gain an understanding of the threat this phenomenon poses, Payment Expert spoke with Ofer Friedman, Chief Business Development Officer at AU10TIX.

Ofer Friedman – Source: AU10TIX

AI is taking the financial sectors, and society at large, by storm, with companies of varying sizes, varying models and varying trades looking to leverage the technology to automate processes and simplify procedures. Fraud prevention is a key area of interest – but fraudsters too are making their own, more nefarious, progress.

“The early adopters of AI are fraudsters; they are the most advanced adopters of AI,” Friedman said. “It’s got to the point where the people fighting fraud are the ones running after them, not vice versa. 

“Professional fraudsters have enough money, resources or knowledge to commit very sophisticated fraud. Fraud often goes undetected because regular fraud prevention tools do not detect it, and it simply changes the paradigm of companies such as ourselves that do ID verification.” 

The sophistication of these deepfake technologies is making fraud prevention more and more of a challenge. For ID verification firms such as AU10TIX, overcoming this challenge requires a multi-layered approach, as in many cases a one-dimensional one will not cut it.

This is because deepfakes themselves have different levels to them. On the bottom rung, the low-hanging-fruit deepfakes are the cheap-to-produce – sometimes free-to-produce – versions which many social media users may be familiar with.

Anyone who’s spent some time browsing social media will have come across some examples of poor photoshopping skills. The most basic levels of deepfakes resemble this. Friedman jokingly added that these are the kind of pictures ‘with people that have six fingers on one hand’.

The challenge lies in countering the more advanced layer of deepfake-wielding fraudsters, who, as Friedman noted above, have access to more resources, such as more powerful computers, and the knowledge to use them.

“The threats are going two ways,” he continued. “There are two ‘parallel universes’ in identity fraud and its adoption of AI. One universe is the harnessing of Gen-AI to mass-produce infinite variations of faces and ID documents that never repeat twice. Then there is the other universe where fraudsters use innovative tools to inject their ‘creations’ behind cameras and even into live video sessions.

“The fakes that are worrying are the ones that can be done in real-time, visually and with voice, and that can be injected behind the cameras. These can be very hard to detect and that is the real threat in the market.”

Adding fuel to the deepfake fire is the increasing adoption of digital injection technology by fraudsters. Friedman observed that these technologies complement each other, particularly when used against sectors that require individuals to upload a real-time picture of themselves, such as airlines.

“You could create or buy a really good fake and use that, but in many countries we have live real-time requirements,” he said. “You have to take a picture then and there. The live session is done with the camera that you have on your phone or computer. 

“To bypass it, you need your computer or phone to show a picture that is different to what it has in front of it. That is injection.”

Fraudsters with the right knowledge and the right tech can use injection to feed AI-prepared deepfake pictures into a live session in real time. This has the potential to undermine the real-time ID verification measures many companies have been utilising.

This risk therefore extends across various sectors, although Friedman noted that the risk each sector faces depends on the type of fraudster. The more sophisticated and valuable the sector, the more sophisticated the fraudster.

He explained: “If you are the sophisticated professional kind, you’ll go where the big money is, because you have the means to bypass and develop fakes that are really difficult to detect. Crypto right now, and payments, are easy prey. 

“Amateurs will not go for the most lucrative but the easiest and least defended markets. These are the ones with the least stringent rules that allow easy uploads, so fraudsters can create as many fake accounts as they want.

“There are two levels, like in mythology — above the rainbow and below the rainbow, Asgard and Midgard — it’s the same with fraudsters.”

Payments and banking are often at risk of deepfake attacks, as are gambling and cryptocurrency – although Friedman believes that the types of attacks seen against these two groupings are not the same.

Based on AU10TIX’s statistics, payments is usually the first target of deepfake scammers, whilst cryptocurrency targeting often depends on market factors. When there is a spike in crypto prices, as seen lately with Bitcoin’s surge in value, there is often a corresponding surge in fraudulent attempts against crypto exchanges and other stakeholders.

On the topic of crypto, he observed that there has been a push in some industries for blockchain-based fraud defences, but there is a divide in the market, with some firms shying away from these because of the expense.

When it comes to countering deepfake fraud, there are two types of companies, he asserted – “Those who want to be ‘covered,’ and those who strive to be ‘safe.’” To date, there are no laws or regulations mandating what identity fraud prevention should technically look like. The “what” is stipulated, but the “how” is not. 

A divide is also seen within the banking sector itself, particularly between the traditional high street banks and the emerging neobanks or challenger banks, the latter of which are much more deeply ingrained in digital technology.

There is a line between these ‘traditional players and new players’, the latter having been ‘born online’. Many of the neobanks are also utilising AI for general business purposes, and this can be adapted to counter deepfake fraud attempts.

“When it comes to ID verification, high street banks are often less equipped than challenger banks that are born online. 

“The high street banks tend to have more trust in regulations, and they always have the option of verifying a customer’s ID in the branch, if they still have one. 

“Those born online prioritised ID verification from the onset, because of the digital nature of the bank.”

This is of course not to say that companies are not fighting back against fraud attempts. Friedman noted that almost every company that requires ID verification and fraud prevention uses AI.

Any company that does not do so is not being serious, he said. The problem goes back to sophistication, and in some cases an overreliance on human judgement even where AI is also utilised.

AI has the ability to play with people’s senses to a degree that people would have thought impossible just a few years ago. In this context, the paradigm needs to change, Friedman asserted, and companies need to up the ante when it comes to tech integration.

“A lot of the market has noticed that everyone is fascinated by deepfakes, and especially with elections at the moment and on social media, everyone is asking about it and writing about it,” he said.

“When you check out what’s in there, you see that the level of defence is very, very, very – I can’t say very enough times – basic. Most of the detection is in what we call presentation — the video, the stills, the items you need to analyse.”

Creating a multi-layered anti-fraud solution, specifically one targeting AI-driven fraud, is the key to success in this area, Friedman asserted. This is something that AU10TIX has sought to do itself with recent product launches, but the battle against deepfakes and AI-backed fraud is not an easy one, and right now Friedman is not convinced it is one the legal side is winning.

“The arms race is on, but right now overall if I look at the market – the entire market, not necessarily the clients we handle – fraudsters are on top,” he emphasised.