The past 12 months have seen technology emerge as both a hugely helpful tool and a hugely menacing threat for the financial services sector, as fraudsters and anti-fraud teams alike scramble to make the most of AI.
Ofer Friedman, Chief Business Development Officer at AU10TIX, an ID verification and fraud prevention firm, shares his views on how AI-backed fraud has developed in 2024 and warns that the threat this poses could escalate even further.

Payment Expert: How has the threat from AI-generated deepfake fraud evolved in 2024?
Ofer Friedman: 2024 marked a leap in the user-friendliness of Gen-AI tools used to commit fraud.
Generating and injecting near-perfect deepfakes of ID documents and selfie faces used to require fraudsters to chain together a range of tools and algorithms, but in 2024 we saw “Ready To Fraud” solutions with user-friendly menus and complete process flows from generation to submission.
Moreover, 2024 saw far more automated attacks using randomisation to evade detection as repeat attempts.
Are finance industry stakeholders and regulators prepared to deal with any potential escalation of AI-backed fraud?
The faster Gen-AI fraud develops, the wider the gap becomes between regulations and their effective implementation. So wide is the gap that stakeholders and practitioners quite likely do not grasp the magnitude and sophistication of the threats they are facing.
Why? Because the “beauty” of Gen-AI fraud is that it challenges our senses, meaning our ability to spot an irregularity that suggests fraud.
What changing trends do you foresee around deepfake fraud in 2025, both from a perpetration and a prevention point of view?
To paraphrase Clausewitz’s famous aphorism, 2025 will be nothing but a continuation of 2024 with the admixture of other means.
In 2024, it was only possible to commit a “perfect fraud” against one or a few targets, but in 2025, Gen-AI fraud will undergo industrialisation. This means that significantly more fraud will be produced by automation that connects LLMs and image/video generation models much more fluently, and with evasive randomisation.
Each credential will look perfect and different from those previously submitted. It may be somewhat comforting that tools of this level will still only be available to professional, organised fraudsters, but the problem is that, once operations are industrialised to that degree, it takes only one or a few people to commit a great deal of top-level fraud.
Is there now a more concerted effort from financial institutions to scale up AI anti-fraud teams?
Good question. We see how our own clients take the initiative and gear up for fraud prevention, but not what others do. The general impression is that AI-fraud prevention sits more at the level of awareness than of effective action. Not everyone has vendors with the same level of defences.
Looking at developing regulations, “concerted effort” is not a term that applies yet. Moreover, global markets are facing rather recessionary sentiment, which tends to translate into cutting and postponing plans and investments. Things may change when the inevitable mega-crisis erupts, forcing regulators and financial institutions to take quick, effective action.
By extension, is the talent available at this moment in time to efficiently deal with this fast-developing fraud threat?
That’s a smart question. The answer is yes and no. A number of startups are showing promising first steps towards developing smart solutions, but that’s where the mega-talented algorithm research experts are concentrated; they are not working in the broader full-service solution market.
Actually, identity fraud detection is bound to converge with cyber threat detection, although right now the two fields consist mostly of people focused on their own narrow domain and working for different companies. All this means that not only is there a talent shortage, but the domain itself is still defining its boundaries.
In the UK and EU, we saw the topic of digital identity resurface this year. How much faith are policymakers putting in digital ID as a solution to fraud and money laundering?
Digital IDs are inevitable. Even today, it would be hard to find a country without a digital ID programme. From Bhutan to the USA, digital IDs are hyped as the ultimate fraud prevention method thanks to their level of encryption and their co-existence with biometrics.
The side benefit to policymakers is the reinstatement of government control over citizens’ identities, a hegemony that has been under threat from commercial enterprises such as Apple, Google and others. It will happen, but more slowly, and in less ironclad a form, than it looks. Interoperability, which is also a policymaker issue, has a long way to go before it crystallises.
Do you think these efforts around digital ID are a step in the right direction or is it an easy fix for policymakers to pursue?
Digital IDs are now at the hype phase, and the building blocks are more or less there. It is a step in “a” (not necessarily “the”) right direction. Under encryption, digital IDs are extremely difficult to compromise.
Does this mean that fraud will become an extinct species? Not really. We are already seeing the simple solution for fraudsters: Change the mode of attack. But, yes, for the amateur fraudster, life will not be as easy when digital IDs are in place.
Where do you see digital ID going in 2025? Will it make the progress some are hoping for or will it face some unexpected hurdles?
There are already digital ID programmes under every tree, but most are in their initial phases, with standards and interoperability still missing. This is a question of politics as much as a question of technology. 2025 will see more programmes and pilots, but standards will still be debated and challenged. You might consider it the adolescence phase of digital IDs.
Looking back over 2024, are there any key lessons around fraud prevention you think the industry should keep in mind for the new year?
My lesson to take from 2024 into 2025 may be different from what many would expect. That lesson is: Let Gen-AI fraud prevention learn from cyber attack prevention. Gen-AI fraud detection in many cases looks like it is just starting first grade, while its big brother, Cyber, has already undergone a complete evolution from template detection to anomaly detection.
Too many deepfake detection solutions I see are using big data to train AI detection algorithms, which is inefficient, slow and costly. I’m dreaming of a summit where deepfake detection professionals meet cyber defence experts, and they hackathon together to create the killer combo. Perhaps there has been such an event, but I wasn’t invited!
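To make that contrast concrete, here is a minimal sketch in Python of template matching versus anomaly detection. The features, threshold and known-attack signature are entirely hypothetical illustrations, and scikit-learn’s IsolationForest is just one convenient off-the-shelf anomaly detector; none of this represents AU10TIX’s actual methods.

```python
# Minimal sketch: template (signature) detection vs anomaly detection.
# All feature names, values and thresholds are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-submission features, e.g. face-landmark jitter,
# compression uniformity, device-metadata consistency.
legitimate = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))

# Template detection: flag only submissions close to a known fraud signature.
KNOWN_TEMPLATE = np.array([3.0, -2.0, 1.5])  # one previously observed attack

def template_match(x, tolerance=0.5):
    return np.linalg.norm(x - KNOWN_TEMPLATE) < tolerance

# Anomaly detection: model "normal" traffic and flag anything that deviates,
# including randomised attacks that were never seen before.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(legitimate)

# A randomised attack that resembles no stored template.
attack = rng.normal(loc=4.0, scale=2.0, size=(1, 3))
print(template_match(attack[0]))   # False: the signature check misses it
print(detector.predict(attack))    # [-1]: flagged as an anomaly
```

The template check can only catch what it has already seen, while the anomaly model flags any submission that strays from learned normal behaviour, which is the property cyber defence evolved towards and, in Friedman’s view, the direction deepfake detection should take.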