European digital identity and fraud prevention provider Signicat has revealed that organisations are concerned about the looming threat of deepfake and artificial intelligence (AI) fraud attacks. 

These concerns emerged in Signicat’s latest report, which details the growing role of AI in identity fraud and how key decision-makers are experiencing more of these emerging attacks, with some believing they are unprepared to tackle them. 

The meteoric rise of generative AI has brought with it the deepfake: the ability to digitally replicate a person’s face to an extremely high standard. 

While deepfakes have been around for the last three years, Signicat’s report states that the technology is now falling into the hands of fraudsters, who are deploying it to damaging effect. 

According to the report, AI was detected in 42.5% of fraud attempts, and 29% of those attempts were considered successful. One in nine respondents estimated that AI is used in as much as 70% of fraud attempts against their organisation. The report also highlighted that an estimated 38% of revenue lost to fraud is attributable to AI-driven attacks. 

Fraud decision-makers recognised that AI will drive nearly all future identity fraud. However, the report stated that there is confusion and limited understanding about its exact nature, its impact, and the best prevention technologies. 

Asger Hattel, CEO at Signicat, commented: “Fraud has always been one of our customers’ biggest concerns, and AI-driven fraud is now becoming a new threat to them. It now represents the same amount of successful attempts as general fraud, and it is more successful if we look at revenue loss.  

“AI is only going to get more sophisticated from now on. While our research shows that fraud prevention decision-makers understand the threat, they need the expertise and resources necessary to prevent it from becoming a major threat. 

“A key part of this will be the use of layered AI-enabled fraud prevention tools, to combat these threats with the best that technology offers.” 

Hattel alludes to the need for organisations to adopt AI-enabled prevention tools now, before the threat can grow. Whilst this would be a wise decision, the report reveals that AI-driven fraud has not yet been as successful as some may think. 

With over three quarters of businesses establishing teams dedicated to AI-driven identity fraud and implementing countermeasures, the success rate of these attacks has remained steady over the last three years. 

However, the report outlines that the technology will only become more sophisticated, and whilst the success rate may stay the same in the short term, the volume of AI-related attacks is expected to explode. 

Because of this, awareness of AI-driven identity fraud is very high. Most fraud decision-makers agreed that AI is a major driver of identity fraud (73%), that AI will enable almost all identity fraud in the future (74%), and that the technology will mean more people fall victim to fraud than ever before (74%). 

Organisations do, at least, understand the threat AI poses: its ability to make identity fraud easier, more accessible, and scalable. They can detect AI in the attacks they face, and they understand that the problem is only going to get worse.  

Signicat does reveal, though, that these companies are unprepared for the looming threat, as adopting and integrating fraud prevention tools takes a combination of budget, time and expertise.