Artificial Intelligence (AI) has accelerated existing forms of identity fraud but has yet to pose a genuinely novel threat to the sector, according to a panel of industry experts.
Discussing the potential threats from AI tools at the Future Identity Festival in London, the panel agreed that criminals are using the technology to improve their current fraud attempts rather than develop something completely new.
Fabian Eberle, COO at biometric authentication firm Keyless, explained that AI tools are making everyday fraud attempts ever more sophisticated.
“The impact of AI at the moment is showing in the highly personalised phishing campaigns that we are seeing. AI is being used to make truly personalised emails that look like they are coming from employers or a person’s communities, providing a huge challenge for the ordinary person to identify.
“We need to train users to be vigilant in protecting information in the face of these ever more sophisticated fraud attempts.”
Eberle also highlighted the danger posed by deepfakes, as evidenced by a case in Hong Kong earlier this year where an employee at an unnamed company was duped into transferring HK$200m (£20m) of her firm’s money to fraudsters during a deepfake video conference call.
Eberle said that while there were likely some failures of existing safeguards, the challenge that deepfake technology poses means firms need to review their procedures and make sure they are up to date and secure.
Nish Ranatunga, Head of Monitoring and Screening, Group AML at HSBC, agreed that existing fraud challenges are being supercharged by AI at the moment, but warned this might not always be the case.
“What we are seeing at the moment is the advancement of existing typologies; we are not seeing novel risk. The tools now available on the market are making existing fraud attempts more sophisticated and are more readily available to people who may want to use them.”
However, Ranatunga is concerned that it may not continue in this way. “A unique type of fraud powered by AI is in the realms of science fiction at this point. The circumvention of bank controls in a unique way is not happening at the moment. However, it could be around the corner, so we need to be aware of the threat.”
Ranatunga added that AI is not only a threat to the market, highlighting that the technology can also be used to uncover fraudulent activity that might otherwise go undetected.
Paola Cristina Nunez Ameri, Compliance Risk Country Lead and AMLCO Belgium at Citigroup, also warned against over-reliance on AI to gather information, underlining that a human element should remain embedded in the process to safeguard companies.
She added that enabling firms to share information more widely across all actors in the market would help prevent fraudulent activity, and stressed the need for ongoing monitoring of client information rather than reliance on data obtained at the onboarding stage.