As Artificial Intelligence (AI) continues to develop, the customer needs to be front and centre of everything the financial services industry does when it comes to adopting the technology.
This viewpoint was expressed by Saira Khan, Head of Innovation & Partnerships at First Direct Bank, in a panel discussion at Pay360 last month. Khan, along with two other speakers from the payments and fintech spaces, explored the profound effect AI is having, and will have, on the payments sector.
There are a number of use cases for AI in finance. Fraud prevention, for one, is an area where AI’s potential has been widely observed and implemented. However, consumer concerns around the technology remain. Addressing these concerns and building up trust will be a key task for finance stakeholders looking ahead, as Khan explained.
“I think it’s hard to give you tangible real-time cases for five years down,” she said. “I think no one knows or has a crystal ball on what’s going to happen, genuinely.
“We need to be responsible. We need to have trust. We need to make sure that we are keeping the customer at the heart of everything that we do.”
Raising awareness about AI will be central to building up trust in the emerging tech, Khan asserted. Trust will be the key word as tech firms continue to move forward with AI development, and if successful use cases are to be fully realised, it will be essential.
So what are these successful use cases? As noted above, fraud prevention is an area where many payments, fintech and banking firms have identified a key opportunity for AI, although fraudsters have also been using the tech for their own illicit purposes.
“If you look at the use cases that GenAI stands for, and can be more impactful, from that perspective I’m looking at it from fraud detection, prevention and KYC, naturally,” said Jovi Overo, Director and Member of the Board at Unlimit.
“I’m looking at it from the ability to really embrace that customer experience, how to use data to enrich and personalise that for the customer. I look at it from the perspective of what kind of impact it can create with respect to B2B payments.”
AI can make fraud prevention processes easier, but it can also be a useful tool in the arsenal of fraudsters, particularly those that are more technologically literate and have the right computing knowledge and skill sets.
Deep fakes are a particular area of concern in this regard. Khan noted the risk this emerging technological threat poses, raising concerns around the way voices and people’s faces can be mimicked, making fraud prevention a more difficult task.
On the expo floor at Pay360, Payment Expert had a brief conversation with SEON’s Director of Solution Engineering, Logan Porter, who cited a survey the firm had conducted finding that 87% of respondents viewed fraud as a growing threat, of which 71% said AI-backed fraud was the biggest growing threat.
He continued: “We’re seeing the wide-scale adoption of AI both in terms of fraud fighting technologies but also in the committing of fraud itself. We’ve seen the rise of fraud as a service (FaaS) in recent years, where fraudsters charge each other for solutions that explain or provide the tools to commit fraud.
“These fraudsters are sophisticated and train other aspiring fraudsters in classes – AI has accelerated that with the availability of deep fakes, the ability to simulate someone’s voice and create soundbites of that person speaking, and the creation of synthetic identities.
“This has allowed fraudsters to significantly scale their techniques. We’ve seen that in the last several years and we believe that trend will continue and accelerate.”
To respond to these risks, businesses need to make sure they are one step ahead of fraudsters, who are in turn always on the lookout for innovations that can assist them in committing their crimes.
Taking control of AI to take the fight to the fraudsters using this same technology, alongside other ‘foundational tools’ as Porter put it, will be invaluable to protecting both businesses and customers.
“What AI is very good at doing is sifting through mass volumes of data points, and coming up with data points and anomalies that are connected to fraud,” he continued.
“You can do that in a much faster way than an individual human having to look at hundreds or thousands of transactions. We can utilise AI and ML to identify how things are happening, and prevent them.
“If a user is subject to phishing or impersonation using a deep fake, we can see whether that person logs into a bank account to transfer funds, we can get their IP and device intelligence – such as whether they’re on a phone call or are sharing their screen – and we can flag that as out of the norm when taking other behaviour into account.”
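The kind of check Porter describes – combining IP and device intelligence with a customer’s behavioural baseline to flag an out-of-norm session – can be sketched as a simple rules pass. This is a toy illustration only; the field names, signals and thresholds below are hypothetical and do not represent SEON’s actual product logic.

```python
# Toy sketch of rule-based session risk flagging. All field names and
# thresholds are hypothetical, for illustration only.

def flag_session(session: dict, usual_ips: set) -> list:
    """Return a list of risk signals for a banking session."""
    signals = []
    if session.get("ip") not in usual_ips:
        signals.append("unfamiliar_ip")           # login from an unknown IP
    if session.get("on_phone_call"):
        signals.append("possible_coaching")       # active call during a transfer
    if session.get("screen_sharing"):
        signals.append("remote_access_risk")      # screen shared with a third party
    if session.get("transfer_amount", 0) > 10 * session.get("avg_transfer", 1):
        signals.append("amount_out_of_norm")      # far above the usual transfer size
    return signals

# Example: a large transfer made during a phone call from a new IP
session = {"ip": "203.0.113.9", "on_phone_call": True,
           "screen_sharing": False, "transfer_amount": 5000, "avg_transfer": 120}
print(flag_session(session, usual_ips={"198.51.100.4"}))
# → ['unfamiliar_ip', 'possible_coaching', 'amount_out_of_norm']
```

In practice, as Porter notes, machine learning models rather than hand-written rules sift these signals at scale, but the principle is the same: score each session against the customer’s normal behaviour and flag the anomalies.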
Back on the panel, Overo observed that no matter how effective an AI tool is developed for fraud detection, prevention and KYC, there will always be bad actors wishing to utilise AI for fraud.
“There will always be bad actors, so it’s going to be a perpetual cycle of bad versus good,” he said, noting that as the industry moves ahead with faster and more seamless payments, ‘faster payments means faster fraud’.
Regardless of people’s views on AI, whether positive or negative, it is hard to disagree that there is a lot of hype around the technology – whether around its risks or its benefits.
Marcus Martinez, Industry Advisor – EMEA at Microsoft’s Worldwide Financial Services, explained that there needs to be a distinction between traditional AI and generative AI (Gen AI).
Building on some comments Khan made about the power AI can have in the payments space, notably driving forward innovative technologies and practices such as embedded payments, Martinez pointed to the vast datasets available.
“The scope of data we’ll be able to use is much, much broader,” he said. “What I mean by that is that one of the superpowers of generative AI is really to deal with unstructured data – text, video, images, audio – and when you think about the need to provide more personalised, hyper-personalised experiences, GenAI is really something.”
However, amidst all this hype and debate, it is important that stakeholders do not get ahead of themselves. AI is still developing after all, and many of the use cases have not yet been fully tested or realised.
We need to have the ‘right expectations’ of AI, Martinez emphasised, adding that developers need to choose whether to set an ‘autopilot’ model or adopt a more ‘copilot’ approach. Above all, ensuring trust and responsibility, as Khan outlined, is something AI developers must factor in.
AI development, scaling and adoption is increasing week-by-week, but its full use in fintech and payments – as well as in other business sectors and also for more general consumer use – has not yet been fully seen. As the panel observed, there are a range of factors firms and regulators must keep in mind as this process continues.