The Financial Conduct Authority (FCA) has taken note of the extent of Artificial Intelligence (AI) adoption in the British financial services sector.
More banks, payments providers and fintech companies, among other financial stakeholders, are utilising AI for day-to-day operations and advanced tasks.
For regulators overseeing this sector, like the FCA, responsible AI development and usage is a key concern. In its latest move, the FCA is establishing an AI Lab, announced by Jessica Rusu, its Chief Data, Information and Intelligence Officer, at the regulator’s 10th anniversary event.
“One of the most transformative of these is AI,” Rusu remarked. “AI will revolutionise financial services, providing solutions to improve consumer financial inclusion, help prevent market abuse, and support the delivery of new products and services.
“And whilst we are just starting to see AI’s benefits emerge, we are clear that those benefits do not come without risks. As a regulator, we must play a critical role in ensuring AI is deployed in a way that is safe, fair and in the best interests of consumers and the market as a whole.
“Even some of the world’s most effusive backers of AI recognise the importance of ensuring the risks of AI are mitigated, as we all work to realise the undoubtedly enormous benefits the technology has to offer.
“That’s why AI is a priority for us. We are committed to enabling a safe and responsible environment for the beneficial use of AI in UK financial markets, in a way that supports the growth and competitiveness of the sector.”
Four-factor approach to AI regulation
The AI Lab will consist of four components, Rusu detailed: the AI Spotlight, AI Sprint, AI Input Zone, and the Supercharged Sandbox and testing initiative. Starting with the Spotlight, this function will serve as a space for firms and tech developers to share real-world examples of AI usage.
The AI Sprint will bring together participants from industry, academia, regulation, technology and consumer representatives, focusing on the safe adoption of AI in financial services. The Input Zone, meanwhile, will function as a platform for financial services stakeholders to share input on the FCA’s regulatory approach to AI.
Lastly, the FCA plans to enhance its digital sandbox infrastructure via ‘greater computing power, enriched datasets and increased AI testing capabilities’.
The sandbox concept is being adopted more widely across financial services, particularly with regard to digital assets and digital securities, with the UK launching a sandbox in this area earlier this year.
The AI Lab demonstrates how regulators are becoming increasingly aware of the role AI is playing in financial services. The UK has already established an AI Safety Institute, which is working with its counterpart in the US, and the FCA’s AI Lab will likely build on any progress made there.
As regulators take more action on AI, affected industries can expect further requirements. This goes beyond technology and financial services, with high-risk, tech-driven sectors such as gambling likely to feel some impact.
Speaking at the Payment Expert Summit last month, Monika Grue, Regulatory Compliance Director at LiveScore Group, a betting and media firm, shared her view that ‘at some point we will have to have an AI officer in the same way we have a GDPR officer’.
As regulatory approaches and AI technology develop, Rusu states that the FCA is dedicated to ‘working alongside industry leaders, academia, consumer groups, Government and fellow regulators to explore how AI can be safely and effectively integrated into financial services’.