The European Union’s (EU) new AI Act took effect yesterday (2 February) and is set to lay the groundwork for how AI is handled by operators across the continent.

Formally launched in August 2024, the legislation is the first of its kind, setting out how AI is regulated and how companies that either develop AI models or offer AI as a service must comply with its guidelines and rules.

Yesterday marked the deadline for companies to officially comply with the requirements and guidelines laid out by the EU AI Act. This means that any company active in Europe that fails to comply with, or violates, the rules will be hit with large penalties.

In summary, the EU AI Act aims to ensure that AI is used responsibly and to give customers and users the utmost protection from the capabilities of the technology, particularly where the protection of data and personal information is concerned.

The Act classifies AI systems into varying risk categories, with more stringent guidelines set for those AI models deemed ‘high-risk’. These systems must adhere to requirements such as transparency and accountability, and must undergo regular risk assessments to ensure they remain compliant with the regulations.

Diyan Bogdanov, Director of Engineering Intelligence and Growth at Payhawk, believes the EU AI Act will provide a better framework for how payment service providers handle AI over the next several years.

He said: “The EU AI Act isn’t just another compliance burden – it’s a framework for building better AI systems, particularly in financial services. By classifying finance applications like credit scoring and insurance pricing as ‘high-risk’, the Act acknowledges what we’ve long believed: when it comes to financial services, AI systems must be purposeful, precise, and transparent.

“We’re already seeing this play out in the market. While some chase the allure of general-purpose AI, leading financial companies are embracing what we call ‘right-sized’ AI, focusing on ‘targeted automation’ through AI agents and/or the deployment of smaller-scale models – all within robust governance frameworks.

“The path forward in financial services is clear: success will come not from ambitious AI claims but from focused, practical implementation that puts security and reliability first.”


How EU AI Act impacts finance industry

The new rules surrounding AI in Europe will have a significant impact on how payment companies handle their biometric verification services.

There are rules restricting biometric categorisation systems based on certain sensitive characteristics, the untargeted scraping of facial images, and social scoring. This comes amid a new and continuing surge in AI-related fraud cases, such as deepfakes, over the past year.

This is why it has become essential for payment and financial companies alike to employ a dedicated compliance lead to deal with high-risk AI cases. A damning report from Signicat in May 2024 revealed that companies are lagging behind the rapid development of AI, as in-house expertise is underdeveloped and/or scarce, while budgets need readjusting to combat AI-related fraud.

While these issues will persist, payment service operators such as payabl. will now view the EU AI Act as an opportunity to bring new ideas to the table within the guidelines laid out by the new regulations.

Mario Joannou, Head of Digital Risk and Privacy at payabl., commented: “The EU AI Act marks a defining moment for artificial intelligence in financial services. As regulators push for greater transparency, fairness and accountability, companies must shift their AI strategies to promote responsible AI. 

“The era of opaque, black-box AI is ending, replaced by a future where transparency and trust become as critical as innovation and efficiency,” said Joannou. 

“The shift isn’t just about avoiding fines or regulatory checkboxes, it’s about redefining how AI operates in financial decision-making. Credit scoring, fraud detection, and customer authentication – until now dominated by complex machine learning models – must now be built with interpretability at their core. Regulators and customers must be able to understand the ‘why’ behind every decision.

“In this sense, compliance shouldn’t be seen as a burden. Trust is the new currency, and companies that can demonstrate fairness, bias mitigation and robust AI governance will win both customer and regulatory approval faster than their peers.”


EU takes different approach to AI arms race

The EU AI Act comes into effect at a prominent time, with AI firmly at the forefront since the turn of the year.

The launch of Chinese AI model DeepSeek sent shockwaves across the market last week, introducing a new competitor to US-based companies such as OpenAI, Microsoft, Google and Nvidia.

Bogdanov adds that while the US and China compete to build the largest, most expansive AI models, Europe’s AI Act provides a foundation for companies to build models under secure guidance for long-term growth.

He said: “Europe is setting the global standard for how AI should work in financial services — and it’s exactly what the industry needs.

“While the US and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones. The EU AI Act’s requirements around bias detection, regular risk assessments, and human oversight aren’t limiting innovation – they’re defining what good looks like in financial services AI. 

“This regulatory framework gives European companies a significant advantage. As global markets increasingly demand transparent, accountable AI systems, Europe’s approach will likely become the de facto standard for financial services worldwide.”