EU AI Act comes into force: what it means for payments

The EU AI Act comes into force today (1 August). The legislation is the first of its kind globally to govern artificial intelligence (AI), a rapidly advancing technology that is redefining sectors including payments.

The Council of the European Union approved the Act on 21 May 2024, following the European Parliament's vote on 13 March. The Act seeks to regulate the development and use of the technology, particularly around safety.

AI’s use in public life and business has taken off rapidly over the past couple of years. Its use cases in payments and finance are well documented: fraud prevention and anti-money laundering, product design and development, and customer support, to name a few.

The technology’s rapid development and adoption have not gone unnoticed by politicians and regulators, however. While keen to capitalise on the growth opportunities of AI, leaders also want to ensure its progress is monitored and checked; the EU’s AI Act seeks to do just that.

Shaun Hurst, Principal Regulatory Advisor at Smarsh, a digital communications company, said: “As the world’s first legislation specifically targeting AI comes into law today, financial services firms will need to ensure compliance when deploying such technology for the purpose of providing their services.”

One of the main changes the EU AI Act introduces is a categorisation of AI products and tools by risk level, with the strictest tiers being ‘unacceptable risk’ and ‘high risk’.

Unacceptable-risk AI systems include those capable of cognitive behavioural manipulation of people or specific vulnerable groups; social scoring, where people are classified based on behaviour or socio-economic status; biometric identification and categorisation; and real-time remote biometric identification systems such as facial recognition.

High-risk categories are divided into two areas. The first covers AI used in products governed by EU product safety legislation, such as toys, aviation, cars, medical devices and lifts.

The second high-risk category encompasses the management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; and access to essential private and public services.

AI tools relating to legal fields will also fall under this category. This covers law enforcement, migration, asylum and border control management, and assistance in legal interpretation and application of the law.
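For readers who want a concrete mental model of this tiering, it can be sketched as a simple lookup from use case to risk tier. The sketch below is illustrative only: it assumes the commonly cited four-tier reading of the Act (unacceptable, high, limited, minimal), and the mapping and all names in it are this article's shorthand, not text from the legislation.

```python
from enum import Enum

class RiskTier(Enum):
    # Illustrative four-tier reading of the EU AI Act, as described above.
    UNACCEPTABLE = "unacceptable risk"  # banned outright
    HIGH = "high risk"                  # strict obligations
    LIMITED = "limited risk"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal risk"            # largely unregulated

# Hypothetical mapping of use cases named in this article to tiers.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "critical infrastructure management": RiskTier.HIGH,
    "access to essential services": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
}

def tier_for(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case, defaulting to minimal risk."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in ("social scoring", "general-purpose chatbot", "spam filtering"):
        print(f"{case}: {tier_for(case).value}")
```

In reality, classification turns on detailed legal criteria rather than a lookup table; the point here is only to make the tiered structure concrete.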

When many people speak of AI, including in financial circles, they are most often referring to Generative AI. This is the type of AI which underpins the now widely used ChatGPT chatbot, to offer a well-recognised example.

These chatbots will not be classed as high-risk under EU law, but will be subject to some transparency conditions. Firms must disclose when content has been generated by AI, models must be designed to prevent the generation of illegal content, and summaries of copyrighted data used for training must be published.

Lastly, and perhaps most significantly in the long term, the Act will be overseen and implemented by the newly created European AI Office. To support the AI Act, the Office can conduct evaluations of general-purpose AI models, request information and measures from model providers, and apply sanctions.

Although the Act comes into force today, not all rules will apply immediately. Bans on unacceptable-risk AI use cases will take effect six months from today. Codes of practice will apply after nine months, and rules on general-purpose AI systems that must comply with the Act's transparency requirements will apply after 12 months.
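Those staggered deadlines are simple date arithmetic from the 1 August entry-into-force date. Below is a minimal sketch, assuming the month offsets stated in the preceding paragraph (the phase labels are this article's shorthand, not the Act's wording):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act enters into force on 1 August 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here: day 1 exists in every month)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

# Month offsets for each phase, as described in the paragraph above.
PHASES = {
    "bans on unacceptable-risk AI": 6,
    "codes of practice": 9,
    "general-purpose AI transparency rules": 12,
}

for label, months in PHASES.items():
    print(f"{label}: applies from {add_months(ENTRY_INTO_FORCE, months):%d %B %Y}")
```

Run as written, this places the bans at 1 February 2025 and the general-purpose AI transparency rules at 1 August 2025.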

Hurst remarks: “Banks utilising AI technologies categorised as high-risk must now adhere to stringent regulations focusing on system accuracy, robustness and cybersecurity, including registering in an EU database and comprehensive documentation to demonstrate adherence to the AI Act.

“While the aim is to ensure accountability and the ability to audit AI systems for fairness, accuracy and compliance with privacy regulations, the increase in regulatory pressure means financial institutions must assess their capabilities in keeping abreast with these changes in legislation and whether existing compliance technologies are up to scratch.”