EU AI Act – What financial institutions need to know

The impact of artificial intelligence (AI) on the development of financial services ranges from the personalisation of the banking experience and the optimisation of lending to alpha-seeking investment strategies and modern chatbot services. The AI Regulation adopted by the EU on 13 March 2024 is a groundbreaking, world-first piece of legislation that regulates the use of AI across all sectors, including financial services. The AI Regulation will have a significant impact on how financial institutions operate, innovate and manage the risks associated with AI technologies.

This article looks at the impact of the AI Regulation on the European financial sector.

Shaping Europe’s digital future

Following the adoption of the Artificial Intelligence (AI) Act on 13 March 2024, the European Union has positioned itself at the forefront of global efforts to create a comprehensive regulatory framework for AI. The Act is a central component of a broader package of policy measures to support the development of trustworthy AI and is flanked by other initiatives (e.g. the AI innovation package). The financial sector deserves particular attention here, as it has taken on a pioneering role in the implementation of AI technologies, from credit checks to fraud detection.
The AI Regulation is part of a broader European data strategy that aims to harness the potential of data for innovation while ensuring privacy and data protection. That strategy facilitates the reuse of public sector databases and access to private datasets, enabling financial institutions to develop personalised and more efficient services. This will increase competition and improve consumer choice in the financial sector. In addition, the Financial Data Access Regulation (FiDA), which is still at the draft stage and complements the AI Regulation, will further democratise data access by enabling consumers to share their financial data with third parties in a secure manner.

What is AI?

Article 3 No. 1 of the AI Regulation defines an “AI system” as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. On the one hand, this definition offers the necessary adaptability to the rapid technological change in the field of AI systems; on the other hand, its breadth creates a degree of legal uncertainty for developers, operators and users of such systems.

At its core, the AI Regulation aims to combine the advanced potential of AI technology with the essential need to minimise risks and ensure consumer and data protection. The European Commission has adopted a risk-based approach under which each system is classified according to whether it poses an unacceptable, high, limited or low/minimal risk to the end user.

Risk-based approach for AI technologies

The AI Act states that “AI should be a human-centred technology. It should serve as a tool for humans, with the ultimate goal of enhancing human well-being”. To achieve this, the EU has banned the use of AI for a number of potentially harmful purposes (with some specified exceptions for government, law enforcement or scientific purposes). These include, for example:

  • biometric categorisation systems based on sensitive characteristics,
  • the untargeted scraping of facial images from the internet or from video surveillance footage to create facial recognition databases,
  • the recognition of emotions in the workplace and in schools,
  • social scoring,
  • predictive policing (where it is based solely on profiling or on the assessment of a person’s characteristics), and
  • AI that manipulates human behaviour or exploits human vulnerabilities.

Unfortunately, some passages of the Act are worded vaguely and openly, leaving room for diverging interpretations. The full implications of these prohibitions will only become clear once additional guidelines on their practical implementation are available.

New rules for high-risk AI in the financial world

High-risk systems – i.e. systems that could have a “detrimental impact on people’s safety or fundamental rights” – are permitted under the AI Regulation but must comply with a number of new rules. High-risk systems have a significant impact on a person’s life and livelihood or can make it more difficult for them to participate in society. This category therefore also includes AI applications that are central to financial decision-making, such as credit scoring, risk assessment and fraud detection. Also included are biometric systems, the operation of critical infrastructure and AI-supported HR software, for example for screening job applications.

For such high-risk AI applications, the AI Regulation prescribes a comprehensive risk assessment and risk mitigation measures to ensure transparency, accuracy and fairness. Financial institutions must therefore carry out thorough risk assessments and introduce robust risk mitigation systems before deploying such AI applications. This includes:

  • ensuring the quality of the data fed into AI systems in order to minimise bias and discriminatory results,
  • keeping detailed documentation for transparency,
  • the automatic logging of processes and events in the AI system (see the sketch after this list),
  • the fulfilment of transparency and information obligations towards users, and
  • the establishment of mechanisms for human oversight.
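
To make the logging obligation more concrete, the following minimal Python sketch wraps a hypothetical credit-scoring call with an automatic, machine-readable audit trail. All names here (log_event, score_application, “credit-scoring-v1”) are illustrative assumptions, not terms from the Act or from any specific product.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

    def log_event(system_id: str, event: str, payload: dict) -> None:
        """Append a timestamped, machine-readable record for later audits."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "event": event,
            "payload": payload,
        }
        logging.info(json.dumps(record))

    def score_application(applicant: dict) -> float:
        """Hypothetical credit-scoring call wrapped with automatic logging."""
        # Log the input schema rather than raw personal data.
        log_event("credit-scoring-v1", "input_received",
                  {"fields": sorted(applicant.keys())})
        score = 0.5  # placeholder for the actual model inference
        log_event("credit-scoring-v1", "score_produced", {"score": score})
        return score

In practice, records like these would also feed the documentation and human-oversight mechanisms listed above.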

High-risk AI systems must also be designed and developed to achieve an appropriate level of accuracy, robustness and cybersecurity and to function consistently in this respect throughout their lifecycle.
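
What “functioning consistently throughout the lifecycle” can look like in practice is sketched below: a simplified post-deployment check that compares live accuracy against a validation baseline. The five-percentage-point tolerance is an arbitrary example, not a threshold taken from the Act.

    def accuracy_within_tolerance(baseline_accuracy: float,
                                  live_predictions: list[int],
                                  live_labels: list[int],
                                  tolerance: float = 0.05) -> bool:
        """Return True if live accuracy stays within tolerance of the baseline."""
        correct = sum(p == y for p, y in zip(live_predictions, live_labels))
        live_accuracy = correct / len(live_labels)
        if live_accuracy < baseline_accuracy - tolerance:
            # In production this would trigger review, retraining or rollback.
            print(f"ALERT: live accuracy dropped to {live_accuracy:.2%}")
            return False
        return True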

However, the AI Act differentiates between applications and specifies that AI systems are not considered high-risk if they do not pose a significant risk to the health, safety or fundamental rights of natural persons, including where they do not materially influence the outcome of decision-making. Accordingly, AI systems are not considered high-risk if one or more of the following criteria are met:

  • The AI system is designed to perform a narrow procedural task;
  • the AI system is designed to improve the result of a previously performed human activity;
  • the AI system is intended to recognise decision-making patterns or deviations from previous decision-making patterns and is not intended to replace or influence the previously performed human assessment without appropriate human review; or
  • the AI system is intended to perform a preparatory task for an assessment relevant to the use cases listed in the AI Act.

Notwithstanding this, an AI system is always considered to be high-risk if it carries out profiling of natural persons.

Whether a system qualifies as high-risk AI must therefore be examined on a case-by-case basis and may well fall into a grey area under certain circumstances.
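
Read as a decision rule, the derogation logic above is straightforward: profiling always wins, and otherwise meeting any one criterion lifts the high-risk classification. The following Python sketch encodes this as a first-pass screening aid; the flag names are our own shorthand, and a real classification requires legal review rather than a boolean checklist.

    from dataclasses import dataclass

    @dataclass
    class DerogationScreening:
        """Our own shorthand flags for the criteria listed above."""
        performs_profiling: bool
        narrow_procedural_task: bool
        improves_prior_human_work: bool
        detects_decision_patterns_only: bool
        preparatory_task_only: bool

    def is_high_risk(s: DerogationScreening) -> bool:
        # Profiling of natural persons always makes the system high-risk.
        if s.performs_profiling:
            return True
        # Otherwise, meeting any one derogation criterion lifts the classification.
        return not any([
            s.narrow_procedural_task,
            s.improves_prior_human_work,
            s.detects_decision_patterns_only,
            s.preparatory_task_only,
        ])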

Systems that pose only a limited risk to the end user (including chatbots and biometric categorisation systems), on the other hand, merely need to operate under “a limited number of transparency obligations”. This means, for example, that AI-generated audio, image and video content must be labelled as such, so that users can choose whether or not to continue their interaction with the technology.
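
As a simple illustration of such a labelling obligation, a chatbot response might carry an explicit disclosure before the user decides whether to continue. The wording and the function below are our own example, not text prescribed by the Act.

    def with_ai_disclosure(generated_text: str) -> str:
        """Prefix AI-generated output with a disclosure (illustrative wording)."""
        return "[This response was generated by an AI system.]\n" + generated_text

    print(with_ai_disclosure("Your card will arrive within five business days."))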

Although low- or minimal-risk systems are not subject to additional regulatory requirements under the AI Act, the Act encourages providers of such systems to adhere to voluntary codes of conduct modelled on the rules for their high-risk counterparts. This is primarily intended to promote market conformity.

Entry into force

The obligations of the AI Regulation apply to manufacturers, providers and distributors of AI systems, product manufacturers who integrate AI systems into their products and users of AI systems. In fact, every company that comes into contact with AI is affected by the AI Regulation.

Depending on the risk class of the underlying AI systems, the AI Act provides for transitional periods of 6 to 36 months within which the requirements must be implemented.

As with the General Data Protection Regulation, the purpose of this delay is to give companies time to ensure that they comply with the rules. Once the transitional periods have expired, significant penalties can be expected for non-compliance. These are staggered, with the most severe penalties reserved for those who violate the “unacceptable use” bans: fines of up to 35 million euros or seven per cent of the company’s global annual turnover, whichever is higher.
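
The “whichever is higher” rule is easy to misread, so a short calculation makes it concrete. The turnover figure below is a made-up example.

    def max_fine_prohibited_practice(global_turnover_eur: float) -> float:
        """Upper bound for prohibited-practice fines: EUR 35m or 7% of
        worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * global_turnover_eur)

    # A bank with EUR 2bn global turnover: 7% = EUR 140m, which exceeds EUR 35m.
    print(f"{max_fine_prohibited_practice(2_000_000_000):,.0f}")  # 140,000,000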

However, the damage to a company’s reputation if it breaches the new rules may be even more costly. Especially in the financial world, the trust of customers and partners should not be underestimated.

Double-edged sword for the financial industry

The AI Regulation has both positive and negative aspects for the financial sector:

Financial institutions need to adapt their AI systems and ensure that they are compliant, especially with regard to high-risk systems such as credit scoring. The AI Regulation makes very clear the need for transparent, interpretable AI models and for unbiased data of the highest quality.

Implementing the AI Act requires financial institutions to integrate the new AI governance and risk management requirements into their operational framework. This includes alignment with sector-specific guidelines and the use of new technologies for supervisory purposes. At EU level, the European Commission’s newly created AI Office will play a crucial role in enforcing the Act and ensuring that AI systems used in the financial sector are compliant and do not pose undue risks to consumers.

For institutions, however, adapting to the AI Regulation is also an opportunity to win consumer trust, ensure ethical AI operations and potentially achieve competitive differentiation in the market. Financial institutions should therefore proactively assess their AI systems and conduct a gap analysis to determine which systems fall under the high-risk scenarios of the Act and which new regulatory requirements are not yet met.

In addition, a new culture of transparency regarding the collection and use of consumer data will be necessary to maintain consumer trust. The need to adhere to higher, cleaner and stricter transparency standards may, however, also inspire the next generation of services and innovation across the industry.

Outlook

While the EU’s headline concerns are AI hallucinations (models making mistakes and inventing facts), the viral spread of deepfakes and AI-driven manipulation that could mislead voters in elections, the financial sector faces challenges of its own under the new AI Regulation.

Given the tight implementation deadlines, it is crucial to start planning the appropriate compliance measures immediately. An early and comprehensive introduction of AI governance strategies can give financial institutions a head start in terms of time to market and the quality of high-risk AI systems. However, the requirements of the Regulation are complex and demanding. In the long term, the ability to combine quality, compliance and scalability will therefore be decisive for the success of AI systems in Europe.

The Act’s coverage of general-purpose AI systems, including large language models and generative AI, opens up new opportunities for financial institutions to improve their services. These technologies can be used for a range of applications, from personalised financial advice and investment guidance to more efficient customer service, bringing both innovation and competitive advantage.
