AI in the financial sector: Classification and practical significance of the new BaFin guidance on the use of AI

Illustration zum Artikel KI im Finanzsektor
Bild: Pragati/Adobestock

With the ongoing integration of artificial intelligence into the business models, processes and decision-making structures of financial companies, regulators are focusing not only on issues of efficiency and innovation, but increasingly also on those of digital operational resilience. Against this backdrop, on 18 December 2025, BaFin published comprehensive guidance on ICT risks associated with the use of artificial intelligence (AI) in financial companies. The paper is aimed in particular at financial companies that are required to comply with Regulation (EU) 2022/2554 – the Digital Operational Resilience Act (DORA) – and is expressly intended as non-binding guidance on how to properly apply the requirements of DORA to AI systems when using them.

Noteworthy is the consistent classification of AI systems within the existing DORA framework. BaFin makes it clear that, from a regulatory perspective, the use of AI does not require special treatment, but should be treated as a specific type of network and information system – with all the resulting requirements for governance, ICT risk management, third-party control, and cyber and data security.

The new guidance is based on discussions between BaFin and financial companies and does not constitute a binding interpretation of DORA by BaFin. Accordingly, no supervisory expectations are defined, but BaFin emphasises that the risk-based approach and the principle of proportionality must always be observed.

Nevertheless, the paper is of practical relevance, particularly against the backdrop of the increasing use of generative AI and large language models in core and support processes at banks, insurance companies and other financial institutions. It addresses a key question in current regulatory practice: How can innovative AI applications be reconciled with strict digital resilience requirements without stifling innovation or downplaying risks?

The following article presents the main contents of the guidance, systematically classifies them within the existing regulatory framework and highlights the practical implications for the legally and DORA-compliant use of AI in financial companies.

Classification: AI under the regime of digital operational resilience

First of all, it is noteworthy that there is a clear distinction from Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Regulation). The guidance deliberately limits itself to ICT risks associated with the use of AI and their treatment under DORA. Issues of model logic, autonomy, data ethics or substantive AI obligations are excluded. The document thus functionally complements the (future) European AI supervisory regime without creating regulatory overlaps.

BaFin strictly locates the guidance within the DORA regulatory framework and the relevant Regulatory Technical Standards on ICT risk management (RTS RMF, Delegated Regulation (EU) 2024/1774) and on the outsourcing of ICT services (RTS SUB, Delegated Regulation (EU) 2025/532).

AI is not treated as a separate ‘new technology’, but as a sub-case of network and information systems within the meaning of Art. 3 No. 2 DORA. AI systems must therefore – regardless of their specific use case – be fully embedded in ICT risk management and assessed according to the same standards as other ICT systems, particularly with regard to risk profile, complexity, data processing and the support of critical or important functions.

BaFin attaches particular importance to ICT third-party risks, as many AI systems can in fact only be operated on a cloud basis. In addition, there are specific risks in the area of data quality and integrity, for example due to manipulated training data, as well as dangers arising from the unsafe reuse or decommissioning of AI applications. Cybersecurity is also gaining in importance: backdoors in training, vulnerabilities in open-source components, adversarial attacks or unauthorised access can directly lead to wrong decisions and significant operational risks. The systematic control of these risks is – consistently – the task of ICT risk management.

AI as an ICT asset: BaFin’s understanding of the term

BaFin adopts the legal definition of an AI system from Art. 3 No. 1 AI Regulation, but reduces the focus to the technical and organisational level. For guidance purposes, an AI system is not understood as an abstract decision-making or autonomous system, but as a concrete manifestation of an ICT system within the meaning of DORA.

From a regulatory perspective, an AI system consists of a combination of ICT assets (hardware and software) and the associated ICT infrastructure, into which a complex mathematical model is integrated. The AI model itself is explicitly classified as an ICT asset in the form of software. The decisive factor is the integration of the system into the company’s existing ICT landscape.

AI systems are therefore fully subject to the DORA regulations. All classic obligations of ICT risk management – from identification, protection and prevention, detection, response and recovery to learning processes and communication – apply to AI without restriction. This applies regardless of the technical or organisational form in which AI is used, i.e. equally to self-developed AI models, open source solutions, cloud-based large language models and AI functions that are part of standard software.

Life cycle-oriented approach instead of use case thinking

AI systems – like all other ICT assets – must be fully integrated into the ICT risk management framework according to DORA. This includes, in particular, the systematic identification of vulnerabilities, for example in model training, data pipelines or inference, as well as the assessment of quantitative and qualitative risks (Art. 8 DORA). Appropriate risk mitigation measures must be defined, documented and regularly reviewed for identified risks, for example through adversarial training approaches or the monitoring of model drift (Art. 9 DORA).

BaFin structures the requirements along the AI life cycle:

  1. Development and testing,
  2. Operation,
  3. Decommissioning.

The security and resilience of an AI system must be ensured throughout this cycle, from data collection and model development and deployment to ongoing operation and decommissioning.

This approach is relevant in practice because ICT risks do not arise from the position of AI in the value chain, but from its integration into the existing ICT landscape.

The risk-based approach and the principle of proportionality (Art. 4 DORA) remain guiding principles: AI systems that support critical or important functions are subject to significantly stricter requirements than, for example, purely assistive tools (such as AI-based self-service assistants) under full human supervision that are not involved in decision-making processes.

As with other ICT assets, AI systems must also be integrated into the ICT risk management framework. This includes the identification of vulnerabilities (e.g. in model training, data pipelines or inference) and the assessment of quantitative and qualitative risk criteria (Art. 8 DORA). Furthermore, measures to address identified risks, such as the use of adversarial training methods or the monitoring of model drift, must be documented and reviewed regularly (Art. 9 DORA).

The ICT risk management framework must be reviewed at least annually (Art. 6(5) sentences 1 and 2 DORA). Furthermore, financial firms must be able to submit a report on the review of their ICT risk management framework in a searchable electronic format at the request of the competent authority. The report must also document the current risk status, the measures taken and any weaknesses identified (Art. 27 RTS RMF). If necessary, this report can also be supplemented with specific information on AI systems.

Governance, strategy and competence building

BaFin takes a particularly clear stance on governance requirements:

  • Financial institutions should define an AI strategy that is either independent or integrated into their IT or DORA strategy. This is particularly important when AI systems support critical or important functions.
  • A consistent, documented end-to-end process is recommended, covering the entire AI life cycle from strategic decision-making to development and operation to decommissioning. Before implementing AI, it is necessary to check whether existing processes, controls and the handling of information assets are suitable for AI use.
  • The systematic development of AI expertise is also crucial. Training and further education must ensure that employees have the knowledge appropriate to their tasks when dealing with AI systems (Art. 13(6) DORA). This applies equally to specialist departments, IT and management.
  • The ultimate responsibility for managing ICT risks lies with the management body (Art. 5 (2) (a) DORA), which in turn must have sufficient AI and ICT expertise (Art. 5 (4) DORA). Roles and responsibilities – particularly when using AI-generated results in decision-making processes – must be clearly defined.
  • AI governance frameworks should regularly provide for the integration of ICT risk management, control functions and internal audit – graded according to the criticality of the respective AI system.
  • Finally, BaFin emphasises the importance of risk-based management of third-party ICT service providers. Contract design, risk assessment and control measures must be adapted to the specific AI use case and, if necessary, supplemented with AI-specific precautions.
  • The ICT risk management framework must be reviewed at least once a year (Art. 6 (5) DORA). The report on the status of the ICT risk management framework documents the current risk status, measures implemented and weaknesses identified (Art. 27 RTS RMF). This report can – and should – be supplemented with AI-specific risk information.

Development, testing and change management: ‘AI is software’

The guidance follows a clear and highly relevant principle: AI is software. The established standards of software and ICT risk management under DORA therefore apply to the development, operation and further development of AI systems.

If financial institutions develop AI systems themselves, these are subject to regular software development and ICT processes. A key requirement is proof of sufficient technical competence. Employees who develop, operate or maintain AI systems must have adequate knowledge of how the AI used works, the associated risks and the specifics of cloud and on-premise operation (Art. 13(6) DORA). This also applies explicitly when specialist departments outside the ICT function use AI assistants to develop software. The management body must also have a sufficient understanding to be able to properly manage the risks associated with AI software development (Art. 5(4) DORA).

At the procedural level, BaFin requires a structured development process covering planning, development, testing, rollout and operation. Proven software engineering methods – such as unit and integration tests, code reviews and the operation of separate development and test environments – must also be applied to AI systems. Comprehensive technical documentation, including the algorithms, data and parameters used, is essential to ensure traceability and verifiability.

Changes to AI systems are subject to strict change and version management. Every adjustment – including to the model or training data – must be independently reviewed, tested and documented. Emergency changes require special protective measures (Art. 17 RTS RMF). Version control systems and the archiving of model statuses are necessary in order to be able to track errors and reproduce results.

Finally, special attention must be paid when using open-source components and when generating code using AI assistants. Open-source libraries can pose additional risks due to malicious code or lack of maintenance and must therefore be carefully checked. AI-generated code must be treated in the same way as manually created code and, in particular, examined for undesirable functions and security vulnerabilities using static code analysis (Art. 16(3) RTS RMF).

Consequently, AI systems must be embedded in structured software development life cycles. This also includes comprehensive, risk-oriented testing procedures. In addition to functional tests, BaFin requires security, integration and stress tests in particular, the scope of which is based on the criticality of the respective AI system. Increased attention is also required when using open-source components and AI-supported code generation, as these can give rise to additional sources of attack and error.

BaFin emphasises the increased testing requirements, particularly for the use of generative AI and large language models (LLMs). Due to complex and often non-transparent model architectures, possible unannounced model changes by third-party providers, and novel AI-specific attack vectors – such as data poisoning, model poisoning, or prompt injection – traditional testing approaches are often insufficient. Instead, financial companies are required to expand their testing concepts to include AI-specific security and robustness tests and to continuously adapt these to the state of technical development.

Operation and decommissioning of AI systems

According to the supervisory authority, risks associated with the use of AI arise not only in the development phase, but also in particular during ongoing operation and when the use of the system is terminated.

For operation, BaFin requires clearly defined and documented processes that fully integrate AI systems into asset and configuration management. In particular, model versions, training data, software libraries and interfaces must be recorded. Key elements include continuous monitoring, risk-oriented logging of accesses and model decisions, and the integration of AI systems into ICT business continuity and recovery management, including regular testing.

AI systems must be protected against common cyber threats such as adversarial attacks, model poisoning or inference attacks during operation. Appropriate technical security measures are required to counter these threats (Art. 9(3) DORA). Role-based access rights for AI models and training data are also an effective tool. These must be regularly reviewed and documented (Art. 21 RTS RMF).

BaFin pays particular attention to the controlled decommissioning of AI systems. Financial companies should have binding rules for uninstalling systems to prevent data leakage, misuse, or the uncontrolled reuse of outdated models. This includes completely removing models, securely deleting or archiving data, deactivating access rights, and keeping clear records.

Special features of operating AI in the cloud

Since AI systems – especially generative AI and LLMs – are regularly operated in the cloud, BaFin pays particular attention to cloud specifics. It consistently applies the standards of its supervisory notice on outsourcing to cloud providers to AI applications and supplements them with AI-specific risk perspectives.

A comprehensive risk assessment prior to concluding a contract is essential. In addition to classic cloud risks, this assessment also covers model-related aspects such as data flows, model changes by the provider, and dependencies on computing capacities. Transparency regarding subcontracting chains is particularly important, as specialised third-party providers are often involved in AI operations.

Contractually, BaFin requires robust SLAs that explicitly regulate availability, performance, security requirements and incident reporting for AI systems as well. These are supplemented by comprehensive audit and control rights, which must extend to subcontractors.

Finally, the supervisory authority requires viable exit strategies to enable a change of provider or repatriation. Avoiding the vendor lock-in effect is not only economically relevant, but also relevant from a regulatory perspective.

Cyber and data security particularly exposed in AI

BaFin attaches central, cross-lifecycle importance to cyber and data security in the use of AI systems. In its view, AI systems are particularly exposed points of attack because they process sensitive data and are often involved in decision-making processes. Financial companies must therefore explicitly include AI in their ICT security guidelines in accordance with DORA and implement appropriate technical and organisational measures to protect systems, models and data so that the secure and resilient operation of AI can be ensured within the ICT risk management framework in accordance with DORA. These include, in particular, strict access and authorisation concepts, encryption, continuous monitoring, logging and protection mechanisms against AI-specific attacks such as data poisoning, adversarial inputs or unauthorised model queries. Requirements for data classification, integrity and secure deletion are also emphasised. Overall, BaFin makes it clear that cyber and data security in the use of AI is not an additional requirement, but a core component of DORA-compliant ICT risk management.

Serious ICT-related incidents involving AI systems are, of course, fully subject to the reporting requirements under DORA. Security incidents, disruptions or manipulations of AI systems should therefore not be viewed in isolation, but rather as ICT incidents within the meaning of Art. 17 ff. DORA, provided that they impair the availability, integrity, confidentiality or authenticity of data or services.

Financial institutions must have appropriate processes in place to detect, classify, handle and report such incidents. These processes should explicitly cover AI-specific scenarios, such as manipulation of training data, model errors, unexpected model behaviour or security incidents in cloud-based AI services. BaFin also expects incidents related to AI to be identified internally as such in order to enable targeted evaluation and follow-up.

When using AI systems via third-party ICT service providers – especially cloud providers – contractual provisions must be put in place to ensure timely information about relevant incidents. In addition, the supervisory authority requires financial companies to have sufficient internal resources to professionally evaluate AI-related incidents, assess their impact and initiate appropriate remedial measures.

Appendix particularly helpful: case study

The comprehensive appendix to the guidance, which includes a case study on the operation of an LLM-based AI assistant, is particularly helpful. The otherwise rather abstract DORA requirements are translated here into a realistic, technically concrete application situation. The case study translates the otherwise rather abstract DORA requirements into a concrete regulatory blueprint for AI deployment. Here, BaFin demonstrates how ICT risks can be identified, assessed and managed throughout the entire AI lifecycle without setting new binding standards.

Of particular practical relevance is the fact that the case study compares three common infrastructure variants (on-premise, cloud in own tenant, cloud outside tenant). This makes it clear that the risk profile does not result from the AI use case, but rather from the technical operating model and data flows. Financial companies can immediately classify their own implementations and review existing risk assumptions.

In addition, the case study addresses specific attack scenarios and undesirable developments – such as data poisoning, prompt injection, uncontrolled data leaks or lack of access restrictions – and then assigns exemplary countermeasures to them. This provides structured argumentation and documentation support, particularly for compliance, IT and risk functions, without the BaFin actually setting new minimum standards.

Finally, the case study is particularly valuable because it explicitly serves as an illustration of best practice. It enables companies to properly justify their own AI concepts to supervisory authorities, internal auditors or external auditors by showing the considerations that BaFin bases its assessment of AI risks on.

Conclusion: No new law – but new depth

BaFin’s guidance gives the existing DORA requirements a noticeably greater depth and practical contour for the use of AI in financial companies for the first time. It makes it clear that AI offers considerable potential for efficiency and control, but that this is inextricably linked to increased ICT, cyber and third-party risks. With the increasing use of complex and often cloud-based AI systems, the requirements for governance, organisation and technical control are growing.

BaFin’s key finding is that DORA provides a sufficient and viable framework for the safe use of AI – provided that the requirements are implemented consistently, on a risk-based and life-cycle-oriented basis. The development, testing, operation and decommissioning of AI systems should not be viewed in isolation, but as interrelated elements of effective ICT risk management. AI systems developed in-house and by third parties are subject to the same standards; differences arise solely from their criticality and integration into business processes.

With its deliberately non-binding nature, the guidance document positions itself as a practical working tool. It does not formulate rigid expectations, but rather clarifies the considerations on which BaFin bases its assessment of AI risks. The case study in particular serves as a regulatory blueprint that financial companies can use to align their own AI concepts and justify them appropriately to supervisors, auditors or inspectors.

Ultimately, BaFin makes it clear that the use of AI in the financial sector is neither privileged nor critically assessed across the board from a regulatory perspective. The only decisive factor is whether financial companies are able to integrate AI systems into their ICT landscape in a controllable, transparent and resilient manner. Innovation is therefore permissible – and in fact expected – but only viable where it is accompanied by robust governance, effective ICT risk management and a high level of operational maturity. In practice, this means that AI is not an experimental space outside the scope of regulation, but must prove itself as a regular ICT system within the existing DORA framework.



By continuing, you accept our privacy policy.
You May Also Like