In the financial sector, the Organisation for Economic Co-operation and Development (OECD) has for many years played a central role as a policy and thought leader, particularly where technological developments cut across national regulatory approaches and international coordination becomes necessary. The OECD paper “Supervision of Artificial Intelligence in Finance – Challenges, Policies and Practices”, published on 27 January 2026, now provides an international stocktake of how supervisory authorities oversee the use of artificial intelligence (AI) in the financial sector in practice. The OECD deliberately adopts a high-level perspective: it assumes that member states generally already have suitable regulatory frameworks in place, but identifies significant challenges in the supervisory interpretation and application of these rules in the context of increasingly complex AI systems.
The particular added value of the OECD paper lies in its focus on supervisory practice and its examination of how supervisory authorities worldwide seek to reconcile the responsible use of AI with core supervisory objectives – financial stability, market integrity and the protection of financial consumers. The paper therefore explicitly does not present a regulatory blueprint, but rather a guide to supervisory understanding and practice.
Against this background, the recently published BaFin guidance on ICT risks in the use of AI can also be placed in context. It represents a national example of how the challenges described in the OECD paper can be operationalised in supervisory practice. While the OECD contribution outlines the international framework, the BaFin guidance shows how existing European requirements, in particular DORA, are applied specifically to AI systems (see also: “AI in the Financial Sector: Classification and Practical Relevance of the New BaFin Guidance on the Use of AI” of 18 December 2025 on PayTechLaw.com https://paytechlaw.com/einsatz-von-ki-im-finanzsektor/).
The following article first presents the key findings of the OECD paper and then places them in the context of German supervisory practice.
Methodology and Structure of the OECD Paper
The OECD paper is based on a qualitative survey of financial supervisory authorities from OECD member states as well as selected non-members. The focus is on supervisory authorities’ experience with the concrete use of AI systems in supervised institutions, in particular in the areas of credit granting, fraud prevention, market surveillance, trading, compliance and customer interaction. The OECD does not analyse individual institutions, but aggregates supervisory experience derived from inspections, reporting, dialogue formats and day-to-day supervisory practice.
Structurally, the paper is divided into three central levels of analysis:
(1) the use of AI in the financial sector,
(2) the supervisory challenges in overseeing these systems, and
(3) the supervisory approaches and instruments used to address these challenges.
The aim is not to formulate best practices in the sense of binding requirements, but to make recurring patterns and problem areas visible.
Typical Areas of AI Use from a Supervisory Perspective
The OECD paper shows that AI is already widely used in the financial sector today, albeit with very different levels of risk intensity. While classical machine learning models have for years been a fixed component of areas such as fraud prevention or credit scoring, supervisory attention is increasingly shifting towards generative AI applications, for example in customer communication, internal analysis and decision-support systems, or in the compliance environment.
The OECD highlights that many supervisory authorities have fewer issues with clearly defined, tightly controlled AI applications than with systems that
- affect several business processes simultaneously,
- evolve dynamically over time, or
- are heavily dependent on external data sources and third-party providers.
From a supervisory perspective, these characteristics in particular make it more difficult to assign clear responsibility and to carry out a robust risk assessment.
Core Supervisory Challenges from the OECD’s Perspective
Particularly instructive is the presentation of those aspects that supervisory authorities repeatedly identify as problematic. These include in particular:
The limitations of traditional supervisory approaches: Many existing supervisory and audit mechanisms are designed for deterministic systems. AI models whose outputs are probabilistic or change over time can only be captured to a limited extent using these tools.
The asymmetry between institutions and supervisors: The paper emphasises that supervisory authorities often have less technical insight into AI systems than the supervised entities or their technology providers. This exacerbates dependency and information risks.
The shifting of risks along the value chain: Risks do not arise solely within the institution itself, but often already at the stage of model training, data procurement or decisions by external providers – without supervisors being able to intervene directly at these points.
These findings explain why the OECD calls less for new regulation and more for the further development of supervisory tools.
Technology Neutrality as a Guiding Principle – with Practical Limits
One of the central findings of the OECD paper is that the vast majority of OECD states assume that they already have sufficient regulatory frameworks for the use of AI in the financial sector. These frameworks are based on the principle of technology neutrality: existing requirements on governance, risk management, consumer protection, IT security and outsourcing apply regardless of whether decisions are made by humans, classical statistical models or AI systems.
At the same time, the OECD makes it clear that this technology neutrality is increasingly reaching its limits in supervisory practice. The more complex, dynamic and less explainable AI models become, the more difficult it is to integrate them into traditional control and audit mechanisms.
Challenges in Practical Implementation
The OECD paper identifies a number of recurring problem areas that appear in almost all jurisdictions examined.
At the centre are initially model risks and validation issues. While existing model risk frameworks generally also apply to AI models, in practice there is considerable uncertainty as to how validation, ongoing monitoring and governance should be designed for self-learning or only partially explainable systems.
Closely linked to this is the issue of explainability and fairness. The limited traceability of many AI-driven decisions complicates not only internal control processes but also the fulfilment of supervisory transparency and justification obligations vis-à-vis supervisory authorities and customers.
In addition, data and governance questions come to the fore. The quality, origin and ongoing management of data used for AI are receiving increased attention, as is the concrete design of “human-in-the-loop” concepts and responsibilities.
Finally, the OECD points to the growing dependence on third-party providers. The use of external AI models, cloud infrastructures and specialised technology providers leads to new concentration, control and resilience risks that cannot always be captured solely through the traditional supervision of individual institutions.
Supervision Requires Its Own AI Expertise
Another focus of the OECD paper lies on the institutional capabilities of supervisory authorities themselves. Effective AI supervision presupposes that supervisors have sufficient technical expertise and increasingly deploy AI-supported SupTech tools. Investments in staff, training and international cooperation are described as indispensable prerequisites for keeping pace with the speed of technological development.
Supervisory Approaches in International Comparison
The OECD paper shows that supervisory authorities worldwide are taking very different approaches to dealing with AI. While some supervisors primarily rely on ex post reviews and traditional supervisory instruments, others are experimenting with dialogue-oriented and iterative formats, such as:
- structured preliminary discussions on AI use cases,
- topic-specific reviews (e.g. fairness, bias, data quality),
- supervisory sandboxes and controlled testing environments.
It is striking that supervisory authorities which invested early in specialist expertise and internal specialisation tend to view AI less as a regulatory risk and more as a supervisory governance challenge.
Supervisory Practice: More Guidance Instead of New Rules
Notably, the OECD explicitly does not advocate comprehensive new regulation. Instead, it recommends targeted supervisory clarifications and guidance where existing rules are perceived in practice as unclear or difficult to apply. Such guidance is intended to create legal certainty without impairing the innovative capacity of the financial sector through overly detailed requirements.
At the same time, the paper emphasises the importance of intensive dialogue between supervisors and market participants. Sandboxes, model testing or structured exchange formats are described as suitable instruments for jointly developing new assessment and supervisory approaches. One example cited is the “AI Live Testing” offered by the UK Financial Conduct Authority.
Implications for Europe and Germany – Alignment with DORA and BaFin Practice
For the European context, the OECD paper is particularly relevant against the backdrop of the EU AI Act, DORA and existing sectoral financial regulation. The OECD explicitly warns against layering new AI-specific requirements “on top of” existing supervisory regimes in an uncoordinated manner. Instead, supervisors must ensure that overlaps, contradictions and duplicate requirements are avoided and that existing frameworks are further developed in a consistent manner.
This is precisely where the BaFin guidance comes in. It translates the abstract challenges described in the OECD paper into concrete supervisory expectations and fully integrates AI systems as ICT assets into the DORA framework. This makes it clear that AI is not a regulatory special case, but a particularly demanding manifestation of existing risks that must be managed using established instruments of governance, risk management and supervision.
Conclusion: From Regulation to Supervisory Reality
The OECD paper makes it clear that the real challenge in the use of artificial intelligence in the financial sector does not lie at the level of rule-making, but at the level of supervisory and operational implementation. Unlike earlier phases of financial market regulation, the key question today is not whether AI is regulated, but how existing, largely technology-neutral regulatory frameworks can be practically applied, supervised and enforced in relation to AI-driven business processes.
The OECD thus confirms a paradigm shift in financial supervision: AI is not treated as an independent object of regulation, but as a cross-cutting technology that concentrates known types of risk – such as model, IT, outsourcing, governance and consumer protection risks – at a new level of intensity and complexity. For supervisory authorities, this means that traditional audit and control mechanisms must be further developed without abandoning the principle of technology neutrality.
For financial institutions, this leads to a clear conclusion. The legally compliant use of AI is less a question of formal compliance with new individual rules and more a question of designing internal structures that can withstand supervisory scrutiny. Institutions must be able to explain, manage and take responsibility for the use of AI not only technically, but also organisationally, procedurally and legally. Governance structures, responsibilities, model validation, data quality, documentation and third-party management therefore become central compliance topics.
This is particularly evident in the European context. With the EU AI Act, DORA and existing sectoral financial regulation, no new self-contained AI supervisory regime is emerging, but rather a complex interaction of existing frameworks that must be implemented coherently. The OECD paper rightly warns against viewing AI-specific requirements in isolation or as purely additive. What matters instead is the ability of supervisors – and institutions – to apply these frameworks consistently and to avoid contradictions, duplicate requirements and blind spots.
The BaFin guidance on the use of AI offers a concrete example of what such operationalisation can look like. It shows that, from a supervisory perspective, AI is not an innovation space outside established control mechanisms, but an integral part of existing ICT and risk management. At the same time, it becomes clear that supervision focuses less on abstract prohibitions and more on the quality of implementation and governance in individual cases.
Against this background, the OECD paper can also be read as an implicit call to action – for both supervisory authorities and market participants. Successful use of AI in the financial sector will increasingly take place where institutions invest early in robust governance, substantive explainability, clear responsibilities and an open supervisory dialogue. AI thus becomes not primarily a legal risk, but a stress test for the maturity of existing compliance and risk structures.
Ultimately, the OECD paper shows that the future of AI in the financial sector will not be decided in new legislative texts, but in supervisory practice. Those who already understand AI today as a regulated component of their business model – rather than as a technical add-on – will be better positioned from a regulatory perspective as supervisory expectations continue to evolve.