Is artificial intelligence (AI) currently regulated in the financial services industry? “No” is usually the intuitive answer.
But a deeper look reveals bits of existing financial regulation that implicitly or explicitly apply to AI: for example, automated decision-making in the GDPR, algorithmic trading in MiFID II, algorithm governance in RTS 6, and many provisions of various cloud regulations.

While some of these statutes are very advanced and future-ready, notably GDPR and RTS 6, they were all written before the more recent explosion of AI capabilities and adoption. As a result, they are what I call “pre-AI.” Additionally, AI-specific regulations have been under discussion for at least a couple of years, and various industry and regulatory bodies have produced high-profile white papers and guidance, but no actual regulations per se.
But everything changed in April 2021, when the European Commission issued its proposal for the Artificial Intelligence Act (AI Act). The current text applies to all sectors, but as a proposal, it is not binding and its final language may differ from the 2021 version. While the act seeks a horizontal and universal structure, certain industries and applications are addressed explicitly.
The act takes a risk-based “pyramid” approach to AI regulation. At the top of the pyramid are prohibited uses of AI: subliminal manipulation, for example through deepfakes; exploitation of vulnerable individuals and groups; social credit scoring; real-time biometric identification in public spaces (with certain exceptions for law enforcement); and so on. Then come high-risk AI systems that affect basic rights, safety, and well-being, in such areas as aviation, critical infrastructure, law enforcement, and healthcare. Below these are several types of AI applications on which the AI Act imposes certain transparency requirements. After that is the unregulated “everything else” category, which by default covers more mundane AI solutions such as chatbots, banking systems, social media, and web search.
While we all understand the importance of regulating AI in areas that are central to our lives, such regulations could hardly be universal. Fortunately, the Brussels regulators included a key provision, Article 69, which encourages providers and users of lower-risk AI systems to voluntarily observe, on a proportionate basis, the same standards that their high-risk counterparts must meet.
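To make the pyramid concrete, here is a minimal sketch in Python of how the act’s four-tier classification might be modeled. The tier names mirror the structure described above; the example use cases in `EXAMPLE_USES` and the `classify` helper are my own illustrative assumptions, not text from the act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the proposed EU AI Act, top of the pyramid first."""
    PROHIBITED = "prohibited"          # banned outright
    HIGH_RISK = "high-risk"            # strict compliance requirements
    LIMITED_RISK = "transparency"      # disclosure duties only
    MINIMAL_RISK = "minimal"           # unregulated; Article 69 voluntary codes

# Illustrative mapping only -- the act itself enumerates uses in its annexes.
EXAMPLE_USES = {
    "social credit scoring": RiskTier.PROHIBITED,
    "real-time public biometric ID": RiskTier.PROHIBITED,
    "critical infrastructure": RiskTier.HIGH_RISK,
    "credit scoring": RiskTier.HIGH_RISK,
}

def classify(use_case: str) -> RiskTier:
    """Anything not explicitly listed falls into the unregulated default tier."""
    return EXAMPLE_USES.get(use_case, RiskTier.MINIMAL_RISK)

print(classify("credit scoring").value)  # high-risk
print(classify("web search").value)      # minimal (the "everything else" default)
```

The default branch is exactly where Article 69 operates: the bulk of AI systems land in the lowest tier unless the act explicitly lists them.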
Liability is not a component of the AI Act, but the European Commission notes that future initiatives will address liability and complement the act.

The AI Act and financial services
The financial services sector occupies a gray area in the law’s list of sensitive industries. This is something a future draft should clarify.
- The explanatory memorandum describes financial services as a “high-impact” sector rather than a “high-risk” sector like aviation or healthcare. Whether this is just a matter of semantics is not clear.
- Financial services are not included among the high-risk systems in Annexes II and III.
- “Credit institutions,” or banks, are referred to in various sections.
- Credit scoring is listed as a high-risk use case. But the explanatory text frames this in the context of access to essential services, such as housing and electricity, and such fundamental rights as non-discrimination. That framing relates more closely to the prohibited practice of social credit scoring than to financial services per se. Still, the final draft of the law should clarify this.
The act’s position on financial services thus leaves room for interpretation. As things stand, financial services would be subject to Article 69 by default. The AI Act is explicit about proportionality, which strengthens the case for applying Article 69 to financial services.
The main stakeholder roles specified in the act are “provider,” or the vendor, and “user.” This terminology is consistent with the AI-related white papers, guidance, and best practices published in recent years. “Operator” is a common designation in AI parlance, and the act provides its own definition that encompasses providers, users, and all other actors in the AI supply chain. Of course, in the real world, the AI supply chain is much more complex: third parties act as providers of AI systems to financial firms, and financial firms act as providers of the same systems to their clients.
The European Commission estimates the cost of complying with the AI Act at between 6,000 and 7,000 euros for providers, presumably as a one-off cost per system, and between 5,000 and 8,000 euros per year for users. Of course, given the diversity of these systems, one set of numbers could hardly apply to all industries, so these estimates are of limited value. If anything, they may create an anchor against which the actual costs of compliance in different sectors will be compared. Inevitably, some AI systems will require such close oversight by both the provider and the user that the costs will be much higher, creating unnecessary dissonance.
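As a rough, back-of-the-envelope illustration of how these estimates might scale, consider the sketch below. The euro ranges are the Commission figures quoted above; the number of systems and the time horizon are purely hypothetical assumptions.

```python
# Commission estimates quoted above (EUR): one-off for providers, annual for users.
PROVIDER_ONE_OFF = (6_000, 7_000)   # per system, presumably one-off
USER_ANNUAL = (5_000, 8_000)        # per system, per year

def compliance_cost_range(n_systems: int, years: int) -> tuple[int, int]:
    """Indicative low/high total cost for a firm that is both provider and user."""
    low = n_systems * (PROVIDER_ONE_OFF[0] + years * USER_ANNUAL[0])
    high = n_systems * (PROVIDER_ONE_OFF[1] + years * USER_ANNUAL[1])
    return low, high

# Hypothetical: a firm running 20 in-scope AI systems over five years.
low, high = compliance_cost_range(n_systems=20, years=5)
print(f"EUR {low:,} to EUR {high:,}")  # EUR 620,000 to EUR 940,000
```

Even under the Commission’s own numbers, the total for a mid-sized portfolio of systems quickly reaches the high six figures, which is why a single cross-industry estimate is of limited use.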

Governance and Compliance
The AI Act introduces a detailed, comprehensive, and new governance framework: a proposed European Artificial Intelligence Board would oversee the individual national authorities. Each EU member state can either designate an existing national body to take on AI oversight or, as Spain recently chose to do, create a new one. Either way, this is a major undertaking. AI providers will be required to report incidents to their national authority.
The law sets out many regulatory compliance requirements applicable to financial services, including:
- Ongoing risk management processes
- Data and data governance requirements
- Technical documentation and registration
- Transparency and provision of information to users
- Knowledge and competence
- Accuracy, robustness and cyber security
By introducing a detailed and strict penalty regime for non-compliance, the AI Act aligns with GDPR and MiFID II. Depending on the severity of the breach, the penalty can reach 6% of the offending company’s global annual revenue. For a multinational technology or financial company, that could amount to billions of US dollars. The AI Act’s penalties actually occupy the middle ground between those of GDPR and MiFID II, whose fines reach 4% and 10%, respectively.
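To see where the “billions of US dollars” figure comes from, here is a quick sketch. The percentage caps are those cited above (AI Act 6%, GDPR 4%, MiFID II 10%); the revenue figure is a hypothetical assumption.

```python
# Maximum-fine caps as a share of global annual revenue, as cited above.
FINE_CAPS = {"AI Act": 0.06, "GDPR": 0.04, "MiFID II": 0.10}

def max_fines(global_revenue: float) -> dict[str, float]:
    """Upper bound on fines under each regime for a given global annual revenue."""
    return {regime: cap * global_revenue for regime, cap in FINE_CAPS.items()}

# Hypothetical multinational with USD 100 billion in global annual revenue.
for regime, fine in max_fines(100e9).items():
    print(f"{regime}: up to USD {fine / 1e9:.0f} billion")
# AI Act: up to USD 6 billion
# GDPR: up to USD 4 billion
# MiFID II: up to USD 10 billion
```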

What’s next?
Just as the GDPR became a benchmark for data protection regulations, the EU AI Act is likely to become a model for similar AI regulations around the world.
Without regulatory precedent to build upon, the AI Act suffers from a certain “first-mover disadvantage.” However, it went through thorough consultation, and its publication sparked vigorous discussion in legal and financial circles, which will hopefully inform the final version.
An immediate challenge is the law’s overly broad definition of AI: the one proposed by the European Commission includes statistical approaches, Bayesian estimation and potentially even Excel calculations. As the law firm Clifford Chance commented, “This definition could capture almost any business software, even if it does not involve any recognizable form of artificial intelligence.”
Another challenge is the act’s proposed regulatory framework, under which a single national regulator would cover all sectors. This could create a fragmentation effect, whereby a dedicated regulator would oversee all aspects of certain industries except AI-related matters, which would fall under the separate regulator mandated by the AI Act. Such an approach would not be optimal.
In AI, one size may not fit all.
Moreover, how the act is interpreted at the individual industry level is almost as important as the language of the act itself. Both existing financial regulators and the newly created or designated AI regulators should provide the financial services industry with guidance on how to interpret and implement the act. These interpretations must be consistent across all EU member states.
Although the AI Act will become strict, legally binding law if and when it is enacted, unless Article 69 is substantially changed, its provisions will remain soft law, or recommended best practices, for all industries and applications except those that are explicitly designated. That seems a smart and flexible approach.

With the publication of the AI Act, the EU has boldly gone where no other regulator has gone before. We now have to wait—and hopefully not long—to see what regulatory proposals are made in other technologically advanced jurisdictions.
Will they recommend AI regulations for individual industries? Will their regulations promote democratic values or strengthen state control? Could some jurisdictions opt for little or no regulation? Will AI regulations be unified into a universal set of global rules, or will they be “Balkanized” by region or industry? Only time will tell. But I believe AI regulation will be a net positive for financial services: it will demystify the current regulatory landscape and hopefully help provide solutions to some of the industry’s most pressing challenges.
If you liked this post, don’t forget to subscribe to Enterprising Investor.
All posts are the opinion of the author. As such, they should not be construed as investment advice, nor do the views expressed necessarily reflect the views of CFA Institute or the author’s employer.
Image credit: ©Getty Images / mixmagic
Professional Learning for CFA Institute members
CFA Institute members are empowered to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can record credits easily using their online PL tracker.