AI & Funds #3 – Risk Classifications 

February 2023 - Mario Mizzi

 

If the draft EU regulation on Artificial Intelligence (“AI”) titled ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence’ (the “draft EU AI Act”) becomes law, investment funds could face an additional risk requiring consideration, namely AI risk.

As outlined in previous briefings in this insight series, the definition of AI in Article 3 of the draft EU AI Act will be an umbrella term covering machine learning and algorithms which, according to a report [1] by the European Securities and Markets Authority, are already being used in the asset management industry. The draft EU AI Act will not give legal personality to AI. Instead, any liability will fall on the AI provider rather than on the AI system itself.

The draft EU AI Act takes a risk-based approach to regulating AI, proposing the introduction of four risk classifications. Different rules will apply to each classification.

The explanatory memorandum of the draft EU AI Act provides that the AI risk classification “puts in place a proportionate regulatory system centred on a well-defined risk-based regulatory approach that does not create unnecessary restrictions to trade” [2]. In other words, the greater the risk posed by an AI system, the more stringent the rules that will apply to it.

The four risk classifications which are envisaged by the draft EU AI Act can be summarised as follows:

  1. Prohibited AI systems, which include “prohibited AI practices” in terms of Title II of the draft EU AI Act;
  2. High Risk AI systems as regulated by Title III of the draft EU AI Act;
  3. Limited Risk AI systems, which are not high-risk in terms of Title III but are subject to certain transparency requirements in terms of Title IV of the draft EU AI Act. The term “limited risk” does not appear in the draft regulation itself but is used by the European Commission in its digital strategy [3] on AI; and
  4. Minimal (low) Risk AI systems, which can be offered without any regulatory obligations under the draft EU AI Act. The term “minimal risk” does not appear in the preamble or the actual text of the draft regulation but in section 5.2.2 [4] of the explanatory memorandum of the draft EU AI Act.

Prohibited AI systems include an “AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm” [5], social scoring systems, and certain real-time biometric surveillance. For the purposes of investment funds, it is unlikely that any outsourced services would fall under this classification.

High-risk AI systems are listed in Annex II [6] and Annex III [7]. Annex II provides a list of harmonised legislation regulating products whose use of AI would be considered high-risk in terms of Article 6 of the draft EU AI Act. Neither Directive 2014/91/EU of the European Parliament and of the Council of 23 July 2014 (“UCITS V”), which amends Directive 2009/65/EC on the coordination of laws, regulations and administrative provisions relating to UCITS (the “UCITS Directive”), nor Directive 2011/61/EU of the European Parliament and of the Council of 8 June 2011 on Alternative Investment Fund Managers (“AIFMD”) is included in Annex II.

Annex III provides a list of industries whose AI systems would be considered high-risk. The list includes critical infrastructure (i.e. “AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity” [8]), education, migration, and other categories which mostly relate to the administrative functions of the state.

Investment services are not included in the non-exhaustive list of high-risk AI systems in the current version of Annex III of the draft EU AI Act. However, Article 29 of the draft EU AI Act sets out certain obligations for users of high-risk AI systems. Specifically, it provides that for “credit institutions regulated by Directive 2013/36/EU, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes, and mechanisms pursuant to Article 74 of that Directive” [9].

Providers of high-risk AI systems have various obligations under the draft EU AI Act. These include data governance (Article 10 of the draft EU AI Act), technical documentation (Article 11), record keeping (Article 12), transparency (Article 13) and human oversight (Article 14).

Given that investment funds and investment managers who provide and/or use AI are not classified as high-risk, they would only need to comply with Title IV of the draft EU AI Act on ‘Transparency Obligations for Certain AI Systems’. The only obligation emanating from this title is one of information disclosure, and even this obligation will have limited application in finance, since it only covers human interaction with AI. The explanatory memorandum of the draft EU AI Act states that “businesses or public authorities that develop or use any AI applications not classified as high risk would only have minimal obligations of information” [10].

Therefore, if the draft EU AI Act becomes law in the form currently proposed, funds governed by the UCITS Directive or the AIFMD, and other types of investment funds governed by local laws (such as Professional Investor Funds in Malta), would not be considered to be making use of high-risk AI systems. This means that they would not need to comply with Article 29 of the draft EU AI Act, which sets out the obligations of users of high-risk AI systems.

Under the proposed draft EU AI Act, funds, fund administrators and investment managers making use of complex AI systems would only be subject to a transparency requirement in terms of Article 52. This echoes the explanatory memorandum, which provides that the EU legislator’s aim in the draft EU AI Act is to strike a balance between the use of AI and the fundamental rights of EU citizens as enshrined in the Charter of Fundamental Rights of the European Union.

The omission of investment services from the draft EU AI Act is unlikely to be an oversight. It indicates either that the European Commission plans to overhaul the draft EU AI Act or, more likely, that the EU intends to take a piecemeal approach to regulating the use of AI in this industry, whereby MiFID, the UCITS Directive, the AIFMD and the SFDR would be updated accordingly to cater for the use of AI.

 


 

Footnotes:

 

  1. European Securities and Markets Authority, Artificial Intelligence in EU Securities Markets (TRV Report, February 2023).
  2. European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (2021) <https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence>, page 4.
  3. European Commission, Digital Strategy <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai>.
  4. European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (2021) <https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence>, page 12.
  5. Ibid, Article 5.
  6. European Commission, Annex to Proposal for AI Regulation (2021) <https://ec.europa.eu/newsroom/dae/redirection/document/75789>, Annex II.
  7. Ibid, Annex III.
  8. Ibid, Annex III, point 2(a).
  9. European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (2021) <https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence>, Article 29.
  10. Ibid, page 11.

 

Disclaimer: This document does not purport to give legal, financial or tax advice. Should you require further information or legal assistance, please do not hesitate to contact Dr. Mario Mizzi.
