In addition to the draft EU regulation on Artificial Intelligence (“AI”) titled ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence’ (the “draft EU AI Act”), the EU Commission is also proposing a separate draft directive on non-contractual liability relating to AI titled ‘Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence’ (the “draft AI Liability Directive”).
Whilst the draft EU AI Act aims to define AI and mitigate risks by safeguarding fundamental rights in the use of AI within the EU, the draft AI Liability Directive sets forth provisions allowing persons to bring legal action against service providers for compensation for harm caused by an AI system. The directive builds directly on the draft EU AI Act. In fact, Article 2 of the draft AI Liability Directive provides that “ ‘AI system’ means an AI system as defined in Article 3 (1) of the [proposed] AI Act.”
The draft AI Liability Directive is not being proposed in the context of asset management but in the general context of Article 114 of the Treaty on the Functioning of the European Union (“TFEU”) which gives power to EU institutions to enhance legislative harmonisation to protect the EU’s internal market. Nevertheless, an analysis of the draft AI Liability Directive sheds light on how the EU legislator is approaching non-contractual liability arising from AI and how this might influence future EU legislation relating to AI and funds.
In the explanatory memorandum contained in the draft AI Liability Directive, the EU Commission notes that the tort laws of EU Member States “are not suited to handling liability claims for damage caused by AI-enabled products and services.”1 Moreover, the EU Commission explains that AI’s characteristics of “complexity, autonomy and opacity, may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim.”2
Whilst taking the EU principles of subsidiarity and proportionality into consideration as well as the EU Member States’ varying AI national policies, the EU Commission concludes that the purpose of the draft AI Liability Directive is to enhance legal certainty in the internal market by ensuring that “victims have the same level of protection as in cases not involving AI systems.”3
Article 1 of the draft AI Liability Directive provides that the directive does not apply to criminal proceedings and that it will not interfere with the EU Member States’ interpretation of fault and damage. This corresponds to Recital 10 of the directive’s preamble, which states that the AI Liability Directive “should not harmonise general aspects of civil liability which are regulated in different ways by national civil liability rules.” However, Article 1(3)(d) provides two exceptions relating to the disclosure of evidence and the burden of proof, as set out in Article 3 and Article 4 of the draft AI Liability Directive.
Article 3 of the draft AI Liability Directive provides that national courts should have the authority to order AI service providers to disclose evidence, and it establishes a rebuttable presumption of a breach of a duty of care where the provider fails to comply with such an order. However, Article 3 limits disclosure to what is necessary and proportionate to support the claim, and requires courts to safeguard legitimate interests, including “trade secrets” relating to third parties.
Article 4 of the draft AI Liability Directive relaxes the rule of onus probandi incumbit ei qui dicit, non ei qui negat (i.e. the burden of proof lies on the party who asserts, not on the party who denies). If the victim can prove the defendant’s fault but cannot prove that this fault actually caused the AI system’s harmful output, national courts are still required to apply a rebuttable presumption of a causal link, the rationale being that the claimant may have no insight into how the AI system operates.
The legal regimes regulating EU investment funds, namely, the Undertakings for Collective Investment in Transferable Securities (“UCITS”) and Alternative Investment Funds (“AIFs”) both originate from EU directives rather than directly applicable regulations, meaning that EU Member States can transpose the directives with supplementary obligations for additional investor protection.
Currently, neither Directive 2014/91/EU of the European Parliament and of the Council of 23 July 2014 (“UCITS V”) which amends Directive 2009/65/EC on the coordination of laws, regulations and administrative provisions relating to UCITS (“UCITS Directive”) nor Directive 2011/61/EU of the European Parliament and of the Council of 8 June 2011 on Alternative Investment Fund Managers (“AIFMD”) refer to artificial intelligence. Nevertheless, both of them contain provisions emanating from the civilian legal principle of non-contractual liability.
The draft EU AI Act will not give legal personality to AI but will instead assign risk classifications. Thus, the AI system itself will not be assigned liability in civil litigation; instead, liability is placed on the designer and/or provider of the AI system. Consequently, the provider of an AI system would probably also be regarded as a service provider if AI were to be specifically included by the EU legislator in the UCITS Directive or AIFMD.
Taking the investment fund’s depositary (i.e., the entity safekeeping the fund’s assets) as an example of a service provider in asset management, Article 24 of UCITS V provides that “the depositary is also liable to the UCITS, and to the investors of the UCITS, for all other losses suffered by them as a result of the depositary’s negligent or intentional failure to properly fulfil its obligations.” Similarly, Article 21 of AIFMD provides that “the depositary shall also be liable to the AIF, or to the investors of the AIF, for all other losses suffered by them as a result of the depositary’s negligent or intentional failure to properly fulfil its obligations.”
Both the UCITS framework and AIFMD provide an exception to the depositary’s non-contractual liability if the damage was caused by an “external event beyond its reasonable control.”4 Nevertheless, whilst the AIFMD allows the depositary to transfer liability for the loss of financial instruments held in custody to the relevant sub-custodian, Article 13(2) of the UCITS Directive says that the liability of the depositary “shall not be affected by delegation…of any functions to third parties.”
This means that in relation to non-contractual liability of this specific service provider, the AIFMD (which regulates riskier hedge funds for professionals and/or qualifying investors) allows for the depositary’s non-contractual liability to be transferred, but the UCITS Directive (which contains more protection for retail investors) prohibits the transferring of the depositary’s non-contractual liability.
A continuation of this existing divergent approach is one possible route the EU legislator could take when legislating for AI’s non-contractual liability in EU investment funds. In other words, if AI is used in a UCITS, the EU legislator might argue that liability should be retained by the particular service provider using it, but if an AIF is using AI, the EU legislator might allow non-contractual liability to be transferred to the actual AI provider to match the higher risk tolerance of AIFs.
The differentiation between the assignment of liability by UCITS depositaries and by AIF depositaries, albeit arising in an unrelated context, is analogous to the distinction drawn by paragraphs 2 and 3 of Article 4 of the draft AI Liability Directive.
Article 4 of the draft AI Liability Directive distinguishes between claims brought against the provider of a high-risk AI system and claims brought against the user of such a system. Paragraph 3 of Article 4 provides that, where the claim is against the user, the rebuttable presumption applies if (i) the user failed to monitor the high-risk AI system or (ii) the AI system was not used in accordance with its intended purpose. Paragraph 2 of Article 4, by contrast, provides a wider set of grounds for the rebuttable presumption where the claim for non-contractual liability arising from AI is made directly against the provider of the AI system. These grounds are not cumulative and include evidence relating to the design, operation and transparency requirements of the AI system under the draft EU AI Act.
The EU’s draft AI Liability Directive attempts to regularise scenarios which are probably already factored into the outsourcing policies of licensed investment funds. Although financial regulators might introduce specific prudential rules on AI’s non-contractual liability in UCITS and AIFs, investors already have a contractual relationship with the funds through acceptance of the offering document.
This raises the question of non-cumul, the doctrine concerning whether a claimant can sue in tort when a remedy is already available under a contract. Whilst each EU Member State sets its own jurisprudential position on concurrent liability, the draft AI Liability Directive provides legal certainty for situations of non-contractual liability arising from AI which would otherwise not be covered by existing laws.