Transatlantic discord? An international perspective on AI regulation 

November 2022 - Shoosmiths LLP

Recent months have seen a flurry of developments globally towards the regulation of Artificial Intelligence. Government bodies in the US, UK and EU have released proposals and updates on regulating AI, and the three approaches differ in important ways.

On 4 October 2022, the White House released a “Blueprint for an AI Bill of Rights”, a set of principles the US Government hopes will guide US companies that control and deploy automated systems capable of affecting the rights of US citizens. Although the guidelines are not binding, the White House appears hopeful they will encourage companies to take steps to protect consumers, particularly in the case of “Big Tech”.

On 18 July 2022, the UK Government published its policy paper containing proposals for establishing what it calls a “pro-innovation approach” to regulating AI. As with the US Bill of Rights, the UK policy paper is non-binding, being an interim paper in advance of an AI White Paper due to be published later in 2022 (whether this timetable is preserved following recent political disruption in the UK remains to be seen). Although the policy paper offers only a brief outline of the regulatory framework the UK Government may set out through the White Paper, it gives an indication of the Government’s intended approach to regulation of AI. 

In contrast to the US and UK, the EU, in its draft EU AI Act (announced in 2021), appears to be steering towards an all-encompassing and prescriptive approach. The draft Act represents a well-developed set of regulations for governing the use of AI in the EU. It is scheduled to receive the approval of the Committee of Permanent Representatives imminently, with EU ministers expected to adopt the Council’s general approach at the Telecommunications Council meeting on 6 December 2022, ahead of negotiations with the European Parliament. There will, however, be a grace period once the legislation comes into force, allowing organisations to ensure alignment and compliance over a transitional period.

The three approaches can be placed on an ascending scale of complexity. The US AI Bill of Rights can be seen as a first step towards AI regulation, with five relatively pithy principles set out from the point of view of the consumer. The UK policy paper sets out clear proposals, but these still give only broad indications of the Government’s approach. The draft EU AI Act, by contrast, comprises a fully-fledged, comprehensive and detailed set of regulations.

Defining AI

The theoretical approach of the three players towards AI also differs, notably around the core question of ‘what is AI?’. Following extensive back and forth as the legislation made its way through the EU institutions (reflecting the inevitable tensions between the bloc’s key political groupings, representing the interests of Big Tech on the one hand and the primacy of individual citizens’ rights on the other), the draft EU AI Act proposes a detailed, technology-neutral definition of ‘AI System’. The definition refers to a range of software-based technologies encompassing machine learning, logic and knowledge-based systems, and statistical approaches, and attempts to cover AI systems both on a standalone basis and as a component of a product. The definition seems intended to ‘future proof’ against AI technological developments, by using broad and somewhat open-ended language and by allowing for amendment over time by the Commission. The breadth of the definition may, however, mean that many existing applications of AI, based on commonly used technology, fall within the remit of the legislation. This appears to be intentional on the EU’s part.

By contrast, the UK Government seems to have rejected a uniform definition of AI. The Government notes in the policy paper that “we do not think that [the EU’s approach] captures the full application of AI and its regulatory implications” and that “this lack of granularity could hinder innovation”. The UK Government instead seems keen to devolve responsibility to industry regulators.

With that in mind, and to ensure the rules that govern the development and use of AI in the UK ‘keep pace with the evolving implications of the technologies’, the UK has identified two core characteristics by which regulators will be able to assess whether an AI system presents risks. These are (i) adaptiveness: the ability of the AI system to ‘learn’ or be ‘trained’, the associated risk being that the AI may make decisions for itself that seem illogical or are difficult to substantiate from a human perspective; and (ii) autonomy: the ability of the AI system to operate and react at speed in complex situations in a way humans cannot, the associated risk being that this could leave little or no ongoing control by humans.

UK regulators may then determine whether the AI should be subject to a higher level of regulation, and may apply more detailed definitions of an AI system that are specific to their sector, with reference to the context in which the AI will be used.

Finally, the US Government in its AI Bill of Rights defines “automated systems” but notes that the only automated systems in scope are “those that have the potential to meaningfully impact individuals’ or communities’ rights, opportunities, or access”. This indicates that the US Bill is very much aimed at Big Tech, at least for the moment. During a press briefing, Alondra Nelson, Deputy Director of the White House Office of Science and Technology Policy, noted of the Bill, “Much more than a set of principles, this is a blueprint to empower the American people to expect better and demand better from their technologies”. Despite this, there seems little prospect of these principles being enshrined in federal law in the near future. Commentators have noted the difficulty of putting in place any legally binding legislation regarding AI while there is no comprehensive federal privacy law in place. 

Risk-based approach versus guidance and principles

The EU has proposed a classification of AI systems with requirements and obligations tailored around a “risk-based” approach (these risks are tiered as Unacceptable Risk, High Risk and Low/Minimal Risk). Businesses must complete conformity assessments and monitor their AI Systems post-assessment to ensure continuous compliance with the EU Act. Businesses must also comply with the “four-eyes principle”, which requires decisions of certain High Risk AI Systems to be verified by at least two natural persons. Failing to meet the requirements for High Risk AI Systems may result in administrative fines of up to €30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher. These are significant fines, with a quantum exceeding even the maximum penalties for personal data breaches under the General Data Protection Regulation, demonstrating the EU’s commitment to ensuring it has the necessary ‘teeth’ to enforce compliance with the new legislation. It is not intended, however, that EU member states will create additional bodies to monitor compliance at a national level; existing national authorities are expected to take on that role.
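
As a rough illustration of how the fine cap would operate in practice (a minimal sketch only: the €2 billion turnover figure is hypothetical, and the calculation simply paraphrases the draft Act’s “greater of” formula rather than reproducing any official methodology):

```python
# Hypothetical illustration of the draft EU AI Act's maximum fine for a company:
# the greater of EUR 30 million or 6% of total worldwide annual turnover.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a company (draft Act figures)."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 2bn worldwide annual turnover (hypothetical) faces a cap
# of EUR 120m, since 6% of turnover exceeds the EUR 30m floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 120,000,000
```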

In contrast, the UK approach refers to a number of overarching “principles” designed to ensure that AI is used safely, is technically secure, functions as designed, and is appropriately transparent and explainable. The principles further aim to embed considerations of fairness into the use of AI in the UK, and to clarify routes of redress or contestability for individuals. Individual UK regulators will have a degree of autonomy to translate and implement the cross-sectoral principles within the specific context of the markets or sectors they regulate, with Government support on collaboration and uniformity to avoid contradictory approaches.

The UK has not yet provided details of its intended enforcement or penalty regime for regulatory breaches; it seems this will be left for regulators to determine. This does not, however, rule out future legislation, which may be needed to give regulators the requisite powers to enforce compliance.

It remains unclear how (or whether) the US Government intends to provide any means of enforcement for its own AI Bill of Rights.

Global context 

The EU approach towards regulation of AI can be seen as intended to mirror the approach it took with the GDPR, where its complex and comprehensive set of rules set the benchmark for global data processing practices. In that case the EU did not face any meaningful competition for regulatory dominance from the US, China or UK (the UK having played a key role in the development and implementation of the GDPR as an EU member state). The different routes now being taken by the UK and US in the case of AI, though, give weight to Anu Bradford’s 2021 prediction in The Economist that “the EU may have a harder time setting global rules, or at least strict ones”. Given that many AI systems can be modified or tailored at the algorithmic level, AI providers may, for instance, opt to offer variant models of their AI systems in the EU and the UK, each complying with the relevant jurisdiction’s requirements. The technical cost of maintaining compliance with different sets of rules could well be outweighed by the benefits of deploying additional AI functionality in less stringent jurisdictions. This may mean companies are less inclined to adopt a ‘one-size-fits-all’ approach based on one set of regulations applied globally (as happened with the GDPR).

However, the lack of consistency in approach may give rise to other considerations. If the EU, UK and US deviate too significantly on AI regulation, could this provide an opening for other regimes with ambitions in the tech space to take advantage of a confused AI regulatory landscape in the West? The UK Government notes that it intends to “promote a…regulatory environment for AI which fosters openness, liberty and democracy”, and that it will “reject efforts to apply AI technologies to support authoritarianism”. The EU goes further, in specifically noting that “applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned”. It remains to be seen whether or how the EU, UK and US approaches may dovetail in the coming years, and whether in doing so they will be able to guard against a creep towards digital authoritarianism elsewhere.

What next?

Looking ahead, organisations developing or deploying AI solutions in the UK and beyond will need to prepare for increased scrutiny and regulatory compliance around how they roll out those solutions across jurisdictions, whether that involves conformity/compliance assessments, record-keeping requirements or executive oversight and accountability frameworks.

Our Shoosmiths AI Working Group will be closely monitoring further developments and providing ongoing commentary on what organisations need to do to align with the significant and diverse changes these regulations present.

 


