Regulating AI in the EU, the US and the OECD: the difficult balance between ensuring security and driving innovation

February 2024 - Alejandro Padin Vidal

The regulations that are starting to emerge in various jurisdictions pose major challenges, not just for users, but also for developers of AI systems. In this article we will look at the main differences and the areas of common ground.

Artificial intelligence has revolutionized many aspects of our lives, from healthcare to national security. However, its use has also raised concerns in connection with privacy, discrimination and security. For this reason, the EU, the US and the OECD have recently issued or are in the process of issuing regulations, with a view to controlling or supervising the use of AI and protecting individuals’ fundamental rights.

We start with the US, where President Joe Biden issued an Executive Order on October 30, 2023. In the EU, the European Commission published its proposal for a Regulation on Artificial Intelligence in 2021; that proposal is about to complete its long legislative journey following the political agreement reached between the European Parliament and the Council in December 2023. The Organisation for Economic Co-operation and Development (OECD), in turn, published a Recommendation on Artificial Intelligence in 2019, which was updated in 2023.

In this article we will look at the main characteristics of these regulations, how they relate to one another, and the implications for the development and use of AI. In relation to the text of the proposed EU regulatory framework on AI, we will refer to the unofficial text that emerged from the trilogue negotiations and was published in January 2024, since a definitive official text has not yet been made public.

What is artificial intelligence? A difficult concept to define

The OECD provides a definition of “AI system” that both the US and the EU have taken as a basis for their own regulatory definitions. However, whereas the EU defines only “AI system”, the US has chosen to define “AI” itself as well as the terms “AI model” and “AI system”.

The European Union’s definition sticks closely to that of the OECD; the US definition, by contrast, differs substantially, since it supplements its description of an AI system with separate definitions of AI and AI model.

Despite the nuances, the three regulations share the same core elements in their definitions: an AI system is a machine-based system, with explicit or implicit objectives, that makes inferences based on input, generating output (predictions, content, recommendations or decisions) that can influence physical or virtual environments.
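To make those shared elements concrete for developers wondering whether their software falls within scope, the sketch below maps them onto a minimal Python interface. It is purely illustrative: the class and attribute names are ours and are not drawn from any of the three texts.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any


class OutputKind(Enum):
    """The four output types that all three definitions mention."""
    PREDICTION = auto()
    CONTENT = auto()
    RECOMMENDATION = auto()
    DECISION = auto()


@dataclass
class Output:
    kind: OutputKind
    payload: Any  # output that can influence physical or virtual environments


class AISystem:
    """Illustrative mapping of the shared definition: a machine-based
    system with explicit or implicit objectives that infers output
    from the input it receives."""

    def __init__(self, objectives: list[str]):
        # Objectives may be explicit (stated) or implicit (learned).
        self.objectives = objectives

    def infer(self, input_data: Any) -> Output:
        """Generate output by inference from input; concrete systems
        (classifiers, recommenders, generative models) would implement this."""
        raise NotImplementedError
```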

Classification of AI systems and the obligations that must be fulfilled in each category

The OECD does not classify AI systems. However, it does provide a number of principles that need to be borne in mind in the development of AI systems, such as transparency and explainability.

The EU classifies AI systems according to the risk posed by their use (certain uses are considered high-risk, while other practices are prohibited outright), with a separate category for general-purpose AI. Different obligations are imposed for each category, particularly on high-risk systems, which must, inter alia, undergo a conformity assessment and implement risk management systems. In addition, high-risk AI systems must be designed in a manner that helps to reduce any structural harm and discrimination that may exist, and periodic audits must be carried out in this regard. Furthermore, high-risk systems deployed by public authorities must be registered in a public EU-wide database.
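A rough schematic of this tiered approach, with category names and obligation lists paraphrased by us rather than taken from the Act’s wording, might look as follows:

```python
# Simplified, paraphrased sketch of the EU AI Act's tiered approach;
# category names and obligation lists are ours, not the Act's wording.
EU_AI_ACT_TIERS: dict[str, list[str]] = {
    "prohibited_practice": [
        "may not be placed on the market or put into use",
    ],
    "high_risk": [
        "undergo a conformity assessment",
        "implement a risk management system",
        "design measures to reduce structural harm and discrimination",
        "periodic audits",
        "registration in the public EU-wide database (public authorities)",
    ],
    "general_purpose_ai": [
        "transparency and documentation obligations",
    ],
}


def obligations(category: str) -> list[str]:
    """Look up the (simplified) obligations attached to a category."""
    return EU_AI_ACT_TIERS.get(category, [])


print(obligations("high_risk"))
```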

The US, in turn, distinguishes between dual-use foundation models and generative AI models. In contrast to the EU, however, it does not attach a distinct set of obligations to each of these categories.

Actors in the market and lifecycle of AI

The OECD defines the lifecycle of AI (1. design, data and models; 2. verification and validation; 3. deployment; and 4. operation and monitoring) and, based on this definition, establishes who the actors in the lifecycle of an AI system are, defining them as those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI. However, it does not go into further detail on each operator.

The EU, in turn, identifies each operator in that cycle, namely the provider, the deployer, the authorized representative, the importer and the distributor, and imposes on them a number of obligations: generic obligations in relation to transparency, and specific obligations for each role in the development of high-risk AI systems.

The US, on the other hand, does not identify the players in the lifecycle, yet it nevertheless seeks to establish certain obligations for developers of AI systems, particularly the developers of the most powerful ones. One of these obligations is to share with the US government the results obtained in safety tests and other critical information.

Governance of artificial intelligence

In order to continue developing policies as this technology progresses, the OECD has set up an AI Group of Experts (AIGO) and a Working Group on the Classification of AI (WG CAI).

In its future Act, the EU envisages the creation of a number of specific institutions, which will work together and coordinate with each other. Each country will be expected to designate at least one national AI supervisory authority and, at Union level, an Artificial Intelligence Board will be established, comprising representatives of each of the national supervisory authorities. The Board will be an independent EU institution and will issue recommendations and opinions to the European Commission on high-risk AI systems and other aspects relevant to the effective, uniform implementation of the new rules. A European AI Office will also be created within the Commission; it will supervise general-purpose AI models, be supported by a scientific panel of independent experts and work together with the Artificial Intelligence Board.

The US, in turn, has established an AI Advisory Committee and an AI Safety and Security Board, but will not create an agency to supervise the implementation of the practices and rules put in place by Biden’s Executive Order. It has also created two AI working groups in specific areas: the AI Task Force at the Department of Health and Human Services and the AI and Technology Talent Task Force.

Promoting innovation and development in AI

In order to foster AI innovation and help businesses develop and deploy AI in the EU, the proposal for an AI Act provides for the creation of regulatory sandboxes by the competent authorities. The aim is to give providers or prospective providers of AI systems the possibility of developing, training, validating and testing an innovative AI system, in real-world conditions where applicable, in accordance with a sandbox plan, for a limited period of time and under regulatory supervision.

In the case of the US, although a general testing scheme is not envisaged, President Biden has ordered the Secretary of Energy to implement a plan for developing AI model evaluation tools and AI testbeds.

Penalty rules and protection of fundamental rights

In order to ensure the protection of fundamental rights, the EU has established penalty rules under which fines of up to €35 million or 7% of total worldwide annual turnover for the previous financial year, whichever is higher, may be imposed. The US has not established a direct penalty regime.
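As a worked example of how that cap operates (taking the higher of the two amounts, as the AI Act provides for the most serious infringements):

```python
def max_eu_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    EUR 35 million or 7% of total worldwide annual turnover for the
    previous financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)


# A company with EUR 2 billion in turnover faces a cap of EUR 140 million,
# since 7% of its turnover exceeds the EUR 35 million floor.
print(max_eu_ai_act_fine(2_000_000_000))  # 140000000.0
```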

As regards direct action by those affected by a breach of the AI Act, the parties concerned may lodge a complaint with the national authority and seek compensation under the terms established in the AI Liability Directive proposed in 2022, which is still making its way through the legislative process.

In the case of the US, the Executive Order also provides for the following measures, among many others: (i) the approval of an AI Bill of Rights for use by the authorities; (ii) ensuring equitable treatment by developing best practices for the use of AI in the criminal justice system, in connection with, inter alia, sentencing and parole; (iii) encouraging the responsible use of AI in healthcare and the development of affordable, life-saving drugs; and (iv) the preparation of a report on the possible impact of AI in the workplace and a study of the options available to strengthen federal support for workers.

Apart from the publication of the Executive Order, President Biden has asked Congress to pass data protection legislation, which would be an important milestone in the regulatory approach to privacy in the US.

Conclusions

These three initiatives seek to regulate the use of AI in different areas, establishing obligations and protections for citizens and companies, but each approaches the task from a different angle: the OECD offers a set of recommendations to countries; the EU establishes a regulatory framework for all the actors in the AI lifecycle; and the US sets out the government’s action plan for the coming years, in order to control the development of AI in its territory and the impact it will have on different aspects of society.

All these provisions will give rise to considerable challenges for users and AI developers alike, who must adapt to the new rules and obligations or face fines that, in the case of the EU, can run to millions of euros; for some, this could hinder the progress of the technology.

Although these initiatives represent significant progress in the protection of fundamental rights and the promotion of responsible, safe use of AI, they are general rules that must subsequently be applied to specific cases, such as use in education, healthcare or justice (the UK, for example, is adopting this more industry-based approach, sidestepping prior general rules). It is therefore crucial to continue working on the regulation of AI in order to guarantee its ethical and beneficial use for society as a whole.

 
