AI and Employment – could an employer be liable? 

October 2023 - Shoosmiths LLP

The regulatory landscape for AI, in the UK and beyond, is evolving rapidly, with proposed statutory regimes beginning to emerge from legislators across the globe. What, then, is the current position for employers?

Overview of current regime

There remains a notable gap when it comes to concrete principles of liability or accountability for AI-generated or AI-supported outcomes. It is telling that Meta stepped back from facial recognition technology in November 2021, citing concerns about the lack of clear rules.

While the UK government, in its early Policy Paper on AI in 2022, outlined its intention to ensure that an identifiable legal person or entity would be accountable for AI outcomes, the subsequent (and more detailed) White Paper of March 2023 shied away from enshrining that level of accountability in the UK's proposed regulatory approach. As such, there remains no current or proposed UK framework regulating liability for AI decisions.

By contrast, the EU has shown a willingness to at least explore what a legislative liability regime for AI could look like. The EU's draft AI Liability Directive is not yet in force; indeed, it remains to be seen whether it will ever be implemented, given political disagreements in Brussels over the legislation's reach and the rapidly diminishing electoral term of the current European Parliament. If adopted, however, the Directive would assist 'victims', both individuals and businesses, in bringing claims for AI-related harm. It would also enhance rights of access to evidence by allowing victims to request disclosure of information about high-risk AI systems, potentially enabling them to identify the person liable for outcomes generated by those systems. Although the UK is no longer required to follow EU law after Brexit, the Directive would (if implemented) be relevant for UK employers operating in the EU. Mindful of the UK's own electoral cycle, it could also provide a potential model for the UK to follow in its own evolving approach to regulating AI.

So where does this leave UK employers?

It understandably leaves employers, and the individuals responsible within them, uncertain about their role and responsibilities in checking AI outcomes. It also leaves potential claimants in the lurch: without a framework or legislation, there is no objective standard that employers must meet, making it difficult for claimants to challenge AI decisions that affect them.

Can employers be liable for wrongdoing caused by AI?

The short answer is yes, depending on the nature of the harm caused. The case of Manjang v Uber, currently proceeding through the Employment Tribunal, provides an illustrative example of how issues caused by AI may result in liability for the employer.

Uber used facial recognition software to verify drivers' identities, a prerequisite to drivers accessing work and pay. Mr Manjang failed his facial recognition check and was permanently suspended from the platform. He complained that the algorithm used for the check was racially biased, producing more false negatives for people of colour, and requested that a human being check the photographs he had submitted. This was not provided, and Mr Manjang was told the decision to deactivate his account was maintained. He brought a claim against Uber for harassment, victimisation and indirect discrimination.

It is, therefore, the employer's action (or inaction) stemming from its use of AI software that is the subject of this ongoing discrimination claim. If the claim succeeds, liability will rest with the employer because of its inaction, not with the AI provider itself.

Whether future regulation in this area changes the position remains to be seen. In the meantime, this case serves as a reminder to employers that before adopting such technology they need to carefully consider:

  • how they will use AI;
  • the extent to which they will rely on AI to make decisions or carry out tasks;
  • the limitations of any AI they use and how these might be overcome; and
  • any regulatory restrictions on the use of AI within their industry.

In addition, a reasonable and proportionate level of human review of AI decisions could help reduce the potential for discriminatory outcomes.