AI's regulatory dilemma - Lords report calls for more positive UK vision 

February 2024 - Shoosmiths LLP

In the latest contribution to the debate on the UK's approach to AI regulation, the Lords Communications and Digital Committee has published a report urging the government to adopt a 'more positive' vision for AI, rather than concentrating on its "far-off and improbable" risks.

That reference to risks presumably alludes to the AI Safety Summit hosted by the Prime Minister at Bletchley Park last November, which sought international consensus on how to manage the fundamental dangers posed by 'Frontier AI' - systems with more powerful and potentially threatening capabilities.

With the EU in the process of finalising the text of its AI Act, and the UK government poised to publish its long-awaited response to the consultation on its 2023 AI White Paper, the next few months are likely to crystallise the UK's position on how to govern AI - at least until the general election later this year.

In that context, this latest Parliamentary report perfectly captures the dilemma confronting legislators. Some call for a light-touch approach aimed at unleashing the potential of AI and stealing a march on (for example) the EU's less flexible emerging regulatory regime. Others, by contrast, believe regulation is exactly the route needed to give certainty and confidence to both technology providers and consumers as AI solutions become more widely deployed and used. Only last year, the Commons Science, Innovation and Technology Committee issued a contrasting report on AI governance, concluding:

"We urge the Government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed."

On closer reading, whilst today's report urges a bolder approach, it is notable that it still touches on some fundamental legal and regulatory issues that need to be addressed. These include reform of copyright law to clarify the rights of LLM developers to access and use large volumes of human-generated content to train their models (well illustrated by the current litigation between Getty Images and Stability AI, in which the former seeks to protect its IP rights), and the need to take steps to avoid dominance of the AI market by the world's tech giants.

How the government deals with issues such as those, and at the same time reconciles the many contrasting opinions on AI regulation, remains to be seen.
