Global adoption of AI has more than doubled since 2017, and AI budgets within organisations have increased alongside this rising adoption. We expect to see significant shifts forward in AI capabilities over the next few years, and as the technology advances so too does government and regulator interest across the globe. As a result, legal teams within all types of companies need to proactively ensure that they have considered and managed a wide range of issues when developing or deploying AI solutions.
To help you navigate this changing landscape, we have brought together our insights on AI across various topics, including:
For further AI-related insights and publications, see our:
IS IT ENOUGH TO REIN IN THE ROBOTS?
The AI market is global and competition is fierce. Encouraging investment is key, but responsible development and deployment of AI also requires innovation-friendly regulation. What does that look like?
In Europe, the EU is pushing ahead with new cross-cutting AI-specific legislation, while the UK Government’s recent white paper confirms that it will continue with its current sector-specific approach to regulation.
In this briefing, Partner Rob Sumroy and PSL Counsel Natalie Donovan from Slaughter and May’s Emerging Tech team analyse the UK’s latest policy proposals around AI regulation, breaking down the key principles and outlining the timetable for implementation.
As AI becomes ever more popular, organisations are grappling with the reality of how to balance AI design and deployment with data protection compliance.
When looking at the ways in which AI can work, it is easy to see where this tension arises. Processing large quantities of data, sometimes for new purposes, to produce outcomes where it can be unclear why or how a decision was reached, can bring transformative benefits to those adopting AI. However, it sits at odds with many of the key principles underpinning data protection regulation. It is therefore vital, when advising on AI and its privacy risk profile, to understand how and when personal data is used, and how this use fits with the requirements of the GDPR.
In this briefing, Partner Rob Sumroy and PSL Counsel Natalie Donovan look at:
- The rise of AI and why it poses particular privacy concerns.
- How AI fits with some of the key principles of the GDPR.
- How the ICO is responding to the new challenges that AI raises.
HOW DOES THE CURRENT UK REGULATORY FRAMEWORK APPLY TO AI AND MACHINE LEARNING?
AI technologies have the potential to make financial services and markets more efficient, accessible, and tailored to consumer needs. They are increasingly being used by financial services firms across a range of business areas.
However, AI also raises novel challenges and poses potential new risks to consumers, the safety and soundness of firms, market integrity, and financial stability. The Bank of England, the Prudential Regulation Authority and the Financial Conduct Authority therefore have a particular interest in the safe adoption of AI in UK financial services, including how policy and regulation can best support it.
The question currently occupying them is whether AI can be managed through clarifications of the existing financial regulatory framework, or whether a new approach is needed.
In this video, Senior Counsel Tim Fosh and Senior PSL Selmin Hakki consider the challenges associated with the use and regulation of AI in financial services, examining how existing legal requirements and guidance apply, and what is on the horizon.
WITH A FOCUS ON ALGORITHMS
Competition authorities are becoming increasingly focussed on tech – both in terms of the technologies being used and the big tech companies themselves. An example of this is the work being done by the UK’s Competition and Markets Authority (CMA) to see if algorithms can reduce competition in digital markets and harm consumers if misused.
In this podcast, Competition Partner Jordan Ellison and Senior PSL Annalisa Tosdevin discuss the competition law considerations arising from the use of algorithms. They consider some of the concerns around algorithms, the extent to which these concerns are really a competition law issue, and how competition regulators might get involved. They finish with some practical takeaways for clients who use AI and algorithms in their business.
IP ISSUES WITH AI DEVELOPMENT AND USE
Countries around the world are jostling to attract AI investment and expertise and position themselves as leading AI nations, and robust IP protection will be central to achieving this. Intellectual property laws encourage, protect and reward innovation, which in turn propels investment and research in AI.
But are our current IP laws fit for purpose in a more digital, AI-enabled world? In this podcast, IP Partner Laura Houston and PSL Richard Barker discuss some of the key intellectual property law considerations and issues relating to AI. They consider which IP rights are relevant, look at whether AI itself can own or infringe IP, discuss some of the outcomes of the UKIPO’s recent consultations and calls for views, and talk us through some of the changes expected to follow those consultations.
Since the podcast was recorded, we have also seen new IPO guidelines on AI patentability (see blog), and we understand the proposed new TDM exception for copyright and database rights is no longer going ahead (see blog), further illustrating the fast-paced nature of legal developments in this space.
IS AI IN EMPLOYMENT A NEW FRONTIER OR A STEP TOO FAR?
The value of AI in employment has increased significantly in recent years, driven by labour-market pressures for more remote working and greater efficiency.
AI can now play a role across the entire employment cycle, from recruitment through performance management, allocation of work and discipline, to even dismissal.
The pace of change is outstripping the legal and regulatory framework, leaving employers with many opportunities but also an array of risks to navigate. In this briefing, Employment Partners Phil Linnard and Padraig Cronin and PSL Counsel Clare Fletcher explore some of these risks in more detail, and provide practical tips for employers on their use of AI.
THE INTERSECTION BETWEEN AI, CORPORATE TRANSACTIONS AND UK NATIONAL SECURITY
Under the National Security and Investment Act 2021 (the “NSIA”), the UK Government has the power to scrutinise and potentially intervene in corporate transactions which raise national security concerns. Recognising the potential national security implications of artificial intelligence technologies, AI is classified as a “high risk” sector under the NSIA regime. This means that corporate mergers and acquisitions in which the target entity undertakes AI activities in the UK will be subject to a mandatory notification requirement.
In this briefing, Competition Partner Lisa Wright explains how the NSIA regime operates and how AI is defined for its purposes. She goes on to discuss the trends emerging from the NSIA’s first year of operation, as well as several practical takeaways that parties looking to buy or sell an entity active in the UK’s AI space should bear in mind.
We have also recently published guidance covering the key issues potential investors should focus on from a legal perspective when approaching AI investments. While these points reflect good practice in any investment scenario, they are particularly crucial when investing in AI - see blog.
HOW CAN AI MEET ESG GOALS?
As the importance of ESG compliance grows, from both a regulatory and a reputational perspective, a key question for organisations to ask is: will AI help or hinder my ability to meet my ESG obligations? In this blog, PSL Counsel Natalie Donovan and Senior PSL George Murray look at how AI can help meet ESG goals, as well as briefly discussing some of the risks to consider.
You can find more ESG content from Slaughter and May here.
UNDERSTANDING THE OPPORTUNITIES, AND RISKS, FOR YOUR ORGANISATION
It is fair to say that large language models (LLMs) and generative AI tools like ChatGPT and Bard have caught the world’s attention. They have generated headlines in the tabloid and tech press alike, predicting their impact on the way we work, learn and search the internet. User numbers have also grown at an exponential rate: ChatGPT, for example, is one of the fastest-growing consumer applications ever, reaching 100 million users a mere two months after launch.
The speed at which their capabilities are developing has, however, raised some concerns. Over 1,000 AI experts (including Elon Musk and Steve Wozniak) have called for a pause in the “out-of-control race” to develop ever more powerful AI.
They are also firmly on the radar of regulators and national bodies, who are starting to take steps to help manage potential risks. The Italian data regulator temporarily banned the use of ChatGPT following data privacy and security concerns, and in the US the White House has advised tech firms of their fundamental responsibility to make sure their AI products are safe before they are made public.
In the UK, the National Cyber Security Centre has warned of potential security risks (see our blog), the ICO has produced a set of questions for developers to ask themselves, and the Competition and Markets Authority has announced it will start examining the impact of AI foundation models (including large language models and generative AI) on consumers, businesses, and the economy (see our blog).
But what does this all mean for your organisation? Do you know how the tech works, the benefits it could bring and the potential risks that will need to be managed?
In our publication Generative AI: Practical Suggestions for Legal Teams, we provide practical tips on how you can use LLMs today and what their development means for your organisation. We also look at what’s on the horizon in this fast-changing space.
Our short blog Generative AI: Three golden rules helps organisations that are using, or planning to use, generative AI to understand the tech itself as well as the opportunities and risks it presents.
You will find below the contact details for the contributors to all of the topics we have covered in our Regulating AI series. You can find more content relevant to AI (and other digital topics) on our Lens blog and Regulating AI hub. For more information on how we can help you across all Tech and Digital topics, please visit our Tech and Digital content page.