REGULATING AI

Global adoption of AI has more than doubled since 2017, and organisations' AI budgets have grown alongside this rising adoption. We expect significant advances in AI capabilities over the next few years, and as the technology develops, so too does government and regulator interest across the globe. As a result, legal teams in all types of companies need to proactively ensure they have considered and managed a wide range of issues when developing or deploying AI solutions.

To help you navigate this changing landscape, we have brought together our insights on AI across various topics, including:

  • Regulation: An update on plans for AI-specific regulation
  • Data Privacy: How to balance AI design and deployment with data privacy compliance
  • Financial regulation: How the current UK financial regulatory framework applies to AI
  • Competition: Considerations arising from the use of algorithms
  • Intellectual Property: Addressing key intellectual property issues with AI development and use
  • Employment: Navigating the rapid change in use of AI within employment processes
  • M&A: The intersection between AI, corporate transactions and UK national security
  • ESG: Can AI help meet ESG goals?
  • Generative AI: Understanding the opportunities, and risks, for your organisation
  • For further AI-related insights and publications, see our:

    REGULATION

    IS IT ENOUGH TO REIN IN THE ROBOTS?

    The AI market is a global one and competition is fierce. Encouraging investment is key, but responsible development and deployment of AI also requires innovation-friendly regulation. What does that look like?

    In Europe, the EU is pushing ahead with new cross-cutting, AI-specific legislation, while the UK Government’s recent white paper confirms that it will continue with its current sector-specific approach to regulation.

    In this series of briefings, Partner Rob Sumroy and PSL Counsel Natalie Donovan from Slaughter and May’s Emerging Tech team analyse the UK’s policy proposals around AI regulation, and the EU’s AI Act, which has been agreed at a political level.

    DATA PRIVACY

    BALANCING TENSIONS

    As AI becomes ever more popular, organisations are grappling with the reality of how to balance AI design and deployment with data protection compliance.

    When looking at the ways in which AI works, it is easy to see where this tension arises. AI typically processes large quantities of data, sometimes for new purposes, to produce outcomes where it can be unclear why or how a decision was reached. This can bring transformative benefits to those adopting AI, but it sits uneasily with many of the key principles underpinning data protection regulation. It is therefore vital, when advising on AI and its privacy risk profile, to understand how and when personal data is used, and how that use fits with the requirements of the GDPR.

    In this briefing, Partner Rob Sumroy and PSL Counsel Natalie Donovan look at:

    • The rise of AI and why it poses particular privacy concerns.
    • How AI fits with some of the key principles of the GDPR.
    • How the ICO is responding to the new challenges that AI raises.

    FINANCIAL REGULATION

    HOW DOES THE CURRENT UK REGULATORY FRAMEWORK APPLY TO AI AND MACHINE LEARNING?

    AI technologies have the potential to make financial services and markets more efficient, accessible, and tailored to consumer needs. They are increasingly being used by financial services firms across a range of business areas.

    However, AI also raises novel challenges and poses potential new risks to consumers, the safety and soundness of firms, market integrity, and financial stability.  The Bank of England, the Prudential Regulation Authority and the Financial Conduct Authority therefore have a particular interest in the safe adoption of AI in UK financial services, including how policy and regulation can best support it.

    The question they are currently grappling with is whether AI can be managed through clarifications of the existing financial regulatory framework, or whether a new approach is needed.

    In this video, Senior Counsel Tim Fosh and Senior PSL Selmin Hakki consider the challenges associated with the use and regulation of AI in financial services, examining how existing legal requirements and guidance apply, and what is on the horizon.

    COMPETITION

    WITH A FOCUS ON ALGORITHMS

    Competition authorities are becoming increasingly focussed on tech – both the technologies being used and the big tech companies themselves. An example is the work being done by the UK’s Competition and Markets Authority (CMA) to see whether algorithms, if misused, can reduce competition in digital markets and harm consumers.

    In this podcast, Competition Partner Jordan Ellison and Senior PSL Annalisa Tosdevin discuss the competition law considerations arising from the use of algorithms. They consider some of the concerns around algorithms, the extent to which these concerns are really a competition law issue, and how competition regulators might get involved. They finish with some practical takeaways for clients who use AI and algorithms in their business.

    INTELLECTUAL PROPERTY

    IP ISSUES WITH AI DEVELOPMENT AND USE

    Countries around the world are jostling to attract AI investment and expertise and to position themselves as leading AI nations. An appropriate IP regime will be central to achieving this. The holy grail is a system that provides appropriate protection for novel AI technologies while striking the right balance between the interests of rights-holders, whose content may have been used in training those AI systems, and those of the AI developers themselves. Intellectual property regimes are intended to encourage, protect and reward innovation, which in turn propels investment and research in AI; but if the law goes too far, it risks having the opposite effect.

    Given that delicate balancing act, are our current IP laws fit for purpose in an increasingly digital, AI-enabled world? In our latest content piece, we consider some of the key questions surrounding intellectual property law and its interaction with AI, including: does training generative AI on unlicensed third-party material infringe UK copyright? Can AI-generated outputs be protected by copyright? Can AI be an inventor for UK patent purposes? And are AI inventions themselves patentable in the UK?

    EMPLOYMENT

    IS AI IN EMPLOYMENT A NEW FRONTIER OR A STEP TOO FAR?

    The use of AI in employment has grown significantly in recent years, driven by labour market pressures for more remote working and greater efficiency.

    AI can now play a role across the entire employment cycle, from recruitment through performance management, allocation of work and discipline, to even dismissal.

    The pace of change is outstripping the legal and regulatory framework, leaving employers with many opportunities but also an array of risks to navigate. In this briefing, Employment Partners Phil Linnard and Padraig Cronin and PSL Counsel Clare Fletcher explore some of these risks in more detail, and provide practical tips for employers on their use of AI.

    M&A

    THE INTERSECTION BETWEEN AI, CORPORATE TRANSACTIONS AND UK NATIONAL SECURITY

    Under the National Security and Investment Act 2021 (the “NSIA”), the UK Government has the power to scrutinise and potentially intervene in corporate transactions which raise national security concerns. Recognising the potential national security implications of artificial intelligence technologies, the NSIA regime classifies AI as a “high risk” sector. This means that corporate mergers and acquisitions in which the target entity undertakes AI activities in the UK will be subject to a mandatory notification requirement.

    In this briefing, Competition Partner Lisa Wright explains how the NSIA regime operates and how AI is defined for the purposes of the regime. She goes on to discuss the trends emerging from the NSIA’s first year of operation, as well as several practical takeaways that parties looking to buy or sell an entity active in the UK’s AI space should bear in mind.

    We have also recently published guidance covering the key issues potential investors should focus on from a legal perspective when approaching AI investments. While this guidance reflects good practice in any investment scenario, it is particularly crucial when investing in AI - see blog.

    ESG

    HOW CAN AI HELP MEET ESG GOALS?

    As the importance of ESG compliance grows, from both a regulatory and a reputational perspective, a key question for organisations to ask is: will AI help, or hinder, my ability to meet my ESG obligations? In this blog, PSL Counsel Natalie Donovan and Senior PSL George Murray look at how AI can help meet ESG goals, and briefly discuss some of the risks to consider.

    You can find more ESG content from Slaughter and May here.

    GENERATIVE AI

    UNDERSTANDING THE OPPORTUNITIES, AND RISKS, FOR YOUR ORGANISATION

    It is fair to say that large language models (LLMs) and generative AI tools like ChatGPT and Bard have caught the world’s attention. They have generated headlines in the tabloid and tech press alike, predicting their impact on the way we work, learn and search the internet. User numbers have also grown at an exponential rate: ChatGPT, for example, is one of the fastest-growing consumer applications ever, reaching 100 million users a mere two months after launch.

    The speed at which their capabilities are developing has, however, raised some concerns. Over 1000 AI experts (including Elon Musk and Steve Wozniak) have called for a pause in the “out-of-control race” to develop ever more powerful AI.

    They are also firmly on the radar of regulators and national bodies, who are starting to take steps to help manage potential risks. The Italian data regulator temporarily banned the use of ChatGPT following data privacy and security concerns, and in the US the White House has advised tech firms of their fundamental responsibility to make sure their AI products are safe before they are made public.

    In the UK, the National Cyber Security Centre has warned of potential security risks (see our blog), the ICO has produced a set of questions for developers to ask themselves, and the Competition and Markets Authority has announced it will start examining the impact of AI foundation models (including large language models and generative AI) on consumers, businesses, and the economy (see our blog).

    But what does this all mean for your organisation? Do you know how the tech works, the benefits it could bring and the potential risks that will need to be managed?

    In our publication Generative AI: Practical Suggestions for Legal Teams, we provide practical tips on how you can use LLMs today and what their development means for your organisation. We also look at what’s on the horizon in this fast-changing space.

    Our short blog, Generative AI: Three golden rules, helps organisations that are using, or planning to use, generative AI to understand the tech itself, as well as the opportunities and risks it presents.

    THE TEAM

    You will find below the contact details for the contributors to all of the topics we have covered in our Regulating AI series. You can find more content relevant to AI (and other digital topics) on our Lens blog and Regulating AI hub. For more information on how we can help you across all Tech and Digital topics, please visit our Tech and Digital content page.

    Rob Sumroy
    Partner
    Tech, Regulation and Data Privacy

    Natalie Donovan
    PSL Counsel
    Tech and Digital

    Tim Fosh
    Senior Counsel
    Financial Regulation

    Selmin Hakki
    Senior PSL
    Financial Regulation

    Jordan Ellison
    Partner
    Competition

    Annalisa Tosdevin
    Senior PSL
    Competition

    Laura Houston
    Partner
    IP

    Phil Linnard
    Partner
    Employment

    Padraig Cronin
    Partner
    Employment

    Clare Fletcher
    PSL Counsel
    Employment

    Lisa Wright
    Partner
    Competition, M&A

    Jennyfer Moreau
    Associate
    Competition

    Harry Hecht
    Partner
    ESG, M&A

    George Murray
    Senior PSL
    ESG

    James Cook
    Partner
    Generative AI

    Nick Johnston
    Associate
    Generative AI

    Richard Batstone
    Senior K&I Manager
    Generative AI

    Cindy Knott
    PSL Counsel
    Data Privacy