The UK AI Opportunities Action Plan: what it means for business

The government wants to ramp up AI adoption across the UK, and this much-heralded plan sets out what it thinks will be needed

On 13 January 2025, to much fanfare, the UK government published tech entrepreneur Matt Clifford's AI Opportunities Action Plan, an ambitious proposal for ramping up artificial intelligence (AI) development and adoption across the UK. At the same time, the government published a detailed response, accepting all 50 recommendations and setting out how it proposed to take them forward.

The government believes that AI is game-changing and can power the UK's growth, but that there is a real risk that the UK will fall behind on AI development unless it seriously ups its game, and quickly. The raft of recommendations includes urgently reforming intellectual property (IP) law to facilitate use of content by AI developers, and pressing ahead with AI-specific regulations for frontier AI models. There are changes on the way for planning, and for public sector procurement, and though the extent of regulatory change is unclear, the government is said to be intent on taking action with various regulators to ensure that they do not stand in the way of economic growth.

Government's ambition

Soon after winning the election, the government commissioned Matt Clifford to produce an independent report on what the UK's AI strategy should be, identifying how AI "can drive economic growth and deliver better outcomes for people across the country".

At 25 pages, the resulting plan is (by government standards) succinct, but it is ambitious. Matt Clifford says that turbo-charging AI is "a crucial asymmetric bet" which the UK must make. Peter Kyle (the minister at the Department for Science, Innovation and Technology (DSIT), the sponsor of the plan) made clear that Britain must "shape the AI revolution rather than wait to see how it shapes us". The prime minister wrote that "AI has arrived as the ultimate force for change and national renewal".

Core principles

The stated core principles which underpin the recommendations are, broadly, for the government to: be on the side of innovators; invest in becoming a great AI customer; "crowd in" capital and talent; and build on UK strengths, especially by catalysing emerging areas such as AI application and integration, and the use of AI in science and robotics.

The fifty recommendations cover a broad range of issues; some of the highest-profile ones are discussed below, but the plan is engagingly written and rewards reading in full.

Data is central

The report makes clear how crucial data will be for the realisation of the government's AI growth ambitions, repeatedly underscoring the importance of access to high quality data for the successful development and deployment of AI.

It recommends that the government should build on the planned establishment of the National Data Library (NDL), which is said to present an enormous opportunity for UK AI.

With the planned NDL, it appears that an objective is to build "sovereign" AI models that are not reliant on Silicon Valley. While this seems a good idea in principle, there are significant implementation issues. For example, there are questions about scale: current generations of generative AI models have been trained on terabytes of data, but the next generation will require petabytes. This is an almost unimaginable amount of data, and it is not clear how the UK national data resources will fit into this. That said, there are many types of AI model that can work well with much smaller, specialised datasets. Access to the NDL will also need to be managed very carefully to safeguard human rights and personal privacy, as acknowledged in the plan.

The relevant public or private sector bodies will need to ensure that they own these strategically important datasets (or have strong licences to them) and have collected or produced the data in a way which means that they are free to use it as envisaged by the action plan. This will involve considering possible restrictions flowing from data privacy laws, IP rules, or contractual obligations.

Improving the collection of and access to data will need careful handling, especially where personal data is concerned. NHS patient data is a uniquely comprehensive, and therefore extraordinarily valuable national asset, but there are obvious concerns about patient confidentiality. Failures here could lead to breaches of IP and of data protection laws, as well as reputational risk and political fallout which could set back the project to unleash the power of this sovereign data resource.

Copyright law changes

A key legal point is a call for the government to urgently amend UK IP laws in order to facilitate use of copyright works for the purposes of training AI models.

Specifically, the plan wants the existing text and data mining (TDM) regime to be made at least as AI developer-friendly as the existing EU position. This would mean, as a minimum, amending the law (in particular the Copyright, Designs and Patents Act 1988) by introducing an exception which would by default allow the use of data for TDM, subject to a right for the IP owner to reserve its rights and opt out of such use.

Unsurprisingly, this recommendation aligns with the government's preferred position, as set out in its recently launched consultation on AI and changes to copyright law. In its response, the government points to the consultation, one of the key proposals of which is an "opt-out" copyright exception, allowing use of content for training AI systems where IP holders have not reserved their rights. That consultation paper set out a range of options but made clear that "TDM exception with opt-out" was its favoured one. (See this Regulatory Outlook for more on the consultation.)

IP holders will need to be reassured that any changes to copyright law to support AI use of their content will benefit them, for example, by ensuring that any rights reservation mechanisms are simple and cheap to use.

AI-specific regulation

Since well before last year's election, the Labour party has said that it intends to legislate to regulate the providers of the largest, most cutting-edge AI models (the so-called frontier models), including a requirement for developers to share these AI models for testing before they are publicly released. The anticipated AI legislation is expected to put existing voluntary commitments signed by some of the leading AI companies onto a statutory footing, as well as moving the AI Safety Institute (AISI), which conducts model testing, onto an arm's length footing from the government.

The plan does not express a view as to whether or not there should be such regulation. Instead, it stresses the importance of acting quickly to provide clarity on how frontier models will be regulated and urges the government to ensure that any such regulation prioritises preserving the capability, trust and collaboration that the AISI has built up.

In response, DSIT says that it will consult on proposed legislation to provide regulatory certainty and to help kickstart growth, while protecting against the critical risks associated with the next generation of powerful AI models. It also confirms that it will establish the AISI as a statutory body.

This will mean that the UK position differs markedly from the EU's, whose AI Act imposes significant restrictions and obligations which affect a much wider range of businesses. For now, the EU's first mover advantage means that it continues to shape AI governance and compliance for many UK-based organisations.

Improving the regulators

The paper also has something to say about regulators more generally. It stresses that the government must shake things up to ensure that all regulators are working towards safely growing the AI sector, including ensuring that each has a focus on enabling safe AI innovation (which should form part of its strategic guidance from the relevant government department), as well as the AI expertise and funding needed to achieve this.

If regulators do not act to enable AI growth, it recommends that the government should consider more drastic changes, such as creating a central body with a higher risk tolerance, which could override regulators in order to promote AI innovation. Comments about the regulators' approaches will chime with many businesses, but it does not seem likely that the government would want to saddle itself with the cost and bureaucratic burden of adding yet another regulation-related body. The government's response document does not directly address this aspect of the recommendations, although it does commit to empowering the Regulatory Innovation Office to "drive regulatory innovation through behavioural changes within regulators".

Competition law

The plan's focus on AI and innovation may be at odds with previous suggestions that the UK Competition and Markets Authority (CMA) could regulate large AI firms using its new powers under the Digital Markets, Competition and Consumers Act 2024, and with the growing competition regulatory scrutiny of the use of AI and of key inputs such as data.

The CMA has issued multiple papers on AI and guidance on potential competition harm arising from algorithms. AI is also a key focus area for the UK regulators that make up the Digital Regulation Cooperation Forum, including its interaction with other focus markets such as search, cloud and data.

There is a growing tension between the government's drive for innovation and investment in the UK and this increased competition regulatory focus on AI. The CMA is expected to be keen to demonstrate that any future competition regulation of AI supports these goals, for example by fostering UK AI start-ups, and this will be an area to watch as implementation of the action plan unfolds.

Building and powering AI infrastructure

An eye-catching feature of the action plan is the recommended establishment of AI Growth Zones (AIGZs), with the intention that these will be sited in areas needing economic regeneration in order to drive growth there. The first has been announced for Culham in Oxfordshire (the site of the UK Atomic Energy Authority headquarters).

The idea is that AIGZs (and data centres in other locations) should have a simpler (and hence faster) planning regime, supported by appropriate policy and guidance. The government has already announced proposals for data centres to be classed as critical national infrastructure, and possibly to be consented as Nationally Significant Infrastructure Projects (via the Development Consent procedure).

Data centres are notoriously energy intensive to operate, both in running their computer systems and in cooling the computers to maintain their functionality. Negotiating the interplay between AI growth, energy demands and sustainability will be critical to the plan's success: the National Energy System Operator projects that data centres' electricity demand will quadruple by 2030, primarily because of the increasing reliance on AI applications. This will place significant strain on an already overburdened grid, with high energy prices and long waits for grid connections exacerbating the situation. The action plan acknowledges the problems with access to power and partly addresses them through the AIGZ proposal, which involves improved access to power for data centres.

The plan raises the question of mitigating the sustainability risks associated with building out AI infrastructure. These are not dealt with in any detail in the government's response, beyond stating that the government will seek to address the challenge. However, the government will set up a new AI Energy Council which will work with energy companies "to understand the energy demands and challenges which will fuel the technology's development".

In the USA, major technology companies are already exploring innovative energy solutions, including nuclear power from small modular reactors. These new reactors could offer a reliable electricity supply without carbon emissions and are increasingly seen as a viable option for powering hyperscale data centres.

While it is positive that the government is looking to make the planning regime for data centres quicker and simpler, particularly through the introduction of AIGZs, this initiative conflicts somewhat with the proposal to consent data centres as Nationally Significant Infrastructure Projects through the Development Consent procedure, which is hardly quick or simple.

Public procurement

Public procurement is at the heart of the action plan. In most cases, each "contracting authority" (for example, each local authority, NHS trust and utility) has autonomy over what it procures and how it procures it. This can make it difficult for new innovations to be rolled out at scale. But, as highlighted in the paper, the only way to deliver progress on the scale envisaged is for the public sector as a whole to adopt a more joined-up approach to procuring the best innovations.

While there have been plenty of isolated examples of AI being used to incredible effect in the public sector over the last couple of years, there are relatively few examples of AI innovations being procured at a national level. The plan addresses this head on with some bold aims for public procurement. For it to work, it will need real buy-in from public sector bid teams across the breadth of public services. This is what the report means when it says, in the opening lines, that putting the plan into action "will require real leadership and radical change, especially in procurement" (our emphasis).

Rapid piloting and scaling of AI products and services

The government will adopt a "Scan > Pilot > Scale" approach to procuring new AI innovations for the public sector to speed up the time it takes for new tech to reach national scale, and support start-ups through what can be a daunting and confusing public procurement process.

The plan also proposes a number of new approaches to public procurement.

While light on detail, there are some revealing insights here. The intention to move away from commercial frameworks aligns with recent commentary, particularly from the NHS, that the public sector is using too many frameworks, resulting in a confusing and disjointed landscape for suppliers. The reference to "multi-stage gated" procurements which compensate bidders for progressing through rounds sounds a lot like the (widely underused) "innovation partnership" procedure under the Public Contracts Regulations 2015. Innovation partnerships are being scrapped by the incoming Procurement Act 2023, in force from 24 February 2025, so the plan hints at the potential for the new competitive flexible procedure to be used creatively to achieve a similar effect.

Creation of a new UK Sovereign AI research unit to 'partner with the private sector'

One of the more ambitious aims of the action plan is for the government to have a hand in creating the next global Big Tech from the UK's private sector. It intends to create a new unit, UK Sovereign AI, with a "clear and powerful mandate" to partner with the private sector to this end.

The new unit will focus its efforts on areas of AI that are the most strategically important for the future of the technology. Those efforts go further than ever before in trying to elevate key suppliers to the global stage. To this end, the government will need to consider carefully how to ensure that such support and direct investment comply with the Subsidy Control Act 2022, which imposes legal limits on the extent to which companies may receive subsidies of any kind from the UK government and other public sector sources. In particular, the government may want to consider creating a new Streamlined Scheme to enable funding for AI research activities to be awarded in a quick and compliant manner from the outset.

A significant increase in the number of public procurements being advertised for AI can be expected in the coming weeks. The action plan means that bidding for UK public contracts might now offer the chance to rapidly reach a global scale. The new Procurement Act 2023 radically increases the amount of information public bodies must publish about tenders.

Those wanting to get their technology into the public sector should register themselves on the Find a Tender platform, and on the Central Digital Platform once it is released, and keep a very close eye on new AI notices that provide concrete information on what the public sector wants to buy, and when.

Osborne Clarke comment

Inevitably, there are questions about the practical feasibility of implementing some of the proposals. An overarching one is the question of how implementation will be funded, given the government's cash-strapped position. Large AI providers in the United States are spending billions of dollars on a quarterly basis in order to fund data centre and energy commitments to power compute and AI training requirements. They will always be able to outspend most UK organisations, so the UK will need to focus its efforts in places where it can make the most impact. The plan acknowledges this, and stresses the importance of identifying the most likely areas.

Or, to pick a legal example, funding a significant increase in AI capability for the regulators will be expensive, especially if the government has to fund increased salary bills as regulators compete with the private sector for AI specialists with the skills needed. Some regulators, in particular the Information Commissioner's Office and the CMA, have already done a considerable amount of work on AI issues, and so have acquired significant internal expertise, but others will be facing a steep learning curve.

Overall though, while alive to the many risks, the plan presents a compelling picture of the exciting opportunities for the UK if it can boost the domestic creation and adoption of AI. Businesses generally (not just those that are part of specialist AI ecosystems) will need to be alert to the significant legal and regulatory implications.
