The clearest signal from the National AI Strategy is that it places innovation at the forefront.Footnote 2 Innovation is read into all other streams, for instance as a vehicle for economic growth and global competitiveness. Indeed, research is discussed in the context of adoption (i.e. industry) and in terms of a ‘catalytic contribution’ to national aims and challenges (such as health and net zero). Further, the skills agenda (skills read in terms of vocation and ability, which is subtly but notably different from talk of education) is two-pronged, comprising both the facilitation of talent (via visa provision) and national educational programmes (to be consulted upon). Finally, innovation, in the pro-innovation sense, underpins the governance agenda, which is explicitly discussed in terms of enabling innovation.
An alternative ecosystem of trust
The UK’s Information Commissioner agreed on the adequacy of UK–EU data transfers quite early on in the Brexit negotiation process. By contrast, the European Data Protection Board’s opinion on the possible adequacy of data transfers from the EU to the UK only materialised in March 2021. This opinion notes the similarity of the UK data protection framework to that of the EU and grants adequacy, but finishes by stating: ‘(…) whilst laws can evolve, this alignment should be maintained.’ Whilst this is a seemingly innocent statement, it reflects months of tension and negotiations, in the middle of which the UK proposed a National Data Strategy that in some ways steps away from the European framework and inches closer to a US approach to privacy, one much more focused on economic outcomes and innovation. It is also worth noting that this decision is effectively temporary as, for the first time, it includes a sunset clause which allows it to expire after four years. For a critique of the Data Strategy see . This tension can have significant impacts on the tech sector in general and AI technology in particular.
There is a particularly delicate balancing act between incentivising innovation in a way that indirectly encourages isolationism and the degradation of individual rights, and doing so by providing a space that creates opportunities for ethical and regulatory innovation. We can state the risk and the opportunity in terms of isolationism and an alternative ecosystem of trust:
Isolationism: here the concern is that the UK will become isolated in its regulatory and innovation ecosystem, with industries choosing to follow larger regulatory/market ecosystems. Similar to how the EU GDPR became the de facto global standard (thus effectively universalising the EU ecosystem), the UK may find itself ultimately compelled to conform to the larger regulatory-market force. An additional possibility is that the UK becomes an incubator, where testing and innovation occur, thereby functioning as a launchpad, i.e. a more innovation-friendly space for start-ups and industry. Given the desire to be a global leader, these possibilities are read negatively with respect to the stated aims of the Strategy.
Alternative ecosystem of trust: here the opportunity is for the UK’s regulatory-market norms to become a preferred ecosystem for innovation and trust. The UK’s approach may be thought of as an alternative trust scheme that is (just as) trustworthy (as the EU’s) but more pro-innovation. Through the UK’s various influence mechanisms (soft power through language and culture; a permanent seat on the UN Security Council; the Commonwealth; etc.) a sufficiently large regulatory-market ecosystem could emerge (think of UK–Canada–India–Australasia, etc.) to rival the EU (and perhaps even become the de facto global norm). However, to provide this level of assurance, the UK will need to have robust alternative frameworks in place and an accepted regulatory system.
In sum, there is a delicate balancing act between incentivising innovation and indirectly encouraging isolationism and a retreat from being a trusted data custodian.
Defence, security and risk
References to defence, security and risk are made throughout the text. The publication of a defence strategy through the Ministry of Defence (including the governance of related defence implications) is a short-term aim. More generally, two dimensions are found:
Utilisation: The first is ‘defence’ in the form of research and the utilisation of systems. Here a new Defence AI Centre (envisioned to become ‘a science superpower in defence’) is proposed; although not explicitly mentioned, we read this as a significant signal regarding military capabilities. Additionally, AI is to be used in the modernisation and operations of the Ministry of Defence.
Governance: The second is governance-based. The ethical (and legal) implications of AI in defence are likely to be central to the defence strategy: the Strategy suggests research into and understanding of long-term risk (the safe advancement of AI and the mitigation of catastrophic risks) and a strategy to defend against the malign use of AI.
Given that it is often the use of AI in defence capacities that garners the most attention, the task of navigating the advancement of AI research and utilisation alongside the need for good governance is particularly acute. Additionally, we have chosen to highlight this dimension because it indicates the seriousness with which the British state takes the development and deployment of AI. It is important to note the protected landscape in which defence operates and, in that regard, to consider whether Britain could make changes whilst being more open and transparent than national norms.
Revision of data protection
Data protection provisions in the UK have been largely aligned with those of the European Union, with the UK taking a leading role in the development and implementation of the EU’s approach to data protection throughout its years as a Union member-state (cf. UK GDPR).
It has been an open question as to whether or not the UK would move towards a data protection regime that is ‘lighter’ than the EU approach. Although talk is mainly of ‘revision’ and ‘review’, the signal is that the UK is indeed seeking to position itself as less stringent regarding data protection.
There are two dimensions to this:
Enablers: here, opening access to data (including public data), data standardisation, and support for cyber-physical infrastructure are included in the Strategy. This is critical because access to sufficient, high-quality data is crucial to the development of AI. It is hoped that a more open regime with respect to data protection will increase the use of, and possibilities for innovation in, AI.
Data protection as value: the European GDPR (boldly) claims that data protection is a fundamental right; given such an explicitly stated value, the prominence of data protection in data governance is understandable. However, as we have discussed elsewhere, the relationship between data protection and AI performance (how accurate a system is), fairness (how a system impacts people with respect to protected characteristics, such as race and religion) and transparency (how much explainability a system is said to have) is often one of trade-offs . An indicative example would be that securing a high level of data protection is likely to result in a diminished level of transparency. In effect, the UK Strategy is challenging the primacy of data protection as the foundational value of data governance by putting forward other values (the performance of a system, etc.) and opportunities (expressed in terms of the value ‘opportunity and innovation’). For our commentary on the interrelation between data and AI ethics see .
This signalled shift in the primacy of data protection in data governance will have implications for the economy (see point 3.1 above), innovation (what kinds of products may emerge, the make-up of the start-up ecosystem, etc.), governance (the nature of accountability) and law (the revision of existing laws).
A focus on innovation and economic advancement is continuously touted as a step away from the EU’s regulatory strategy. However, by its own aspirations, economic development is a key factor in the European approach to privacy: data protection is intended to regulate and allow for the processing of personal data, when required and in a transparent manner, thus supporting economic goals . Some propose that “(…) some of the most important implications of the GDPR may not relate to privacy, but to antitrust and trade policy” (, p. 50).
Additionally, a potential move away from the European approach might have global implications as “the GDPR holds some sway outside of the EU as well, since any business dealing with the bloc has to adhere to the rules when managing Europeans’ data” (, p. 3). Whilst some of the intended goals of driving innovation have merit on their own, the government must carefully assess the consequences of any changes which might disrupt existing data flows and enforcement cooperation.
This raises the question: is the National AI Strategy simply pro-innovation, or is it, in fact, a step back in terms of data protection rights? Indeed, we believe it is critical to explore, at length and through consultation with all stakeholders (industry, start-ups, NGOs and academia), how a relaxing of some data protection provisions may impact innovation. This is not to deny the apparent maxim ‘more data—more innovation’ but instead to think about innovation as enabling standards and regulation (which the EU will claim is indeed their stated aim).Footnote 3