The final overarching task we have identified concerns a country’s international positioning in the field of AI. This task is slightly different from the previous four because it plays out at a level influenced to some degree by all of them. Myths about AI also exist in the international domain, where they are addressed by the international media and by international companies and research institutes. Several issues of contextualization also have an international dimension. For example, the global discussion about the rollout of 5G and European ambitions to develop a common data infrastructure (such as Gaia-X) affect the development of the technical ecosystem for AI. Stakeholder engagement can affect the local or national situation, but also has an international dimension; for example, in the role played by global scientific associations and NGOs. To a large extent the international arena is also where applicable regulation is implemented, such as treaties that govern dangerous applications of technology, ethical guidelines or the EU’s ambitions to regulate AI.

Even though there is a strong interrelationship with the tasks previously discussed, it makes sense to address the international dimension of AI’s integration into society separately. Firstly, because it involves a specific category of actors. Countries are represented in international bodies by specific parties who negotiate and co-operate with other international players. These are not only state actors but also international organizations, multinationals and even individuals.

There is also another reason to look separately at the international field: two issues specific to it. The first has to do with the competitiveness or earning power of a given country. What competitive advantage does a nation have? Can its position be strengthened? How does this relate to the capacities of other countries? The second, also with an eminently international dimension, is security.Footnote 1 The most extreme example of this issue is war. New technologies have a great impact on how armed conflicts are fought and how they can be won. In this context AI is often discussed in relation to autonomous weapons, although its influence on warfare is in fact much broader. Security also plays a role in less extreme situations, becoming implicated in such activities as foreign influence and the export of ideologies, as well as sabotage and industrial espionage. Issues of this kind arise not only between countries with hostile relations, but even between allies. One example here is the dimension of ‘flow security’, which is about safeguarding the flows of all manner of goods: food and medicines, for instance, but also data, capital and people.Footnote 2

The issues of competitiveness and security can also become intertwined. In the discussion about 5G technology the Chinese company Huawei is seen not only as an economic competitor but above all as a security risk. The economic arguments in the US-China trade conflict go hand in hand with questions of national security.Footnote 3 The debate in Europe about the power of America’s ‘big tech’ companies was also initially about competitiveness, but is now increasingly being interpreted in terms of the demand for strategic autonomy and digital sovereignty.Footnote 4 A growing list of publications on ‘geo-economics’ emphasizes the strong connection between economics and competitiveness on the one hand and geopolitics and security on the other.Footnote 5 In this chapter we first examine the two issues separately and then discuss what connects them. Figure 9.1 reveals how those themes relate to competitiveness, national security and the underlying geo-economic situation.

Fig. 9.1 Issues related to national competitiveness, national security and geo-economics: the race to AI, international co-operation, security in the civilian domain, the variety of military applications of AI, and autonomous weapons

A final reason to consider the international field separately is the question of the level at which some tasks should be addressed. A number of domains, such as the global financial system, are so internationally intertwined that certain challenges cannot be addressed adequately at the national level. In our region the European Union has become the level at which rules and agreements in many areas are established, but in others global organizations like the United Nations (UN) or alliances such as NATO play a more important role. In part, therefore, the international field needs to be examined separately to establish the best level at which to tackle certain issues.

The central question in this overarching task is, ‘what is our international position?’ We first discuss international positioning in relation to competitiveness (Sect. 9.1), then specifically examine national AI capacities, the phenomenon of AI strategies and the often-associated idea of a global race to establish AI dominance. After that we look at international positioning in relation to security (Sect. 9.2). Alongside the rise of autonomous weapons, we also examine other ways in which AI can influence warfare. Finally, we address broader security issues between countries and the rise of a ‘digital dictatorship’.

1 AI and Competitive Advantages

1.1 AI Capacities

Like previous technological revolutions, AI will change the relative competitive positions of countries and there are great expectations about the economic value it could generate. In Chap. 3 we mentioned PwC’s prediction that AI will contribute US$15.7 trillion to the global economy in 2030.Footnote 6 So what does the international economic domain currently look like? As discussed in previous chapters, AI is a complex phenomenon with various dimensions. As such it is impossible to explain this domain based on a single criterion. There are indices for the economic value of AI activities, the number of AI actors per country and the number of AI patents, and there is now also an AI index.Footnote 7 None, however, explains the entire picture. Authors Jeffrey Ding and Kai-Fu Lee have attempted to come up with a classification that can be used to estimate a country’s AI capacities.Footnote 8 If we combine the various approaches, we arrive at five relevant dimensions.

  1. The quality of fundamental research.

  2. The availability of data.

  3. The required hardware.

  4. A dynamic private sector to commercialize the technology.

  5. An enabling government.

The first three of these have been discussed in Chap. 6 as aspects of the technical ecosystem (requirements for a functioning AI ecosystem). The private-sector dimension includes both large technology companies and innovative start-up culture, as well as the AI investments made by major companies in other sectors. A government can enable development through investment, but also by implementing legislation that is at the very least clear, and that perhaps even creates room for experimentation.
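To make the idea of estimating capacities concrete, here is a minimal sketch of how a composite index over the five dimensions might be computed. It is purely illustrative: the scores, scales and weights are hypothetical assumptions of ours, not figures from the AI index or from Ding’s or Lee’s analyses. It does, however, make visible why analysts who weight the dimensions differently – as Ding and Lee do, see below – can reach different rankings from the same country profile.

```python
# Illustrative sketch only: scores (0-10) and weights are hypothetical
# assumptions, not data from the AI index or from Ding's or Lee's work.
DIMENSIONS = ("fundamental_research", "data_availability", "hardware",
              "private_sector", "enabling_government")

def capacity_index(scores: dict, weights: dict) -> float:
    """Weighted average of the five dimension scores."""
    total_weight = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total_weight

# Hypothetical country profile: strong in research, weak in data.
profile = {"fundamental_research": 8, "data_availability": 4, "hardware": 6,
           "private_sector": 5, "enabling_government": 7}

equal = {d: 1.0 for d in DIMENSIONS}
data_heavy = {**equal, "data_availability": 3.0}  # weight data 3x, as an
                                                  # emphasis like Lee's implies
print(capacity_index(profile, equal))       # 6.0
print(capacity_index(profile, data_heavy))  # ~5.43: same profile, lower score
```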

Taking these five dimensions as a starting point, it becomes clear that the US and China are the two great world leaders. Both are strong in all the dimensions. Both have access to huge amounts of data, due to a combination of their sheer size and relatively lenient legislation. Both also have a wide variety of technology companies: the US has big tech in Silicon Valley, with firms such as Alphabet, Amazon, Facebook, Microsoft and Apple, while China boasts giants like Baidu, Alibaba, Tencent (a trio sometimes abbreviated as ‘BAT’) and Huawei. In addition, both have enterprises specializing in the application of AI that are rapidly growing into major market players, such as Uber and Netflix in the US and Bytedance and Hikvision in China.

The adoption of AI on consumer platforms in China is very high; voice recognition software is widely used and consumers can make payments using facial recognition.Footnote 9 In the field of fundamental research the US is still in the lead but China is well on the way to narrowing that gap.Footnote 10 A study of research-institute citations between 2012 and 2016 placed China second behind the US, with Tsinghua University even outscoring Stanford University for the total number of AI citations in the ‘elite institutes’ category.Footnote 11 Another study of academic papers presented at major AI conferences showed a decrease in the proportion of authors from American institutes (from 41% in 2012 to 34% in 2017) and an increase in Chinese authors from 10% to 24%.Footnote 12

Anecdotal information also confirms this trend. The Chinese start-up Face++ dominated an international image recognition competition in 2017, beating teams from Google, Microsoft and Facebook. In the field of speech recognition China’s iFlytek has overtaken America’s Nuance and both companies can now be called international leaders. Former Google CEO Eric Schmidt warned in 2017 against complacency towards Chinese AI capabilities and predicted that the country would match the US in five years.Footnote 13

Jeffrey Ding and Kai-Fu Lee have different ideas about the two countries’ respective competitive positions, however. According to Ding, the US is still the world leader by some distance and will remain so for the foreseeable future. Lee, on the other hand, is betting on China. Their divergence has to do with the weight they accord the various domains involved. Ding explains that the US has a particularly strong position in the field of hardware, most notably specialized chips; China remains dependent on these physical components, and the US is making it increasingly difficult for China to gain access to them. Lee, by contrast, points to the fact that China has access to far more data and is relatively unhindered in what it does with it, a factor he believes will be of paramount importance in the application phase of AI. He also points to the role of government: China has a far more ambitious AI strategy than the US.

China’s New Generation Artificial Intelligence Development Plan (AIDP) was published in July 2017. It sets out the nation’s precise goals for the nature and scale of AI in the coming years: to be on par with the most advanced countries in the field by 2020, to be the world leader in certain areas by 2025 and to be the world’s primary AI innovator by 2030. In addition to these ambitions, the plan also prompts local authorities to establish their own plans and funds and sets out the key policy instruments that will be deployed to achieve the goals.

The emphasis on establishing technical standards is striking: this point is mentioned no fewer than 24 times in the AIDP. We return to standardization later in this chapter. The Chinese plan further emphasizes the importance of international co-operation in regulation and ethical standards for AI.Footnote 14 Meanwhile, the technology is now being used widely in China. The tech platform Tencent has launched a healthcare system called Miying to assist medical professionals with diagnoses. The police use facial recognition, and even software that analyses body positions. Funds are available for applications in education and business. In Hangzhou Alibaba is building City Brain, an AI system to improve traffic management and the response times of emergency services.Footnote 15

It is impossible to say whether Ding or Lee will be right. What is clear, however, is that these two countries are undisputedly the world’s AI superpowers. So what about the rest of the world? As a whole, the EU is not in a bad position. It is ahead of China and comparable with the US in fundamental research. There is less data available here, though, in part because of the national diversity within the EU but also due to stricter legislation.Footnote 16 The EU has a strong position in hardware for AI. European countries are dependent on the US for specialized chips but have unrestricted access to them thanks to their friendly relations. Furthermore, the EU is making progress regarding government support. In addition to national AI strategies, there is now also one at the European level.Footnote 17 Total investment remains relatively limited but there is a growing momentum in the EU to promote AI as a strategic technology. The COVID-19 pandemic also seems to be contributing to this momentum. Twenty percent of the €670 billion allocated to the EU recovery plan has been earmarked for the wider development of digitalization and AI will inevitably benefit from this. The same applies to the funds being allocated for the EU4Health programme, the Connecting Europe Facility – Digital (to finance infrastructure) and the Digital Europe Programme.Footnote 18

The EU’s biggest weakness lies in the business environment for AI. Companies in other sectors, such as infrastructure and energy, are developing their AI capacities. There are the traditional technology companies as well, like SAP, Dassault, ASML and TomTom, and a number of tech start-ups have also grown to become major players, amongst them Spotify, Zalando and Adyen. But there are still no large, diversified technology platforms in the EU of the kind found in both the US and China, and many European start-ups are tied to the US market through acquisitions or invested capital.

The United Kingdom, which is no longer part of the EU post-Brexit, has the most developed AI ecosystem in Europe. The nation is particularly strong in fundamental research, a tradition dating back to the early work on AI by scientists such as Alan Turing, after whom the major national AI institute is named. British research was behind the development of DeepMind, the advanced AI lab acquired by Google in 2014. DeepMind has been responsible for many of the algorithms that have made headlines in recent years, including AlphaGo. Other European countries with strong AI capabilities are Germany and France. Germany is particularly strong in robotics and uses AI for smart applications in factories. France also has industrial applications but focuses strongly on AI in healthcare and defence.

Another country with an internationally competitive position in AI is Japan. There too, the technology is closely linked with the industrial sector and specifically with car manufacturers. AI has also developed strongly in Canada. As in the UK, this is based on a thriving ecosystem for fundamental research driven in part by the presence of prominent scientists Geoffrey Hinton, Yann LeCun and Yoshua Bengio, whose work has been funded by the Canadian Institute for Advanced Research (CIFAR).Footnote 19

Russia’s AI capacities are relatively limited, especially in terms of research investment.Footnote 20 At the same time, though, it does have a strong position in specific domains within AI. In 2015 the United Instrument Manufacturing Corporation announced a major research project in the field of AI and semantic data analysis. Russia’s answer to Google, Yandex, has been using AI for search results for years. ABBYY focuses on text recognition. VisionLabs specializes in facial recognition for banks and the retail sector. N-Tech.Lab won first place in a global facial recognition competition in 2015 with its FaceN algorithm.Footnote 21 The software developed by this firm linked images of Russian citizens mined from various data sources and platforms. A conference in 2018 set out a path for a Russian AI strategy that focuses on developing expertise, training and education programmes, identifying global developments and the use of AI in war games.Footnote 22

One important conclusion we can draw from this brief overview is that many of the countries mentioned are committed to developing a national version of what we have called an ‘AI identity’ (see Chap. 6). Countries that invest in distinctive AI capacities and specialize in specific domains can use this strategy to strengthen their competitive position, allowing even relatively small nations to hold their own in the international AI arena. Several relatively small economies appear to be very successful in AI when you consider the number of relevant actors in this field. Israel, South Korea and Singapore are particularly notable in this respect.Footnote 23 As another relatively small country, the Netherlands excels in fundamental AI researchFootnote 24 and in education.

Key Points: AI Capacities

  • AI is a complex phenomenon, but we can estimate a country’s capacities based on five dimensions: the quality of fundamental research, the availability of data, the required hardware, the business ecosystem and an enabling government.

  • The US and China are the two world leaders. Both score well in all five domains, but interpretations of their relative positions vary. The EU also scores well, except that it lacks an ecosystem for the commercial production of AI applications.

  • The UK, Germany, France, Japan, Canada and Russia are medium-sized actors that excel in specific domains or applications and therefore have an ‘AI identity’.

  • Smaller countries can also be relevant actors on the world stage if they have specialized in a particular area of AI or fundamental research.

1.2 National AI Strategies

The field of AI is highly dynamic. This applies not only to private-sector players but also to governments. As we have seen in Chap. 3, many countries have presented AI strategies in recent years. But while these address various AI-related issues, such as its use by government and ethical principles, they are often aimed primarily at strengthening a country’s competitiveness.

We can distil a number of patterns from these documents. Canadian researcher Tim Dutton compared several of them and identified the following general themes: ‘research’, ‘talent’, ‘industrial strategy’, ‘ethics’, ‘the future of work’, ‘data’, ‘AI use by government’ and ‘inclusion’.Footnote 25 Of these, research, industrial strategy and talent are the most commonly mentioned. Ethics also appears relatively often, but the paragraphs devoted to it are generally generic. Serious consideration of the consequences of AI for ethics and broader civic values seems to lag some years behind the publication of the national strategies.

Investment in research and talent is addressed in many strategies. For example, the German one announced the establishment of twelve R&D centres and a hundred professorships. The American university MIT reported an investment of a billion dollars in an ‘AI college’. Co-ordination and co-operation in research are also cited regularly. The French document provides for four interdisciplinary AI institutes, and in Canada CIFAR is working with several institutes to co-ordinate research. The Alan Turing Institute was established in the UK in 2015, with a growing number of research centres affiliating with it.

Another pattern revealed by comparing the various documents is that many place AI in a broader perspective of technological development. The Chinese strategy, for example, is linked to other plans for key technologies such as ‘Made in China 2025’. The Japanese one positions AI within the ‘Fourth Industrial Revolution’. The same applies to the South Korean version, which also speaks of an ‘intelligent information society’. As mentioned in Chap. 3, the Dutch action plan for AI has also been incorporated into the government’s broader digitalization strategy.

In line with the idea of an AI identity, it is salient that many countries link the development of AI in their strategy to sectors and domains in which they are already competitive. In Germany the federal government’s Artificial Intelligence Strategy emphasizes the implementation of AI in heavy industry. This is in line with previous strategic initiatives such as ‘Industry 4.0’, which focused on robotics and smart manufacturing. As a major producer of machinery, infrastructure and transport technology, Germany wants to lead the ‘smartification’ of these sectors.

The Japanese strategy emphasizes three areas, one of which is mobility. With companies like Toyota, Nissan, Honda and Mitsubishi, Japan has much to gain from implementing AI in that market. The same applies to France, home to major car manufacturers such as Peugeot, Renault and Citroën. In a report that mathematician Cédric Villani wrote for the French government, he highlighted four areas for developing AI in France. Mobility is one, defence another (also a sector in which the French economy is strong). A third is health, in which a data hub is being established to combine information from healthcare providers, hospitals, health insurers, pharmaceutical companies, laboratories and other relevant parties.Footnote 26 Again this project – and others in the health domain – is building on national strengths. The French economy is characterized by a high degree of centralization (‘dirigisme’). In healthcare the country has huge, centralized databases that can be further developed to serve as a basis for AI projects. That is not the case in many other countries.

Other nations that are strong in defence are also focusing on that sector. Israel does not yet officially have an AI strategy, but it does have strong capacities and has expressed an ambition to become a leader in AI in the fields of defence and cybersecurity. As mentioned earlier, part of Russia’s strategy includes organizing AI war games. Moreover, governments in countries like Russia and China have a great deal of control over their people. Not surprisingly then, both are very strong in AI applications in the field of facial recognition. We return to this in Sect. 9.2.

A further pattern in many of the strategies is to focus on areas where there are major social issues that AI can provide an answer to. This seems to be why the domain of healthcare has been included in the Japanese strategy. Japan is the world’s fastest ageing society and therefore faces an increasing demand for healthcare services. AI could help meet this need. The Indian strategy is entitled AI for All and its explicit goal is ‘inclusion’. This is a major challenge for a nation faced with huge economic and social inequality. Several of the country’s recent digital strategies, such as a programme for financial inclusion and services using biometric data, aim to achieve a more inclusive society. India’s strategy for AI can thus be seen within the context of this ambition. Alongside the three domains already mentioned, a fourth in France’s AI strategy is ecology – another area in which that nation’s economy is strong, particularly regarding energy. Moreover, the Paris Agreement on climate change has made ecology a global challenge and an area in which the French are keen to build their global standing.

A final pattern in various strategies is the development of policies aimed at applying AI research in practical and commercial contexts. One of the five pillars of the UK’s ‘AI Sector Deal’ is the establishment of an AI Council whose task is to improve co-operation between universities and industry. Another approach is Canada’s ‘Scale AI’, part of the national ‘superclusters’ policy. This brings together companies in retail, manufacturing, transport, infrastructure and ICT to develop smart logistics chains using AI and robotics, and so improve the competitive position of Canadian business. Then there is Singapore’s ‘100 Experiments’. In this innovative programme companies are invited to submit problems that could be solved using AI-based products, but where none is currently generally available. One condition is that it has to be possible to build such a product within nine to eighteen months. Applying firms are paired with Singaporean AI researchers, who receive special funding from government.

Key Points: National AI Strategies

  • More than 60 national AI strategies have been published since 2017.

  • A number of patterns can be discerned in these documents: they emphasize investment in research and talent; they place AI in a broader perspective of technological development; they highlight links with sectors in which a country is already competitive; they identify challenges AI could help overcome; and they encourage the commercialization and practical application of research results.

1.3 An International AI Race?

Many of the national strategies place a strong emphasis on strengthening the international position of the country concerned. Some appear to be in a race to become AI leaders and so increase their competitive advantage. A lot of the documents contain passages that reflect this in some way. The Chinese one refers to developing a “first-mover advantage in the development of AI”, while the US version mentions “accelerating American leadership in AI”.Footnote 27 Many authors also use the metaphor of the global race.Footnote 28 Kai-Fu Lee sees an analogy with the ‘space race’ during the Cold War. AlphaGo’s victory over Lee Sedol in the board game Go can be seen as China’s ‘Sputnik moment’, and the presentation of its national AI strategy a few months later as the equivalent of President Kennedy’s speech calling on America to put a man on the Moon.Footnote 29 The Soviet Union’s launch of Sputnik I led to the founding of NASA and DARPA, the innovation arm of the US military.Footnote 30 Like space then, AI is now the focus of numerous innovation plans.

Many of these developments can indeed be interpreted as a race. Countries are competing to attract talent. Germany’s policy of establishing more professorships in AI could draw talented researchers away from Dutch institutions. As mentioned in Chap. 3, this ‘brain drain’ is a frequent topic of discussion – including in the Netherlands.Footnote 31

The broader pursuit of competitiveness can also be understood as a race. Embracing AI too late can lead to a loss of earning power. There is also a risk that phenomena like network effects and dependencies will make it hard to catch up. In the specific case of AI, access to large amounts of useful data leads to better algorithms, which attract more users and thus more data – a self-reinforcing cycle that other parties find hard to break into. This is often referred to as the ‘winner-takes-all’ dynamic of AI.
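The feedback loop behind this dynamic can be illustrated with a toy simulation. The sketch below is ours and purely hypothetical – the growth rate, starting positions and ‘advantage’ exponent are assumptions, not empirical estimates – but it shows how a modest initial data lead compounds once a data advantage attracts a more-than-proportional share of new users.

```python
# Toy model of the 'winner-takes-all' feedback loop: more data -> better
# algorithms -> more users -> more data. All parameters are hypothetical.

def simulate(data_a: float, data_b: float, steps: int = 10,
             growth: float = 0.2, advantage: float = 2.0):
    """advantage > 1 encodes the assumption that a data lead attracts a
    more-than-proportional share of new users; with advantage = 1.0 the
    initial shares would simply be preserved."""
    for _ in range(steps):
        new_data = growth * (data_a + data_b)  # data generated this period
        pull_a = data_a ** advantage           # superlinear attraction
        pull_b = data_b ** advantage
        data_a += new_data * pull_a / (pull_a + pull_b)
        data_b += new_data * pull_b / (pull_a + pull_b)
    return data_a, data_b

# Party A starts with a modest 60/40 lead; after ten periods the gap has
# widened markedly, leaving B ever harder placed to break into the cycle.
print(simulate(60.0, 40.0))
```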

Also consider the impact of AI on specific sectors. If American companies achieve success thanks to investment in AI for self-driving cars, that could be very detrimental for Germany’s huge motor industry. The focus of AI strategies on sectors in which a country already excels involves both opportunities and risks: countries with large car manufacturers have expertise and data that can give them an advantage in the development of self-driving vehicles, but if they fail to grasp these opportunities their industry may fall behind and lose market share. The same applies to the Dutch agricultural sector. China is investing in innovative agricultural technology, and American companies like John Deere have access to data on Dutch agriculture and horticulture through the machines they sell. So there is an opportunity here for the Netherlands in the field of agricultural and horticultural AI, but at the same time also a threat that the country’s traditionally strong position in this sector could be weakened.

Finally, the metaphor of a global race could also be applied to the military domain: AI can give a specific country a strategic advantage over others. We discuss this in more detail in Sect. 9.2. But this metaphor also has serious shortcomings. Some of its implicit assumptions about the global development of AI are incorrect. Figure 9.2 reveals these issues, which we discuss further below.

Fig. 9.2 Four shortcomings of the metaphor of an AI race: there is not one winner; there is not one goal; public values do not prevent innovation; and innovation crosses national borders

The first shortcoming of the race metaphor is that it suggests that there can only be one winner. The development of AI is presented as a ‘zero-sum’ situation. This may be the case with some specific goals in other domains, such as being the first to reach the Moon, but it does not apply to the development of a system technology. Kai-Fu Lee – who, as we noted earlier, draws the analogy with space – nevertheless suggests that a race is not the right metaphor. The development of AI, he says, is ultimately more like the Industrial Revolution or the rise of electricity than the space race. As with electricity, there is no such thing as ‘zero-sum’ in AI. One country’s gain is not necessarily another’s loss. Technologies spread all over the world and lead to progress and more prosperity in all sorts of places.Footnote 32 We could even say this of the space race. Even though only one country could be the first to land on the Moon, the innovations that made space travel possible, such as satellites and GPS technology, benefited people worldwide.

But some countries certainly gained more than others – for example, because their innovative companies exported industrial products, energy resources or space technology to other countries. However, it is important to emphasize that the benefits of those technologies were also shared by citizens elsewhere. This perspective helps shift the focus from the development of the technology to its diffusion.

In AI the focus is often on competition at the most advanced level, in which only the leading companies based in the richest countries can participate. But the less advanced forms of AI are much more widespread, and they are having a far greater impact on the world. To stay with the analogy of electricity, only a few countries are able to develop nuclear power because this form of energy generation is highly advanced, risky and requires huge investments. However, there are many other ways to generate electricity that benefit citizens and create markets for businesses all over the world. In other words, while electricity innovations are often concentrated in only a few places, electricity production is much more widely distributed and its use even wider still.Footnote 33

The focus on diffusion is also important because it raises different questions than a focus on the ‘technological frontier’ of AI, the place where innovation takes place. Developing world-leading laboratories is not the same as ensuring that AI is widely embedded in society. The US economy is at the forefront of technology in many areas, but the general population benefits less from this than people in other countries.Footnote 34 Conversely, countries that were not the developers of a new technology can still be very successful at implementing and disseminating it.Footnote 35

Another implication of the focus on diffusion is that the technical expertise of the inventors and major laboratories becomes less important than that of the people responsible for maintaining the technological infrastructure. For example, a very large proportion of the people who work on electricity or IT systems do so as repair or maintenance staff. This important form of technical expertise will also need to be given due attention in the development of AI.

If we reason in terms of diffusion rather than the ‘technological frontier’, it becomes apparent that there is a real concern that certain groups in society will not be able to keep up with this development. The idea of a global race focuses on a struggle between countries and compares their dominant companies, but neglects to examine the effects on their wider populations. It can thus overshadow the issue of AI inequality in a country.

A second problem with the metaphor of a global race is that it suggests that everyone is aiming for the same goal. In the first place it is unclear what that goal should be. As has become apparent in this report, AI is a complex phenomenon with many potential areas of application. Its goal, therefore, is very difficult to define. For example, the goal of ‘leadership in AI’ is much harder to quantify than the first Moon landing. Success in AI can occur in several domains, which makes the idea of a single finishing line problematic.

Countries can also follow very different paths to achieve their AI goals. In Chap. 4 we saw that the development of electricity in continental Europe involved different applications and different organizational models than in the US. While countries such as Canada and the United Kingdom are focusing their AI strategies on fundamental research, other nations have chosen specific sectors or areas of application. The idea of an ‘AI identity’ in which countries are distinguished by their AI capacities is incompatible with the metaphor of an AI race. The race metaphor suggests that everyone is on the same path, and in so doing blinds us to the various ways in which AI can be successful and make countries more competitive.

Perhaps even more important than the previous objections is the fact that the idea of a ‘global race’ suggests a conflict between competitiveness on the one hand and the protection of civic values on the other. Based on the idea that some countries are becoming too dominant, it is often argued that those lagging behind should make haste and not be held back too much by discussions about the protection of rights, because this will only increase the distance between them and the dominant players. On this view, restricting access to data for privacy reasons or reining in experiments with surveillance to protect the freedom of citizens would be detrimental to a country’s competitive position. Nick Bostrom emphasizes that the dynamics of an AI race could come at the expense of caution and safety.Footnote 36

Although such tensions certainly exist in practice, it is dangerous and unjustified to try to place competitiveness and the protection of fundamental rights in opposing camps. Firstly, it is not clear whether economic competitiveness that violates fundamental rights can be sustainable in the long term. The benefits of implementing extreme surveillance to reduce crime do not outweigh the costs to individual freedom, and such innovations will encounter particularly fierce resistance in countries with strong democratic traditions, however effective they may be. It should also be emphasized that innovations can have more economic success if civic values are safeguarded. We have seen this with previous system technologies. At first there was resistance to safety measures in cars because critics feared that they would push up prices and stifle innovation. But in fact, such measures eventually led to people having more confidence in cars and using them more.

This is exactly the argument that the European Union uses in its ‘ethical’ approach to AI. The European Commission states, “Building on its reputation for safe and high-quality products, Europe’s ethical approach to AI strengthens citizens’ trust in the digital development and aims at building a competitive advantage for European AI companies.” Pekka Ala-Pietilä, chair of the Commission’s High-Level Expert Group on AI, has said, “Ethics and competitiveness go hand in hand. Businesses cannot be run sustainably without trust, and there can be no trust without ethics. And when there is no trust, there is no buy-in of the technology or enjoyment of the benefits that it can bring.”Footnote 37 Later in this chapter we look into the role of the European Union with regard to legislation and ethics. A few points of criticism aside, we support the European Commission’s conclusion that the goals of competitiveness and civic values do not have to be at odds with one another.

The EU’s approach is sometimes criticized for emphasizing ethics over competitiveness in AI. The EU is working to improve the latter, and there is a public discussion as to whether this is happening to a sufficient extent. Another justified criticism of this policy is that the EU is a little too eager to position itself as the world’s ethical leader, with the market-driven US and state-driven China as two opposite extremes. In so doing it disregards the activities of other countries in this area. Japan, Canada, Singapore and Australia, as well as the emirate of Dubai, have also developed their own ethical guidelines for AI; and even China, which is often portrayed as having little regard for ethical standards, published the Beijing Principles for Ethical AI in 2019.Footnote 38

A similar idea to the race with a single winner is the notion that dominant countries can contain the development of a technology within their own borders. As we have seen with previous system technologies, this has never actually been the case; the development of revolutionary technologies has always been a global affair driven by researchers and companies in various countries. Efforts by governments to nationalize innovation have never succeeded and often even backfired: their companies lost their leading market positions because export controls and nationalistic government policies only encouraged the competition abroad.

This dynamic now seems to be affecting the Sino-American trade dispute, which also involves AI. The US government is trying to hold back China’s rise in this area. One of its main weapons is to ban chip manufacturers like Intel from selling their advanced products to China. It is also pressuring European companies to restrict their exports to China. It is quite possible, however, that this will only boost China’s ambitions to develop its own chip sector. The US policy could also serve to strengthen ties between China and other countries in Europe or Asia. This trade conflict is now forcing European countries to take a stand in this global arena.

Key Points: International AI Race

  • A number of international developments in AI, such as access to scientific talent and strengthening national security, could be considered part of a race dynamic. However, the metaphor of a race also has serious shortcomings.

  • The metaphor incorrectly suggests a zero-sum situation and ignores the importance of diffusion.

  • Moreover, not all countries have the same goal and so there cannot be just one ‘winner’.

  • The race metaphor also wrongly suggests friction between competitiveness on the one hand and civic values on the other.

  • Finally, it is impossible to contain AI innovations within a country’s borders.

1.4 From Competition to Co-operation

The clear historical lesson is that no country will be able to contain and develop AI completely within its own borders – not even the US or China, despite their advanced ecosystems. This applies even more to smaller countries, which can only benefit from open international co-operation.

In fact, focusing on international co-operation – between the European member states, for instance – could actually strengthen a country’s competitive position. Nations should invest in co-operation to strengthen their international profile. This will require an integrated policy of ‘AI diplomacy’. We have distinguished five areas on which this policy could focus: fundamental research, commercial applications, regulation, ethical guidelines and standards.

Box 9.1: CLAIRE and ELLIS

CLAIRE and ELLIS are prominent AI research partnerships. CLAIRE, the Confederation of Laboratories for AI Research in Europe, is an alliance of AI scientists founded by Dutch professor Holger Hoos, Philipp Slusallek from Germany and Morten Irgens from Norway. Their vision document has been signed by more than 550 AI experts. The goals of the alliance are to strengthen European excellence in all areas of AI, with a human-centred focus. According to the founders, a strong European research organization is needed for the development of AI; if Europe is allowed to fall behind, this could lead to negative economic consequences, an academic brain drain, less transparency and increasing dependence on foreign technologies.

CLAIRE hopes to achieve these goals by creating a network of regional centres of excellence across Europe, with or without specializations, and a central hub. The project is intended to do for AI what CERN has done for particle physics: establish a central European institute with state-of-the-art infrastructure for AI research. In 2020 the head office was established in The Hague.

ELLIS, the European Laboratory for Learning and Intelligent Systems, is a pan-European AI ‘network of excellence’ that focuses on fundamental science, technical innovation and societal impact. It builds upon machine learning as the driver for modern AI and is aiming to secure Europe’s sovereignty in this field by creating a multi-centric ‘European AI Lighthouse’. There are currently 34 ELLIS units in 14 countries, either located at existing AI research institutes or created from scratch. Through them ELLIS aims to create new working environments for outstanding researchers and to enable combinations of cutting-edge research with the creation of start-ups and industrial impact.

Fundamental research partnerships are the clearest form of co-operation in AI. Encouraging and attracting international groups to their shores is a way for countries to directly strengthen their own AI development programmes. There are a number of such projects in Europe, of which ELLIS and CLAIRE are the two most prominent. Both are involved in broad AI research, and specifically in machine learning. CLAIRE has also been described as a ‘CERN for AI’ (see Box 9.1).

A second area in which co-operation can improve competitiveness is commercial projects. This could involve establishing new services and organizations as part of international partnerships, as well as strengthening and co-ordinating the AI activities of existing companies. In pursuit of greater strategic digital autonomy, much has been achieved in the EU in this area in recent times.Footnote 39 The Gaia-X project for cloud and data infrastructures is part of this ambition (see Box 9.2). There are also similar European projects to advance cybersecurity.Footnote 40

Box 9.2: Gaia-X

Gaia-X started life as a project in Germany, which has since been joined by France and other European countries. The project charter was presented in October 2019, and on 15 September 2020 a group of 22 organizations signed an ‘incorporation paper’. They included German firms such as Bosch and Siemens and French ones like Orange and Atos. Full details of the project have yet to crystallize, but its basic goals are to strengthen European data sovereignty, reduce dependence on foreign players (a dependence currently reinforced by lock-in clauses in contractual arrangements), make cloud services more attractive and create an ecosystem for innovation. To achieve this the project aims to build a cloud infrastructure. This is intended not so much to provide an alternative to American cloud services as to make it easier for European parties to compete with these services, so that European data can stay under European control. The infrastructure will be governed by common rules, standards and technology. The project has identified various application domains for AI, such as ‘sustainable finance’ and ‘ambient assisted living’.

Commercial projects could also involve encouraging more co-operation with existing companies active in the field of AI or a related discipline, such as the Scandinavian suppliers of telecom infrastructure Nokia and Ericsson. The Dutch chipmakers ASML and NXP are also examples. To achieve greater strategic digital autonomy, countries can contribute to protecting and strengthening European industries in specific domains and encourage further European co-operation if desired. Although there are legitimate reservations about such industry-focused policies, Europe has shown in the past that they can be successful. By working together European countries were able to create the aviation giant Airbus and the Galileo satellite navigation system, despite their initially weaker position.

A third way a country can improve its competitive position in AI is through co-operation in the domain of international legislation. This is an area in which the European Union can add real value. Regulatory enforcement is often seen as a form of soft power that Europe is able to wield effectively.Footnote 41 One obvious recent example is the General Data Protection Regulation (GDPR), which has set a worldwide standard. But it has also come in for some criticism. According to some it puts the EU at a disadvantage compared with China and the US because it hampers innovations that use personal data.Footnote 42 With its introduction, however, the EU has not only established a global benchmark for data protection but also opened up debate on this issue.Footnote 43 Several other countries and a number of US states have adopted its underlying principles, and the GDPR has even influenced Chinese personal data protection policy.

At the 2019 meeting of the G20 in Japan, Angela Merkel stated that the challenge for the next European Commission was to draw up legislation for ‘trustworthy AI’ similar to the GDPR.Footnote 44 Researcher Nathalie Smuha speaks of ‘regulatory competition’, an international process of co-operation and competition in the regulation of AI, including legislation.Footnote 45

Such processes not only involve legislation, they also affect the fourth domain of international co-operation: guidelinesFootnote 46 and ethical principles. A great deal is happening in this area but a few prominent projects stand out. In May 2019 the OECD adopted a set of common ethical principles for AI. This was the first such set of intergovernmental guidelines in this area. A month later they were formally adopted by the G20. There is also a UNESCO project to develop a global code of ethics for AI. Finally, the Council of Europe has established an ad hoc committee for AI, CAHAI, with the aim of drawing up binding rules to govern the protection of human rights, democracy and the rule of law in co-operation with all member states.Footnote 47

Because the fifth domain, standardization, typically receives less attention than the others, we discuss it in more detail here. Our analysis of earlier system technologies has revealed how important it is to establish expert forums for the development of technical and other standards, and behind the scenes much is going on in the forums where standards for AI are being developed. In the previous chapter we considered standards from the perspective of their role in regulating a new technology; for example, they are a way to enhance the interoperability of a technology. Here, however, we are concerned not with that function but with international co-operation on standards and its influence on a country’s competitive position. The ability to set standards has a major impact on a country’s competitiveness because the national industries that implement those standards thereby gain a ‘first-mover advantage’. A country that develops its own standards rather than adopting those compiled elsewhere has more control over them; it is also able to influence other countries that have to follow those standards, and so potentially benefits from lock-in effects.

As we saw in Part I of this report, when it comes to the interpretation of standards the historical development of system technologies has always been characterized by a certain friction between technocratically oriented experts on the one hand and national politicians and officials on the other. So, what is the current international situation regarding standards and their implementation?

It is noteworthy that Europe is playing an internationally leading role here, and even has what has been called ‘standard power’.Footnote 48 This is connected to a long track record of development in this area and of processes established to integrate standards across national borders – initially within Europe, but now also internationally. But the specific European model of standardization, characterized by public-private partnerships, also plays a role here. Standards are developed by private organizations licensed by governments. Each country has established separate bodies for specific purposes. For general standards these are NEN (the Netherlands Standardization Institute) in the Netherlands and DIN (the German Institute for Standardization) in Germany. There are also entities that focus on specific sectors, such as the German DKE for electrical engineering.

Furthermore, Europe has a clear hierarchical structure in place. The national bodies fall under the umbrella of European organizations: NEN is a member of CEN, the European Committee for Standardization, and DKE a member of CENELEC, the European Committee for Electrotechnical Standardization. In turn these are part of global organizations – respectively the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission). International agreements have established that the higher its level in the hierarchy, the more priority a body has in setting standards.Footnote 49 Those lower down the chain describe the requirements imposed by the relevant standards in more specific detail.

In this respect the European model differs substantially from its American counterpart, which has a much greater commercial orientation. That means that a multitude of standardization bodies are often found competing in a particular domain. This lack of co-ordination makes the US system less influential globally than Europe’s. Standards for AI are mainly developed in three forums: a joint initiative of the ISO and IEC, the IEEE Standards Association for engineers and the International Telecommunication Union (ITU), a UN specialist agency.Footnote 50

A number of things stand out regarding AI standardization in the international domain. The first is that China is playing an increasingly important role. As we have seen, exerting an influence over global standards is part of that country’s broader strategy. It has contributed high-ranking officials to ISO, the IEC and the ITU, and the proportion of Chinese appointees to the committees and working groups of these organizations is growing. In absolute terms it still has fewer representatives than many other countries, but that is changing.Footnote 51 Its so-called ‘Belt and Road Initiative’, a major international project, also has an explicit standardization component.Footnote 52 Finally, the Chinese system is due to be reformed as a consequence of the exploratory study China Standards 2035.Footnote 53 With regard to AI specifically, China is using its growing sway to shape approaches to facial recognition and surveillance in particular. Firms like ZTE, Dahua and China Telecom have submitted proposals for international standards to the ITU. In this way their home country is gaining influence in emerging economies in Africa, Asia and Latin America where these standards are often adopted, and thus strengthening its access to important markets for the technologies they govern. The Chinese proposals for surveillance standards, for instance, coincide exactly with those applied in the design of ZTE’s Smart Street 2.0 traffic light.Footnote 54

China is not the only country responsible for the ‘geopolitization’ of standards. The US is playing its part, too. Indeed, telecommunication standardization has become one theatre in the wider trade dispute between the two nations,Footnote 55 part of what has become known as the ‘connectivity wars’.Footnote 56 For example, they have been fighting for leadership of the subcommittee for AI standardization of the ISO/IEC Joint Technical Committee.Footnote 57

As a result of these developments, the European role in and approach to setting standards – a relatively technocratic process driven by commercial parties with specific expertise – is coming under pressure.Footnote 58 These developments call for a European response to the new politics of standardization.

Key Points: From Competition to Co-operation

  • Since it is clear that no nation will be able to contain and develop AI completely within its own borders, individual countries can only benefit from international co-operation, as within the EU.

  • Countries can strengthen their competitiveness in five specific domains by engaging in international co-operation and an integrated policy of ‘AI diplomacy’.

  • One of those domains is fundamental research. Projects such as ELLIS and CLAIRE are working to improve Europe’s position in this regard.

  • Countries that co-operate in the area of regulation can benefit from market influence and establish a regulatory ‘first-mover advantage’. The individual countries in the partnership are then able to profit indirectly from this.

  • Countries can also play a more active role in international organizations in the development of ethical guidelines and principles.

  • Finally, Europe needs to respond to the current ‘geopolitization’ of standardization processes.

2 AI and National Security

Positioning a country in the field of AI is not just a matter of improving its competitiveness. The international dimension also involves issues of national security and sovereignty. As we have already seen, competitiveness and security need not be mutually exclusive. In this section we focus primarily on issues of national security (so not on the security of AI applications for individual citizens, for example). This theme has been developed in more detail in the WRR report Security in an Interconnected World.Footnote 59

The influence of AI on the international balance of power, and hence national security, is widely recognized. As a famous quote by Russian President Vladimir Putin in a speech to students and scholars in 2017 has it, “The country that leads in AI will become the ruler of the world.” This is also the American view. The US military has historically had strong ties with big tech through project funding by organizations like DARPA and the Department of Defense.Footnote 60 In 2015 the Defense Innovation Unit (DIU) was established to harness Silicon Valley technologies for use by the military. The Secretary of Defense at the time, Jim Mattis, wrote a memorandum advocating an integrated national strategy for AI in 2018. Later that year President Trump signed the National Defense Authorization Act (NDAA), which also established the National Security Commission on Artificial Intelligence (NSCAI). The Pentagon went on to launch the Joint Artificial Intelligence Center (JAIC).Footnote 61 In March 2021 the NSCAI, chaired by former Google CEO Eric Schmidt, published a hefty final report.Footnote 62

We have questioned the idea of a global AI race in terms of economic competitiveness. The metaphor does, however, seem more appropriate when it comes to national security. Unlike economic gains, military capabilities do offer zero-sum advantages to individual countries. This has been noted around the world: while there were fewer than 300 online hits for ‘AI arms race’ before 2016, by 2018 that number had grown to 50,000. Newspapers like The Guardian and the Wall Street Journal now write extensively about the phenomenon.Footnote 63

As far as the impact of AI on national security is concerned, the first application that often comes to mind is autonomous weapons. This is currently a much-discussed issue. We therefore begin by examining this particular phenomenon and then broaden our scope to consider other, lesser-known ways in which AI affects security.

2.1 Autonomous Weapons

Autonomous weapons appeal to the imagination and have been the subject of many dystopian novels. The theme of ‘the rise of the machines’ often takes the form of robots deciding to attack humankind. In Chap. 5 on ‘demystification’ we have demonstrated why the fear that robots will become conscious entities and decide that humans are their enemies is really quite unrealistic. However, we have also seen that robots do not have to become conscious to pose a threat to us. What is currently happening in the field of autonomous weapons, and what implications does this have for the international order and thus for the national security of individual countries?

It is actually not easy to define an autonomous weapon. Before we can focus on this issue, it is a good idea to address some of the existing technologies in this area, because there is a huge diversity of them. Israel’s Harpy drone flies autonomously in search of enemy radar systems and is programmed to attack them without having to ask permission (Box 9.3). China has built its own version of this device using reverse engineering. South Korea has deployed a robotic weapon in the demilitarized zone on the border with North Korea that can shoot at moving objects autonomously. Russia is building armed ground robots for conflicts on the European plains. At least 30 countries have defence systems that, when activated, can autonomously intercept incoming threats such as missiles. The US has the Aegis system for ships and the land-based Patriot. Other examples are the German Mantis, the Israeli Trophy and the Russian Arena systems.Footnote 64 In 2017 sixteen countries possessed weaponized drones. Ninety percent of international sales involved Chinese technology.

Box 9.3: Drones and Warfare

Drones were first used in warfare in Vietnam but really took off after 11 September 2001. The US Army used them extensively in Afghanistan. In 2018 Syrian rebels carried out a major attack on a Russian airbase with thirteen drones. Later that year Russia brought the heavily armed Uran-9 system to that same conflict. In August 2018 an attempt was made to assassinate President Maduro of Venezuela with a drone.Footnote 65

So a lot is happening in the development of autonomous weapons. The news reports have created a real stir, and there have been several international campaigns to have such systems banned (see Chap. 7). But there are various obstacles to achieving this.Footnote 66 Several of these are illustrative of broader problems with the global regulation of AI and thus the task of positioning, and so we discuss them in more detail here. They concern the issues of definitions, the dual-use nature of AI and motivating countries to participate in a ban.

We saw in Chap. 2 how difficult it is to define AI. Although autonomous weapons form a fairly specific application of the technology, they are also very hard to define, and this creates difficulties when it comes to establishing regulations governing them. Paul Scharre explains that there is considerable ambiguity about what constitutes an autonomous weapon because there are three different dimensions of autonomy.Footnote 67 The first concerns the nature of the tasks involved. Some are undertaken by humans and others by a machine. A fully autonomous vehicle will be able to do everything itself, but many current cars can already perform some of these tasks themselves, such as cruise control, parking or braking. The latter ability enables the car to take over control from the human driver to prevent an accident. But when is a weapon autonomous? What if it can navigate of its own accord and avoid other objects, but not actually fire a weapon?

Specifically regarding the task of firing weapons, we can question whether a system is still autonomous if a human first has to press a button to activate its ‘attack mode’ and only then is the robot able to deploy deadly force itself. There are also technologies whereby a human marks an object for destruction and the robot subsequently proceeds to attack it until it is destroyed. Is this an autonomous weapon? One way of delimiting autonomy is to allow it only for tasks of a defensive nature. There are systems that can automatically shoot incoming missiles out of the air, a feature that many people will see the benefit of. But is it still a defensive response if that system fires back at the source of those missiles? If this system is moved into the opponent’s territory, it becomes even more difficult to distinguish offensive from defensive use of an autonomous weapon.

The second dimension of autonomy is the role of the human being. Here we apply to the military domain the same framework already discussed in Chap. 6. Semi-autonomous systems are the equivalent of 'human in the loop', where a machine – having completed a task – waits for human input to continue. In supervised autonomous systems a human remains 'on the loop' and can intervene or stop the process. Finally, in fully autonomous systems humans are 'out of the loop' and have no role in the decisions the system makes.

The third dimension of autonomy is the level of intelligence. A traditional landmine basically acts autonomously by exploding when someone steps on it. But few people would call it an autonomous weapon. This is because the level of intelligence required is too low. The mine does not need to conduct any complex calculations before it explodes. It can therefore better be described as 'automatic'. At a higher level a number of variables need to be considered before action is taken. This is what we call 'automated'. Only when the activity becomes much more complex and a system is able to determine independently how to achieve a goal can we speak of 'autonomy'.

The fact that autonomous weapons can be defined in three dimensions makes it all the more difficult to reach international agreement on a definition and to determine how an individual country should position itself in this area. Unlike with other types of weapons, the definition here involves the question of whether autonomy should be granted solely to defensive systems, for example, or also to systems where humans provide general instructions only, to ones where a human only operates the controls or to ones with a low level of inherent intelligence.
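To make these three dimensions more tangible, the sketch below (in Python) encodes them as a simple data structure. It is purely illustrative: the category names follow Scharre's framework, but the example systems and their classifications are our own simplified assumptions, and the cut-off chosen in `is_lethal_autonomous` is only one of many possible definitions.

```python
from dataclasses import dataclass
from enum import Enum

class HumanRole(Enum):
    IN_THE_LOOP = "semi-autonomous"        # machine waits for human input
    ON_THE_LOOP = "supervised autonomous"  # human can intervene or stop
    OUT_OF_THE_LOOP = "fully autonomous"   # no human role in decisions

class Intelligence(Enum):
    AUTOMATIC = 1   # simple trigger, e.g. a landmine
    AUTOMATED = 2   # weighs several variables before acting
    AUTONOMOUS = 3  # determines independently how to achieve a goal

@dataclass
class WeaponSystem:
    name: str
    machine_tasks: set[str]     # dimension 1: which tasks the machine performs
    human_role: HumanRole       # dimension 2: where the human sits
    intelligence: Intelligence  # dimension 3: level of intelligence

# Simplified, illustrative classifications -- not authoritative assessments.
harpy = WeaponSystem(
    name="Harpy-style loitering munition",
    machine_tasks={"navigate", "search", "select target", "attack"},
    human_role=HumanRole.OUT_OF_THE_LOOP,
    intelligence=Intelligence.AUTONOMOUS,
)
patriot = WeaponSystem(
    name="Patriot-style air defence",
    machine_tasks={"detect", "track", "intercept"},
    human_role=HumanRole.ON_THE_LOOP,
    intelligence=Intelligence.AUTOMATED,
)

def is_lethal_autonomous(w: WeaponSystem) -> bool:
    """One possible definition among many: the machine both selects and
    attacks targets without a human in or on the loop."""
    return ({"select target", "attack"} <= w.machine_tasks
            and w.human_role is HumanRole.OUT_OF_THE_LOOP)

print(is_lethal_autonomous(harpy), is_lethal_autonomous(patriot))  # True False
```

The point of the sketch is that any treaty definition has to draw a line through this three-dimensional space; draw the line elsewhere and a different set of systems falls under the ban.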

In addition to definitions, there is another problem with the international co-ordination of autonomous weapons that also illustrates a broader AI issue: the dual-use nature of these systems. This refers to applications that can be used for both peaceful civilian purposes and as weapons in a conflict.

One example is DARPA's Fast Lightweight Autonomy (FLA). Videos of this technology went viral worldwide. Accompanied by the James Bond theme, a swarm of drones can be seen flying through windows into houses and carrying out all kinds of complex manoeuvres in the air. The films caused quite a stir and the movement against autonomous weapons subsequently targeted FLA. Thus far the system has not been equipped with weapons, but it is clear that highly advanced aerial drones that can move around objects and fly in formation will be of great interest to the military. As long as there are no weapons involved, the technology underlying FLA is similar to that in a self-driving car, involving localization, mapping, object detection and dynamic navigation at high speed.Footnote 68 But these same technological features do not make self-driving cars a security threat.

Another example is a drone’s ability to target and track an object. The military could put this functionality to good use – to follow a moving truck, say. But this same technology also has civilian applications; several commercial drones already have this capability and are used, for example, to track and film wedding processions.

Another capability that can be applied to autonomous weapons is the ability to detect human faces or identify specific people. If this is combined with firepower, it becomes very dangerous indeed. But the software that enables it is already being used in many other applications. In fact, such software can be downloaded free of charge from large open-source frameworks such as TensorFlow, complete with support for training the necessary algorithms.
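To illustrate just how accessible this capability has become, the sketch below uses the freely available open-source library OpenCV, whose pretrained face detector ships with the library itself. The image file name is a placeholder, and the detector shown is a deliberately simple, dated one.

```python
# Minimal sketch: off-the-shelf face detection with the open-source OpenCV
# library (pip install opencv-python). The pretrained Haar-cascade model
# ships with the library; "crowd.jpg" is a placeholder image file.
import cv2

model = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("crowd.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns a bounding box (x, y, width, height) for every detected face.
faces = model.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} faces found")
```

A dozen lines of freely available code suffice for the detection step; this accessibility is precisely what makes the dual-use problem so intractable.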

So the technology underlying autonomous weapons, from navigation and object tracking to facial recognition, is already used in a variety of peaceful civilian applications, which makes some kind of general ban difficult to implement. A drone that recognizes someone’s face and then delivers them their package uses almost exactly the same technology as the drone designed to shoot that same person. Moreover, if facial recognition is used to avoid killing people rather than to kill them, is it still reprehensible? Even if this reduces the likelihood of civilian casualties? The wide range of peaceful applications of AI thus makes it far more difficult to restrict in military contexts – especially by comparison with, say, chemical weapons or long-range missiles (Box 9.4).Footnote 69

Box 9.4: Views on the Impact of Autonomous Weapons

Various AI researchers anticipate that the use of autonomous weapons will actually make war less destructive. Nick Bostrom stresses the importance of removing people from the battlefield as much as possible and the prospect of fewer fatalities occurring thanks to the precision of autonomous weapons.Footnote 70 Rodney Brooks points out that, unlike a human being, a robot can afford not to shoot back until after it has been shot at itself.Footnote 71 According to Yann LeCun, the greater precision and potentially less lethal nature of these weapons is turning the military into something more like a police force.Footnote 72 This sounds like a positive development, but according to Frank Pasquale it also brings new dangers. If combat increasingly takes on the character of an international policing mission, with fewer casualties, politicians will also feel less pressure to spare the lives of soldiers and so wars may continue to smoulder and be less easy to end.Footnote 73

The third obstacle in the way of unambiguous international agreements on autonomous weapons concerns the motivation of powerful countries. There is now some international consensus concerning the idea of ‘meaningful human control’.Footnote 74 The Dutch government has mentioned this in a comprehensive position paperFootnote 75 and in late March 2018 reaffirmed that “meaningful human control is always necessary for the deployment of autonomous weapon systems”.Footnote 76 Two ministers noted that “the Netherlands is one of the few countries to have formulated a comprehensive government standpoint on this subject, and has submitted a summary hereof as a non-paper” to contribute towards the debate on autonomous weapons systems as part of the UN Convention on Certain Conventional Weapons (CCW). Eleven principles for policy on lethal autonomous weapons systems (LAWS) were finally adopted under the CCW in 2019, with ‘human responsibility’ second on the list.Footnote 77

However, it will still be difficult to prevent countries with technologically advanced armies from developing weapons with more and more autonomy. As already mentioned, there are various strong international lobbies against such weapons. It is striking that these are largely driven by NGOs and less so by states. In fact, not a single powerful nation has yet supported a complete ban on these weapons. Those which have spoken out in favour include Ecuador, Ghana, Iraq and Pakistan, as well as states without armies, such as Costa Rica and the Vatican. In other words, there are no military superpowers among them, nor any of the front runners in the protection of human rights. Instead, these are mainly countries that fear what stronger nations might be able to do to them and hope to mitigate that threat with a ban.Footnote 78 Although some countries in Europe, such as Austria, are also moving in this direction, without the support of nations with a strong military it will be very difficult to actually implement such a ban.

Scharre also notes that an international treaty is not the most effective tool to counter the use of certain weapons. There are instances of weapons being used despite a treaty ban, and there are also examples of weapons that have not been used despite the lack of a ban. What is important here is whether countries expect reciprocity. The fear that another nation might also use the weapon acts as a deterrent. Countries that do not have this fear are less inhibited in the use of new technologies.Footnote 79 Another deterrent is transparency about the use of a particular weapon, which allows a country to be held accountable. This transparency is difficult to achieve with autonomous weapons. Their autonomous nature is determined not by their hardware or certain distinct physical characteristics, but by their software. This makes it a lot harder to provide transparency about what they do in a conflict situation.

The three issues of defining autonomous weapons, their interdependence with civilian technologies and the motivation of powerful states illustrate how complex it is to reach an agreement on the international co-ordination of this technology. However, this does not make agreement impossible. This case study is illustrative of some of the obstacles to successful international co-ordination of other AI-related issues. By co-operating, individual countries can reinforce their position in this power play and exert more influence over the international agreements for the use of autonomous weapons.

The discussion on the impact of AI on security is, as mentioned, often about autonomous weapons. Because these weapons appeal to the imagination, other applications are unfairly given less attention. However, AI can also influence warfare in other ways, which will be discussed in the following paragraphs.

Key Points: Autonomous Weapons

  • When it comes to AI and national security, much attention is being paid to autonomous weapons since they capture the imagination and also meet with a lot of resistance. Several obstacles make it difficult to reach clear international agreements on this issue.

  • Autonomous weapons can be defined in three dimensions, which prevents the establishment of a generally accepted definition and therefore makes them difficult to regulate.

  • The dual-use nature of many of the technical applications makes it very difficult to restrict or ban the technology.

  • Countries with a strong military are resisting such impediments.

  • Individual countries must take these obstacles into account in the international power play that affects their security at home and abroad.

2.2 Other Military Applications

As mentioned, the discussion on the impact of AI on national security often focuses on autonomous weapons. Because these attract so much attention, other applications unjustly receive too little. In a study for NATO on the influence of AI on warfare, in addition to autonomous robotic systems Matej Tonin also distinguishes 'information and decision support'.Footnote 80 AI can increase the speed of analysis and decision-making in war by shortening the response time of defensive systems, providing more relevant information to decision-makers (giving them a potential advantage over rivals), enabling early detection of cyberattacks and helping identify attempts to spread disinformation. Moreover, AI can improve not only the speed of decision-making but also its quality. Tonin quotes a British officer who observed that, in a particular situation, his forces were "swimming in sensors, drowning in data and starving for insight". AI can help here by, for example, analysing surveillance data, highlighting abnormal patterns or picking up weak signals of potential threats.

It is relevant to note at this point that several very important revelations about military organizations in recent years have come from very basic data sources. Information gleaned from the running and cycling app Strava revealed the location of a secret US military base in Africa, and a top-secret Chinese ship appeared in the background of a photograph taken by a tourist.Footnote 81 In 2021 researchers discovered silos holding nuclear weapons in China by interpreting satellite data. AI could potentially make a significant contribution to the analysis of such basic information sources for relevant military data.Footnote 82

The use of AI for information provision also brings to light broader issues concerning its use in the defence domain. The first is ‘novelty detection’.Footnote 83 This concerns the ability of AI systems to ascertain that certain input falls outside the range of what they have been trained to do. An algorithm trained to find dogs in a picture must be able to recognize a dog that it has never seen before. But when presented with an image of a dolphin it must be able to recognize that that is so different from any dog that it is something entirely new, not categorize it as the type of dog the dolphin most closely resembles. So, the algorithm has to distinguish between ‘different in detail, but very similar’ and ‘highly dissimilar’. This is particularly important in the military domain, where different types of incoming missiles need to be detected accurately, but aircraft must never be identified as missiles.
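A minimal sketch of the problem: the toy classifier below flags an input as novel when its highest softmax probability falls under a threshold. This is a common baseline for out-of-distribution detection, though one known to be unreliable in practice, since neural networks are often overconfident on unfamiliar inputs; the breed names and logits here are invented for illustration.

```python
# Toy illustration of novelty detection: flag an input as "novel" when the
# classifier's best softmax probability falls below a threshold. This is a
# common baseline for out-of-distribution detection, not a production method.
import numpy as np

DOG_BREEDS = ["labrador", "poodle", "beagle"]

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())
    return z / z.sum()

def classify(logits: np.ndarray, threshold: float = 0.75) -> str:
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "NOVEL: input unlike anything in the training data"
    return DOG_BREEDS[best]

print(classify(np.array([4.0, 0.5, 0.2])))  # confident -> "labrador"
print(classify(np.array([1.1, 1.0, 0.9])))  # a 'dolphin': low confidence -> NOVEL
```

In the military setting the cost of getting this threshold wrong is obviously far higher than in a pet-photo app, which is why novelty detection attracts so much attention in the defence domain.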

A second issue concerns the manipulation of data to mislead an opponent. For example, data manipulators can look for 'edge cases', where a weakness in an algorithm leads to a totally different outcome. Laboratory researchers made subtle manipulations to images of buses so that they were identified as ostriches, and did the same with turtles so that they were classified as guns. In another study only a few pixels had to be manipulated for a neural network to identify a picture of an elephant as a car.Footnote 84 We saw in Chap. 6 how this may cause problems for self-driving cars, and such manipulation is even more dangerous in the military domain. Systems could be misled so that they fail to recognize attacks. Conversely, a non-threatening activity could be presented as a threat in order to provoke an attack. Extremely complex deceptions can be conceived where combatants could subtly manipulate data to make a hospital appear as a military facility so that it is targeted by the enemy. If the deception is small and subtle enough, it will be difficult to trace and so it will be nearly impossible to prove that it was not the attacker's intention to bomb the hospital.Footnote 85
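The principle behind such attacks can be shown in a few lines. The sketch below applies the fast gradient sign method (FGSM) to a toy linear 'image classifier': every pixel is shifted by a small fixed amount in the direction that most increases the model's error. Real attacks target deep networks but rest on the same idea; the model, weights and data here are entirely invented for illustration.

```python
# Sketch of the idea behind adversarial 'edge cases': the fast gradient sign
# method (FGSM) on a toy logistic-regression "image classifier". Each pixel
# is nudged by +/- epsilon in the direction that increases the model's error,
# the same principle used against deep networks, here in closed form.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)  # weights of a fictitious trained model (8x8 "image")
x = 0.1 * w              # an input the model confidently labels 'bus'

def p_bus(image: np.ndarray) -> float:
    """Probability the toy model assigns to the class 'bus'."""
    return 1.0 / (1.0 + np.exp(-(w @ image)))

# For a linear model the gradient of the loss w.r.t. the input is proportional
# to w; FGSM keeps only its sign, shifting every pixel by exactly +/- epsilon.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(f"original:  P(bus) = {p_bus(x):.3f}")      # close to 1: 'bus'
print(f"perturbed: P(bus) = {p_bus(x_adv):.3f}")  # close to 0: misclassified
```

The perturbation is bounded per pixel yet flips the classification decisively, which is what makes such manipulations so hard to spot by eye and so attractive for deception.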

The impact of AI as a system technology on the military domain is not easy to predict. The technology can be applied in practice in myriad different ways. This is also one of the reasons why military forces are so interested in it: they do not want to miss out on a major strategic advantage. It is also why it is so important to look further than just the influence of autonomous weapons in this field. Moreover, military innovations do not form the only threats to national security and sovereignty; developments in civilian AI can also give rise to security issues, as the Strava example shows, and so these must also be closely monitored.

Key Points: Other Military Applications

  • AI can contribute to the speed and quality of analyses and decision-making in the military arena.

  • The use of AI to interpret and manipulate data throws up new technical challenges such as ‘novelty detection’ and ‘edge cases’.

  • AI can also be deployed to support all manner of military operations.

  • It is ultimately impossible to predict what impact AI will have on warfare, which is why it is important to closely monitor the use and usefulness of this system technology in the military domain.

2.3 Security Beyond the Battlefield

Issues of national and international security are increasingly becoming intertwined.Footnote 86 In recent years there has been growing attention to the impact of cyberattacks on vital infrastructure. Examples are the rise of ransomware and the major cyberattack on Ukraine in 2017. Such attacks have had ramifications all over the world, including effects on the container shipping company Maersk that in turn led to problems at the Port of Rotterdam. In 2019 the WRR published a report on this threat of digital disruption.Footnote 87 The rise of the use of sensors in physical objects (the 'internet of things') creates new vulnerabilities to AI attacks.Footnote 88

In addition to the digital infrastructure, the flow of the digital information itself is now increasingly at the centre of conflicts and security concerns. We have already described the security implications that seemingly harmless smartphone apps and tourist photos can have. In fact, all manner of everyday information can play a role in countries’ endeavours to outcompete each other. Take the details people share on social media. It is well-known that Russian secret services analysed President Trump’s tweets to create a psychological profile of him.Footnote 89 The information that world leaders, officials or even ordinary citizens post on social media may contain valuable information for foreign rivals.

Not only can AI applications quickly analyse huge quantities of such data, they can also distil new patterns from it. Research by platforms like Facebook has revealed psychological insights about people. One claim is that the consistent use of black-and-white filters on Instagram and posting face-only photos are indicators of clinical depression.

More and more research involves distilling medical data from video images of people.Footnote 90 Various nations are quite likely already collecting as much information as possible from the social media pages of other countries’ citizens and running their algorithms on the data.

As well as gathering valuable information, the manipulation and sharing of data on social media have become the latest battlefield in what Singer and Brooking have termed 'LikeWar'. The terrorist organization IS frequently used social media alongside more traditional military tactics. In fact, its campaign was fought online as well as on the battlefields of Iraq and Syria. It reported on military actions, recruited new members using professional action videos and sowed confusion in the cities it attacked through its own Twitter accounts. Much like the Nazis used the radio for fast communication and to spread confusion in France in 1940 (helping them bypass the impenetrable Maginot Line), so IS deployed social media as part of a twenty-first century Blitzkrieg.Footnote 91 Much of this manipulation was done by humans, but its huge impact was down to the way algorithms can make messages go viral. The physical military conflicts of today are fought simultaneously on a 'digital battlefield' where the belligerents try to frame each other and influence public opinion in their own country and beyond using social media.

AI, and more specifically machine learning, is being deployed in many ways in this 'information war' (see also text Box 9.5).Footnote 92 First of all through the method of microtargeting, which we have discussed in Chap. 3. This involves creating digital profiles of people who use social media to find out what messages resonate best with them. Sentiment analysis and natural language processing (NLP) are used to gain a greater understanding of specific populations so they can be sent targeted messages. One application of NLP is chatbots, which are becoming better and better at imitating people and can thus be used to influence them.
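As a toy illustration of the mechanics, the sketch below scores posts with a miniature sentiment lexicon. Actual influence operations rely on trained language models over far richer data; the word lists and posts here are invented for the example.

```python
# Toy lexicon-based sentiment analysis, to illustrate the mechanics behind
# audience profiling. The tiny lexicon is invented for this sketch; real
# campaigns use trained models over far richer data.
POSITIVE = {"great", "love", "win", "proud", "strong"}
NEGATIVE = {"corrupt", "fear", "failing", "weak", "betrayed"}

def sentiment(post: str) -> float:
    """Score in [-1, 1]: fraction of positive minus negative cue words."""
    words = post.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

posts = [
    "Proud of our strong community, great win!",
    "The system is corrupt and failing, we were betrayed.",
]
for p in posts:
    print(f"{sentiment(p):+.2f}  {p}")

# A microtargeting pipeline would aggregate such scores per user or group
# and select the message variant predicted to resonate best with each.
```

The crude scoring above already sorts audiences into receptive and hostile camps; scaled up with modern NLP models, the same logic drives the targeted messaging described here.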

Box 9.5: Modern Propaganda

Whereas propaganda used to be spread by broadcasting radio programmes into another country to influence opinions and feelings there, today it is spread through social media. People can now even be contacted directly by sending them friend requests and then bombarding them with information. By giving ‘likes’, citizens in far-away countries can participate in a conflict and contribute to the propaganda surrounding it. Online fundraising campaigns make this contribution even more tangible.

The functioning of democratic institutions can be jeopardized by information wars. During the 2016 US presidential election, both domestic and foreign actors (amongst them the Russian state) attempted to influence public opinion through social media. Jeff Giesea worked for Donald Trump's online campaign. In a paper for a NATO journal, he described the tactics it used as 'memetic warfare', after the memes that go viral online.Footnote 93 He drew parallels between these tactics and those employed by the propagandists of IS and Russia.

Another emerging application that deserves close attention is the deepfake. Deepfakes are falsified images or audio clips that are hard to distinguish from the real thing. In 2017 the company Lyrebird presented a hair-raisingly authentic-sounding recording of a conversation between Barack Obama, Hillary Clinton and Donald Trump. In the field of imaging, researchers have succeeded in converting a two-dimensional photograph into a three-dimensional face that could be given distinct expressions. Using ordinary cameras, other scientists were able to capture the facial expressions of an individual talking and then transfer them onto the face of another person (this is called ‘deformation transfer’). Video and audio footage of Barack Obama, of which a great deal is publicly available, can now be used to create fake video clips in which he can be made to say anything the creator wants him to.Footnote 94 Lyrebird claims that it now only needs a few minutes of training data to create realistic deepfake audio fragments.Footnote 95

It has also become possible to create completely new images from scratch using the generative adversarial networks (GANs) discussed in Part I. This technology is used in Hollywood films and computer games to create new objects and environments. But it can generate fake faces of non-existent people as well. These images are now so realistic that many observers are unable to distinguish them from photographs of real people. This means that it is now possible to create footage of events and human actions that never actually took place, but are indistinguishable from actual occurrences. So fake news, manipulation and polarization are new weapons in modern warfare. This is forcing defence forces and others to rethink their position.Footnote 96
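The adversarial principle itself fits in a short sketch. The example below (in Python, using the PyTorch library) trains a generator and discriminator against each other on one-dimensional toy data; face-generating GANs use vastly larger networks and image data, but the underlying game between the two models is the same. The architecture and hyperparameters are our own minimal choices.

```python
# Minimal GAN sketch (PyTorch) on 1-D toy data: a generator learns to mimic
# samples from N(4, 1.25) while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = 4.0 + 1.25 * torch.randn(64, 1)  # samples from the true data
    fake = G(torch.randn(64, 8))            # generator output from noise

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labelling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
    # After training, these should move toward the real mean 4.0 and std 1.25.
    print(f"generated mean {samples.mean():.2f}, std {samples.std():.2f}")
```

Because the generator is rewarded precisely for producing output the discriminator cannot distinguish from the real thing, realism is built into the training objective itself; this is why GAN-generated faces fool human observers so reliably.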

Thanks to the rise of technologies for producing fake material, technology researcher Aviv Ovadya foresees something he calls the ‘infocalypse’, a compound of information and apocalypse.Footnote 97 This will be brought about through the combination of deepfake images, chatbots that can imitate people very accurately (‘laser phishing’) and all kinds of other forms of manipulation that make it almost impossible to distinguish fake from real. Even if the material is recognized as fake with hindsight, realistic-looking falsified images can still cause plenty of damage if people initially believe they are real.

Current phenomena such as fake news and hacked Twitter accounts that spread false reports are but a small taste of what will become possible in the infocalypse. That could deepen social divisions by pitting groups against each other and augmenting distrust in institutions, but it could also lead to general apathy towards news reports when it appears that anything and everything can be manipulated. The philosopher Daniel Dennett even speculates that the end of the modern age of photographic evidence is approaching and that we will return to a world where people rely more on memory and trust than on incontrovertible proof.Footnote 98 These technologies can also be used to stifle critics. An Indian journalist who was highly critical of the government was inserted into a deepfake pornographic video that was distributed by politicians.Footnote 99 More and more countries are using disinformation as a weapon, too. According to researchers from the University of Oxford, while 28 nations carried out disinformation operations in 2017 that number had increased to 70 by 2020.Footnote 100 This phenomenon thus now forms a threat to the proper functioning of democracy.Footnote 101 Which brings us to the broader phenomenon of non-military threats involving AI, our next topic.

Key Points: Security Off the Battlefield

  • In addition to the digital infrastructure itself, the information available through this infrastructure is increasingly a cause of security concerns.

  • AI can be used to distil sensitive information from the increasing number of data sources in the civilian domain.

  • State and non-state actors are engaged in manipulating and influencing people in an information war.

  • All manner of applications of AI, such as microtargeting and deepfakes, are being used in the information war and are forming a growing threat to democracy.

2.4 Digital Dictatorship

The question of how a technology like AI affects different political regimes is also relevant to international security. For a long time, the answer was simple and reassuring: modern technology decentralizes and democratizes. For example, its complexity was seen as a factor in the failure of Soviet central planning. Centralized regimes were thought incapable of generating innovation and resolving the issues of co-ordination.Footnote 102

As we have seen in Chap. 5, from the outset there was a strong ideological current holding that the internet could counter centralized power and free individuals from the grip of the authorities. The Arab Spring that began in 2011 has been partly attributed to the democratizing effect of internet platforms. However, this view of digital technology is changing. The author Evgeny Morozov argues that, while individuals and companies were the first to find their way to digital technology, governments have now discovered it too and they are proving that internet technologies are just as suitable for increasing centralization, control and surveillance.Footnote 103 This applies to the regimes of countries such as Iran, Russia and China, of course, but also to Western security services such as the American NSA.

There are even reasons to believe that AI technology can actually facilitate centralization. It allows governments to monitor their populations cheaply and on a large scale. The Stasi had to employ a huge network of human informants to spy on a section of the East German population. This limited the scope and extent of its activities. Now algorithms can do the job by analysing patterns in much larger quantities of data than were ever available before. This is another aspect of the dual-use nature of AI. It no longer takes huge additional investments in money and personnel to conduct mass surveillance, because to a large extent governments can simply build on the capacities already found in private-sector applications. This also makes it more attractive for them to use AI for such purposes.Footnote 104 The simple awareness that one might be under surveillance can also lead to ‘chilling effects’ and self-censorship by the public.

Previously, decentralization was necessary to co-ordinate the distribution of information. The free market generates an immense number of signals concerning supply and demand that need to be interpreted to set prices, a job no twentieth-century planning office could do better. Now consider navigation apps: it is possible for a central organization to collect all the data they generate and use it to manage traffic and create the optimum flow. The founder of the Chinese platform Alibaba, Jack Ma, argues that AI thus enables better central planning. With sufficient information planners can better understand, predict and manage the economy.Footnote 105

Technologies such as AI offer new opportunities to influence human behaviour and subtly nudge people in a certain direction. The fear is that not only computers will be ‘programmed’ in this way, but also people.Footnote 106 The authors of an article in Scientific American point to the danger of a computerized society with totalitarian tendencies – a digital version of George Orwell’s Big Brother.Footnote 107

Of course, technologies such as AI can also strengthen democracy. In addition to the older ideas of decentralization and participation, more and more parties are using the new instruments for democratic objectives. GVA Dictator Alert is an algorithm that scans flight data to warn when a dictator is landing at Geneva Airport. In Chap. 7 we gave examples of civil society actors who use AI to promote equity and inclusion. There are several projects that are currently using AI to achieve the UN’s Sustainable Development Goals (SDGs) – for example, to improve the position of small farmers in developing countries.Footnote 108 AI is also being used to monitor human rights violations.Footnote 109

At the same time the momentum of the non-democratic effects of AI seems to be growing, driven partly by the rise of authoritarian countries. Political scientists speak of three historical waves of democratization: the first from 1820 to 1926, the second just after the Second World War and the third from 1975 onwards.Footnote 110 Every wave so far has been followed by a democratic reversal or countermovement. Nicholas Wright makes an interesting suggestion here: every setback for democratization was accompanied by a different form of dictatorship. The first wave was stalled by fascism. The second was followed by 'bureaucratic authoritarianism', a term for the kind of dictatorships found in Latin America and elsewhere from the 1960s onwards. Wright claims that the third wave could give way to 'digital authoritarianism'.Footnote 111

According to the annual Freedom House study, however, this democratic reversal has been going on for a good while. For some time, the regimes of leaders like Rodrigo Duterte, Recep Erdogan, Vladimir Putin, Viktor Orbán and Jair Bolsonaro have been referred to as ‘illiberal democracies’, a concept in which new technology does not play a central role. It is possible, though, that digital technologies are now increasingly reinforcing such authoritarian governments.

Not only is a form of dictatorship emerging that relies on digital technology, but this model of governance is also increasingly being exported – and not just to relatively weak democracies in Africa or Latin America, say, but even to developed ones in Europe. Steven Feldstein has developed an AI Global Surveillance Index and revealed that at least 75 countries are using forms of AI surveillance in smart city platforms, facial recognition systems and smart policing. Chinese companies play a key role here, but firms from the US (Cisco, IBM), France (Thales, Teleste), Japan (NEC) and Germany (Bosch) are also contributing their expertise.Footnote 112

A 2020 Amnesty International report on the export of European surveillance technology uncovered the activities of a Dutch company, Noldus Information Technology. It supplied the product FaceReader, which analyses facial expressions, to the Chinese Ministry of Public Security – according to the report, a heavy user of biometric data for mass surveillance.Footnote 113 The global proliferation of such technologies is a problem for the international order and the values that countries like the Netherlands want to uphold.

To shed more light on the nature and extent of digital dictatorship, we end this chapter with two case studies. In them we examine the phenomenon in more detail using the examples of China and Russia, the countries with the most advanced capabilities in this domain.

Key Points: Digital Dictatorship

  • Technologies such as AI can have a decentralizing and democratizing effect. But authoritarian regimes are also increasingly capable of using such technologies for their own ends, and we now speak of ‘digital dictatorships’.

  • The instruments of such dictatorships are increasingly being exported, putting pressure on democracies worldwide.

  • Not only are authoritarian regimes contributing to this, but so too are Western companies.

2.5 Case Study: Digital Dictatorship in China

We saw in Sect. 9.1 how China has embraced AI and is pursuing global leadership through the AIDP. The country also uses this technology explicitly to monitor and control its own population.

It is important to realize that, while AI does play an important role here, it is being used as part of a much broader set of technologies and non-digital methods. One key example is the ‘grid management system’, where ‘grid managers’ are responsible for collecting information about a section of a neighbourhood. The Golden Shield Project develops broader digital technologies for governing the population and co-ordinating the actions of government. Data is also collected through a large network of sensors in the physical environment, part of the Internet Plus project.Footnote 114

Another component of the Chinese data collection strategy is SkyNet (oddly enough, the same name as the malicious machine that turns against mankind in the film The Terminator), a programme to install a nationwide network of CCTV cameras. By 2010 it had already installed 800,000 cameras in Beijing, and in 2015 the police claimed they could monitor 100% of the city. In the same year the National Development and Reform Commission (NDRC), the state planning body, announced plans to monitor all public spaces and leading industries by 2020 with a surveillance system called Sharp Eyes.

AI is required to analyse so many sources of data, and companies such as Hikvision, SenseTime, Yitu and Megvii are developing smart cameras for this purpose. SenseTime wants to be able to monitor 100,000 high-resolution video feeds simultaneously and to identify and track individuals in real time using this technology. In 2018 the police used the system to identify and arrest a fugitive from among 60,000 concertgoers.Footnote 115 Facial recognition is in wide use in China.Footnote 116

Pivotal to the aim of controlling the Chinese population is the famous ‘social credit system’. This is not actually a single system, but comprises a number of regional and national projects. At the national level there is the Xinyi+ Project, in which companies such as Ant Financial (financial affairs), Didi Chuxing (a ‘Chinese Uber’) and Ctrip (a travel agency) are co-operating in the field of transport and rental in order to exclude certain people and offer greater convenience to others, based on their scores. An example of a regional system is found in the city of Fuzhou, where the company JD Finance is using AI to develop a ‘smart city credit platform’.Footnote 117

More and more information is coming to light about the oppression of Muslim Uyghurs in China's Xinjiang province, particularly concerning the 're-education camps' in which more than a million people have been imprisoned. Also relevant is the Strike Hard campaign launched in 2014, which has a strong digital component and uses technologies such as AI. The numerous police checkpoints in the province are equipped with biometric sensors and iris scanners, and can monitor the CCTV cameras installed in the local area. The DNA of many Uyghurs has been collected and they are forced to install the Jingwang app, which not only enables the authorities to track and block their messages but also provides direct access to their phones. The police monitor the population to ensure they have actually installed these 'electronic handcuffs'.Footnote 118 All cars in the province are required to have navigation software installed that runs on BeiDou, the Chinese version of GPS, and drone swarms are used to monitor places where CCTV cameras cannot be installed. According to a paper by the Brookings Institution, in addition to physical prisons China also has "the world's largest open-air digital prison".Footnote 119

China is thus a textbook example of how AI can be used for the goals of authoritarian regimes. It is all the more important to keep a close eye on these developments now that such technologies are increasingly being exported, including by state actors such as the military (the People's Liberation Army) and the Ministry of Public Security, by state-owned enterprises such as CEIEC and by private companies like Huawei, ZTE and Tencent.Footnote 120

These exports are ending up all over the world.Footnote 121 China’s so-called ‘Great Firewall’ is being copied in Vietnam and Thailand. The company Yitu supplies portable cameras with AI for facial recognition to the Malaysian police and tendered to install facial recognition cameras in public spaces in Singapore.Footnote 122 Ethiopian security services use ZTE’s telecommunication products to monitor journalists and activists. Zimbabwe and Angola have both signed AI deals to bolster their own regimes. In Venezuela ZTE has a contract to roll out a national ID card, a payment system and a ‘homeland database’ that will allow the regime to introduce the Chinese social credit system to the country. Surveillance systems are used in government cameras in Ecuador, and in Dubai Chinese technology is used for the Police Without Policemen programme to fight crime with the aid of video surveillance and facial recognition technology.Footnote 123 It should also be mentioned that Chinese companies like Huawei and SenseTime are entering into partnerships with universities around the world, including some in the West.

2.6 Case Study: Digital Dictatorship in Russia

The other major developer and exporter of technologies in support of digital dictatorships is Russia. This activity is founded on a long tradition of controlling information that goes back to the time of the Soviet Union and began to be reinstated quite soon after the fall of that authoritarian regime. In 1995 a law was passed that allows the FSB, the successor to the KGB, to monitor all private communications. Since then, a series of acts has increased the government’s grip on RuNet, as the Russian internet is called. These allow the authorities to block websites, register bloggers, store data and give the FSB access to encrypted data. The pressure put on VKontakte, Russia’s largest social media platform, to provide access to information about opposition leader Alexei Navalny’s presidential campaign (among other things) was enough to force the company’s boss to sell his shares. He later founded the chat app Telegram, which also clashed with the Russian authorities because of the encryption it uses.Footnote 124

Like China, Russia exports its instruments of digital dictatorship abroad (see Box 9.6). The importance of digitalization in conflict situations has long been recognized there. The Russian general Valery Gerasimov is said to have emphasized the use of the asymmetrical possibilities offered by the internet for international competition. The FSB has since directed 75 educational and research institutions to study how information can be weaponized. A NATO researcher summarized the strategy as the '4Ds': "dismiss the critic, distort the facts, distract from the main issue and dismay the audience".Footnote 125 Traditional and online media channels such as Russia Today, Sputnik and Baltica are playing an important role in this information war. Russia's seizure of Crimea in 2014 has been dubbed 'Schrödinger's War' because of the way it exploited disinformation, confusion and hybrid warfare.Footnote 126

Box 9.6: The Russian Toolbox

Russia uses a variety of instruments for internal control. One key surveillance system is SORM, the System for Operative Investigative Activities. Under it, internet service providers are required to install a special device that enables the secret services to copy and monitor all their online traffic.Footnote 127

In addition to hardware, Russia also carries out digital control using people; paid troll factories, so-called ‘hacktivists’ and the notorious Internet Research Agency (IRA) in St Petersburg are used to project online influence at home and abroad. AI plays an important part in the strategy, too. The country started using Safe City in 2015. This system recognizes faces and moving objects on video images captured by numerous cameras and shares the data directly with the authorities. Between 2012 and 2019 the country invested US$2.8 billion to equip all the host cities of the 2018 FIFA World Cup with the system. More than 100,000 cameras in Moscow are linked to facial recognition software provided by the company NTechlab.Footnote 128

The Russian security services also use the Semantic Archive Platform supplied by the software company Analytical Business Solutions to collect, process and analyse open-source data.Footnote 129

There are clear differences between the instruments of digital dictatorship exported by China and Russia. While China's so-called '50 Cent Army' of online commentators is deployed to spread positive messages about the nation, Russia is mainly concerned with spreading negative news in countries where it wants to sow discord. Nina Schick draws a parallel between this current policy and the 'active measures' during the time of the Soviet Union. Their aim was to change others' perception of reality to such an extent that they were no longer able to draw sensible conclusions about how to defend their own interests.Footnote 130

A second difference is that the Chinese export product is technologically far more advanced and expensive, because it enables almost total control of the internet. Russia’s tools rely more on specific hardware and the use of intimidation and legislation to control the population. According to a study by the Brookings Institution, the Russian product may appeal more to poorer regimes that lack the resources to control the entire internet in their country.

The SORM system is widespread in the countries that were formerly part of the Soviet Union, such as Kyrgyzstan, Belarus and Kazakhstan. The companies that export it, Protei and Peter-Service, also have telecom businesses in the Middle East and Latin America as customers. The Semantic Archive Platform is used in Belarus, Ukraine and Kazakhstan.Footnote 131

One final characteristic of the digital dictatorship export drive concerns Russia's policy towards international institutions and forums. This policy attempts to blur the line between cybersecurity and information control, so that countries concerned about the former will also want to do more about the latter. Moscow has submitted documents to the UN proposing an 'International Code of Conduct for Information Security' that, if implemented, would pose a threat to human rights and international law. Russia also wants to bring the internet under the control of the ITU, and hence of states. Finally, it wants the internet cable infrastructure between the BRICS – the emerging economies of Brazil, Russia, India, China and South Africa – to bypass the US.Footnote 132 We looked at the importance of such international forums in the first part of the chapter on competitiveness. The Russian policy just described illustrates the extent to which negotiations in those arenas are intertwined with security issues.

Key Points: Digital Dictatorships in China and Russia

  • China and Russia are world leaders in digital dictatorship and the export of the instruments used to enable it. Each employs a different set of instruments to achieve authoritarian objectives, but both are exported all over the world.

  • The Chinese model is technologically advanced and primarily aimed at encouraging positive coverage of the regime.

  • The Russian model is less advanced and uses more hardware and analogue forms of intimidation. Abroad, it aims mainly to create confusion and conflict.

  • The phenomenon of digital dictatorship is complex and multifaceted and deserves serious attention.

3 In Conclusion

The overarching task of international positioning is all about a country's place and role in the international arena. This includes how it interacts with other nations, but also with non-state actors such as companies and criminal organizations. In this chapter we have seen that there are several international arenas where countries can and must take a stance on matters such as autonomous weapons, the regulation of AI and standardization, plus the challenges they all entail. In the final part of this report, we consider how this relates to the idea of 'AI diplomacy' and its implications for policy.

We have also seen how phenomena in which AI plays a role, such as the information war and digital dictatorship, are a cause of genuine concern. They pose a threat to freedom and democracy worldwide, but also to national security. It is important to invest in responses to these phenomena and to formulate answers to them.

The two issues of competitiveness and national security are clearly intertwined in the context of AI. In this chapter we have highlighted the importance of AI diplomacy and raising awareness of the risks the technology poses to national security.