Now that we know what AI is and have seen how the technology has made the transition from the lab to society in recent years, we turn our attention to the process of embedding AI into society. What is required to incorporate AI into our society? To answer that question, this chapter presents a framework within which AI can be viewed as a particular type of technology, namely a system technology, with a number of historical precedents. By viewing AI in this way, we can draw various conclusions from the history of other system technologies. That in turn provides a basis for reflecting on what we need to do with AI and how we can address the many issues associated with it. It is not our intention to imply that history always repeats itself or that technological development has deterministic characteristics. Rather than setting out a rigid framework, we identify general patterns that shed light on the present, while recognizing that the past and the present differ. By adopting this approach, we seek to look beyond the current situation and thus beyond the whims of the day.

Various prominent commentators have drawn parallels between AI and other technologies. According to researcher Andrew Ng, the impact of AI is “comparable to that of electricity a century ago.”Footnote 1 Google’s CEO Sundar Pichai and his predecessor Eric Schmidt have also compared AI with electricity. Indeed, Pichai went so far as to liken AI to fire.Footnote 2

In the policy world too, similar comparisons are often made. In a paper on the strategic implications of AI, Michael Horowitz wrote that it is not an isolated technology but similar to general-purpose technologies such as electricity and the internal combustion engine.Footnote 3 The breadth of AI’s potential applications has also been highlighted by the European Commission’s European Political Strategy Centre: “It is hard to think of any sector of society that will not be transformed by AI in the years ahead.”Footnote 4 The EU accordingly regards AI as a ‘key enabling technology’ (KET).

In short, many authors and organizations have hinted at similarities with earlier technologies. But few have gone on to make a detailed comparative analysis. We therefore seek to do that in this chapter. To that end we consider the implications of placing AI in the same bracket as technologies such as electricity. We discuss the literature on various types of technology, focusing particularly on the concept of general-purpose technologies, and we introduce the term ‘system technology’. By tracing the historical development of system technologies, we identify a number of general patterns in the way they are embedded in society. From there we define five overarching tasks associated with the process of societal integration. In Part 2 of this report, we consider how each of the five overarching tasks applies to the embedding of AI within society.

1 Classification of Technologies

Academics have long been interested in how different types of technology exert a general influence over the economy and society. An early example is the Kondratiev wave theory, in particular as elaborated by Joseph Schumpeter. He observed that periods of high economic growth alternate with periods of lower growth, a pattern he attributed to the effect of new technologies; sets of new technologies periodically boosted growth, after which the effect gradually waned over time. According to Schumpeter, such dynamism was inherent to a capitalist market economy: “it is essential to understand that capitalism is an evolutionary process … ‘industrial mutation’ … is constantly bringing about revolutionary change to the structure of the economy from the inside, constantly destroying the old structure and creating a new one.”Footnote 5 Such reasoning forms the basis of the familiar concept of creative destruction.

In his acceptance speech when presented with the Nobel Prize for Economics in 1971, Simon Kuznets introduced the idea of ‘epochal innovations’ driving periods of great economic development. Innovation scientists Carlota Perez and Chris Freeman have written about a similar phenomenon, which they refer to as ‘new technology systems’ and ‘technological revolutions’.Footnote 6 A new technology system is a powerful and conspicuous cluster of new and dynamic technologies, products and industries that lead to major change throughout the economy and ultimately to economic growth. Perez has identified five such clusters since the Industrial Revolution, including the eras of steam power and railways, of steel and electricity, of oil, cars and mass production and of information and telecommunications. She argues that each of these brought its own ‘techno-economic paradigm’: a way of thinking and acting, leading to the relevant technologies becoming integral to the fabric of society.Footnote 7 Alessandro Nuvolari has made a significant addition to Perez’s theories by emphasizing that the observed effects are attributable not so much to individual technologies as to blocks of radical innovations that together bring about revolution.Footnote 8 Some researchers accordingly take the view that innovation consists not of the development of major new things but of the combination of things that already exist.Footnote 9

1.1 General-Purpose Technologies

AI can be classified on the basis of its general, transformative impact on society. Here it is useful to view the technology in relation to the concept of the general-purpose technology (GPT). GPTs are technologies whose potential applications are not specific, like those of a lawnmower, toaster or microscope, but generic, insofar as they lend themselves to countless, highly diverse purposes. GPTs can therefore have a major influence on the economy and society. Timothy F. Bresnahan and Manuel Trajtenberg introduced the concept in an article published in 1992,Footnote 10 which cited three criteria for the classification of a technology as a GPT. First, a GPT is highly pervasive, being utilized in numerous sectors, production processes and products. Second, there is great scope for its technical improvement, meaning that the cost of the technology keeps falling and its efficiency keeps increasing. Third, a GPT spawns numerous ‘innovational complementarities’, leading to generalized improvements in economic productivity.

A large body of literature on the concept of the GPT is now available. However, that has not led to the adoption of a uniform definitionFootnote 11 or consistent use of the term. Some authors recognize only a small number of historical GPTs, while others argue that there have been many throughout human history, going back as far as the domestication of livestock and the forging of bronze. One author suggests that the literature identifies twenty-eight technologies as GPTs.Footnote 12

Another topic of debate amongst academics is the existence of technologies that have great societal impact but are not particularly generic. Examples include the printing press and the steamship: technologies whose applications are limited but have radically changed society. The technologies most widely recognized and cited as GPTs are the steam engine, electricity, the internal combustion engine and IT.Footnote 13

Notwithstanding the qualifications made above, a number of interesting studies in recent years have related AI explicitly to the concept of the GPT. Following a conference organized by the US National Bureau of Economic Research (NBER) in 2017, for instance, a collection of papers entitled The Economics of Artificial Intelligence was published in 2019. The first part, entitled ‘AI as a GPT’, includes contributions by renowned technology researchers and economists and contains various interesting analyses that we draw on in this report, although – in keeping with the nature of the original conference, but in contrast to our own focus – they are concerned primarily with the macroeconomic effects of AI.

Also of interest in this context is the thesis by Jade Leung of Oxford University, entitled Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies. In this she places AI alongside aerospace technology, biotechnology and cryptography as an example of what she calls ‘strategic GPTs’, and in that context emphasizes the relationship between governments and new technologies, particularly in the defence sector. Leung identifies three key actors here, the government, business and the research community, and demonstrates that each has different aims, instruments and limitations, which may converge in certain phases but are at odds in others.

Various researchers have recently sought to place AI in a broad historical perspective without making explicit use of the term ‘GPT’. In a polemic on Andrew McAfee and Erik Brynjolfsson’s famous book The Second Machine Age, Carlota Perez wrote a nine-part series of articles entitled Second Machine Age or Fifth Technological Revolution? In these she explores how today’s digital technologies – including AI – compare with previous technologies.Footnote 14

All of these studies, and particularly the perspective developed in them, are relevant to the theoretical framework we use to view AI in this report. We additionally draw on a number of more empirical studies of the effects of specific technologies. Sarah A. Seo has written about the best-known application of the internal combustion engine, namely the motor car. She demonstrates how this symbol of freedom has simultaneously led to an enormous increase in the power that the state – particularly the police – has over citizens’ private lives.Footnote 15 In a general survey of a series of technologies ranging from tractors and margarine to electricity and GMOs, Calestous Juma investigates the dynamics of social resistance to new technologies.Footnote 16 This report thus draws not only on research into GPTs but also on analogies with recent technologies such as genetically modified organisms (GMOs) and nanotechnology, which have interesting parallels with AI.Footnote 17

1.2 AI as a GPT

The question we need to ask at this point is whether AI can actually be regarded as a GPT. The answer seems quite clear: although its global impact is still in its early stages, it already appears that AI is indeed a GPT. If we consider Bresnahan and Trajtenberg’s three criteria for classification as a GPT, a strong case can be made that all three apply to AI.

The first of these criteria is pervasiveness. Although AI’s penetration of the economy and wider society has gathered pace only in recent years, the technology is already used in a variety of sectors and products. Earlier in this part of the report (2.2), we presented a range of examples illustrating how AI is being used in manufacturing, agriculture, the public sector, entertainment, financial services and medical practice. Given that versatility, it is already apparent that AI is well on the way to pervading society and the economy.

The second criterion is inherent potential for technical improvement, leading to lower cost and increased efficiency. Again, it is evident that AI passes this test. In Chap. 1 we highlighted how Moore’s Law states that computing power doubles every 2 years, opening the way for the further improvement of AI technologies. We also saw how scientific research has fuelled the development of new and improved technologies. As a result, the application of AI has passed numerous milestones in recent years. Furthermore, as highlighted in our discussion of the future of the lab, promising new technologies are being developed, which are expected to further boost the performance and efficiency of AI.

Finally, classification as a GPT depends on the presence of complementary innovations that lift general productivity. Numerous signs of a positive influence on general productivity can already be discerned, but AI is simply too young for us to demonstrate conclusively the existence of complementary innovations. Nevertheless, various authoritative research bodies and consultancies, including Accenture, PwC, McKinsey and Deloitte, have forecast major productivity increases over the decade ahead. We set out the three defining characteristics of system technologies in Fig. 4.1.Footnote 18

Fig. 4.1 The three defining characteristics of system technologies: (1) pervasiveness, (2) technical improvements, (3) innovational complementarities

1.3 AI as a System Technology

We can conclude, then, that AI satisfies the three criteria for classification as a general-purpose technology. The GPT concept and the wealth of literature considering AI as such a technology provide useful starting points to understand what kind of technology we are dealing with. Nevertheless, we have chosen not to apply the term ‘GPT’ here. Rather, we have elected to define AI as a ‘system technology’. That choice reflects significant focal differences between our analysis and the literature on GPTs.

Firstly, the GPT literature from the earliest Kondratiev wave sources to the recent NBER study has a strong focus on the macroeconomic effects of the technologies in question. Many researchers seek to quantify the effects of the technologies they study. That gives rise to debate as to whether and how a GPT can be shown to support a prolonged increase in economic growth. Given the huge number of variables to be accounted for, a model capable of demonstrating such an effect has to be extremely complex. By contrast, we have chosen to concentrate not on the quantitative effects of system technologies but primarily on the qualitative changes they bring about.

Secondly, the literature on GPTs pays particular attention to historical classifications. As indicated earlier, there is considerable debate as to how many historical technologies may be considered GPTs. One researcher recognizes dozens, Perez distinguishes five clusters, authors such as Chandler refer to three Industrial Revolutions,Footnote 19 Schwab identifies four and Brynjolfsson and McAfee speak of two ‘machine ages’. Furthermore, many authors make use of highly schematic timelines with precise start and end dates for individual technologies. This report differs from those approaches in that we refrain from introducing such demarcations. Because we are concerned mainly with qualitative impact rather than quantitative effects, we do not need to commit ourselves to a strict classification system or definite start and end dates for technologies. What we are seeking to do is highlight general patterns. To that end we concern ourselves primarily with a small number of previous system technologies – the steam engine, electricity, the internal combustion engine and the computer – and draw pragmatically on historical sources to identify relevant parallels.

Another reason for not adopting the term ‘GPT’ is that it emphasizes a technology’s versatility. We prefer ‘system technology’ because we wish to emphasize the systemic nature of certain technologies and to broaden the focus to their systemic effects on society. In the context of ‘system technology’, therefore, the word ‘system’ has two implications. First, it implies that the technology consists of a system with multiple components. Electricity, for example, works in conjunction with generators, cables, batteries and so on. Similarly, AI is part of a wider technical system of data and hardware. The second implication of ‘system technology’ is that the technology influences a variety of systems and processes within society. Exercising such influence involves a complex process of adaptation, trial and negotiation. In other words, our chosen term reflects the process of societal integration and the associated qualitative effects.

1.4 Similarities and Differences Between AI and Earlier System Technologies

AI is a system technology and therefore comparable with earlier technologies of that type, such as the steam engine, electricity and the internal combustion engine (Fig. 4.2). We can characterize AI even more precisely, since particular characteristics make it more similar to one of those technologies than to another in certain respects. For example, the internal combustion engine and the steam engine are tangible, whereas AI, like electricity, is to some extent intangible. It does not exist in isolation but only as part of a product or service. In that sense, devices such as toasters, lamps and radios that work by means of electricity are comparable with thermostats, watches and machines that work by means of AI.

Fig. 4.2 AI as a new system technology: the steam engine, electricity, the combustion engine, the computer and artificial intelligence arranged on a timeline running from 1700 to 2000, with artificial intelligence highlighted

Another respect in which AI is more like electricity than the internal combustion engine is that it is ‘technology-radical’ rather than ‘use-radical’. The descriptor ‘technology-radical’ applies to technologies driven primarily by technical and scientific progress; their development is propelled by the curiosity of researchers, without any clear notion of how or for what purpose the technology will ultimately be used. By contrast, ‘use-radical’ implies a clear understanding of the applications from the outset, with commercial factors playing a role early on. The development of use-radical technologies is goal-oriented. That was the case with the internal combustion engine. As with electricity, researchers were working on AI long before people recognized the lucrative applications we are now aware of.

A distinction can also be drawn between system technologies in which governments play an obvious role from the start and those whose development has no such involvement. The first group includes technologies developed specifically for defence purposes and dependent on the defence sector for their further application and development. This differentiates them from ‘civilian-first’ technologies, whose development is attributable mainly to their economic potential. Governments have more control over the development of the first group of technologies than the second. Space technology is an example of a technology developed with direct government involvement, while biotechnology is an example of one whose development fits the second model. However, both are examples of what Jade Leung calls strategic GPTs.Footnote 20

With AI, the US military research agency DARPA was a key financier in the early stages. Nevertheless, early government-funded AI research was of a fundamental nature, and military applications represent only a small portion of the technology’s full range of uses. In that respect AI is more akin to biotechnology than to aerospace technology. However, it differs from biotechnology insofar as the latter’s developers are largely attached to major (academic) laboratories, whereas innovation in the field of AI is more decentralized. That has implications for researchers’ ability to define universal standards.

It is important to consider not only such technical similarities and differences between AI and other system technologies, but also how AI compares in terms of its societal and temporal context. Take the role of the government, for example. The steam engine was developed in a laissez-faire climate in the UK, with the government playing only a very limited role. On the other hand, the combustion engine and the motor car were developed in an era when government economic policy was led by Keynesian thinking. Although governments now exercise considerable influence over the economy by means of standardization and legislation, AI emerged at a time when there was significant resistance to strong guidance of the economy. It is important to bear those circumstances in mind when seeking to identify historical patterns that are instructive in relation to AI.

The societal context of AI also differs from that of earlier technologies in terms of the mobilization of social actors. Increasing prosperity and the progress of democratization have empowered more people to express themselves in the public arena. Whereas enterprises and governments could once shape society with relative ease, nowadays civil society, the academic community, individuals and the media have much more influence than in the past. The mobilization of these actors therefore plays a more significant role in relation to AI and its integration into society than it did in relation to earlier system technologies.

This phenomenon ties in with what Trajtenberg calls the ‘democratization of expectations’: factory workers during the Industrial Revolution had little power because most struggled to make ends meet. We return to this point in Sect. 4.5, in relation to the Luddites. Today far more people participate in public life and workers have much better representation. Moreover, people are less inclined to bear the cost of technological change while also having greater expectations in terms of sharing in the benefits of such change.Footnote 21

The world today is not only more democratized than in the past but also more globalized. Consequently, the issues associated with AI are global from the outset. The extensive nature of modern markets and the consequently wide geographical impact of AI’s applications are relevant in this context, as is the existence of all manner of international constraints such as trade agreements, human rights and technical standards. Interestingly, the rise of earlier system technologies has often spurred the formation of new international organizations for standardization,Footnote 22 and these are now playing a role in relation to AI. Examples include entities active in the fields of telecommunications and the internet, standardization bodies such as the ISO and international engineering associations such as the IEEE. Although the development and embedding of earlier technologies had an international dimension, the significance of that dimension has increased over time under the influence of globalization.

One final difference between the development context of AI and that of earlier system technologies is the increased level of organization and communication amongst scientists. The scientific community was not well integrated at the time of the steam engine’s development, whereas academic organizations, codes of conduct and standards now exert significant influence.

1.5 The Techno-Economic Paradigm of AI

Finally, Carlota Perez’s notion of the techno-economic paradigm warrants attention.Footnote 23 She argues that major technological change leads not only to new products and services but also to new ways of thinking and working and new principles of organization. For example, the Industrial Revolution led to the rise of factories, while electricity enabled ‘networked’ production. Similarly, the invention of the internal combustion engine gave us not only cars but also the conveyor belt. Fordism, Taylorism and just-in-time production are all examples of such new organizational principles. Although it is too early to characterize the techno-economic paradigm of AI in definitive terms, we can already discern certain outlines that follow earlier forms of digitalization but also exhibit new features.

We believe that three aspects of the techno-economic paradigm of AI are already distinguishable. The first relates to changes in the nature of objects and products. As discussed at the end of the previous chapter, in the digital domain we are dealing not so much with end products as with semi-finished ones. A digital product is never finished. Unlike traditional products and services, which ultimately leave the factory and are sold, digital products are constantly being revised and adapted. By means of updates, digitally-enabled objects such as computers, cars, cameras and medical devices are always changing. In the words of Kevin Kelly, everything is in a continuous ‘state of becoming’.Footnote 24 Or, as Luciano Floridi puts it, ‘things’ are being replaced by ‘-ings’, such as interact-ing, process-ing, network-ing, do-ing and be-ing.Footnote 25

Related to this is the phenomenon that physical objects that acquire a digital aspect cease to be discrete entities. In this regard Adam Greenfield highlights porosity as a common characteristic of modern-day technologies. The boundaries between objects and between user and platform, and even the walls of our homes, have become porous due to bilateral interconnection and intermingling. Numerous actors are therefore involved with and present in all those products. The changes to the nature of physical objects raise a variety of security, privacy and responsibility questions.

A second feature of the techno-economic paradigm of AI is, paradoxically, that while objects associated with individuals are becoming more transparent, much of the technology is becoming invisible. At a meeting in Davos in 2015, Eric Schmidt predicted that the internet would disappear. He did not mean that it would fall into decline but was referring to an idea derived from an influential 1991 article by Mark Weiser entitled The Computer for the 21st Century.Footnote 26 That article introduced the concept of ‘ubiquitous computing’, an omnipresent architecture of digital technology. According to Weiser, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” Hence, “computers can disappear into the background”.Footnote 27

Luciano Floridi has made the same point using a metaphor. He suggests that we are now living on the ‘piano nobile’, the central upper storey of a Renaissance home visible from the outside. However, below us are numerous servants – in our case digital servants – at work in the service rooms.Footnote 28 An interesting feature of this spatial metaphor is that it emphasizes the existence of a vertical structure. The building has multiple superimposed levels, not all of which are visible. Benjamin Bratton sees in such verticality the core of digital technology.Footnote 29 He argues that we used to live in a horizontal world, with people, objects and countries adjacent to each other on the map. Digitalization, however, has introduced a vertical structure, the layers of which are formed by internet addresses, cloud services and data centres running through everything largely unnoticed. In the world of technology, the ‘stack’ is a familiar concept: an entity made up of superimposed layers of hardware, software, networks and applications. The existence of that largely invisible layering raises questions regarding power relationships and dependencies.Footnote 30 José van Dijck uses another metaphor to describe the vertical structure of digital technology. She refers to the treelike structure of platformization, focusing attention on the concentration of power associated with, for example, vertical integration.Footnote 31

One final aspect of the techno-economic paradigm of AI that warrants attention relates to Floridi’s conception of technology. He argues that the idea of technology as an instrument is problematic because it suggests that a person uses an instrument and by doing so exercises influence over the outside world. That obscures the fact that much of our technology today acts not on an external physical world but on other technologies. Our attention should therefore be directed towards that ‘inter-technological’ dynamism. Floridi calls technologies that act on other technologies ‘second-order technologies’. One example is a brake, which acts on the wheel of a car. In that case the process is activated by a person pressing the brake pedal.

However, the world of AI is complicated by the existence of ‘third-order technologies’: technologies that cause other technologies to act on yet other technologies, without human intervention. In an autonomous vehicle, for example, the decision to activate the brake is taken by the vehicle’s control system. Wherever an AI system can make decisions autonomously, a third-order structure may be formed. Many road-traffic penalty systems are already characterized by such structures. A vehicle is photographed infringing a traffic regulation – breaking a speed limit, for example – triggering the issue of a penalty notice, which is sent to the address of the vehicle’s registered keeper. The autonomy that technology acquires with the integration of algorithms and that is ultimately integral to the definition of AI used in this report gives rise to questions about matters such as human control, responsibility and legal liability.Footnote 32

Key Points – AI as a System Technology

  • There is a large body of literature characterizing innovative technologies as ‘epochal innovations’, ‘technological revolutions’ and ‘general-purpose technologies’. A general-purpose technology (GPT) is distinguished by pervasiveness, great potential for technical improvement and complementary innovations. AI has all three characteristics.

  • In this report, we refer to AI as a ‘system technology’. Unlike the literature on GPTs, we do not apply a rigid classification, but instead emphasize qualitative characteristics and their impact on society.

  • As a system technology, AI is comparable with technologies such as the steam engine, the internal combustion engine and electricity. In some respects, it resembles electricity more than the others. Over time, the societal context in which system technologies develop has changed.

  • AI is associated with a distinct techno-economic paradigm characterized by continuous change to products and services, a largely hidden vertical structure of hardware and software, and the potential for technology to act autonomously.

2 The Societal Integration of System Technologies

Having defined AI as a system technology, we now consider what is required for its integration into society. By analysing the history of earlier system technologies, we identify a number of characteristic patterns that can be instructive in relation to AI. In this section we consider the general lessons of the past as a precursor to examining the five overarching tasks a society faces in relation to a new system technology.

2.1 Co-evolution of Society and Technology

The process of societal integration or ‘embedding’ involves, first of all, prolonged co-evolution of society and the technology concerned. Such a process requires practice, experimentation and negotiation, all of which take time. That immediately places the strongly polarized debate regarding AI in perspective. There are, for example, techno-optimists who believe that AI can fundamentally enrich society in the short term with autonomous vehicles, sophisticated medical diagnoses and automated production systems. By contrast, sceptics argue that AI is overhyped; they highlight the lack of evidence regarding the technology’s impact to date and the continual revision of predictions about matters such as the speed at which applications like autonomous vehicles will be realized.

Both viewpoints contain an element of truth. As the optimists suggest, AI has many potential applications. However, it would be a mistake to suppose that integrating these into society will be straightforward. Sceptics rightly draw attention to the problems that AI presents in the foreseeable future, but those problems do not justify generalized scepticism about the technology. A system technology necessitates a bilateral process of social and technological adaptation and that takes time, even in the modern era of rapid technological development and globalization. Although technologies nowadays spread around the world more quickly than in the past, embedding them, ensuring that they work and that people trust them all depend on societal processes that are not necessarily faster-moving now than they used to be. Such processes tend to proceed in fits and starts, and often span decades.

2.2 Unpredictable Development and Impact

A related observation is that the introduction of a new system technology is to a large extent an unpredictable process. New technologies are often used for purposes other than those for which they were originally intended or initially adopted. Don Ihde accordingly refers to ‘multistability’ and Wiebe Bijker to ‘interpretative flexibility’.Footnote 33 Cars were originally used for sport and medical purposes in the belief that the ‘thin air’ breathed when driving at speed was good for the lungs.Footnote 34 Similarly, Thomas Edison did not develop the gramophone with entertainment in mind but envisaged his ‘talking machine’ as a business tool akin to a dictaphone.Footnote 35

Shoshana Zuboff refers to the inability to predict accurately how a technology will be used and the consequent underestimation of its effects as ‘horseless carriage syndrome’.Footnote 36 Major technological revolutions involve unpredictable novelties, an understanding of which is inevitably shaped by the familiar. Hence, the car was initially seen as a horseless carriage. By regarding it as a more efficient version of something familiar, people underestimated both the car’s ultimate impact on society and the associated hazards. From a present-day perspective the thinking of the time may seem naïve, but we could well be making the same mistake when we speak of ‘autonomous vehicles’. We may be guilty of viewing a new technology merely as an enhanced version of something we know, whose true impact we are unable to foresee.

Another misconception prevailed at the time of the car’s introduction in the early twentieth century: it was widely assumed that the new vehicles would reduce urban pollution.Footnote 37 The reason was that horse-drawn transport generated large amounts of animal droppings, which caused unpleasant odours and spread disease. The motor car was consequently seen as a faster and cleaner form of transport. The removal of horse dung from the urban environment did indeed make cities cleaner and more pleasant. However, people failed to realize that the car would cause its own forms of pollution and its own liveability challenges. The history of its introduction thus illustrates that new technologies often have unintended side-effects. The installation of running water and domestic sewerage was originally intended to prevent disease, but also relieved women of one of the most laborious domestic tasks: fetching and disposing of water.Footnote 38 An unintended side-effect of electric lighting was a significant fall in deaths in domestic fires, many of which had been associated with oil lamps.Footnote 39

Moreover, technological changes can also lead to behavioural changes whose effect is the opposite of what was originally intended. For example, energy-saving light bulbs were developed to reduce energy consumption but in fact increased it because people started using them in places that previously had no lighting, such as gardens.Footnote 40 This is what Edward Tenner calls the ‘rebound effect’ of technology.Footnote 41 Another example is the introduction of domestic appliances, which made housework much less physically arduous but also raised expectations – regarding clean clothing, for example – and thus increased workloads in the home.Footnote 42

The unpredictability of system technologies stems in part from the long-term structural changes they bring about, which are impossible to foresee. Railways had a major impact on urban planning, for example, because their arrival meant that people no longer had to live within walking distance of their work. Later the car helped to shape youth culture in the 1960s and enabled new leisure facilities such as drive-in cinemas, drive-through restaurants, motels and roadside diners.Footnote 43

All these uncertainties have implications for AI’s integration into society. It is therefore important to recognize that many developments cannot be predicted. We must be very cautious about framing scenarios in definite terms or making linear extrapolations from the past because such approaches are inherently liable to disregard the unexpected effects of new technology.

2.3 Impact on Civic Values

That lesson underpins our decision to consider AI’s impact on civic values on the basis of structural overarching tasks. Our rationale is that a system technology’s effect in this regard is impossible to predict in definite terms and is often unclear. That is evidenced by the examples presented above: the effect of the car on urban liveability, the effect of electricity on female emancipation and the effect of railways on town planning. The general nature of a system technology makes it impossible to determine what civic values it will affect – in fact, such technologies have the potential to influence them all. In that respect AI is like any other system technology. We can, for example, be confident that it will influence security and health, autonomy and freedom, civil rights and the rule of law, justice and inclusion. However, it is impossible to predict what form that influence will take.

Nevertheless, numerous attempts are being made to identify the values, principles and rights influenced by AI. Such initiatives are an important means of surveying topical issues as a basis for targeted intervention. But if we want to protect our social values in the long term, it is also necessary to look beyond the present and AI’s current influence. Our approach, centring on societal integration, is intended to contribute towards the current discourse by focusing attention on the long-term process whereby society and technology influence one another, with potential implications for all civic values.

2.4 Regulation and Success Are Not Mutually Incompatible

One final general observation is in order regarding the societal integration of system technologies: there is no inherent tension between civic values and their regulatory protection on the one hand and the economic success of a technology on the other. As we shall see in the next chapter, the frequently cited incompatibility of regulation and success is a myth. The history of technological revolutions shows that normative parameters and innovation can readily coexist. Of course, regulations can sometimes inhibit technological development by imposing explicit prohibitions, as with the use of nuclear technology for military purposes. Often, though, regulation and standardization help make a technology more reliable. That in turn increases public and corporate willingness to utilize and embrace it.

Again, the history of the motor car is instructive here. Over the years a complex system of automotive regulations and standards has been developed, involving mandatory testing and certification, supervisory bodies, safety requirements (seatbelts, airbags, spare tyres and so on), public and private support services, mandatory insurance and, of course, traffic regulations and driving tests. Far from hindering car use, that extensive normative framework has reduced the associated dangers and so promoted confidence. Without roadworthiness tests, seatbelts, insurance, airbags and traffic regulations, car travel would entail far greater risk and probably be less popular. It should also be noted that the process of automotive regulation and standardization remains ongoing even now.

Much the same is found in the history of the railways. The first trains were dangerous, uncomfortable and dirty. The wooden seats were hard, the carriages stank of food and tobacco, and travellers typically arrived at their destinations covered in soot. In 1879 Robert Louis Stevenson described the train as a ‘Noah’s ark’ on wheels.Footnote 44 Gradually, however, the introduction of regulations and standards made rail travel safer and more pleasant. Another interesting analogy is provided by the rise of industrial food production in the nineteenth century. Early manufactured foods were often unsafe and unhealthy. Until the arrival of certification, supervision and legislation, people were exposed to all sorts of dubious practices. Adulteration was common, for example: chalk and gypsum were added to milk to make it whiter, and it was sometimes diluted with dirty water, leading to the spread of tuberculosis and typhoid.Footnote 45

A similar pattern is likely to emerge where AI is concerned. Although the technology is already entering our lives, the regulations, standards and practices needed to embed it within society largely remain to be developed. We therefore have an unregulated landscape in which individuals and society in general are exposed to a variety of risks. But that does not justify eschewing the technology. Rather, it implies that we need to start work on the long process of enabling the responsible deployment of AI within society.

Key Points – The Embedding of System Technologies

  • System technologies are associated with prolonged processes of social and technological co-evolution, often involving profound social change.

  • The development of system technologies is often unpredictable, and their effects cannot be fully anticipated.

  • It is not possible to distinguish those civic values that a system technology will influence from those it will not. The generic nature of such technologies implies that they have the potential to affect all civic values.

  • There is no inherent tension between regulation and standardization on the one hand, and further development and application of new technologies on the other.

As well as the general patterns characterizing the way that system technologies are embedded within society, we have identified five overarching tasks that form the cornerstones of that process. We now look at these in turn.

3 Overarching Task 1: Demystification

There are no myths about lawnmowers or toasters. It is clear what their purpose is and how they work, and they leave little to the imagination. With a system technology, however, the situation is different. The generic nature of such technologies makes them somewhat intangible, facilitating the development of myths that bear little relationship to reality. On the one hand, unrealistically high expectations are liable to develop regarding the capabilities of a new system technology, with some people inclined to see it as a panacea for all manner of social ills. On the other hand, exaggerated fears and doomsday scenarios arise concerning its impact. Myths of the first kind easily lead to disappointment, while those of the second encourage aversion. Moreover, both focus attention on the wrong questions and issues. Properly integrating a system technology into society therefore requires a realistic understanding of what it is capable of and what its effects are. This is what we mean by demystification, a task that asks ‘what are we talking about?’ (see Fig. 4.3).

Fig. 4.3
An illustration of the task ‘Demystification’, shown with a bulb symbol and the question ‘What are we talking about?’

Overarching task 1: Demystification

Various social actors play a part in this task. Because we are concerned here with public perceptions, the role of the general public is particularly significant. Through their marketing, companies involved in the development of a new technology often contribute to the emergence of unrealistic expectations. Meanwhile, competitors with interests in rival technologies or more traditional industries can play a role in raising fears about a new technology. Civil society organizations can also lend credence to myths through their focus on potential risks. Finally, governments often have an interest in the use of new technologies, which can sometimes contribute to overenthusiasm. On the other hand, they can also feed negative perceptions by acting in ways that reinforce certain associations with a new technology.

3.1 Unrealistic Expectations

What patterns of demystification can be discerned in the history of system technologies? Taking optimism first, it is clear that since the Industrial Revolution new technologies have been associated with progress and civilization. Electricity was described as a ‘defining element of a great civilization’ and inspired many utopian books.Footnote 46 Widespread use of electric lighting led to Berlin becoming known as the ‘City of Light’. Electricity was linked not only with emancipation (as alluded to above) but also with cleanliness, flexibility and the general improvement of living conditions. This was an example of the wider phenomenon of scientism – the notion that scientific progress leads to social progress – and belief in mankind’s ability to manipulate and even perfect society. In 1917, in a manner reminiscent of the expectations surrounding AI, General Electric (GE) advertised its appliances as ‘electric servants’ that worked ‘without complaint’.Footnote 47

Another example of high-flown expectations relevant to the present-day debate regarding digitalization is the belief that new technologies can bring peace. Nineteenth-century engineer Michel Chevalier described the railway as “the most important medium for peace in Europe and human happiness”.Footnote 48 Similarly, the telegraph was expected to facilitate ‘harmony between peoples and nations’ and, by uniting humanity, to eliminate barriers of ‘prejudice and custom’.Footnote 49 In the 1920s Henry Ford, the pioneer of automotive mass production, viewed modern industry in much the same way. He is worth quoting at length:

Machinery is accomplishing in the world what man has failed to do by preaching, propaganda or the written word. The airplane and radio know no boundary. They pass over the dotted lines on the map without heed or hindrance. They are binding the world together in a way no other systems can. The motion picture with its universal language, the airplane with its speed and the radio with its coming international programme – these will soon bring the whole world to a complete understanding. Thus, we may vision a United States of the World. Ultimately, it will surely come!Footnote 50

Utopian visions have always found channels through which to disseminate. One, of course, is science fiction. William Gibson’s novel Neuromancer (1984) depicted an idealized new world he called ‘cyberspace’, popularizing that term.Footnote 51 Another such channel is high-profile competitions. Historically, innovative entrepreneurs have often competed both to supersede older technologies and with each other. During the rollout of electricity, Thomas Edison and George Westinghouse battled publicly for the ascendancy of their respective DC and AC standards. Similarly, the first car manufacturers raced their vehicles against each other. Earlier, the famous steam locomotive Rocket won a series of trials prior to the opening of the Liverpool and Manchester Railway, demonstrating the capabilities of rail transport to the general public.Footnote 52

Because the technologies in question were very new at the time and it was unclear how they could be used, these competitions helped familiarize the public with them. However, they often took place in controlled environments and the accompanying rivalries produced bold statements inflating expectations about what the technologies would be capable of in practice. More recently we have again witnessed public rivalry in the space technology domain, with powerful entrepreneurs like Elon Musk, Jeff Bezos and Richard Branson vying to surpass each other’s rocket launches and openly mocking their competitors’ technology.Footnote 53

Public exhibitions form another channel that gives rise to utopian expectations. At the 1881 Paris Exposition and the following year’s Crystal Palace Exhibition in London, Edison extravagantly demonstrated the potential of electricity to the general public, eliciting enthusiastic newspaper reviews.Footnote 54 But such events also became focal points for critics and activists. At the Crystal Palace, for instance, campaigners drew attention to the need for better working conditions and improved safety.Footnote 55

3.2 Serious Concerns

The arrival of a new system technology invariably gives rise not only to unrealistic expectations but also to anxieties. One recurring concern is how the technology will affect employment. With a technology that lends itself to widespread application, there is often a fear that it will replace people, thus depriving workers in certain occupations of their livelihood. Related to this is the image of the human-made instrument that rebels against its creator by destroying their income. Though nowadays frequently associated with AI, that scenario is far from new. Jonathan Taplin has demonstrated how the internet deprives musicians of income,Footnote 56 but in fact the music business has a long history of technological disruption. Describing how the record industry was affecting musicians, a union leader once said that at no other point “in the machine age has the worker created the instrument of his own destruction, but that happens when a musician plays for a recording”.Footnote 57

Another prevalent dystopian view is that a new system technology will bring about the loss of a valuable way of life. This argument was used against agricultural mechanization at the end of the nineteenth century, causing anxious farmers in the US to flock to the Populist Movement.Footnote 58 Established industries and their workers are often the sources of distrustful views of new technologies.

One anxiety particularly relevant in the present context concerns the artificial nature of a new technology. This can lead to it being perceived as a sin against nature or the will of God. We see that today with biotechnology, but at one time electric street lighting was portrayed as contrary to the separation of light and darkness in Genesis. Although Berlin enjoyed a positive reputation as the ‘City of Light’, Jules Verne portrayed the typical German city as ‘Stahlstadt’ (steel city), symbolizing power and destruction.Footnote 59 Even an innovation like margarine had to overcome the criticism that it was an artificial, unnatural form of butter and therefore inherently undesirable.Footnote 60

Fear of a new technology may stem not only from arguments of the kind described above but also from emotional sources such as the power of words. Biotechnology has suffered from the currency of phrases such as ‘genetic contamination’, ‘Frankenfoods’ and ‘Frankenfish’ (farmed salmon).Footnote 61 As indicated earlier, there was considerable rivalry between Edison and Westinghouse when electricity was introduced. In that context Edison deliberately sought to make people fearful of his rival’s technology. He performed experiments with a dog to demonstrate that Westinghouse’s AC standard, unlike his own DC, could be fatal to animals. He also campaigned for AC to be used for the electric chair to associate it firmly with death. In 1889 a magazine created the portmanteau word ‘electrocution’ by combining ‘electro’ and ‘execution’.Footnote 62 Fears can also be fanned more subtly, by rumours. Many technologies have initially been beset by unfounded claims that they were detrimental to human health, contained hazardous or impure ingredients or could even cause sterility.

History thus shows us that the introduction of a new technology is often met with anxiety. Unjustified or exaggerated fear can lead to general aversion, with the result that the benefits of a new technology are never obtained. Juma highlights the simultaneous rise of the mobile phone and GMOs. Whilst the former technology was adopted globally with little resistance, the latter was embraced in the US but rejected in Europe. Perceptions of nuclear technology have been strongly influenced by the disasters at Chernobyl and Fukushima, with implications for subsequent policy in many countries.Footnote 63 The point here is not that GMOs or nuclear power should be more widely used, but that the framing of a new technology and public perceptions can play a decisive role in its acceptance.

The scope for countering anxiety with arguments is limited. Authoritative explanations and technical solutions have often proven insufficient to dispel negative perceptions, especially if they are supported by the – often emotional – power of words and rumours. Once established, public mistrust – a ‘social backlash’ against a technology – is very difficult to counter. Often, separate issues become associated in the public consciousness and unrelated problems are conflated. A further complication is that the cause of public concern is not necessarily the technology itself but the impression that the authorities are not doing enough to ensure that its use respects the interests and safety of ordinary people.Footnote 64

In the case of a system technology of great potential benefit to society, such as AI, it is advisable to prevent such situations arising. At the same time the scale of a system technology’s potential benefits should not be overhyped. Regarding our first overarching task, the government’s scope for action is limited. Demystification depends on general public perceptions, which are shaped to a considerable extent by interaction between researchers, the media, schools and private citizens. Nevertheless, the government can exert significant influence in its role as a major user of new technology.

More direct public policy can also have a positive effect. Appropriate tools here include communications by the government, its exemplary use of the technology and support for actors involved in public education such as experts and the media. To facilitate the mechanization of American agriculture, for example, the US government established institutes and groups at universities to promote public awareness of new technologies.Footnote 65

Key Points – Overarching Task 1: Demystification

  • The generic nature of system technologies means that they appeal to the imagination. They are associated both with unrealistic expectations of progress and with doomsday scenarios.

  • General optimism about technology, public contests and events can inflate expectations regarding a new technology.

  • Fears commonly associated with the introduction of system technologies relate to the loss of employment, the loss of a way of life, and the perceived ‘unnaturalness’ of the technologies.

  • Fearful perceptions are shaped not only by arguments but also by emotions, the power of words and framing.

  • Both unrealistic expectations and fearful perceptions can lead to an aversion to technology. Realizing the opportunities afforded by a new system technology and ensuring that attention focuses on the right risks during the process of societal embedding therefore depend on demystification.

4 Overarching Task 2: Contextualization

Whereas our first overarching task is concerned with perception, the second relates to the use of a system technology. More specifically, it concerns what is required for something developed in the lab to be put to practical use within society. This is a wide-ranging task with multiple dimensions. It is also a complex one; indeed, its complexity goes a long way towards explaining why integrating a system technology into society is such a lengthy process. The fact that something works in the lab does not automatically mean that it will function in practice. Numerous reports have appeared in recent years about algorithms that can apparently diagnose various diseases more accurately than human doctors, reach more reliable verdicts than human judges or produce better translations than human linguists. The fact that such algorithms have yet to replace their human counterparts has much to do with contextualization. Societal integration can be impeded not only by resistance or disillusion fed by myths, but also by problems involving the way the technology works in practice. Central to the overarching task of contextualization is the question ‘how will the technology work?’ (see Fig. 4.4).

Fig. 4.4
An illustration of the task ‘Contextualization’, shown with a symbol and the question ‘How will the technology work?’

Overarching task 2: Contextualization

In order to answer this question, we have adopted an ecosystem approach. Contextualization as a task relates to the need for a technology to be embedded in a variety of contexts or ecosystems to function as intended. We distinguish two such ecosystems, the technological and the social.

4.1 The Technological Ecosystem: Supporting Technologies

No new system technology – be it the steam engine, electricity, the internal combustion engine or AI – can function independently in a technical sense; it always operates as part of a cluster or blockFootnote 66 of other technologies. In this context two types of technology are of interest: supporting and emergent.

Supporting technologies are not strictly speaking related to the system technology itself, but nevertheless are essential from the outset if it is to work. The internal combustion engine cannot be used in the automotive industry without steel technology. Furthermore, the success of pioneer car manufacturer Ford owed much to the existence of a large network of dealers and outlets for tyres, batteries and spare parts.Footnote 67 Another supporting technology on which the car relied was a suitable road network. In the US the Federal Aid Road Act of 1916 and the Federal Aid Highway Act of 1921 were crucial to creating the car’s technical ecosystem.Footnote 68

Without such supporting technologies, a system technology can be no more than partially functional at best. Worth remembering in this context is the fact that many people in the early twentieth century doubted that the motor car would actually enter practical use. By comparison, the horse must still have seemed an attractive alternative thanks to its manoeuvrability and its ability to function in an unmodified environment.

The same issue was pertinent to the introduction of the tractor. Its adoption in agriculture was not merely a matter of replacing one instrument with another; it required the creation of a completely new infrastructure of raw materials and suppliers. Moreover, early tractors were less reliable than horses. As a result, it was long assumed that the horse would remain in use alongside the tractor, each for its own purposes. The first tractors in the US were no better than horses, but proved useful on the large expanses of open prairie in the Midwest, where there were not enough animals to work the land.Footnote 69 It was only with the passage of time that it became clear that the tractor would replace the horse throughout the American agricultural economy.

4.2 The Technological Ecosystem: Emergent Technologies

The second cluster within the technological ecosystem of a system technology consists of what we refer to as ‘emergent technologies’. Unlike supporting technologies, these develop independently but over time become linked with the system technology and ultimately coalesce into a cluster. Their presence in the ecosystem allows a system technology to receive a major, unforeseen boost from an external development – as was the case with electricity. Its domestic adoption was initially slow, partly because there were easier ways of lighting homes, such as candles and gas lamps. However, the development of domestic appliances like the electric iron, and later electronic devices, made new applications possible and adoption of the technology gathered momentum.

The barcode is another innovation that only came into its own after contextual adaptation. The first barcode scanning systems appeared in the mid-1970s, but it took another 30 years before organizations along the length of the production chain implemented the complementary technological, organizational and process changes needed for their general introduction.Footnote 70

A more recent example can be found in the rise of e-commerce. Expectations of a boom in online retailing had been high ever since the internet first became popular. Amazon, founded in 1994, was one of the most hyped businesses of the ‘dot-com bubble’, when markets anticipated a general migration to online shopping. Despite continuing to invest in e-commerce even after the crash of 2000, however, Amazon still failed to turn a profit for some years. It was more than two decades before online shopping really took off, a development that required a raft of complementary innovations such as secure and convenient payment systems and improved logistical infrastructures with regional distribution centres. A similar pattern is apparent with transport services like Uber, SnappCar and Greenwheels. The idea of organizing taxi services and car sharing online has been around for decades, but again it is only in recent years that such services have become commonplace. Their success now is closely related to the rise of technologies such as GPS-enabled mobile phones, which allow for local service delivery.

Dependency on a complete technical ecosystem of supporting and emergent technologies means that it typically takes a long time before a new system technology becomes fully functional in practice. Furthermore, the course of that process is inherently unpredictable: the technology itself improves, complementary innovations occur, prices fallFootnote 71 and new systems and applications are developed. Even if a system technology initially appears unable to gain traction, the developments necessary for its success may be taking place unseen until suddenly the new technology has a real advantage over established ones and its use acquires momentum.

4.3 Enveloping

Where the technological contextualization of AI is concerned, ‘enveloping’ is an important concept. This refers to the creation of an environment within which a technology can thrive. The concept was popularized in relation to AI by Luciano Floridi, whose work is referred to earlier in this chapter. He opposes viewing technology as an instrument, arguing that this implies an old-fashioned model in which the human user exerts influence over a natural environment by means of a technology. While a spear, an axe or a parasol may be regarded as an instrument that impacts an element of the natural environment (a prey animal, a tree or sunlight), many technologies do not conform to that model: technologies that act on other technologies, like the hammer used with a nail – indeed, virtually all technologies developed since the Industrial Revolution. A car is not ideally suited to travel in a natural environment but performs very well in one modified by the creation of paved roads. This process of adapting a technology’s environment so that it functions better is what we call ‘enveloping’,Footnote 72 its crucial characteristic being that use of the technology is promoted not only by improving the technology itself but also through that adaptation.

In this respect it is pertinent to ask whether the technology is adapting to people or people are adapting to the technology. Although the latter idea tends to meet resistance, we have to accept that it is far from uncommon. The average street, for example, is heavily tailored to the motor car, with tarmac, parking spaces, traffic signs and a regulatory system. Pedestrians adapt to that environment by walking on the pavement, using designated crossings and so on. Similar dynamics are likely to become commonplace in the case of AI.

4.4 The Social Ecosystem: Macroeconomic Context

Contextualization involves integration not only within the technological ecosystem but also within the social ecosystem. One important element of the latter is the macroeconomic context. A new technology has its own logic, which is not necessarily aligned with existing organizational processes. Moreover, achieving alignment is not a quick and easy process. Organizations have fixed ways of working, making it difficult to try out new approaches.

Those in established industries are often also hampered by ‘the curse of knowledge’.Footnote 73 Simply purchasing new machines or even setting up new departments – an IT or an AI department, for example – is not sufficient. A modern organization does not have an electrification department; electricity is an established system technology integrated into all its processes. However, that did not happen overnight. Factories had to be reorganized to accommodate power cables, for example.Footnote 74 Similarly, the telephone and the typewriter ultimately contributed significantly towards the mechanization and bureaucratization of the office, and thus to the growth of many organizations, but this transformation occurred over an extended period.Footnote 75

Not only is transformation time-consuming, but determining the pathway to be followed is also a capital-intensive process. Consequently, the introduction of a new system technology is often characterized by a ‘productivity paradox’. It took years for electricity to yield a net productivity benefit for the economy, for instance.Footnote 76 One explanation for the delay in realizing productivity benefits concerns the energy supply. In Britain, for instance, steam engines were initially used only in the vicinity of coal mines – the source of their fuel.Footnote 77 In order to make a system technology productive, therefore, it is important to consider the wider organization of the processes within which the technology must function.

4.5 The Social Ecosystem: Behavioural Context

Another important feature of the social ecosystem is the behavioural context into which a new technology must be embedded. In that regard, the behaviour of both consumers and users within the organizations where the technology is to be applied is significant. Internal users often need to be trained to use the applications it facilitates. The more general question of adaptation to the labour market is therefore relevant here as well.Footnote 78 Whereas the lab phase requires fundamental knowledge of a technology, the emphasis during the embedding phase shifts to knowing how it should be applied in a variety of domains. During the process of integrating electricity into society, for instance, countless engineers and inventors applied themselves to identifying contexts in which it could be put to effective use.

People who are going to utilize a new technology must gain confidence in it and some understanding of how it works, and must perceive its use as desirable. That in turn depends on the presence of positive stimuli and the absence of deterrents to its integration. People will not embrace a new technology if they fear it will make them redundant or undermine their earnings. Artists working in recording studios prior to the development of a new income model based on streaming services form a good example of this. Likewise, professionals such as doctors, judges and accountants will be reluctant to accept or fully utilize a new technology if it is not – or not yet – capable of satisfying the standards of their profession.Footnote 79

New technology often requires behavioural change from consumers as well. Consider again the example of music recordings. Before they became possible, people could listen only to live music and only at scheduled performance times. The wireless and gramophone enabled entirely new ways of listening to music at home, but consumers still had to accustom themselves to those new opportunities.

Like demystification, contextualization is a broad task over which governments have relatively little control. To a large extent, the contextualization of a new technology occurs in the many thousands of occupational settings where people make use of it and learn when and how it is effective. That is an iterative process. Governments can nevertheless facilitate and guide the broad task of contextualization in various ways.

For example, governments can invest in supporting and emergent technologies. The US government aided the contextualization of the car by building highways. Another option is to participate in the process of contextual experimentation. As a new technology user, the government plays a role in the creation of a market. It can also define standards and set an example for the private sector. Public-sector procurement policies are influential too, due to the government’s sizeable purchasing power.

Key Points – Overarching Task 2: Contextualization

  • Contextualization is necessary for a new technology to function in practice.

    • That implies understanding and approaching the technology within its wider social and technical ecosystems.

  • The technical ecosystem consists partly of supporting technologies that enable a system technology to work.

  • It also includes emergent technologies: completely separate technologies that develop independently but can add surprisingly strong impetus to a technology.

  • An important process in the contextualization of system technologies is ‘enveloping’: adaptation of the environment to a technology.

  • A technology’s social ecosystem consists firstly of the macroeconomic context and is characterized by complex productivity and work process organization issues.

  • The second element of the social ecosystem is the behavioural context, which is characterized by the stimuli, practices, standards and convictions of people involved with the technology.

5 Overarching Task 3: Engagement

As we have seen, our first overarching task is concerned with image and the second with usage. The third, engagement, relates to the social environment. It focuses on the people affected by the system technology and the actors who therefore are or need to be involved with it (see Fig. 4.5). They include technical experts, ordinary citizens and civil society organizations.

Fig. 4.5

Overarching task 3: Engagement

5.1 Values, Interests and Ideals

As previously stated, the five overarching tasks are closely related. We have already considered the human environment in terms of the social ecosystem’s role in contextualization, centring on the question of how we could make the technology work. In the context of the engagement task, by contrast, our focus is on people’s involvement in the design and use of the technology – and it is important that they are involved, so that their values, interests and ideals contribute towards its integration into society. People’s interests can of course play a role in building a technology’s functionality as well, but the principle underpinning the engagement task is that involvement by various groups in the process of societal integration is intrinsically important to its long-term success. Effectively, engagement is about humanizing or democratizing the technology.

Engagement has proven particularly important in the phase where a technology transitions beyond the lab, because at that point the requirements society will make of it have yet to crystallize fully. The engagement of civil society is also vitally important because every technology is associated with power structures. The first users of a new technology are typically powerful actors such as large corporations and governments. Consequently, it is initially likely to reinforce existing power structures. Engagement is required to ensure that other social actors also have a voice in the way it is used.

5.2 A Spectrum of Engagement

Engagement can take various forms. At one end of the spectrum are people strongly opposed to the technology who want to see it banned. In extreme cases, their resistance can turn violent. At the other end of the spectrum is supportive input, with actors offering their expertise and voicing their own values and wishes so as to influence how the technology is used. That can even lead to stakeholders themselves developing alternative uses.

Stakeholders can also engage indirectly by calling on governments to regulate the technology (regulation is our fourth overarching task; see 3.6). In this respect it is important that engaged social actors mobilize themselves to exert the necessary influence, and that they do so from an early stage – the reason being that uncertainty regarding the direction a developing technology will take can make it difficult for government to know how it should be regulated. In that situation, civil society actors can assist politicians and governments by playing vital signalling and deliberating roles. That brings us to the core question in this overarching task: ‘Who should be involved?’

5.3 Winners and Losers

Individual citizens and interest groups engage with the process of embedding a system technology within society for various reasons. Often, these reflect whether the person or group in question stands to gain or lose from the technology. Although a new technology may be beneficial to society, that benefit is liable to be distributed unevenly, creating both winners and losers. When Schumpeter referred to creative destruction, he recognized the misery new technology could cause and visualized large elements of society being crushed under ‘the wheels of innovation’.Footnote 80 As well as threatening jobs, the process of innovation and experimentation often involves accidents and even reckless and dangerous behaviour. We have already mentioned the malpractices associated with the early mass production of milk and the introduction of margarine. Manufacturers often used colourant and preservative chemicals that were harmful to public health.Footnote 81 Many new technologies have also had a negative impact on particular groups in society, such as consumers or workers. That has tended to happen where vulnerable or dependent groups have been disadvantaged by more powerful early adopters of the technology and first movers exploiting their expertise and position. In such cases, new system technologies initially amplify existing power imbalances.

The introduction of the steam engine induced fear that the working classes would be marginalized. When railway travel became popular, wealthy people were concerned about close contact with the poor, leading to a system of multiple travel classes.Footnote 82 Electric street lighting was perceived as increasing the government’s power over its citizens. Class differences created issues in relation to the motor car as well: cars were seen as the preserve of a wealthy elite, who were gradually driving other members of society off the roads.Footnote 83

Such issues have repeatedly prompted those affected to engage with new technologies. One form that engagement has often taken historically is protest, in extreme cases descending into violence. In the 1810s a movement of English factory workers known as the Luddites rebelled against the mechanization of labour, destroying the machines their employers had been installing. During the 1842 Plug Riots, half a million workers went on strike and disabled steam engines. Workers resorted to such tactics because the British government of the day did very little to protect them.Footnote 84 The word ‘Luddite’ has since come to mean anyone who makes futile attempts to resist technological progress. However, that definition rests on a simplistic view of history. By rioting, the Luddites succeeded in slowing the process of mechanization in the textile industry and building solidarity amongst its labour force, thus laying the foundations of the trade-union movement.Footnote 85 They were not rejecting the new technology per se but standing up for workers’ rights.

The introduction of the motor car was also accompanied by protests from disadvantaged groups. Some of these were sparked by the hazards made clear by the first fatal accidents. The main focus of dissent, however, was the ‘battle for the street’, as the car gradually pushed market traders, horse riders and pedestrians off the roadway. Horses were perceived by motorists as causing congestion, while their riders complained about the space devoted to car parking. During the 1930s, the car lobby succeeded in persuading the public that the roads were meant primarily for motor vehicles. That perception was encouraged by education, with children taught to look out for cars when crossing the road. Regulations were introduced not only for motorists, but also for cyclists and pedestrians. Crossing intersections diagonally was made an offence, for example. Campaigners called for fast roads exclusively for motorists, eventually leading to the creation of motorways. In short, the rise of the car brought with it disputes over who was and was not legitimately entitled to use the road, and under what conditions.Footnote 86

More recently, the introduction of nuclear power attracted protest. Posters, newspaper adverts, stickers and demonstrations such as ‘die-ins’ and human-chain protests were used to oppose the technology. In some cases, protestors also sabotaged equipment.Footnote 87 Such actions ultimately helped initiate a general public and political debate regarding nuclear power.

5.4 Demand for Regulation

As the example of the car demonstrates, engagement by civil society sometimes takes the form of campaigns calling for regulation or government policy. In the mid-nineteenth century, for instance, the Chartist movement in the UK secured legislation to limit the maximum number of working hours for young people and women.Footnote 88 Later that century, women’s organizations in the US pressed for better working conditions. The Woman’s Christian Temperance Union (WCTU) campaigned not only against alcohol but also against the widespread use of many new medications. Its activities contributed towards the introduction of legislation requiring the listing of ingredients on product labels and restricting the distribution of medications.Footnote 89 In 1970 an activist engineer in the Netherlands invented the ‘speed hump’ to improve road-traffic safety. A few years later the Dutch government approved the concept of the ‘woonerf’, a residential zone where pedestrians have priority over cars.Footnote 90

Civil society actors have also been able to influence the use of new technologies directly, rather than by pressing politicians to act. One way they have done that is by using a technology as they see fit. In the US, for example, so-called ‘Bellamy clubs’ were formed to employ technologies for utopian social purposes. Unions, feminists, doctors and food specialists have pressed for modern domestic technologies and appliances to be made healthier, safer and more pleasant. User communities have even designed housing blocks with shared spaces for cooking and childcare to promote community spirit and equality. When the telephone was introduced, women and migrants started using it in ways the phone companies had not intended, ultimately leading to the modification of services.Footnote 91

The White Label League succeeded in persuading clothing producers to attach a white label to garments made in factories where the working conditions had been approved by the organization.Footnote 92 In the field of digitalization, the Claudette project is a good example of civil society influencing a technology’s use: it seeks to reinforce the position of consumers by automatically scanning countless online platforms to check the legality of their terms and conditions and help buyers understand them.Footnote 93

5.5 Defending Public Interests

Citizens affected by new technologies engage in many different ways: by experiencing their effects, guiding their use and making their own views known. Certain social actors play particularly significant roles. Considerable influence is exercised by the media, whose involvement we have already considered in relation to demystification. In that context, their role is to inform the public; where engagement is concerned, by contrast, it is to air issues relating to public interests.

One historical example of this kind of mobilization relates to the introduction of urban electricity cabling. In October 1889 Western Union employee John E. H. Feeks was electrocuted in a gruesome accident on a cable installation project in New York City. His body was left hanging, smoking and sparking, for 45 minutes before it could be brought down. The incident caused a widespread backlash, with newspapers reporting acts of sabotage all over the city. The prevailing view was clearly that the power companies were putting profits ahead of public safety. The New York Times argued that the people should no longer have to tolerate the activities of selfish entrepreneurs and ignorant, corrupt officials. The commotion led to a major inquiry into the power of dominant companies and even to new governance models, in which municipal authorities were given more control and greater emphasis was placed on public participation.Footnote 94

In the same city but a very different field, the rise of the refrigeration industry provides another example. Refrigeration technology enabled goods to be stored in artificially cooled warehouses, removing the need for natural ice. Some people, however, grew suspicious of the power of the ‘ice trust’. Encouraged by the newspapers, a public outcry ensued, leading to regulations requiring products to be labelled with their refrigeration date.Footnote 95 Another problem was that the doors of early coolers and freezers were difficult to open, with the result that playing children could become trapped in them and suffocate. Media outrage led to the introduction of safer door designs.Footnote 96

Scientists and other experts form another important group within an engaged civil society. They can exercise influence by, for example, publishing books and articles that raise public awareness and draw attention to problems and malpractices. In 1962, for example, biologist Rachel Carson famously published Silent Spring, a book that did much to launch the ecology movement. Her analysis exposed the downside of industrial manufacturing and agriculture, thus mobilizing opposition to big business.

Similarly, the work of critics such as Guy Debord, Constant Nieuwenhuys, Jane Jacobs and Lewis Mumford created awareness of malpractices in the automotive industry.Footnote 97 Artists and fiction writers can also contribute towards public engagement. The Bellamy clubs mentioned earlier were inspired by Edward Bellamy’s utopian novel Looking Backward. Famous authors such as H. G. Wells and Mark Twain also wrote about the influence of technologies such as electricity.Footnote 98 In 1906 Upton Sinclair published The Jungle, a novel about the dreadful conditions in Chicago’s meat industry. The book led to an immediate decline in meat consumption, and public disquiet resulted in the formation of a system of inspectors.Footnote 99

Earlier, during the Industrial Revolution, a medical commission reported that the people of Manchester were being made unwell by the city’s smoky air.Footnote 100 It was a long time, however, before anything was done about the situation. As mentioned in Sect. 4.1, social actors’ degree of organization – and hence their influence – has grown over time. Professional groups, associations and commissions made up of academics and other experts have started to play an increasingly influential role in the societal integration of new technologies. The academic press is an important medium for the exercise of such influence, along with appeals and conferences. In 1955 philosopher Bertrand Russell and physicist Albert Einstein published a manifesto calling for the academic world to contribute towards the peaceful resolution of international conflicts. Their appeal was followed by a series of expert gatherings known as the ‘Pugwash Conferences on Science and World Affairs’.Footnote 101

Following the development of genetic cloning technology in 1973, the Asilomar Conference on Recombinant DNA agreed a voluntary moratorium on genetic modification two years later, to allow the medical authorities to develop safety guidelines. This laid the foundations for an evidence-based system of risk analysis.Footnote 102 Another example of experts influencing technological development is the work on climate change done by the IPCC, whose members are all leading academics. Scientists, other experts, writers and artists, as well as private citizens and interest groups, may campaign against the use of new technologies, then, but for the most part they contribute towards bringing about more responsible application of those technologies, thus actually encouraging their use.

One final observation is that, with their professional emphasis on open publications and knowledge, academics and researchers can stand up against governments and businesses in situations where the latter have an interest in maintaining secrecy. The Human Genome Project was an international collaborative initiative to make the genome publicly available. At the same time, however, a company called Celera was working to sequence it privately for commercial exploitation. That brought it into conflict with the academic world. The scientific and business communities also clashed over the question of whether genetic sequences were patentable.Footnote 103

In the field of cryptography, scientists also find themselves at odds with the policies of secrecy pursued by governments and private corporations. In the 1990s, legislation banning the export of sensitive knowledge made it difficult for US academics to know what they were and were not allowed to teach their foreign students. In defiance of US government pressure to keep encryption software secret, programmer Philip Zimmermann open-sourced his code, making him the target of a criminal investigation.Footnote 104

The open-source movement is an important group of civil society experts in the field of digital technology. Numerous court cases attest to the tensions that surround publication. In the Netherlands, for example, a case was brought against Radboud University’s Bart Jacobs after he discovered a security flaw in the Mifare Classic chip, used on Dutch public transport smartcards, Transport for London’s Oyster cards and elsewhere. The court refused to grant an injunction preventing publication of the details, however.Footnote 105 Of significance in the context of our analysis is the judge’s observation that “in a democratic society, important interests are associated with the ability to publish the results of scientific research and to inform the public about a product’s shortcomings, so that steps can be taken to mitigate the risks.”Footnote 106 The publication of a paper explaining how a dangerous variant of smallpox had been developed was the focus of similar tensions.Footnote 107 Experts employed by commercial organizations play a role not only with regard to the issue of publication, but also in relation to other ethical issues within businesses. After the Second World War, for example, the members of a German engineers’ association took an oath not to work for companies that infringed human rights.Footnote 108

Key Points – Overarching Task 3: Engagement

  • The engagement of civil society is important for drawing attention to relevant values and interests affected by using a new technology.

  • Civil society plays an important role through a wide range of engagement forms, from resistance and protest to campaigning, and driving change in the design and use of a technology.

  • The media and journalists are important for highlighting malpractices and mobilizing public opinion.

  • Scientists and other experts can, for example, develop standards and principles of good practice, promote a culture of openness regarding a technology, and utilize a new technology in accordance with public values.

6 Overarching Task 4: Regulation

Our fourth overarching task is pan-societal: the regulation of new technology. In this context we define regulation broadly as including not only legislation and government policies but also professional norms and technical standards. Central to this task is the question ‘What parameters are required?’ (see Fig. 4.6). Although national and international government bodies play a defining role here, other players are also influential.

Fig. 4.6

Overarching task 4: Regulation

6.1 The Collingridge Dilemma

Defining rules for something as extensive, complex and versatile as a system technology brings numerous challenges, problems and dilemmas. One of the best known is the so-called ‘Collingridge dilemma’. On the one hand, a new technology is difficult to regulate in its early phase because much remains unclear regarding its workings and effects; moreover, the need for regulation is initially less apparent. On the other hand, once the technology’s effects on society are more conspicuous, it becomes clear what regulation is needed and why – but by then many of the decisions taken earlier are difficult to reverse. A further complication is that power structures develop around a technology, and these cannot be modified easily or quickly. In essence, then, we first encounter an information and knowledge problem and later a power problem.

The Collingridge dilemma is exemplified by the architecture of the internet, which was developed in a spirit of openness and market freedom. Today, however, it is clear that many safety and security issues were not adequately addressed by the original design, meaning that we are now vulnerable to digital disruption, for example.Footnote 109 Rectification of the design flaws at this stage, however, would require large sections of the internet to be completely restructured – a huge, if not impossible, task.

6.2 Concentration of Power

Once a new technology has been widely adopted – that is, integrated or embedded in society – it is difficult to make major changes. However, the need for such changes only increases over time. As indicated in connection with the previous overarching task, the first signs that change is needed are acute issues, often highlighted by civil society. They typically involve accidents, abuses, opportunistic use and dangerous practices. By gradual degrees, it becomes clear that more structural regulation is required in order to manage the technology and its impact on society. Central to the regulation process is an expansion of the field of focus from acute issues only to more structural problems.

One structural issue that arises repeatedly in the history of system technologies is concentration of power. The dynamism and innovation associated with new system technologies tend to result in monopolistic or oligopolistic power being heavily concentrated in the hands of certain actors. As well as causing economic problems, such concentration results in the powerful actors gaining disproportionate influence over society, threatening civic values.Footnote 110 At first, the companies in whose hands power is concentrated are typically seen as wonderful innovators and social benefactors. Over time, though, a more negative view of them develops as their power is increased by the spread of the technology.

In the US, railway pioneers such as Andrew Carnegie and Jay Gould built huge business empires. However, the negative view of their power and influence that ultimately prevailed is clear from the nickname they acquired: ‘robber barons’. The introduction of electricity was also accompanied by an immense concentration of power. In 1892 the Edison General Electric Company merged with Thomson-Houston to form the giant GE, which together with Westinghouse dominated the US market. In Europe, Siemens was formed in Berlin and Ganz in Budapest – two of the first true multinationals. Immediately before the First World War, GE and Westinghouse in the US and Siemens and AEG in Europe were the world’s biggest companies. At that time there was considerable fear of this ‘global cartel’. Indeed, AEG general manager Emil Rathenau did actually reach an agreement with GE in 1903 about dividing up global markets.Footnote 111

The oil industry’s boom period occurred at around the same time, leading to the creation of John D. Rockefeller’s giant Standard Oil corporation. A little later the rise of the internal combustion engine was associated with an enormous concentration of automotive industry power in Detroit, Michigan, the Silicon Valley of its day. The city was home to the industry’s ‘Big Three’: General Motors, Ford and Chrysler. In the 1920s the US and Canada were responsible for nearly 90 per cent of global production of trucks, cars and tractors.Footnote 112 Detroit continued to dominate the industry for many years, both within America and beyond. The saying “what’s good for General Motors is good for America, and vice versa”, attributed to Charles Erwin Wilson, reflects the influence the company had over the nation. In the mid-twentieth century AT&T dominated the telecommunications world, and its research arm Bell Labs was a global driver of innovation. In the computer industry, IBM came to enjoy similar power. The classic film 2001: A Space Odyssey depicted the dangerous side of the company’s influence: the film’s malicious computer intelligence is called HAL, a name reputedly created by taking the three letters that come before I, B and M in the alphabet.

A historical pattern can be discerned: as the concentration of power becomes a greater issue for society, powerful companies campaign for the resolution of market problems to be left to the market or to self-regulatory systems. Often, they do so in an effort to avoid the imposition of external controls. ‘Robber barons’ like J. D. Rockefeller and J. P. Morgan portrayed their power as the result of entrepreneurial genius and a necessary by-product of technical progress.Footnote 113 Shoshana Zuboff explains how they cited the ‘laws’ of economics and evolution in their defence: legislation was unnecessary, they argued, because they were already subject to regulation by the laws of evolution, capital, and supply and demand.Footnote 114 Many employers also maintained that workplace safety was the responsibility of the workers themselves.Footnote 115 Similarly, it was suggested that the safety of a car was the user’s responsibility, not the manufacturer’s.

6.3 New Legislation and Regulations

As the need to regulate a new technology becomes clearer, we need to ask whether existing legislation provides an adequate mechanism for its control or whether specific new laws are required to address the novel circumstances associated with it. When bespoke rules have been considered necessary in the past, it has often proved possible to legislate or regulate successfully, even at the international level, to mitigate the adverse effects and applications of a new technology. The 1925 Geneva Protocol is a good example: following the widespread use of poison gas in the First World War, this treaty agreed a ban on the use of both chemical and biological weapons.Footnote 116 Another case is the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer, which successfully combined restrictions on the use of certain chemicals with stimuli to use technological alternatives.Footnote 117 Also instructive in this context are the arrangements made nearly 15 years ago by various countries within the Council of Europe to tackle the online sexual exploitation of children.Footnote 118 The international dimension of societal integration is addressed explicitly by our fifth overarching task (see 3.6 below). As far as regulation is concerned, it is important to note that a technology can be controlled successfully, particularly with a view to mitigating the associated hazards, by means of legislation at both the national and international levels.

As the Collingridge dilemma reveals, it can be difficult, especially early in a new technology’s trajectory, to know what types of regulation are required, because some regulations can undermine the advantages of a new technology. One example is provided by the so-called Red Flag Acts passed in the UK in the second half of the nineteenth century. With the aim of promoting road safety, these laws required that a mechanical vehicle be preceded on the public highway by a person walking with a red flag.Footnote 119 Their effect was to severely limit the maximum speed of the new transport mode and thus diminish its value.

6.4 Diverse and Flexible Instruments

One important lesson we can draw from the history of the regulation of system technologies is that there are no silver bullets: no single measure is able to ensure that a new technology is embedded in society in a totally responsible way. As we saw with the introduction of the motor car, regulation involves many years of constantly responding to new issues and hazards. In the Netherlands, for instance, the first urban speed limit was imposed in 1957. Motorway speed limits did not follow until 1974, in response to the new dangers associated with traffic growth. Seatbelts were made compulsory for drivers and front-seat passengers in 1975, but not for other passengers until 1992. Only in 1982 were rules introduced requiring all vehicles to undergo regular roadworthiness tests. Even now, the process of regulating the societal embedding of the car continues. Regulation is a learning process that takes an increasingly substantive form with the passage of time.

History also teaches us that the extent of government intervention follows a pattern as well. Initially, it is considered prudent to use the most flexible instruments available. Then, as more knowledge and experience are acquired, there is a gradual transition towards more mandatory instruments.

Various flexible instruments are possible. First, there is analogous legislation. When a new technology emerges, such as biotechnology or nanotechnology, regulators look for analogies in other fields. In the case of nanotechnology, for example, that was the chemicals industry.Footnote 120 Other flexible instruments that are used include experimental legislation, ‘soft law’ and ‘regulatory sandboxes’ in which new business models can be tested.

The information problem with a new technology can also be addressed through public-private co-operation. This collaborative model is increasingly common around the world, as we shall see in the context of the next overarching task. It is most common in highly technical fields, where private sector expertise is very important. The International Organization for Standardization (ISO) is a good example.Footnote 121 In the regulation of biotechnology too, various softer governance instruments are used, with researchers, governments and companies collectively working out the best way to manage a new technology.Footnote 122

6.5 Oversight

In addition to legislation and standardization, regulation requires oversight and enforcement. Again, a dynamic, learning approach is required, especially in the early part of a new technology’s trajectory. One particular issue arises out of the generic nature of system technologies, which means that they can be used in a wide variety of contexts, each with its own rules, values, principles and history. As a result, it is difficult to ensure that legislative arrangements and oversight bodies specifically address all possible applications.

Let us return to the example of electricity. Some of the associated questions are universal, such as the type of voltage and the cabling. But in reality electricity features in citizens’ lives through all manner of specific applications, from factories and street lighting to toothbrushes, escalators and computers. The vast majority of rules governing electricity therefore relate to those particular applications. Furthermore, generic technologies are often dual-use technologies; that is, they have both military and civil potential.Footnote 123 This is a complicating factor because the two types of use require very different rules and enforcement mechanisms. Domain-specific knowledge is therefore required for the application-level regulation of system technologies.

The institutions and bodies responsible for enforcement of the applicable rules must also be involved with the societal integration of a system technology. The regulatory influence of the judicial system should not be underestimated either, particularly when a new technology’s impact on society has yet to become clear. This is illustrated by a 1995 US court ruling on cryptography, in which the judge decided that a ban on the distribution of encryption software would infringe the constitutional right to free speech, a central tenet of US democracy.Footnote 124

Parliament also plays a material role in shaping rules and regulations. In fulfilment of their oversight function, MPs can draw attention to malpractices and issues. The legislature may also politicize technology, as again illustrated by the history of encryption. Although the US government and executive agencies such as the NSA and FBI wanted to restrict the distribution of encryption software as far as possible, Congress repeatedly stood up for the rights of citizens versus the state.Footnote 125 In order for the judiciary and parliament to perform their supervisory functions, it is important that they possess the means and the knowledge needed to monitor the use of new technologies effectively. In the US, for example, the Office of Technology Assessment played a key role in assisting Congress between 1972 and 1995.

6.6 A Growing Role for Government

The foregoing illustrates that the role of government, and thus of legislation, democratic control and oversight, increases as a system technology becomes embedded in society, not least because its effects become clearer as that process proceeds. The more embedded a technology is, moreover, the harder it is for society to do without it. As a result, the technology, or aspects of it, comes increasingly to be regarded as public property, sometimes even as a utility. Technologies viewed in that way include public transport, the electricity grid, the road network and broadband cable infrastructure.Footnote 126 The power problem described earlier also helps to explain why the government gradually acquires a greater role than it had at the outset, when it was primarily private companies shaping the technology.

In that context, there is a history of governments using a variety of means to tackle the power of dominant system technology players, whom we can regard as the predecessors of today’s big-tech companies. The power of the ‘robber barons’, for instance, was challenged during the so-called ‘Progressive Era’. The Sherman Act of 1890, originally passed to address the power of the big US ‘trusts’ (cartels),Footnote 127 was later utilized by President Theodore Roosevelt to break up Rockefeller’s Standard Oil and Morgan’s Northern Securities.Footnote 128

As well as addressing concentrations of power, a government can protect public interests by obliging businesses to comply with certain conditions. The US government established the Rural Electrification Administration to force electricity companies to make their services available in rural areas where they had little commercial incentive to operate.Footnote 129 When AT&T had a monopoly of the US telecommunications market, it was required to adhere to strict requirements such as relinquishing patents.Footnote 130

Finally, we should point out that significant international differences exist in terms of the traditional role of government and the way intervention is viewed. Contrasting with the situation in the US, in Europe there has been considerable public involvement from the outset in many of the new technologies considered in this report.Footnote 131 It is certainly the case that whenever a system technology is embedded in society, public interest in it increases over time and that in turn strengthens the rationale for the government to play a regulatory role.

Key Points – Overarching Task 4: Regulation

  • Although regulating a technology is easiest early on, at that stage there is often uncertainty about what is required and little sense of urgency.

    • By the time a sense of urgency develops, it tends to be harder to introduce regulations or change established practices.

  • With a new system technology, the initial approach is usually to rely on self-regulation. However, the concentration of power in the hands of a few companies and the rise of malpractice gradually make legislation necessary.

  • Where legislation is concerned, there are no silver bullets. The control of a new technology therefore requires a wide range of instruments. Both flexible instruments such as experimental legislation and soft law, and public-private cooperation on standards are useful ways of acquiring knowledge and expertise and dealing with uncertainties.

  • The generic nature of system technologies and the associated diversity of their applications necessitates a primarily contextual approach to oversight and enforcement.

  • The role and influence of the government in the embedding of a technology differs from country to country, but the need for intervention increases over time, as the technology acquires a more prominent position in society and the public becomes more dependent on it.

7 Overarching Task 5: Positioning

The final overarching task we have identified is positioning, which involves embedding AI at the international level – although each of the other four tasks also has an international dimension. Regulation, for example, is not an exclusively national matter, but also involves supranational organizations. To some degree, the engagement of actors such as scientists and activists is often an international process as well. Nevertheless, international positioning is a distinct task for two reasons. First, because it involves different players from those encountered at the national level. Second, because certain issues are specific to the international stage, such as the competitiveness and security of nations. The question at the heart of the positioning task, therefore, is ‘How do we compare with other countries?’ (see Fig. 4.7).

Fig. 4.7
An illustration. The positioning symbol, the task name ‘positioning’ and the question ‘How do we relate to others globally?’ are displayed line by line.

Overarching task 5: Positioning

7.1 Economic Competitiveness

In the international context, one of the characteristic features of system technologies is the tendency for a race to develop between nations. The belief prevails that countries that lead the way in the development and application of the technology will gain various advantages over others. Any country that believes itself in danger of being left behind will therefore strive to improve its position in the race.

The resulting emphasis on international competition can complicate the process of dealing with normative issues associated with the technology. During the Industrial Revolution, for example, the nations of mainland Europe were envious of the economic and technological development they could see occurring in the UK. British steam engines made a profound impression. Britain was dubbed ‘the Realm of Vulcan’, after the Roman god of fire, while the country’s railways, chimneys and factories were likened to the architecture of the Roman Empire. The model was at once impressive and repellent. The British were perceived as materialistic and greedy, contributing to a mood of Anglophobia elsewhere.Footnote 132 Germany and the US were viewed with similar ambivalence in connection with later technologies. Hence, a sense developed that technological leadership was acquired at the cost of various fundamental values.

History shows that the successful development and application of a new system technology does indeed contribute to a nation’s competitiveness, since the generic nature of the technology means that it facilitates generalized economic and social progress. National strategies of public investment in infrastructure and education can make a useful contribution in that regard. In the late nineteenth century, for instance, Germany’s rapid economic development owed much to the country’s co-ordinated approach to the integration of science and industry. Public investment in new technologies also helped East Asian countries such as Japan, South Korea, Taiwan and China to become powerful modern economies in the twentieth century.Footnote 133

7.2 Military Relations

The international competitive advantage conferred by system technologies is not only economic. Leadership in a major new technology can also strengthen a nation’s military position on the international stage. Railways facilitated the Prussian victory over France in 1871, for example.Footnote 134 They also played an important role in European countries’ colonization activities around the world.Footnote 135

During the twentieth century, investment in the development and application of new technologies continued to have a major bearing on conflicts. During the Second World War, the British and American scientific communities, including code-breaker Alan Turing and the ballistics scientists whose work laid the basis for the development of computers, were in direct competition with German science, including rocket pioneer Wernher von Braun. When the Soviet Union launched Sputnik 1 in 1957, there was widespread fear in the West that the US might lose the Cold War as a result of being left behind in the space race. A year later the Defense Reorganization Act was passed, leading to the creation of ARPA. Later renamed DARPA, the newly formed military research body went on to drive the development of many new technologies, amongst them GPS and the internet. Meanwhile, the National Aeronautics and Space Act formed NASA in 1958. The new agency’s staff included von Braun, who had been brought to the US after the war as part of the Operation Paperclip mission to kick-start the development of American space technology.Footnote 136 Subsequently, in the 1960s, the Kennedy administration made the creation of a global US satellite system a national priority. President Eisenhower also sought to ensure the technological leadership of US companies against the backdrop of the geopolitical rivalries of the Cold War.Footnote 137

7.3 Attempts at Nationalization

National strategies regarding system technologies do therefore contribute to the competitiveness and geopolitical strength of the countries in question. However, the view of system technology development and deployment as a global race has only limited validity. There is no historical basis for believing that one country can win such a race by monopolizing a technology and thus securing a permanent advantage over others. System technology development has generally been an international process, to which multiple countries have contributed.

Early contributors to the internal combustion engine, for example, included the Swiss François Isaac de Rivaz, Belgian Jean Joseph Etienne Lenoir, Germans Nikolaus Otto, Karl Benz and Rudolf Diesel and Americans George Brayton and George B. Selden. The development of electricity was an international effort as well.Footnote 138 Although the steam engine was developed largely in Britain, that nation obtained no consequent lasting advantage. The US may have entered the ‘race’ later, but the engine developed by American engineer George Henry Corliss ultimately proved superior and eventually conquered the British market as well.Footnote 139

Moreover, system technologies tend to be characterized by an international approach that owes much to the involvement of the scientific community. Scientists generally attach great importance to knowledge being freely accessible and contribute enthusiastically to international conferences and journals. Efforts to ‘nationalize’ system technologies consequently tend to be driven by governments rather than academics.

When electricity was introduced, for example, the British responded to the rise of the US and Germany by enlisting the help of Italian engineer Guglielmo Marconi to create wireless telegraph networks in an effort to dominate international communications. Their hope was that an ‘imperial chain’ would confer an unassailable advantage. Later, America sought to establish a rival network and the US navy blocked the sale of GE’s sophisticated technology. The Radio Corporation of America (RCA) was founded with the aim of securing global wireless hegemony. However, these British and American bids for dominance failed to prevent countries such as France and Germany from setting up their own radio stations for national communications.Footnote 140

Indeed, history teaches us not only that efforts to nationalize new technologies repeatedly fail but also that they are often counterproductive. This is attributable in part to the way politicization motivates other countries to create rival systems. It also undermines the market position of the country seeking dominance, because customers elsewhere are wary of foreign interference or because the country’s best products are no longer available on international markets.

One example of a leading country weakening its own market position is provided by the aerospace industry. The space rivalry between the US and China is instructive in relation to the current competition between the two countries in the field of AI. In 1989 concerns about Chinese espionage had already led the US Congress to block the export of American satellites intended for launch by Chinese rockets. A 1998 report then said that China’s technology acquisition threatened US security and that satellites should be subject to tighter export controls, and the strict Strom Thurmond Act was duly passed in 1999.

However, the policy had an adverse effect on the competitiveness of the American satellite industry. Whereas the Americans had 90 per cent of the satellite component market in 1995, their share fell to 56 per cent in 1999. In the face of supply uncertainties, companies in other countries, such as DaimlerChrysler Aerospace in Germany and Telesat Canada, severed ties with their American partners and sought alternatives.Footnote 141

The field of cryptography gives us another example of misguided efforts at nationalization. Here too, the US federal government sought to secure control over sensitive technology. In the 1980s, for example, the NSA proposed the use throughout American industry of algorithms the agency had developed itself. However, there was widespread suspicion that the NSA’s motive was not to improve security but to gain universal access to communications. In 1993 the Clinton administration launched the Clipper initiative, which would require companies to share their encryption keys with the government. The proposal was met with fierce criticism. Exporters complained that they would be unable to sell their products abroad if they featured backdoors accessible to the US security services. Civil rights groups also objected to the surveillance capability the initiative would create, and researchers demonstrated that the proposed system was far from technically robust. The administration was forced to introduce a revised Clipper II initiative, but ultimately that also failed.

Another instrument the US government has used to dominate the encryption industry is export controls. Under legislation passed in 1976, products featuring very strong encryption required export licences. However, these were rarely granted. Strong encryption was permitted within the US, but only weaker forms could be exported. As a result, American companies were disadvantaged in international markets. Against a background characterized by increasing market globalization and the availability of open-source knowledge, the US government ultimately ended the export controls around the turn of the century.Footnote 142

Scientists and others who defied the US government by open-sourcing their expertise in the so-called ‘Crypto Wars’ acted as an important counterweight to the authorities’ efforts to nationalize cryptography. So too did the business community. Although the private sector does sometimes ally itself with the government, the examples above illustrate how companies can also work against the authorities in order to protect their own international commercial interests. Following a 2015 terrorist attack in San Bernardino, California, for instance, Apple refused to co-operate with the FBI’s request for assistance in decrypting material on the attackers’ phones. The result was a court case in which Apple argued that the FBI’s request would compromise the privacy of all iPhone users. Following the case, a slew of US technology companies, including WhatsApp, Yahoo and Google, adopted strong forms of encryption in a move that FBI Director James Comey referred to as the ‘going dark problem’.Footnote 143

7.4 The Importance of International Co-Operation

Although attempts have been made to nationalize system technologies, history provides many examples of efforts to promote open, international co-operation around such technologies. A wide variety of formal and informal international contacts have been used to develop standards, guidelines, codes and principles of good use. For example, the joule, ohm and ampere – standard international units of measurement in use to this day – were defined at a meeting of the British Association in 1861.Footnote 144 More recently the Domain Name System, which computers everywhere use to translate names into network addresses, was the outcome of a global standardization effort. In that case the drive for uniformity was led by universities and not initially by companies or governments.Footnote 145

In biotechnology, researchers have developed various forms of self-regulation on the international stage. Colin Scott argues that the biotechnology industry also benefits from the informal agreement of international standards, guidelines and other forms of self-regulation. If a government adopts an overly domineering approach to standardization, it fails to utilize both the expertise that exists elsewhere and the opportunity to create a sense of ‘ownership’ of the resulting regulations.Footnote 146 Historians Wolfram Kaiser and Johan Schot have shown that, long before the creation of the European Union, the technocratic outlook of experts and industrial associations had been acting as a force for European convergence since the nineteenth century.Footnote 147

Nevertheless, nation states have also succeeded in securing international agreements on the use of new technologies. In 1975, for example, the United Nations Biological Weapons Convention – the first international attempt to ban an entire category of weaponry – came into force.Footnote 148 Starting in 1967, five international space treaties were agreed, covering matters such as the peaceful exploration of space, damage caused by objects in space and the militarization of the moon.Footnote 149

In short, the focus on national economic and military power is counterbalanced by many international co-operative initiatives. Notably, international collaboration with regard to new technologies has often been motivated explicitly by a wish to promote peace. That was the aim behind Italian Piero Puricelli’s proposal for a European motorway network in 1921, and it re-emerged as a significant aspect of the motivation for building such a network in the wake of the Second World War.Footnote 150 CERN’s creation in the post-war period also owed much to the desire to promote prosperity and collaboration and to facilitate non-military research.Footnote 151

On the other hand, international collaboration has sometimes served as a smokescreen for rivalry. By agreeing to “work together on the matter of mastering the universe”, the US and the Soviet Union each prevented the other from securing a clear lead individually.Footnote 152

One final observation is in order regarding the idea of nations racing against each other where system technologies are concerned. A race implies everyone heading towards a shared goal. Yet there are numerous historical examples of countries approaching technologies of this kind in quite different ways. The development of the US electricity network was driven by private commercial interests, for example, whereas in Europe the supply of electricity was always seen as a public service. As a result, European homes were connected more quickly and at lower cost than their American counterparts, even though the US led the world in the commercial application of electricity. This shows that the purpose and nature of a new technology’s application are not predetermined.Footnote 153

Key Points – Overarching Task 5: Positioning

  • The introduction of a new system technology is often portrayed as a global race. Encouragement of a new technology by means of strategic programmes does tend to enhance a country’s competitiveness and strategic power. Strong economic and geopolitical grounds for investing in new system technologies therefore exist.

  • Nevertheless, characterization of the introduction process as a race is misleading. Technological development and advancement are always international processes, especially where system technologies are concerned. Attempts to nationalize those processes and exclude other countries usually fail and are often counterproductive.

  • International co-operation and the development of universal standards aid the successful embedding of a new system technology.

  • Moreover, the race analogy disregards the international diversity that exists in terms of system technology adoption and the values underpinning the technology’s design and use.

In this chapter we have discussed five overarching tasks that historically have proven crucial regarding the integration into society of system technologies. In the second part of this report, we consider what those tasks entail in relation to AI. We also examine the current dynamics pertaining to each of them, and their implications for the societal embedding of this particular new system technology.