The term ‘contextualization’ concerns the uses to which AI is put. This overarching task is a key part of the transition from laboratory to society, and it is a complex and often underestimated process. In practice AI has to operate in specific contexts, each with its own systems, practices, rules and logic. Embedding the technology in those contexts involves adaptation and integration, both of which are very time-consuming. That delays the maturation of new system technologies, so it often takes longer than we might expect for them to become part of our everyday lives. The overarching task of contextualization therefore raises the following question: ‘how will AI work?’

We tackle this question by discussing a variety of AI applications. Many of these are in healthcare, but we also use autonomous vehicles (an application for which many have high expectations) to illustrate the issues surrounding contextualization (see Box 6.1). The development of such vehicles does not depend primarily on mechanical aspects, such as the engine; it is more a matter of intelligent algorithms that make decisions about routes and respond dynamically to the environment. We therefore return to autonomous vehicles repeatedly, in separate boxes, to illustrate the central dimensions of contextualization.

Specifically in terms of AI, what does this task involve? The author Kai-Fu Lee draws a historical analogy with the contextualization process surrounding electricity. Thomas Edison’s discoveries led numerous entrepreneurs to disrupt all sorts of industries. They used electricity to cook food, light rooms and power industrial tools. Lee states that electrification – harnessing and applying electricity – required four inputs. These were fossil fuels to generate electricity, entrepreneurs to develop this technology commercially, engineers to apply it and a government to provide the underlying infrastructure. By analogy, he claims, AI requires raw material (in the form of data), talented entrepreneurs, AI scientists and a government policy that provides incentives.Footnote 1

Box 6.1: What Is an Autonomous Vehicle?
  • Levels of autonomy

Autonomous vehicles cannot easily be defined, although the so-called SAE model is often used for this purpose. It classifies driving automation into six levels, from zero to five. Vehicles at levels zero to two still need human drivers, but provide them with some degree of software support. At level three and above a vehicle can drive itself in certain situations; the step up from level two to level three is therefore a giant leap. At level five a vehicle can operate fully automatically in all situations, but that remains a very distant prospect. Most modern cars have level-one autonomy in the form of advanced driver assistance systems (ADAS), featuring automatic lane keeping, parking assistance and cruise control. In 2018 the Dutch Ministry of Infrastructure and Water Management estimated that 1% of vehicles had level-two autonomy, which involves adaptive cruise control (ACC) that adjusts the vehicle’s speed to match that of the vehicle in front. Vehicles at level three are not yet commercially available.
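
By way of illustration, the SAE classification lends itself to a simple lookup structure. The sketch below is our own illustrative rendering, not part of the SAE standard; the comments paraphrase the summary above.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE driving-automation levels, as summarized above."""
    NO_AUTOMATION = 0           # the human does all the driving
    DRIVER_ASSISTANCE = 1       # ADAS: lane keeping, parking assistance, cruise control
    PARTIAL_AUTOMATION = 2      # e.g. adaptive cruise control (ACC)
    CONDITIONAL_AUTOMATION = 3  # the car drives itself in certain situations
    HIGH_AUTOMATION = 4         # self-driving within a defined domain
    FULL_AUTOMATION = 5         # self-driving in all situations (a distant prospect)

def needs_human_driver(level: SAELevel) -> bool:
    # Levels zero to two still require a human driver at all times.
    return level <= SAELevel.PARTIAL_AUTOMATION
```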

  • Contemporary applications

So-called ‘truck platooning’ has made reasonable progress in traffic automation. This is when a convoy of (potentially driverless) lorries – the ‘platoon’ – automatically follows closely behind a lead vehicle with a human driver. The benefits include improved traffic flows and fuel savings. Dutch research organization TNO anticipates that further development work will lead to fully autonomous trucks driving in a platoon.Footnote 2 Field experiments on public highways in Europe have been under way since 2016 as part of the European Truck Platooning Challenge. In technical terms, then, this technique is becoming increasingly feasible. At present, however, regulations do not permit driverless vehicles to use the roads.

In addition, there are several examples of automated robot taxis. These are only allowed to operate within a very limited area (‘geofencing’), but within those limits they now cover many kilometres and collect a great deal of data. One of the companies working on projects of this kind is Waymo, a subsidiary of Alphabet (Google), which operates in the vicinity of Phoenix, Arizona. For safety’s sake a human driver is always present as well – and that has proved necessary: on one occasion a taxi stopped on the wrong side of the road and had difficulty entering an area where there was a lot of activity.

A Sociotechnical Approach

We classify the elements of contextualization rather more broadly than Kai-Fu Lee. This involves a sociotechnical ecosystem approach, divided into a technical ecosystem of supporting and emergent technologies on the one hand and a social ecosystem featuring the human context on the other. We can then consider this dual ecosystem at the macro level (employment and the productivity paradox) and at the micro level (behavioural context). Contextualization is the overarching task of embedding a new technology in these different ecosystems.

If we contrast a sociotechnical approach with other approaches to AI, its value becomes clear. A strictly defined approach addresses only the specific algorithms that make up AI systems, distinguishing AI from data-related issues, for example. While this can be justified on theoretical grounds, such a strict definition creates blind spots regarding what makes the technology successful in practice. Success requires supporting technology such as data, even if, strictly speaking, data is not part and parcel of AI itself. We need to consider the broader technical ecosystem to gain a complete picture of the overarching task of contextualization.

We can also contrast this with the instrumental approach to AI, which is often found in ethical analyses. When viewed purely as a tool, AI is considered neutral, because people can use the technology for good or bad purposes. That limits potential responses to the establishment of principles or rules for good use. This approach carries the risk that the context in which the technology operates is ignored. An ecosystem approach, by contrast, spotlights the fact that entire environments are being transformed. Take the internet. An instrumental emphasis on good use focuses on ethical principles and formulates rules of etiquette for online behaviour, for example, but the internet has also transformed the public space and changed interpersonal interactions. Any approach that focuses purely on the ethics of good practice would miss these more wide-ranging systemic changes.

Finally, yet another approach touches on the issue of contextualization: research into ‘AI readiness’. Oxford Insights publishes an annual index on this topic, which ranks different countries’ ‘states of readiness’ for AI. While this touches on contextualization, it does not include non-technical dimensions (the social ecosystem). Furthermore, even within its technical dimensions it covers only a limited range of conditions.Footnote 3 This means that an excessively strict, instrumental or narrow technical approach would not address key factors that determine when AI will become workable. We therefore approach this question from the perspective of the sociotechnical ecosystem in which AI will operate (see Fig. 6.1). Below we start by discussing AI’s technical ecosystem (Sect. 6.1), which, as indicated above, consists of two dimensions. This is followed by a discussion of the social ecosystem (Sect. 6.2).

Fig. 6.1: AI’s technical and social ecosystem (an illustration of the two categories: the technical ecosystem and the social ecosystem)

1 The Technical Ecosystem

1.1 Supporting Technology

The first dimension of the technical ecosystem is supporting technology.Footnote 4 Strictly speaking, supporting technologies are not an integral part of a new system technology; nevertheless, it cannot function without them. A related concept is ‘enveloping’, which emphasizes modifying the environment, rather than improving the new technology itself, as a condition for making it work in practice.

In its strictest sense, AI involves the development of ‘intelligent’ algorithms, as mentioned in Part 1 of this report. What other technologies does AI rely upon? On the one hand there is the data it uses as raw material, on the other the hardware required.Footnote 5

‘Hardware’ first of all refers to effective digital networks: fast and reliable connections. AI is based on complex calculations that often have to be performed at great speed. Autonomous vehicles in heavy traffic need to make decisions in milliseconds, and the same applies to the machinery used in factories. Aside from sheer speed, ‘latency’ (delay) must be minimal and connections must not falter. In road traffic, even a brief failure of the network can have fatal consequences. Signal coverage varies from one area or town to another; it can be quite poor in sparsely populated areas or in surroundings where signals are blocked by massive structures or Faraday cages. Before developing or implementing an AI application at a given site, therefore, we need to determine whether the local digital network meets the necessary requirements, or will do so in time.
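
To make those milliseconds concrete, here is a back-of-the-envelope sketch (the speeds and latencies are illustrative assumptions, not figures from this report) of how far a vehicle travels while a decision is delayed by the network.

```python
def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Metres travelled while a decision is held up by network latency."""
    speed_m_per_s = speed_kmh / 3.6        # convert km/h to m/s
    return speed_m_per_s * (latency_ms / 1000)

# At motorway speed, every 100 ms of delay costs more than 3 metres of road.
print(distance_during_latency(120, 100))  # ~3.3 m
print(distance_during_latency(120, 10))   # ~0.3 m on a low-latency link
```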

But hardware is not just about networks; it is also a matter of computing power, which involves chip technology and supercomputers. The computer chips developed by the semiconductor industry are key here, as they perform the calculations AI requires. This has traditionally involved central processing units (CPUs), a market long dominated by the American company Intel. The advent of smartphones fuelled the need for chips that use energy more efficiently, and the US firm Qualcomm (which uses designs by the British company ARM) soon became a leader in this area. As we saw in the first chapter, it gradually became apparent that graphics processing units (GPUs) were the most effective means of performing many complex AI calculations. These chips, used mainly in the gaming industry, were developed by companies such as Nvidia.Footnote 6 Some of today’s versions have been designed to perform specialized calculations, such as those used in machine learning algorithms. This technology is so specific and of such strategic importance that major technology platforms are developing it themselves; Google’s TPUs and the FPGAs used by Microsoft are just two examples.Footnote 7

It is important to note that Silicon Valley companies lead the development of this supporting technology. In its trade war with China, the US has denied that country access to critical chip technology, a move that has also affected other countries. In 2016 Qualcomm made a takeover bid for NXP (Philips’ former semiconductor branch). However, this ultimately fell through due to China’s opposition. For the Netherlands in particular, chips for AI are of strategic importance. Our nation, which is home to ASML, is a leading player in the global chip industry.

Supercomputers are another source of computing power. While these are not required for many everyday AI applications, they could become vital for very complex ones in the future. Japan, the US and China currently top the world rankings for the most powerful supercomputers.Footnote 8 In 2021, however, the top ten also included two European supercomputers: JUWELS from Germany and HPC5 from Italy. Europe is investing in supercomputing capabilities by backing the European High Performance Computing Joint Undertaking (EuroHPC JU). VEGA, the first co-funded supercomputer, was presented in 2021.Footnote 9

Besides hardware, the other major supporting technology for AI is the raw material it uses: data. Today’s leading approaches to AI, such as deep learning, require huge amounts of data – much more than classic rule-based systems. First and foremost, then, that data must be available. However good its algorithms might be, AI cannot work without relevant data, so it is important when developing AI applications to check whether the necessary data can be obtained and where it is located. It is no coincidence that AI was first widely applied to consumer platforms. After all, these have access to enormous amounts of data from sources such as social media, search engines and online shopping behaviour.

The amount of data collected differs from one sector to another, and from one country to another. Take healthcare. In the Netherlands, various bottlenecks prevent this sector from making full use of AI. In many areas, for instance, the available data is limited or not entirely usable. Hospitals and institutions all have their own systems, which are not always mutually compatible. Moreover, some data exists only in handwritten form or in paper archives. This diversity stems from the decentralized nature of the Dutch system. In France, by contrast, the sector is organized differently: it uses universal systems and centralized databases. That is one reason why healthcare is a pillar of France’s AI strategy.Footnote 10 With this in mind, the Dutch Council for Public Health and Society (RVS) emphasizes the importance of continuity of patient data when using AI in healthcare.Footnote 11

Not only is the availability of sufficient data a key factor, but it must also be of sufficient quality, commensurable and accessible. The process of training algorithms is often hampered by the limited quantity of raw material (data) they have to draw on. This is due to factors such as commercial confidentiality, legislation, professional secrecy or just flawed systems. At Dutch university hospitals, for instance, AI scientists often have to train their algorithms using American medical data. This is either because the Dutch material is not accessible or because these scientists must first navigate their way through a complex and confusing application process. The equivalent procedures are much easier in the US.

This brings us to another point about the requisite data: it must be representative. There are no guarantees that algorithms trained on data from one site will produce good results anywhere else. This is clear from the above example of healthcare data. The populations of different countries may have different genetic traits and lifestyles, which can make it impossible to draw general conclusions. That quickly became clear at the outbreak of the COVID-19 pandemic. Initially, most data was generated at a global level, but it was also necessary to collect data locally to allow for country-specific differences in the virus’s development. The same applies to mobility. Road signs, traffic regulations and urban planning differ significantly from one nation to another. We can train autonomous vehicles in one country but cannot simply assume that they will then operate effectively elsewhere. It is therefore important to supplement worldwide analyses with local data.

Nor do challenges around the availability of good data stem only from technical issues. Others are related to the way in which a sector is organized, to legislation and standards, or to the establishment of new systems for effective data management. Technical solutions to some of these challenges do exist, though. They include generative adversarial networks (GANs), which artificially generate new data when insufficient source material is available. If navigation service TomTom’s cameras film a street while it is raining, GANs can filter out the rain when they generate data for that street. Another option is the technique of ‘federated learning’, illustrated below, in which the algorithm is sent to the data instead of the other way around. This enables organizations to train their algorithms on sensitive data without having to acquire that data themselves (Box 6.2).
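
The following is a minimal sketch of the federated learning idea (our illustration, with made-up data and a plain linear model): each hospital trains locally, and only the model weights travel.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a local dataset (linear model)."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Each hospital keeps its own (synthetic) data; nothing is pooled centrally.
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

weights = np.zeros(3)
for _ in range(20):
    # The algorithm is "sent to the data": each update happens locally...
    local_weights = [local_update(weights, X, y) for X, y in hospitals]
    # ...and only the averaged weights are shared (federated averaging).
    weights = np.mean(local_weights, axis=0)
```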

Box 6.2: Supporting Technology for Autonomous Vehicles

We can again apply the framework of supporting technologies to autonomous vehicles. Here we tend to focus mainly on the vehicle itself – or, more specifically, on the intelligence of the system that controls it. In technical terms, however, there is much more to effective autonomous vehicles than AI software alone. AI depends on a wide range of other technologies (involving both hardware and data) to operate effectively in this role. Autonomous vehicle developers are currently using a variety of technical approaches. In many cases, the following technologies are essential.

  • Sensors

We cannot collect relevant traffic data in real time without effective sensors; autonomous vehicles need this hardware to scan their environment. Many cameras and other scanners have a limited forward field of view, so given the response time involved, many autonomous systems only work at low speeds. Sensors also need to operate in a variety of weather conditions and environments. Sensor limitations featured in a fatal crash involving a Tesla: unable to distinguish a white truck crossing the highway from the background brightness of the sky, the car failed to apply its brakes. Given the potential risks involved, autonomous vehicle manufacturers have installed some critical elements in triplicate.

  • Digital maps

Digital maps are another important source of data for autonomous vehicles. These need to be accurate and up to date. For instance, they need to display any temporary roadworks that change the traffic situation. Companies such as TomTom and Waze are currently developing HD (high definition) maps accurate to the nearest centimetre, which is a major advance in data collection.

  • Computing power

Autonomous vehicles need to perform complex calculations at great speed, so computing power is critical. In this respect Moore’s Law (the observation that the number of transistors on a chip doubles roughly every two years) has important implications. We had to wait until around 2005 for cars with flash drives that could store 3D maps of cities.
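
A rough calculation shows why the wait was a matter of arithmetic (the two-year doubling period is the usual formulation of Moore’s Law; the figures are illustrative, not from this report).

```python
def capacity_growth(years: float, doubling_period_years: float = 2.0) -> float:
    """Relative capacity growth under Moore's Law-style doubling."""
    return 2 ** (years / doubling_period_years)

# Over 15 years, capacity grows by a factor of ~180: storage that was
# infeasible in 1990 becomes routine by around 2005.
print(capacity_growth(15))
```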

  • I2C

The physical and digital infrastructures are additional supporting technologies. More specifically, this is all about infrastructure-to-car (I2C) communication. In general people are easily able to identify road signs and traffic lights. The same cannot be said of computers. This could be remedied by infrastructure that communicates directly with the car (by digital means). The vehicle would no longer need to interpret images (‘Is that a traffic light or the brake light of a lorry?’). To that end we would need to incorporate chips into the physical infrastructure.

  • C2C

Another supporting technology is car-to-car communication (C2C). This would resolve the difficult problem of assessing other drivers’ intentions. One particular benefit of automated communication between vehicles is that it could prevent rear-end collisions. Indeed, if cars were able to brake simultaneously, they could drive much closer together. The ‘platooning’ technique mentioned earlier is an example of this.
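
A simple calculation suggests why simultaneous braking allows much shorter following distances (the reaction times below are illustrative assumptions, not source figures).

```python
def reaction_gap_m(speed_kmh: float, reaction_time_s: float) -> float:
    """Distance covered before braking even begins."""
    return (speed_kmh / 3.6) * reaction_time_s

# A human driver reacting in ~1.5 s versus a C2C-signalled car in ~0.05 s:
print(reaction_gap_m(90, 1.5))   # ~37.5 m of following distance consumed
print(reaction_gap_m(90, 0.05))  # ~1.25 m, so a platoon can close ranks
```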

  • Network

Some types of autonomous vehicles use their onboard computing power, while others place greater reliance on network-based intelligence. When it comes to communication outside the car (I2C or C2C), an effective digital network is a must. Future 5G networks, in particular, may play an important part in this. We explore this in greater detail in the next box, on emergent technologies.

1.2 Technology in an Envelope

‘Enveloping’ is a key concept when it comes to supporting technologies. It enables a new technology to operate more effectively, not by improving the technology itself but by adapting the environment in which it operates. Some examples illustrate its importance precisely because a failure to envelop resulted in major problems. In her book Artificial Unintelligence, Meredith Broussard recalls how the introduction of e-education was hailed as a solution to the limited availability of textbooks in many American neighbourhoods. It gave children online access to their learning materials; all that was needed, supposedly, was a one-off investment in a telephone or tablet. However, its backers forgot that e-education is just one element of an entire ecosystem, without which it cannot operate effectively. Computers can help children deal with problems, but they require maintenance and access to all kinds of other services, such as telephone lines and e-mail. The cost of infrastructure is just one example, and old school buildings are a case in point. Teachers need to confirm that their pupils can safely connect and charge all those computers, which means having the building’s electrical system assessed. The Wi-Fi network must operate effectively at all times and in all parts of the school. Those in charge need to create access codes, as well as secure learning environments that do not violate privacy rules. They also need to create a means of identification for each user, delete the accounts of former pupils and purchase licences for digital books. There is more to the digitalization of education, in short, than simply handing out computers.

Autonomous vehicles are yet another example of AI systems that are poorly suited to their surroundings. Broussard notes that even though arXiv and GitHub host massive online training sets for this application, that data does not include sufficient ‘edge cases’ (unusual situations). In addition, there can never be enough data to cover every eventuality on the road. People can quickly interpret situations that would confuse autonomous vehicles. One example might be children playing a new game that involves unpredictable movements near roads; another could be people chasing runaway pets. Australian mining companies use autonomous trucks, but these operate in relatively controlled environments where human work is already highly automated. As a result, these vehicles can move around without causing major hazards.Footnote 12 We explore enveloping for autonomous vehicles in greater detail in Box 6.3.

Box 6.3: Autonomous Vehicles and the Physical Environment

  • The path to autonomous driving

Each year KPMG publishes its Autonomous Vehicle Readiness Index. The key predictors in this regard (KPMG 2019) are the quality of road surfaces, road design and signage. It is important to maintain surfaces properly, for instance, so that markings remain clearly visible at all times, even under poor weather conditions and following wear and tear. The boundaries of the carriageway also need to be clearly defined. Roadworks (and especially unannounced urgent works) pose a particular challenge in this regard. Workers often overpaint or erase existing markings or replace them with yellow temporary markings. This creates two conflicting sources of data, which can pose problems for autonomous vehicles. Country-specific road-design features can also create difficulties. Take the ‘peak-hour lanes’ on Dutch motorways: during rush hours motorists are sometimes allowed to use the hard shoulder as an extra running lane. This means that they can ignore the continuous white line, but only at specific times.

Enveloping is a way of adapting the system’s environment. It avoids the need to constantly tweak AI applications until their performance reaches the required level. Instead, the environment itself can be modified to make it more ‘readable’ for an AI system, which is then able to deal with it more effectively even if it still cannot match human capabilities. Compare this to the approach taken by the Wright brothers, who developed the first aeroplane (Broussard, 2019: 131). People once believed that flying machines would need to imitate the flapping wings of birds; only recently have scientists devised ways to mimic nature in that respect. Orville and Wilbur Wright did not design their aircraft as mechanical birds, but based them on an entirely different principle. Similarly, if we adapt their environment, autonomous vehicles will not need to match human ability.

  • The rollout of autonomous vehicles

How does this affect the deployment of autonomous vehicles now and within the foreseeable future? At first, they will probably operate in straightforward and predictable environments, like the robots mentioned above. Driverless buses could fairly easily be run on industrial estates, on airport aprons or in other relatively well-defined areas such as golf courses and care facilities, where there is little ‘competing’ traffic.

Autonomous vehicles may eventually be able to operate on motorways, too, but city centres will remain far more challenging. Their introduction could thus involve transferring people or goods from one mode of transport to another at specific locations. For instance, people might travel along motorways in driverless vehicles before transferring to alternatives with a human driver. That would require a fully integrated transport system, like the one currently used to co-ordinate bus and train services.

We might even need to adopt a more prudent approach and implement more rigorous infrastructural measures before permitting autonomous vehicles to take to the roads. The following issue is a case in point. In complex environments, human drivers need to be able to take over the driving from AI systems. Studies have shown that this process takes about 20 seconds. Accordingly, we need to classify driving situations into those suitable for AI systems and those that require human drivers. We could use the operational design domain (ODD) concept to tackle this issue. This defines an area within which an autonomous vehicle can operate effectively.
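
An ODD can be thought of as a set of machine-checkable conditions. The sketch below is hypothetical (the condition names and thresholds are invented for the example); it simply shows the shape of such a check.

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    road_type: str        # e.g. "motorway" or "urban"
    visibility_m: float
    speed_limit_kmh: int

def within_odd(c: Conditions) -> bool:
    """Hypothetical ODD: motorways only, with good visibility."""
    return (c.road_type == "motorway"
            and c.visibility_m >= 150
            and c.speed_limit_kmh <= 130)

# Outside the ODD, control is handed back to the human driver -- who,
# per the studies cited above, needs on the order of 20 seconds to take over.
print(within_odd(Conditions("motorway", 300, 120)))  # True
print(within_odd(Conditions("urban", 300, 50)))      # False: human drives
```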

There are also more rigorous possible measures, such as the construction of separate lanes or roads for the sole use of these vehicles. Something similar happened when people first started to travel in cars and countries designated certain highways for the exclusive use of motor traffic; other road users, such as cyclists and pedestrians, had to use a different network. Autonomous vehicles, too, could initially operate in highly controlled areas and then, following successive modifications, extend their operational domain step by step.

‘Enveloping’, therefore, is all about adapting the physical environment to enable AI applications to function properly. How will the necessary adaptations affect our expectations of AI applications? Firstly, we should not focus purely on the capabilities of autonomous vehicles, robots or drones. If we adapt their environment, these applications will make even more progress. Before drones can be used to deliver goods, for instance, we need to standardize their landing sites. They will also need new types of letterboxes, safe routes to avoid hitting people and systems that verify the identities of recipients.

Secondly, environmental constraints affect the enveloping concept as well. In the beginning, therefore, many AI applications will be deployed mainly in specific environments. Their use can be expanded at a later stage. Take robots. People have long fantasized about having a robotic assistant in the home, but that is probably one of the last places in which they will be used. This is due to the wide range of potential tasks in the domestic environment (cleaning, holding a dinner party, supervising homework, personal hygiene), all of which involve very high levels of complexity and many unpredictable variables (small children, pets, fragile objects). See Box 6.4.

Box 6.4: Vacuum Cleaners and Houses

Roomba is a disc-shaped autonomous vacuum cleaner produced by the iRobot company. It finds its way around the house using cameras and data from previous cleaning cycles. One problem with Roomba’s usual operating environment is that there are so many corners; the device cannot access these areas due to its circular shape. However, this is nothing compared to the problems experienced by pet owners. Roomba is very effective at cleaning up dust and dirt, but less so when it comes to animal droppings: it tends to spread this material throughout the house, a phenomenon that has become known as the ‘poopocalypse’.

Luciano Floridi had an interesting idea about right-angled shapes, which is relevant to the concept of enveloping. As a thought experiment, he suggested that we should all live in circular rooms in future. Roomba would then be much more effective, but many would object to the idea of having to adapt their lives to technology in this way, rather than the other way around. Yet Floridi wonders if we are not already doing that. After all, one reason our rooms are square in the first place is that we build them with rectangular bricks.Footnote 13

Stuart Russell maintains that, because of this complexity, robots will be introduced in stages via other domains. They will be used in warehouses first, much like Amazon’s robots. The tasks there are clear and simple (‘take X to Y’), and the environments controlled. Robots can operate efficiently in these surroundings.Footnote 14 Next they could be used in other commercial environments, such as agriculture and construction, where the tasks and objects involved are reasonably predictable. The next step is shelf filling and sorting clothes in the retail sector. In domestic environments, robots will first be used to assist the elderly and people with disabilities with specific tasks. Even then it will still be many years before we have universal robot butlers.Footnote 15

This phasing is particularly important in situations where the use of AI can put people’s lives on the line. Domestic robots or vehicles in city centres are prime examples. In other situations, the risks involved are more acceptable, and we could introduce applications into less controlled environments before their operational capabilities have been perfected. Take virtual assistants. Alexa and Siri are already widely used in domestic settings, yet we clearly cannot have normal conversations with these applications. We have to pronounce words and structure our sentences in a specific way, otherwise the program is unable to understand us. Even then there is no guarantee that we will receive the right answer. We still find these applications in many households because they are just about good enough for limited purposes (‘Where is the nearest bicycle repair shop?’) and because people love gadgets. Moreover, they collect huge amounts of data in these surroundings, which will eventually make them more useful.

To conclude, there is one final implication. AI systems can be deployed much faster in new, specially customized environments than when they have to be integrated into existing ones. For this reason, we should take AI applications into account when building new infrastructure. This explains why China has made great strides with AI applications: in that rapidly urbanizing country, new districts and entire cities are springing up everywhere, so planners can design them to handle autonomous vehicles, for example, right from the start.

Key Points – The Technical Ecosystem: Supporting Technology

  • Supporting technologies are part of the technical ecosystem.

  • AI requires supporting hardware in the form of networks, chip technology and supercomputers.

  • It also needs raw materials in the form of data that has to be broad-based, high-quality, commensurable, accessible and representative.

  • Enveloping is an effective but underestimated strategy. People have successfully used it to implement new technologies. The environment is adapted to the technology, enabling it to operate more effectively.

1.3 Emergent Technologies

Supporting technologies show that a new technology is part of a broader technical ecosystem, which creates a degree of complexity and uncertainty for users. That applies even more to emergent technologies. Unlike supporting technologies, which are associated with the new technology from the very beginning, emergent technologies initially had nothing to do with it. They develop in parallel, elsewhere or at a later time, after which they become linked to the technology in question. Compared with supporting technologies, the process of embedding emergent technologies is even harder to foresee. At first people only used electricity for a limited number of purposes in domestic settings. As time went by, though, links developed with other innovations. No one could have imagined how the introduction of all kinds of domestic appliances would lead to the complete electrification of households.

This uncertainty about emergent technologies also applies to AI. Its application in society is a relatively recent phenomenon. Various new technologies have been developing in parallel with its rise. It is impossible to predict how these might eventually link with AI, especially when they themselves are still in their infancy. Nevertheless, those links could lend a huge impetus to AI or propel its application in particular directions. For this reason, we now briefly explore various emergent technologies that could become linked to AI. We begin with the most mature and work our way down to more recent arrivals.

We have already described network technology as a supporting technology. This rapidly evolving technology has recently produced a new generation, 5G, which represents a leap forward in terms of speed. It also uses different infrastructures and paves the way for other applications. People are currently experimenting with the rollout of 5G. This work will naturally impact the capabilities of AI (see Box 6.5).

Another technology, the so-called ‘internet of things’ (IoT), is also paralleling the rise of AI. It is already at an advanced stage of development. Researchers are installing sensors and chips in all kinds of objects in the physical environment, which can then be connected to the internet. Developments in nanotechnology are driving this process by shrinking the size and cost of hardware. Roads and traffic lights are just some of the things that will be connected to the IoT. The list also includes dykes, toasters, toys, speakers, factories, refrigerators, clothes and even animals and our own bodies.

Cisco, an American company that manufactures much of the hardware, says that the tipping point came in 2008–2009 when more objects were connected to the internet than people. The International Data Corporation estimates that more than 40 billion devices throughout the world will be connected to the IoT by 2025. Moreover, that technology will increasingly be linked to AI. This is all due to data, one of the building blocks of AI. In recent years people have added an immense amount of data to the internet. That has given AI a massive impetus, and IoT will enhance this effect by collecting new data about the physical world. In this way it will become a key factor for new AI applications.

Box 6.5: Autonomous Vehicles and Emergent Technologies

Autonomous vehicles require an effective digital network. The next-generation network, 5G, could play an important role here. Speed is of the essence on the road, after all – more so than in other areas. Faltering connections or vehicles that are slow to apply their brakes can mean the difference between life and death. 5G is much faster than previous generations, and that is essential here. In addition, these networks have much lower latency (the time between sending and receiving a signal), which is also crucial. When we transitioned to 3G and 4G, we were able to stream videos and movies on smartphones. The previous network was simply too limited for this. By the same token, according to some experts 5G can pave the way for effective autonomous vehicles.

The electric car is another emergent technology for autonomous vehicles. It is no coincidence that many electric cars also use advanced computer systems (Tesla, Nissan Leaf, Volvo). Both technologies require a sophisticated automatic transmission system, too. So, it makes sense to link the associated new infrastructure for electric cars (such as charging points) to infrastructural facilities for autonomous vehicles.

Cryptocurrencies, and the blockchain technology on which they are based, are yet another emergent technology. There has been a lot of talk about these in recent years, especially about Bitcoin, the best known among them. This technology is hype-sensitive, though, as demonstrated by fluctuations in the value of ‘crypto’, and we cannot yet predict how or on what scale it will be applied. Nonetheless, it clearly presents enormous opportunities, especially in combination with other technologies. Cryptocurrencies use the blockchain to facilitate a decentralized payment system, which could be linked to AI to detect the use of someone’s intellectual property, such as a song or article, after which the owner would be paid automatically.Footnote 16 People could conceivably control everything from bicycle locks to home systems connected to the IoT, operating them remotely through digital communication. In the same way, platforms such as Airbnb could grant access to a property for a prepaid period. People or organizations could thus use digital means to govern access to certain objects or locations in physical reality.

In addition to payments, the underlying blockchain technology can be used to decentralize all kinds of other transactions. One potential benefit is greater security combined with reduced dependence on central players or databases. Although decentralization has its drawbacks, these properties can lower the barriers to all kinds of AI applications. AI and blockchain intersect in ‘DAOs’ (decentralized autonomous organizations), which consist not of people but of automated rules and contracts that can make decisions automatically.

Quantum computing is an even less mature technology, but it promises to give the power of computers an immense boost. Simply put, traditional computers use bits in a binary logic of ones and zeros, whereas quantum computers operate with quantum bits or qubits. These can exist in multiple states simultaneously, greatly increasing the number of calculations a device can perform.Footnote 17 Instead of working through configurations one at a time with brute computing power, they represent all possible configurations at once.
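
In the standard textbook notation (not specific to this report), a single qubit is a superposition of the two classical states, and a register of n qubits carries amplitudes for all 2^n configurations at once:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\qquad\text{and}\qquad
|\Psi_n\rangle = \sum_{x \in \{0,1\}^n} \alpha_x\,|x\rangle,
  \qquad \sum_x |\alpha_x|^2 = 1 .
```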

The technology is still developing, and people are trying a range of approaches. As yet, these devices do not outperform regular computers in practical applications. Once they do – a point described as ‘quantum supremacy’ – experts believe this will represent an immense leap forward, one that would immediately invalidate any encryption system based on sheer computing power: like giving someone the keys to every safe in the world at once. This is why countries like the US and China are backing ‘quantum’ heavily. Between 2019 and 2028 the US will invest more than US$1.2 billion. China is building a National Laboratory for Quantum Information Sciences. Europe, too, is active in this area: the EU plans to use its ‘Quantum Flagship’ to strengthen the European research tradition and to build a competitive quantum industry.Footnote 18

Even though quantum computing is still in its infancy, it is easy to imagine how this technology might revolutionize the use of AI. As we have seen, the growth in computing power is one of its pillars. If quantum computing is combined with AI, this could give a huge boost to highly complex data analysis issues or to scientific research into medicines, for example. It is no coincidence that parties such as Google are already pushing ahead with research into ‘quantum AI’.

The above descriptions of supporting and emergent technologies show that system technologies like AI always operate within technical ecosystems. This entails a great deal of complexity and unpredictability. Developments in the technology itself, as well as elsewhere in the ecosystem, can facilitate or hinder its application. This explains why some applications that work well in controlled or laboratory settings (such as autonomous vehicles on racetracks) seem quite mature, yet are far from ready for use in everyday life. On the other hand, improvements elsewhere can suddenly trigger great advances in what had appeared to be a stagnant technology.

Emergent technologies teach us that innovation in one type of technology can provide an enormous impetus to a completely different technology. For this reason, technologies like AI should not be developed in isolation. We need a clearer picture of new developments in other technologies and we need to invest in them as well. This is important for the future of AI. With this in mind, the planners of national AI strategies would be well advised to focus on emergent technologies such as the IoT, 5G, blockchain and quantum computing. AI is deeply intertwined with other technologies, which is why the Dutch AI strategy was eventually merged with the government’s broader digitization strategy.

Key Points – The Technical Ecosystem: Emergent Technologies

  • Emergent technologies are ones that are initially distinct and separate. If linked together, however, they can have a major impact on further development.

  • 5G, IoT, blockchain and quantum computing all appear to be candidate emergent technologies for AI.

  • The future course of these other technologies cannot be predicted with any certainty. Nevertheless, it is prudent to include them in the aspirations and strategies associated with AI.

  • Both dimensions of the technical ecosystem – supporting and emergent technologies – explain why a seemingly ready-to-use technology may not fully mature until much later. In other cases, however, the process of practical application can suddenly accelerate.

2 The Social Ecosystem

2.1 The Macroeconomic Context

Integrating AI into the social ecosystem raises two key issues at the macro level, in terms of the economy. The first involves its impact on employment in general and on ‘technological unemployment’ in particular. The second concerns what is known as the ‘productivity paradox’. Both issues relate to the long-term impacts of AI, which cannot yet be predicted. At the same time the history of system technologies teaches us to examine these issues in a certain way while offering us the tools needed to steer the associated phenomena in the right direction.

The first issue is a recurring theme throughout the history of technological revolutions. This is the fear of huge job losses leaving large groups of people unable to support themselves. However, this cloud does have a silver lining. Once technology has freed us from boring, dangerous and physically demanding work, we will be able to engage in different, more meaningful activities. Karl Marx was one of the first to advance this idea. He stated that in the final stage of communism, people would spend their time hunting, fishing and writing critiques.Footnote 19 In 1930 the economist John Maynard Keynes predicted a future in which we would only need to work a few hours a day.Footnote 20

People these days are often amused to discover that past generations were afraid that work would disappear entirely. For centuries there have been concerns about the impact of developments such as ploughs, machines and ATMs, yet large-scale job losses have never materialized. The Luddites discussed in Chap. 3 are symbolic of such inordinate fears. As manual weavers they feared that the Industrial Revolution would bring unemployment; instead, it created all kinds of new jobs. Yet the Luddites did have a point.Footnote 21 While work has been a constant aspect of life throughout human history, we cannot assume that this will always be the case. Indeed, various authors argue that modern technologies like AI are quite different from their forerunners.

One widely acclaimed book in this genre is The Second Machine Age by Erik Brynjolfsson and Andrew McAfee. The authors contend that contemporary digital technologies such as AI are also GPTs. They take the view that the first machine age – the Industrial Revolution – was complementary to human work, but see the technologies of the second as substitutive. The first machine age replaced muscle power and led to a process of ‘deskilling’ in which the complex virtuosity of all kinds of craftsmen was subdivided into simple tasks that could be performed by large numbers of unskilled labourers in factories. The current machine age, however, is also replacing our mental abilities, which according to Brynjolfsson and McAfee will rapidly render human labour redundant. Their main supporting evidence is what they call the ‘spread’: the growing inequality accompanying today’s technology, in which the wages of large groups of employees are lagging behind.Footnote 22

Two scientists at Oxford have published a study that prompted serious concerns about AI’s impact on the future of work. In 2013 Carl Benedikt Frey and Michael A. Osborne predicted that 47% of American jobs could be automated within the next 10–20 years. Even though they stated only that this would become technically possible, not that it would actually happen, their paper immediately sparked uproar around the world.

In 2016 OECD economists suggested that the situation was less dramatic than Frey and Osborne’s study intimated. They found that 9% of jobs are at risk. The authors arrived at this figure by focusing on tasks rather than on jobs. Many individual tasks can be automated, but the same cannot be said of the overall job itself. In 2017 PwC estimated that 38% of US jobs were at high risk of being automated by the early 2030s. According to a McKinsey study, 50% of jobs throughout the world can already be automated.

In this context it is worth noting that adding AI to the mix has changed things: automation now has a different impact than it did in the past. This has to do with Moravec’s paradox, mentioned earlier, which states that some things we find difficult are easy for computers and vice versa. Previous phases of automation mainly replaced physical factory labour. AI, however, impacts a wide range of human intellectual and conceptual skills – those associated with administrative, financial and other ‘white-collar’ jobs.Footnote 23 As yet, computers are unable to match the motor skills of hairdressers, drivers or cleaners.

People have responded to these scenarios by devising all kinds of solutions to deal with the loss of employment. Silicon Valley, where these changes originated, has also put forward various ideas. For example, Google’s Larry Page suggested that we adopt a shorter working week. If the remaining jobs were shared in this way, more people would be able to find employment. Many people have proposed that we introduce a universal basic income. Yet we cannot predict AI’s ultimate impact on the labour market. The WRR has explored this issue in greater detail in other studies.Footnote 24 Here, in keeping with them, we question the notion that most jobs will disappear.

Firstly, the history of system technologies amply illustrates the recurrent nature of these fears. People are more aware of jobs that have disappeared than of new ones that have been created. The same goes for AI today. Despite the projections made in the studies mentioned above, labour market figures show no structural decline in the number of jobs. Some sectors are even suffering massive worker shortages.

We are also unclear about the causes of certain phenomena, such as the inequality or ‘spread’ in wages. That is of key importance in this context. Kai-Fu Lee, like Brynjolfsson and McAfee, attributes this to the nature of the technology in question. But that is only part of the story. Technologies like AI will certainly contribute to the disappearance of jobs in the middle segment of the workforce, and to the concentration of capital at the top. At the same time, though, many other factors have a major impact in this regard. Neoliberal policies are one example. They have weakened the position of organized labour, restricted social safety nets and reduced the levelling effect of the tax burden. They have also contributed towards the stagnation of many people’s wages. Globalization is another factor. Emerging countries – especially in Asia – have flooded the global market with cheap labour, which has had an adverse impact on wages.

Moreover, in a report entitled Het Betere Werk (‘Better work’) and as discussed in the previous chapter, the WRR stresses that we are still largely unaware of the ultimate impact of technology.Footnote 25 How we harness it and how it impacts employment are underpinned by economic and political choices. We should therefore be wary of claims that the effects we are now seeing are largely inherent to technologies such as AI. In fact, the very notion of AI’s societal integration is all about managing its use more consciously and, as part of that, safeguarding the public interest.

We may have our doubts about the idea that most jobs will disappear, but this does not mean that we should ignore the impact of AI on the labour market. Many jobs will continue to exist but, given AI’s increasing prominence, their nature in the future is very unlikely to match people’s current skill sets. That is the real issue in terms of AI’s impact on the labour market. ‘Technologization’ is just one of the fundamental changes now taking place in the world of work, and people need to adapt to it.Footnote 26 This, too, is in line with the lessons learned from system technologies. The Industrial Revolution generated all sorts of new jobs, but the transition was arduous and painful. This phase was accompanied by unemployment, accidents and misery in the overcrowded inner cities of Europe. Moreover, the new working conditions still lacked adequate rules and frameworks. In the nineteenth century this led to child labour and to the exploitation of workers, as depicted in the novels of Charles Dickens. People had to learn new skills, and employment malpractices had to be addressed.

Even today the process of embedding AI in the world of work is a two-pronged overarching task. First, we need to shift the topic of debate from job loss to job transformation. This requires us to dispense with the idea that man has to compete against the machine – a point nicely illustrated by Dutch chess grandmaster Jan Hein Donner. When asked how he would prepare for a match against a computer, he replied, “I would bring a hammer.”Footnote 27

Rather than ‘man versus machine’, the focus should be ‘man with machine’. Seen in this way, AI is primarily about boosting human intelligence rather than replacing it – a process known as ‘intelligence augmentation’ (IA). Frank Pasquale argues that, contrary to all kinds of alarmist stories (“Software is eating the world”), AI actually supports and empowers people in the performance of their work.Footnote 28 The renowned AI researcher Geoffrey Hinton once stated that we should stop training radiologists right away. However, the authors of Prediction Machines show that AI can be a useful aid to these specialists in their work; furthermore, radiologists play at least five human roles that cannot yet be replaced by AI systems.Footnote 29

To create effective man-machine combinations, people need experience with – and knowledge of – AI. Practical knowledge, in particular, is relevant here. As with electricity (see above), during this societal integration phase we need to consider how the new technology might enrich all kinds of domains, devices and practices, and how this can be achieved responsibly. We discuss the specific implications of this approach, in terms of human work, below.

People also need to explore AI’s impact on working conditions in greater depth. Today, as during the Industrial Revolution, the jobs created by new technology are subject to all kinds of employment malpractices. The conditions and rights of workers on platforms like Uber and Deliveroo are a case in point. Then there is the plight of those employed at Amazon distribution centres. Their toilet breaks are meticulously monitored, and their working conditions are determined by the ‘rate’, which formulates objectives dynamically. At the same time employers are using AI to expand employee surveillance. This is a rapidly growing trend throughout the economy. The AI Now Institute has documented a variety of cases in which technology requires people to work under appalling conditions. These range from migrant labour in agriculture to sensors that tell workers how to walk and what to do.Footnote 30 Other organizations have also warned about the growing trend of digital monitoring in the workplace, especially in the light of people working from home during the COVID-19 pandemic.Footnote 31 So although AI will not replace massive numbers of people in the short term, it will partly automate and transform some jobs while also impacting working conditions.

The second major macroeconomic issue associated with system technologies is the productivity paradox: people often have wildly overblown expectations of new technologies when in fact their actual impact on economic productivity is disappointing, at least in the short term. In this context Robert Solow famously remarked in 1987 that “you can see the computer age everywhere but in the productivity statistics”.

This is also an issue that has arisen in the context of AI. There is an article on this topic in the National Bureau of Economic Research (NBER) publication The Economics of Artificial Intelligence, in which the technology is treated as a GPT.Footnote 32 The authors point out that, despite our lofty expectations of AI, we are experiencing a period of weak productivity growth. Between 2005 and 2016, US productivity grew by just 1.3% per year, compared with 2.8% in the period 1995–2004. Various OECD studies show that this is a widespread global phenomenon. The authors also conclude that the slowdown cannot be attributed to the impact of the 2008–2009 global recession. They explore three explanations that could account for this phenomenon to a limited extent, if at all. These are false hopes about the impact of AI, inaccurate measurements of productivity growth and the limited dissemination of AI’s benefits. The latter is the only explanation for which there is any significant evidence.
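
The gap between those two growth rates compounds quickly, as a simple calculation based on the figures above shows.

```python
# Cumulative US productivity growth over a decade at the two annual rates above.
slow = 1.013 ** 10   # 2005-2016-style growth: ~14% over ten years
fast = 1.028 ** 10   # 1995-2004-style growth: ~32% over ten years
print(f"{slow - 1:.0%} vs {fast - 1:.0%}")
```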

The explanation concerning false hopes for AI merits further examination. In his book The Rise and Fall of American Growth, Robert Gordon develops a detailed argument in support of this view.Footnote 33 He also draws comparisons with previous far-reaching technological revolutions – the railways, the steamship and the telegraph. Those brought immense improvements to everyday life: mechanization and household appliances made work easier, better sanitation meant less disease, electric lighting and canned food made our lives more pleasant and there were huge gains in life expectancy. According to Gordon, that kind of progress was a one-off development which current digital technologies will not be able to repeat. He uses productivity figures to illustrate this point. From 1920 to 1970 productivity grew at an average annual rate of 2.8%. It subsequently declined (with a brief exception between 1995 and 2005) to its earlier level of 1.7–1.8%. Gordon accounts for this discrepancy by noting that digital technology has primarily impacted communications; in other areas of life, it has had less overall effect than older technologies. Moreover, current developments in the areas of inequality and education will also contribute to lower productivity growth in the future.

Although Gordon makes a sound argument, some feel that he has not taken sufficient account of recent breakthroughs in AI and tends to underestimate their potential impact. Today many people’s expectations concern sectors outside the field of communication, such as mobility, healthcare and education. Carlota Perez argues that there will be productivity increases in a future phase, as the effects of a technological revolution spread throughout the economy.Footnote 34 The phenomena spotlighted by Gordon, like economic inequality, can certainly have an adverse impact on efforts to spread the benefits of AI far and wide, but such phenomena are not necessarily connected to AI.

Accordingly, the authors of the NBER’s book on AI and the economy argue that the productivity paradox is more likely due to the time taken to implement and restructure as a result of the new technology. The other three explanations are based on the assumption that one side of the paradox is incorrect. They argue either that there will be no productivity growth in the case of false hopes (first explanation) or unequal dissemination (third explanation), or that such growth is already taking place but has not yet been measured (second explanation). In the fourth explanation, based on delay, both observations are correct. People quite rightly have lofty expectations, but these have yet to be realized. In fact, the impacts involve such a big change that it is naturally going to take time to make that transition.Footnote 35 That is in keeping with our ideas concerning contextualization. We have already pointed out the technical factors that need to be in place in order for AI to work. From a macroeconomic perspective this involves the development of new business models, the design of various other types of processes in organizations, efficiency gains and price reductions.

The roboticist Rodney Brooks, whom we encountered earlier in the context of the overarching task of demystification, sees AI in the same way. He goes so far as to state that it takes 30 years to progress from the laboratory to a practical product. In the case of AI, key technical breakthroughs, such as the backpropagation algorithm, date back to the 1980s.Footnote 36 The same applies to autonomous vehicles. Even if these are technically feasible, people still doubt that they could be integrated into the processes and rules of road traffic. At what points along the road would autonomous vehicles be able to stop and pick up passengers? How might other road users respond to them? Will we still need traffic lights and other road features designed for human use rather than for autonomous vehicles?Footnote 37 To paraphrase Robert Solow, we could say that we currently see autonomous vehicles everywhere except on the roads. In addition to its technological prerequisites, embedding AI requires a process of societal change – and that will take time.Footnote 38

Key Points – The Social Ecosystem: Macroeconomic Context

  • People are afraid that AI technology will lead to mass unemployment. Nobody can predict the future, but there are reasons to suspect that such fears are groundless. There are, however, more pressing questions about the impact of AI on work.

  • On balance, AI may not eliminate jobs. It mainly seems to require different skills on the part of employers and employees.

  • However, AI could adversely impact working conditions – through the use of employee surveillance, for example.

  • Economic and political choices underpin the way in which AI is used in practice and how it impacts employment.

  • Besides its impact on work, questions have also been raised in the macroeconomic context, with regard to the productivity paradox. AI has the potential to trigger a great deal of change, so there is all the more reason to assume that a lag effect will be involved.

2.2 The Behavioural Context

At the micro level, too, we need to focus on how AI is embedded in the social ecosystem. More specifically we must examine the behavioural context in which it will be used. We can start by pointing out that developers frequently fail to give due consideration to a new technology’s intended role within existing procedures and working methods.Footnote 39 This is a particular issue in the field of healthcare. Developers produce new items of software or apps without considering how medical professionals will use them in everyday practice. Can they rely on the app? Who has access to its data? How should doctors respond to patients who use apps to make a self-diagnosis at home? Developers should avoid devising solutions that address only specific individual aspects of the care process. The best approach is to embed that technology within broader behavioural patterns, in this case those of medical professionals and their patients.

Another point in this context is that those involved need to take receptiveness into account. Even if something works well, people can still find reasons to reject it. One key aspect to reckon with is that the technology could pose a threat to the work of the person concerned. Many hospitals and healthcare professionals are assessed by the number of treatments they administer; in these situations, any technology that renders such treatments redundant is a potential threat. If we want a new technology to function properly, we may need to redesign an entire process in order to change people’s motivations.Footnote 40 In the educational system, too, teachers may see AI as a threat. In this regard a Dutch study has highlighted the importance of fostering acceptance – by encouraging teaching staff to acquire digital skills – and of conducting experiments.Footnote 41

Another important behavioural issue concerns the specific nature of AI. In many contexts it may autonomously take decisions that would normally be a human responsibility. In this respect it is entirely unlike previous technologies. The key issue here, then, is achieving the optimum degree of interaction between man and machine when taking particular decisions.

This can be tackled with a model that distinguishes three forms of human-machine interaction: ‘human in the loop’, ‘human on the loop’ and ‘human out of the loop’. In the first, an AI system may be involved in the process, but ultimate responsibility for any decision rests with a human being, who is a fixed part of the ‘loop’: if no person is involved, no decision can be taken. In the second type, ‘human on the loop’, people play a smaller part. An AI system of this kind can in principle take decisions without any human intervention, but the process is monitored by a human being who is able to intervene and make changes. In the final type, ‘human out of the loop’, the AI system acts completely autonomously; people are no longer involved in the process.
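
To make the distinction concrete, the sketch below expresses the three forms in code. It is a minimal illustration rather than a real system: the mode names follow the model described above, while the decide() pipeline and the hypothetical model and supervisor interfaces are assumptions made purely for this example.

```python
from enum import Enum, auto

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a person must take every decision
    HUMAN_ON_THE_LOOP = auto()      # the system decides; a person monitors
    HUMAN_OUT_OF_THE_LOOP = auto()  # the system acts fully autonomously

def decide(case, model, mode, supervisor=None):
    """Route an AI suggestion through the chosen form of human oversight."""
    suggestion = model(case)  # the AI system's proposed decision

    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # Without a person, no decision can be taken: the human bears
        # final responsibility and must explicitly confirm or reject.
        return supervisor.review_and_decide(case, suggestion)

    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        # The system decides independently, but a monitoring human is
        # informed and can still intervene and change the outcome.
        supervisor.notify(case, suggestion)
        return supervisor.override(case) or suggestion

    # HUMAN_OUT_OF_THE_LOOP: fully automatic, as with a speeding fine
    # issued from camera data without any human operator involved.
    return suggestion
```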

The latter form of interaction is used in many situations involving activities not of vital importance to people, such as recommending certain films or products. In some uncomplicated situations, we rely on algorithms to make the right decisions. When a roadside camera records a speeding violation, for instance, the driver is fined automatically. No human operators are involved.

In situations of great importance to people’s lives, it is essential to include a human in the process. This right is enshrined in European privacy legislation. According to Article 15 of the EU Data Protection Directive:

Member States shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc.Footnote 42

The directive gives no precise definition of decisions that ‘significantly affect’ people, however, so there is ongoing debate concerning their delineation. Using this as a starting point, the EU has since published recommendations concerning the use of AI.Footnote 43

In some domains it may be sufficient to have a human being ‘on the loop’ to check that no mistakes are being made. In others, however, decisions have such a major impact that the authorities consider it essential for them to be monitored by a person ‘in the loop’. This applies to autonomous vehicles, for example (see the detailed explanation in Box 6.7). Life-and-death situations play an even greater role in military applications. Autonomous weapon systems that independently identify and destroy their targets are a case in point. The armies of various countries are already conducting extensive trials with systems of this kind, but their potential deployment has attracted widespread opposition and calls for ‘meaningful human control’.Footnote 44

In other contexts, too, such as combating fraud or allocating benefits, people can be very severely impacted by decisions. This was illustrated by the childcare allowances scandal in the Netherlands (see Box 6.6). In the UK, too, government uses automated systems for a variety of purposes, including the allocation of social security benefits.Footnote 45 Applications like this have direct impacts on people, which is a strong argument for permanent human monitoring. Those involved in integrating AI into the social ecosystem thus face challenges concerning the form that monitoring should take.

Box 6.6: The Dutch Childcare Allowances Scandal

Between 2013 and 2019 the Dutch tax authorities used a self-learning algorithm to identify tax fraudsters. It picked out individuals who, supposedly, were wrongly receiving childcare allowances and demanded repayment. But in many cases these accusations turned out to be totally unfounded. This mistake went unrecognized for years, leaving thousands of parents and families with enormous debts.

In the Netherlands people can apply for government benefits if they need financial support with their fixed costs. For example, working parents can apply for an allowance to meet the costs of childcare. In 2013 however, the authorities discovered that Bulgarian criminals were abusing the system by applying for this allowance in the Netherlands and then returning to Bulgaria. The national tax administration responded by designing an algorithm to detect fraudulent claims. The result was a risk model based on several indicators that could supposedly identify recipients of payments they were not entitled to. The algorithm assigned particularly high risk scores to childcare allowance claims. If an administrative error led to a discrepancy in a claim, for example, the recipient was placed on a blacklist. Their payments were then suspended, and they were required to refund any money they had already received.

In 2018 this approach became a political scandal when a group of journalists published details of the affected parents’ stories. Further investigation revealed that the algorithm had assigned a higher risk score to holders of dual nationality and to low-income households.Footnote 46 The victims were promised €30,000 each in compensation, but many of those payments were delayed. On 15 January 2021 the affair led to the fall of the government when prime minister Mark Rutte and his cabinet submitted their collective resignation.

It seems that the three types of human-machine interaction offer a clear means of selecting the right design for various contexts. At the same time, though, the approach suffers from a number of problems.

Firstly, some people may exhibit behaviour that does not align with the selected model. For example, those who are officially ‘on the loop’ or even ‘in the loop’ may suffer lapses of concentration. Alternatively, they may act recklessly due to their unwarranted trust in the technology in question. ‘Automation bias’ is a psychological mechanism that causes people to blindly follow a computer’s suggestions, even if these are incorrect. The phenomenon of ‘alert fatigue’ has the opposite effect. When systems generate too many reports, people become overloaded with information and take these signals less seriously.Footnote 47

A second problem concerns an insidious process that gradually erodes the significance of the human decision-making role. A prime example would be an algorithm that helps healthcare professionals to reach a diagnosis. Doctors still make the decisions and check the algorithm’s suggestions. Over time however, the staff involved become habituated to this procedure and so their checks may become less rigorous. This is especially true of algorithms with a good track record. Today’s doctors have all the skills needed to reach proper diagnoses without the aid of a computer. But over time successive generations of doctors may be less well trained in that particular skill set. Calculators have had a similar impact on the skills of mathematics students. Long-term familiarity and an increased work rate can also make it more difficult for human decision-makers to question the results produced by an algorithm. The people involved must be increasingly sure of themselves before they cast doubt on a commonly used and efficient process.

This mix of dynamics makes the human decision less meaningful, yet those implicated in these situations still bear responsibility for its outcome. This presents the risk of a problematic intermediate phase, in which the algorithms are not yet good enough to make decisions autonomously but people are no longer able to intervene effectively. Such a situation can lead to mistakes and, as a result, to human suffering.

By extension, this involves a third challenge for the interaction model. After all, human control only makes sense in situations where algorithms are performing tasks that are normally undertaken manually. In many contexts, though, the algorithm’s activities could quite conceivably become much faster and more complex over time.

When this happens, human control often becomes impossible or even hazardous. For example, the law only permits autonomous vehicles to use the roads if a person is behind the wheel to intervene if necessary. Yet thanks to C2C communication (see Box 6.2), cars could drive much closer together in the future. Human reaction times are too slow to be of use in that case, so human control would actually pose a hazard to other road users. Moreover, vehicles could use I2C communication to interact directly with their surroundings, a technology that may ultimately render road signs and even traffic regulations obsolete. But if those were discarded, it would be very difficult for human drivers to navigate the road network. The use of autonomous weapons poses similar difficulties. People are capable of successfully attacking individual enemies, but what happens if the battlefield becomes much more complex? How would they cope with combat involving large formations of drones, for example? Humans would be of no use here, as they cannot see the bigger picture and their reaction times are far too slow.

John Danaher presents a topical example from a very different domain. The products stored in traditional warehouses are organized by category, so anyone familiar with the category index can easily find their way around a facility of this kind. But Amazon warehouses use a dynamic storage algorithm to shelve products in the most efficient manner, based on complex calculations about future demand and involving a logic beyond human comprehension. To the casual observer, everything just appears to be jumbled up; people need algorithms to find their way around facilities like this. This, says Danaher, poses the risk of creating an ‘algocracy’: a system governed by complex algorithms, which is beyond human comprehension (Box 6.7).Footnote 48
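
The logic Danaher describes can be suggested with a toy sketch, shown below. It is a highly simplified, hypothetical illustration of demand-driven slotting – Amazon’s actual algorithms are proprietary and far more complex – and the Shelf class, the predicted_demand function and the demand_cutoff parameter are assumptions made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Shelf:
    shelf_id: str
    seconds_to_pack_station: float  # proxy for retrieval cost
    capacity: int
    contents: list = field(default_factory=list)

def assign_shelf(item_id, shelves, predicted_demand, demand_cutoff=10.0):
    """Place an item so that expected total retrieval cost stays low.

    predicted_demand(item_id) is assumed to return a forecast of
    picks per day; demand_cutoff separates fast from slow movers.
    """
    free = [s for s in shelves if len(s.contents) < s.capacity]
    if predicted_demand(item_id) >= demand_cutoff:
        # Fast movers claim the free shelves closest to the pack station.
        best = min(free, key=lambda s: s.seconds_to_pack_station)
    else:
        # Slow movers take the farthest free slot, keeping the close
        # ones available. No category logic survives, so only the
        # software's own location index can find anything back.
        best = max(free, key=lambda s: s.seconds_to_pack_station)
    best.contents.append(item_id)
    return best.shelf_id
```

Even in this toy version the layout is driven by a demand forecast rather than by categories, which is why a human observer sees only apparent disorder.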

Box 6.7: The Behavioural Context of Autonomous Transport

AI’s behavioural context involves a range of issues that feature prominently in autonomous transport. Autonomous vehicles are still prohibited by law. This has nothing to do with their technical capabilities. It is simply that cars using the public highway must all be under the responsibility and control of a human driver. At the same time people often fail to behave appropriately, with serious consequences.

At a basic level this is already the case with navigation software. Drivers sometimes follow the instructions given by their satnav systems even when common sense and current road signs dictate otherwise. From time to time there are reports of people driving into the sea or along impassable nature trails, and some have even died in incidents known as ‘death by GPS’.Footnote 49 These are classic examples of ‘automation bias’.

So, although autonomous vehicles are very much in the spotlight, full autonomy is not yet a reality. In the meantime, all kinds of decision support software are now available, such as ADAS. As long as users remain responsible for making decisions, accidents will still happen if they fail to act appropriately.Footnote 50 People should therefore make greater allowance for the human factor when estimating the risks involved in automating transport.

Tesla’s ‘autopilot’ function is a very specific case in point. The name suggests that the motorist can just sit back and leave the driving to the car; the owner’s manual points out that this is not the case. Nevertheless, the company refuses to change the name, even though many people are critical of the misleading expectations it creates. Effective communication and instruction are therefore key factors in terms of human behaviour.

The behavioural effect of updates is a related issue. They are designed to improve the vehicles, causing them to respond to a specific situation in a certain way. Yet a later update may cause the same vehicle to respond to exactly the same situation in an entirely different manner. That can be difficult and confusing for the driver. Ergonomic features are important, too, as they can mitigate the risks posed by human behaviour. For instance, they can clearly show drivers which vehicle functions are currently active, and which are not.

Here again, the risk of a problematic intermediate phase may arise, as described above. Vehicles are not yet capable of handling all road traffic-related decisions autonomously. Yet people cannot be expected to keep their attention focused on the road during a long journey when the car is doing the driving. Accordingly, some people contend that this human factor is sufficient reason to ban experiments with semi-autonomous vehicles. They are unwilling to compromise, insisting that cars should either drive themselves or be driven by people.

In many contexts people place an overly simplistic emphasis on human control. The three challenges mentioned above raise doubts about the wisdom of this approach. They show that we need to focus on the complexity of issues associated with human-machine interactions. That complexity includes efforts to identify the strengths and weaknesses of humans and machines, which are very different. Machines are much better at detecting patterns in large quantities of data, for example. Humans on the other hand are generally more competent at using reason to resolve anomalies. Man and machine can interact effectively if their characteristics are co-ordinated properly. They can compensate for each other’s weaknesses and gain the maximum benefit from each other’s strengths. Catholijn Jonker describes this as ‘hybrid intelligence’.Footnote 51

Different AI systems may have different properties, depending on how they are set up. Human editors can use AI redaction tools to automatically obscure certain passages of text. These tools can be set up in various ways. If the main aim is to prevent the disclosure of sensitive information, the algorithm can be set to ‘heavy’. Conversely, if people feel that too little information is being disclosed, a much ‘lighter’ algorithm setting can be used.Footnote 52 The system’s settings should therefore align with its operational context.
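
A minimal sketch of such a tunable setting is given below, assuming a hypothetical model that scores each passage’s sensitivity between 0 and 1; lowering the threshold yields the ‘heavy’ setting, raising it the ‘lighter’ one.

```python
def redact(passages, sensitivity_of, threshold=0.5):
    """Obscure every passage whose sensitivity score exceeds the threshold.

    sensitivity_of is assumed to map a passage to a score in [0, 1].
    A low threshold gives a 'heavy' setting (more text withheld);
    a high threshold gives a 'lighter' one (more text disclosed).
    """
    return ['[REDACTED]' if sensitivity_of(text) > threshold else text
            for text in passages]

# 'Heavy' setting: err on the side of non-disclosure.
# redacted = redact(document_passages, model.score, threshold=0.2)
# 'Lighter' setting: err on the side of disclosure.
# redacted = redact(document_passages, model.score, threshold=0.8)
```

The point is the same as in the text: the threshold is not a technical given but a choice that should be aligned with the system’s operational context.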

Key Points – The Social Ecosystem: Behavioural Context

  • In the behavioural context, we need to take various factors into account when embedding AI systems. These include existing organizational structures, working methods and the motives underlying human behaviour.

  • The ‘human in the loop’/‘human on the loop’/‘human out of the loop’ model is a way to design interactions between man and machine. It can also distinguish between different degrees of human control.

  • However, these highly distinct categories can be undermined by many kinds of behavioural factors. For this reason, we need a detailed examination of the design and use of technology.

3 In Conclusion

In this chapter we have explored the status quo regarding the overarching task of contextualization – integrating AI into the sociotechnical ecosystem. To a large extent this process cannot be centrally controlled. All sorts of organizations will go through it, both internally and externally. They will use it to innovate, to experiment with their processes and to achieve efficiency gains through improved production methods.

Nevertheless, governments can still play a key part. They could start by investing in good digital infrastructure, for example, or in further training. They could also capitalize on their own use of AI to influence contextualization. Public-sector organizations, especially executive agencies, can help others develop good contextualization practices and even set standards. Governments could provide further assistance through their procurement policies. As major players and ‘launching customers’ they can foster emerging markets or nudge existing ones in a certain direction.

In Chap. 8 (Positioning) we review the issues associated with a country’s competitiveness. It is important to remember that governments possess a broad palette of tools. This enables them to prioritize domains for AI applications and to encourage contextualization in those areas. Some of these could be associated with competitiveness and with a country’s economic engine. Others could be of enormous importance to its society. This category includes healthcare and sustainability, domains in which government is specifically responsible for pioneering new developments. Countries can use this approach to focus more intensively on establishing an ‘AI identity’ of their own.