The first overarching societal task we address is demystification. This is all about public perceptions of new technologies. System technologies appeal particularly to the imagination, because their wide range of applications and generic nature confer a certain intangible quality. In Chap. 4 we discussed the risk that this might trigger overblown expectations and inordinate fears, effects that can make it harder for a technology to integrate into society. Demystification helps counterbalance unrealistic perceptions of technologies like AI and – particularly importantly – ensures that people do not lose sight of genuine opportunities and risks. As such, it enhances the quality of the AI debate by effectuating a shift from captivating perceptions to issues that merit attention.

The previous chapter touched briefly on how a new system technology such as electricity can trigger myths. A similar dynamic can be seen with the rise of AI. We shall highlight some prevalent AI myths that reflect overoptimistic, pessimistic or simply flawed ideas about its true nature. We also identify misconceptions and pinpoint genuine issues, thus demystifying some of the unrealistic and oversimplified perceptions about AI. Finally, we examine the details of this overarching task at a societal level. How can we as a society ensure that unrealistic perceptions are not shaping our approach to AI? In other words, ‘What are we talking about here?’

1 Behind the Myths About AI

1.1 Utopia and Dystopia

From the public perspective, the histories of system technologies share a number of patterns. The first of these involves the emergence of utopian ideas on the one hand and doomsday scenarios on the other. The way in which AI is perceived also reflects these two extremes. “We’re at the beginning of a golden age of AI,” says Amazon CEO Jeff Bezos. Elon Musk takes a different view: “With artificial intelligence, we are summoning the demon.”Footnote 1 Their statements illustrate two extreme sentiments associated with the rise of AI. Some hail this technology as the ultimate technological redemption, while others see it as an existential threat to humanity. The robotics pioneer Rodney Brooks says that much of the disquieting imagery and many of the utopian visions are based on misconceptions about the nature of AI:

“… having ideas is easy. Turning them into reality is hard. Turning them into being deployed at scale is even harder.”Footnote 2 According to Brooks, myths about AI often give rise to unrealistic expectations about what it has in store for us, for better or for worse.

Supreme faith in the beneficial effects of AI can take the form of ‘technosolutionism’. This is the term used by Evgeny Morozov to describe the tendency to re-envision complex societal phenomena as issues to which technology is the answer. Solving problems then becomes a matter of simply deploying the right algorithm.Footnote 3 This ‘silicon mentality’, as Morozov has also described it, is particularly evident when it comes to AI. Astro Teller, the head of X (Alphabet’s technology lab), has stated that there is a 90% chance that ‘smart’ machines will be able to solve specific societal problems.Footnote 4 The founder of DeepMind, Demis Hassabis, predicts that superhuman intelligence will solve major problems ranging from climate change to incurable diseases.Footnote 5

The rise of AI is also associated with the other extreme – a deep distrust of everything that involves algorithms and automation. The key concerns here spring from beliefs involving dehumanization, mass unemployment or even existential threats. As we also saw with electric street lighting, AI is being linked to the fear of a ‘Big Brother’ type of society in which digital technology is used to monitor us continuously. AI also features in existing conspiracy theories in the context of 5G, for example, and of related concerns about radiation and privacy.Footnote 6 In the spring of 2020 there was even a rumour that COVID-19 vaccines would manipulate our DNA and connect us to an AI system that continuously receives information about us.Footnote 7

A global survey commissioned by the World Economic Forum shows that four out of ten people are concerned about AI.Footnote 8 Studies of American attitudes to technology reveal that, whilst most respondents support the further development of AI, ultimately they also expect it to have an adverse impact as it becomes more ‘intelligent’.Footnote 9 Dutch people, meanwhile, associate AI primarily with ‘computers’ and ‘robots’. A survey in the Netherlands has found that more than half of respondents have both positive and negative feelings about AI. They see great opportunities in the care sector and in improving safety, but also fear potentially adverse impacts. Less-well-educated Dutch people are quite anxious about job losses and the elimination of the ‘human factor’. Highly educated people are particularly concerned about a lack of control over AI systems and about violations of privacy.Footnote 10

1.2 Public Events

Another historical pattern associated with distorted perceptions of generic technologies like AI is the impact of events. In response to the supposed dangers of past emergent system technologies, live demonstrations were held to show that they were in fact reliable and, indeed, capable of spectacular things. The previous chapter has already described historical examples of public competitions and exhibitions in which applications of new technologies were introduced to the public, such as the demonstration of electricity.

Much the same has happened with AI. Indeed, many of its developmental milestones involved a combination of competitions and exhibitions. One of these was when IBM’s Deep Blue chess computer defeated world champion Garry Kasparov; another was the occasion when IBM’s Watson won the TV quiz show Jeopardy!. Other key moments include AlphaGo’s victory over two go world champions and DeepMind’s Agent57, which can defeat any human player in 57 Atari video games. All of these were challenges organized to demonstrate AI’s capabilities, with their impact enhanced by the fact that they pitted it against the intelligence of human champions. Even when AI systems are defeated by flesh-and-blood opponents, the showdown can still be impressive. This was the case in 2019 when IBM’s Watson took on the world’s best debater; although the computer program lost, its performance can nevertheless be viewed as a great success. The mere fact that computers can challenge humans in an arena as complex as a debating competition was enough to show the public how far AI technology has come. At the same time, though, it sparked a furore about the future of the technology.

From time to time, competitions are also held to pit different AI systems against one another. At one time the US Defense Advanced Research Projects Agency (DARPA) staged the DARPA Grand Challenge, a competition for autonomous vehicles. From 2012 to 2015 it also organized the DARPA Robotics Challenge. The two events produced spectacular images of autonomous vehicle races and of robots performing physical tests. The annual Loebner Prize, instigated in 1990, is awarded to the chatbot that comes closest to passing the Extended Turing Test (in other words, the system that most convincingly passes as human). However, none of the competing systems has ever won a gold or silver medal. The best performance so far has been a bronze medal for the ‘least disappointing’ bot.Footnote 11

AI is also making use of the power of live demonstrations. For example, many conferences nowadays open with a ‘conversation’ between a robot and a human presenter who poses it questions. This creates the impression that the robot has a real personality; if it does make a mistake, that is often dismissed as a human failing rather than a technical defect. At one presentation, the CLOi AI robot manufactured by the electronics company LG embarrassingly failed to answer on three occasions. The presenter attempted to explain this away by saying that “even robots have an occasional ‘off’ day” and “it doesn’t like me and apparently doesn’t want to talk to me”. Apple and Google also used live demonstrations when launching their respective voice assistants. Boston Dynamics publishes impressive video clips to demonstrate its robots’ flexibility to the public; in one of the latest, the entire ‘family’ dances to a particularly fitting song by The Contours: ‘Do You Love Me’, from the album Do You Love Me (Now That I Can Dance)?

Demos like this literally appeal to people’s imagination – rather than being told stories about streets paved with gold, the public actually sees them. At the same time, events of this kind can easily mislead the casual observer as to the technology’s true level of development. As far as we know, the Boston Dynamics video was not edited and so the robots really were making these dance moves – but it was not really dancing, of course, as every movement was meticulously programmed in advance.Footnote 12 In that sense, the suggestion that these robots can equal humans’ ability to dance is misleading. According to Brooks, demonstrations like this give rise to all kinds of misconceptions about AI.Footnote 13 The audience only sees what happens on stage and not the work done by people behind the scenes who enable the computer to perform as it does.

In the introduction to this report, we referred to an article in The Guardian that created a stir in 2020. That was headlined ‘A robot wrote this entire article. Are you scared yet human?’Footnote 14 The entire piece was generated by new language processing software called GPT-3 (Generative Pre-trained Transformer 3), which can produce credible texts with relatively little input – as The Guardian’s article supposedly proved. In copy indistinguishable from written work produced by a human, an attempt was made to convince readers that they need not be afraid of robots and AI. “I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.” A lot of people were greatly impressed, believing that they were witnessing the shape of things to come. Later, however, it turned out that human editors had played a vital part in creating the article. First, GPT-3 was used to generate a total of eight essays. Humans then selected parts of these and used them to compose the final version.Footnote 15 One critic compared this to “selecting phrases from spam messages, grouping them together and claiming that the spammers wrote Hamlet.”Footnote 16

Demonstrations often tend to exaggerate the performance of AI systems, then. Things that appear to happen spontaneously are often preprogrammed or have been prepared by people in some other way. But that human contribution remains hidden from view, literally and figuratively. Moreover, such events usually take place in extremely controlled settings. So what the public sees is usually misleading, and certainly not how the system would function in the uncontrolled and highly variable situations that occur in everyday life. Because what it takes for these systems to perform well on stage is disregarded, people are tempted to believe that AI systems in general have robust and broadly applicable capabilities. In this way, public demos or ‘evidence’ of AI in action can give rise to unrealistic ideas about its abilities today or in the near future.

1.3 The Power of Words

A final pattern in the mythification of system technology involves the use of certain words. We previously cited the example of the term ‘electrocution’, which caused electricity to be linked with mortal danger. Likewise, these days AI-related terms have a strongly associative character so that they immediately evoke a certain image. The simplest example is the use of the term ‘intelligence’, which links AI’s repertoire to our own capabilities. By facilitating misconceptions, that association can make incorrect use more likely. The same applies to the use of ‘human’ verbs such as ‘think’, ‘learn’ (machine learning), ‘reason’ (automated reasoning) and ‘observe’ to describe the performance of AI systems.

The same applies to the use of human names or titles for AI systems, such as the ‘robot judge’, ‘robot police officer’ or ‘robot doctor’. Along similar lines, AI systems are sometimes referred to as ‘digital colleagues’. Not only does this downplay the fact that they do not operate in human ways, it also ignores the fact that working with them presupposes the use of processes and skills different from those involved when working with human colleagues. So humanizing AI in this way distorts perceptions of its true nature.

Other terminology is also problematic. The word ‘autopilot’ suggests a fully automated control system, when in fact it has only a supporting function. So, designating a system as such may evoke incorrect perceptions of what it is actually doing.Footnote 17 The risk here is that the need for accountability is then more likely to be imposed on the system itself than on its users and designers. A good example in this respect is the assumption that followers of certain Twitter accounts will see automatically selected advertisements, whereas in some cases it turns out that very deliberate, targeted human actions are in fact behind their presentation.Footnote 18

Other terms trigger certain associations in far less subtle ways. Two vivid examples are ‘killer robot’ and ‘killer drone’, which explicitly frame the automation of weapon systems as creating killing machines and so very much push the public debate on this topic in a certain direction. Another loaded term often encountered in the context of AI is ‘dataism’. This was popularized by Yuval Noah Harari in his book Homo Deus, when referring to an almost religious belief in the promise of data and algorithms,Footnote 19 and has now become quite fashionable. It is often used in the public discourse to present the use of data and AI as a reprehensible ideology that causes us to lose sight of what it means to be human. The phrase “Computer says no” was made famous by the satirical TV comedy series Little Britain but has since entered common usage as a way to evoke the spectre of a computer-dominated system that lacks flexibility and the human touch.

The widely used term ‘black box’ is also worth mentioning in this respect. Referring to AI as a ‘black box’ suggests that people are completely in the dark about how such systems work. It is therefore quite remarkable that the system used by Dutch local authorities to predict the risk of fraud (‘System Risk Indication’, SyRI) was also initially referred to as the ‘Black Box’. That created the impression of a technology that cannot be understood to any meaningful extent.Footnote 20 In the next section we dissect the perception of AI as something essentially unfathomable.

In another commonly used frame, people speak of a ‘race’ for AI that we must win or that we have already lost, or almost have. Virginia Dignum, the co-founder of ALLAI, argues that both the media and policymakers are obsessed with this alleged competition – and in particular with fears that China might ‘win’, which are forcing other countries to speed up in order to avoid being left behind. According to Dignum, this ‘race’ narrative is both mistaken and risky as it focuses on competition and generates an atmosphere of doom and gloom.Footnote 21 Whatever the case, this type of appeal to people’s emotions (fear of losing) is prompting governments around the world to invest enormous sums in innovation so as not to fall behind or lose the race. We explore the ‘race’ frame in more detail in Chap. 9.

The use of specific terms and frames can thus strongly influence the way people think and speak about AI. Indeed, they are often more effective than rational arguments and hard facts. As a result, misconceptions cannot always be debunked rationally. So the power of words should never be underestimated. In this section, besides the use of loaded terms we have also identified other historical patterns in perceptions of AI. One involves impressing the public by means of competitions or live demonstrations. Another is to make associations with other concerns or with overblown expectations about what a new generic technology like AI has in store for us. This heady cocktail gives rise to distorted and sometimes downright unrealistic ideas about what exactly we mean by the term ‘AI’. To shed some light on this, in the next section we address some of the most common myths surrounding AI and show just how misleading they can be.

2 Contemporary Myths About AI

Like previous system technologies, AI has given rise to a variety of myths. In this section we examine some prime examples – some specific to AI, others more general in nature. We start with those centring on AI itself, its operation and impact. We then turn to another, more generic category: myths about digital technology in a broader sense and how technologies like AI are developed by Silicon Valley. See Fig. 5.1 for a summary.

Fig. 5.1

Perceptions and myths surrounding AI (an illustration divided into three categories: how AI systems work, the consequences of AI and technological development)

2.1 Myths About How AI Operates

2.1.1 Artificial Intelligence Is Neutral

This is a very common perception of AI. The idea is that, unlike humans, AI systems have no weaknesses, fears or prejudices. Sometimes cited in this context is an Israeli study purportedly showing that the verdicts handed down by judges are affected by whether they are hungry or not.Footnote 22 AI is never hungry, never tired and never gets up on the wrong side of the bed.

Because they have no emotions, it has been claimed, autonomous weapon systems never feel hatred and so are not prone to ‘overkill’.Footnote 23 AI is also said to be entirely neutral as it is unburdened by innate prejudice. The American Correctional Offender Management Profiling for Alternative Sanctions system (COMPAS) was designed to assess an offender’s risk of recidivism. A factsheet produced by the company that developed it states that “objective, standardized instruments, rather than subjective judgments alone, are the most effective methods for determining the programming needs that should be targeted for each offender”.Footnote 24

In other words, being free of emotions, prejudices and ideological convictions, the tool would deliver judgments more objective than those made by people. Similarly, AI supposedly operates in an apolitical fashion. Rather than engaging in ideological disputes about what needs to be done, rational systems mathematically optimize the parameters of any given situation. In this way everyone is treated neutrally, without focusing on personal factors.

Despite these claims, however, COMPAS was found to overestimate the risk of recidivism in black people and to underestimate it in white people.Footnote 25 While AI is indeed unencumbered by emotion, prejudice or vested interests, this outcome indicates that it is not therefore automatically neutral. This is because the way in which it operates can itself be biased or ideological. First, there may be hidden biases in the data used – the well-known phenomenon of ‘garbage in, garbage out’. Algorithms need to be trained, and that requires training data. If this is poor in quality (because it is contaminated, incomplete or biased, for example), that will affect the way the algorithm functions. Consider how Google’s search algorithm operates: Gary Marcus and Ernest Davis cite several examples of bias due to its training on existing data gleaned from the internet. For example, a 2013 study showed that googling a ‘typical’ African American given name like Jermaine is far more likely to produce hits containing details of arrests than when a ‘white’ name is used.

In 2015 Google Photos labelled a number of African American people as gorillas. According to another study, a search for ‘professional hairstyle for work’ produces images of white women while ‘unprofessional’ yields images of black women. Searches for the word ‘mother’ overwhelmingly bring up images of white women, while only about 10 per cent of those on the hit list for ‘professor’ are female.Footnote 26 An Amazon HR algorithm was found to systematically exclude women from jobs.Footnote 27 Ruha Benjamin cites a 2016 study in which searches for ‘three black teenagers’ yielded photos of arrests. Searches for ‘three white teenagers’ produced images of happy young people, while those for ‘three Asian teenagers’ returned photographs of scantily clad girls.Footnote 28 Another example involves Amsterdam Schiphol Airport. An algorithm designed to support the logistics of handling aircraft failed to recognize a white Delta Airlines aircraft; having been trained mainly with KLM’s blue fleet, it had learned that aircraft were, by definition, blue. These are all examples of ways in which algorithms reflect any prejudices that may be present in their training data.
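To make the ‘garbage in, garbage out’ mechanism concrete, the sketch below trains a simple classifier on simulated hiring decisions in which one group was historically penalized. All data, group labels and numbers are invented for illustration; the point is only that a model trained on biased labels faithfully reproduces the bias for equally qualified candidates.

```python
# A minimal, hypothetical sketch of 'garbage in, garbage out' (all values invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # true qualification, identically distributed in both groups

# Historical labels: past decision-makers penalized group B regardless of skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill who differ only in group membership:
# the model, having learned the biased history, gives group B a lower score.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```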

A system’s neutrality can also be undermined by design choices and the objectives it is set. For example, the characteristics or perceptions of the developers themselves may influence its design. Various facial recognition software packages and certain automatic hand soap dispensers are known to perform poorly with subjects whose skin is black – a clear sign that that group was not considered during the development and testing phases. Meredith Broussard notes that when the Apple Watch was introduced, it was able to quantify a wide range of health data but not information relating to menstrual cycles. The developers had failed to bear them in mind, even though they are obviously very important for women.Footnote 29

Even if the data is entirely free of bias, an algorithm’s chosen goal can still lead to people being disadvantaged. For example, hospital algorithms might be optimized to perform as many treatments as possible, to save as much money as possible or to design the most efficient work schedules for medical staff. Given identical data sets, very different outcomes can arise depending on what goals are selected. As Cathy O’Neil points out, many algorithms are used to generate cost savings rather than to improve the field they operate in.Footnote 30 In short, AI’s purported objectivity can easily conceal a specific underlying agenda.

Problems of this kind are not necessarily the result of conscious actions, though. Many human activities serve a range of goals and interests simultaneously, some of which are not always explicit and clear. Optimizing for one of these can compromise others, particularly if they are more opaque or abstract. Consider consultations by a GP, for instance, the purpose of which is to diagnose patients correctly. Various online platforms are designed to assist with this task, freeing the doctor up to focus on more complex clinical pictures. However, some people visit their GP mainly for reassurance or simply for human contact. The platforms tend to ignore these unspoken goals.

Then there is navigation, which seems a pretty straightforward matter. Algorithms can present either the fastest or shortest route from A to B. However, these are not the only potential goals of a journey. Others include visits to petrol stations, finding a spot with a nicer view, looking for a good place to stop and eat along the way or avoiding winding roads. Navigation tools can take many of these into account, but probably not every possible factor a person might take into account when choosing a route.Footnote 31

Any prejudices in the training data and the type of choices made during the design process will mean that the resultant AI system is not necessarily neutral – or perhaps even necessarily not neutral. In other words, AI in itself does not automatically ‘depoliticise’ processes. We have already seen that the goals set can serve particular purposes and agendas, but even where there is general agreement on them that does not mean that an algorithm will be able to optimize its functioning in a neutral manner. Algorithms can distribute resources equitably, but in very different ways.

To start with, there are many ways of defining ‘equitable’. Take gender as a variable. If this is considered when someone applies for a job, that is clearly a case of discrimination. Yet when pregnancy leads to gaps in a woman’s CV, gender is quite likely to be considered to avoid giving men an unfair advantage over her. In other cases, the need to support disadvantaged groups requires that allowances be made for certain variables for reasons of equitability. One study has shown that it is mathematically impossible to satisfy more than one definition of equitability at once.Footnote 32 So mathematically based algorithms are no substitute for political discussions about what is equitable.
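The impossibility result cited above can be illustrated with a toy calculation. In the sketch below (base rates, noise levels and thresholds are all invented assumptions), the same risk-scoring rule is applied to two groups with different underlying rates of reoffending: one fairness metric comes out roughly equal while another diverges, and shifting thresholds to equalize the second would unbalance the first.

```python
# A hypothetical numeric sketch of conflicting definitions of equitability.
import numpy as np

rng = np.random.default_rng(1)

def metrics(base_rate, n=200_000, threshold=0.5):
    reoffend = rng.random(n) < base_rate
    score = reoffend + rng.normal(0, 0.6, n)   # the same noisy risk score for everyone
    flagged = score > threshold
    fpr = flagged[~reoffend].mean()            # innocent people wrongly flagged
    ppv = reoffend[flagged].mean()             # how often a flag is correct
    return fpr, ppv

for name, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    fpr, ppv = metrics(base_rate)
    print(f"{name}: false-positive rate = {fpr:.2f}, precision of a flag = {ppv:.2f}")

# Same rule, same threshold: the false-positive rates are roughly equal, but a flag is
# wrong far more often for group A. Moving group A's threshold to equalize precision
# would in turn push the false-positive rates apart; both cannot hold at once.
```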

Another reason for questioning the neutrality of AI concerns the use of all kinds of so-called ‘proxies’. In many cases, what we are trying to find out is either difficult to calculate or unclear. To overcome this difficulty, other variables are used as indicators of the parameter we actually wish to measure. As the architect Laura Kurgan succinctly put it, “We measure the things that are easy [and] cheap to measure.”Footnote 33 The online world is full of proxies for human characteristics. The number of friends a person has on Facebook is a measure of their interpersonal relationships, the number of ‘likes’ they attract a measure of their popularity, and their payment history is a measure of their creditworthiness. Similarly, an app designed by a Stanford University PhD student was claimed to be able to assess whether someone takes ‘a good selfie’. This assessment was supposedly based on objective standards, but in fact the algorithm was trained on photographs and the number of likes they garnered on social media. So, what was actually being measured was popularity. As a result, selfies by young white women consistently rated highly and those by older black men far lower, regardless of the actual quality of the images.Footnote 34

This limitation can also be seen more broadly across society. Cities, for example, are sometimes ranked according to vague ‘quality of life’ indices, the prevalence of ‘supercreative professions’ and the number of patents they generate, which supposedly serve as indicators of ‘innovative power’ – an ethereal quality impossible to measure directly.Footnote 35 We need to realise, therefore, that often we do not engage directly with the phenomenon we are actually interested in. Instead, we use proxies, which can give rise to distorted and non-objective images.

When software developers use proxies that have not been consciously selected, moreover, that can lead to biases in their algorithms. This is a common problem with AI systems. Their creators can expressly remove certain variables from the dataset, such as gender or ethnicity, but even without the relevant input self-learning algorithms are still capable of developing proxies for those variables and so disadvantage certain groups anyway. For example, studies have shown that algorithms can identify the gender of job applicants based solely on their use of words. Likewise, postal codes can serve as a proxy for ethnicity. Consequently, a great deal of research effort is now focusing on ways to address this problem by technical means.Footnote 36
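The mechanism can be shown in isolation with a deliberately simplified sketch: the protected attribute is withheld from the model, yet a single correlated feature (a stand-in for postal code, with an invented degree of residential segregation) is enough to reconstruct it almost perfectly.

```python
# A hypothetical sketch of proxy leakage (all data simulated).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
ethnicity = rng.integers(0, 2, n)                    # protected attribute, never given to the model
postcode = (ethnicity + (rng.random(n) < 0.1)) % 2   # assumed 90% residential segregation

proxy_model = LogisticRegression().fit(postcode.reshape(-1, 1), ethnicity)
print("protected attribute recovered from the proxy alone:",
      proxy_model.score(postcode.reshape(-1, 1), ethnicity))   # roughly 0.9 accuracy
```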

Another related objection to claims of AI neutrality lies in the fact that many of the words for things of great importance to us have no objective meaning whatsoever. They are dependent on our choices and actually consist, by definition, of subjective ‘proxies’. One obvious example is ‘beauty’. The company Beauty AI developed an app that enabled people to submit photographs of themselves, which would then be judged by such purportedly objective standards as symmetry, wrinkles and age. When they examined the outcomes of this beauty contest, the designers found that their algorithm judged dark-skinned people to be less attractive than others. Because ‘beauty’ as a concept is highly subjective, any supposedly objective parameters to measure it in fact reflect the subjective preferences of their designers or of the population group or social class to which they belong.Footnote 37

Another such issue concerns the word ‘health’, which is the focus of many AI systems. While health of course has objective elements, it has other aspects on which people hold differing views. The same is true of terms that do not seem particularly subjective, like ‘poverty’ and ‘deprived neighbourhood’, but are actually the product of political discourses and frames. Moreover, an algorithm may produce a correct prediction for something it is searching for but in fact be referring to an entirely different pattern. If that underlying pattern remains invisible, the prediction could be wrongfully portrayed as neutral. For example, an algorithm might correctly indicate that certain people will have dealings with the police. However, it is quite possible that this finding reflects people who are excluded by certain institutions and come to the attention of the police as a result of that. Consequently, the underlying injustice of this situation is overlooked.

Take the COMPAS system mentioned earlier. It produced a score for the risk of recidivism based on a 137-point questionnaire for detainees. This focused on issues such as poor education, debt, criminal associates and an unfavourable home situation. In theory the algorithm was able to show that these factors are predictors for repeat criminality. But rather than measuring a person’s predisposition to offend again, as was claimed, these factors are actually indicators of poverty. Their use in this way categorized less well-off people as potential criminals.Footnote 38 As well as measuring criminality, then, variables of this kind also contribute towards the way it is portrayed and produced, and so are not neutral. Unlike the scientific method, a great deal of AI research itself influences the subsequent outcomes. A credit rating, for instance, not only assesses a person’s risk of bankruptcy but also actually increases it.Footnote 39

A final fundamental problem with AI’s purported objectivity is that no matter how good the data may be, it only ever reflects a given aspect of reality. In this context, Greenfield notes that the word ‘data’ itself – Latin for ‘that which is given’ – is misleading and it would be more appropriate to use the term ‘capta’, meaning ‘that which is taken’.Footnote 40 Data enables you to gain a grasp of something, and so to some extent always involves an element of power: it introduces structure into what is measured and what is not, and it both categorizes and is amenable to classification – a binary classification by gender, for instance, even though that may not always be adequate. This power dimension is particularly evident in something like the Quantified Self movement, which seeks to collect a wide range of personal data through wearables but also presents the human body in a certain light and suggests ways of gaining control over it by means of a fitness regime. It is important to remain aware of this power aspect, especially in the face of claims that an algorithm is completely neutral.Footnote 41

Our conclusion from the above objections to the idea that AI is neutral is not that its use should be discouraged, nor that it can never be more neutral than humans. It certainly can be. What the objections do show is the sheer complexity involved in AI applications and what we need to focus on when using this technology for specific purposes: they highlight the questions, technical challenges and discussions that are part and parcel of the responsible use of AI. If we fail to address these issues and instead rely blindly on the supposedly neutral judgment of algorithms, a whole range of abuses can arise and prejudices can be embedded within systems even as it is suggested that they are totally free of bias. Drawing on the three myths discussed here, at the end of this section we present a list of questions relevant to AI systems.

Key Points – The ‘AI Is Neutral’ Perception

  • The fact that AI lacks feelings and other human qualities has led to the suggestion that this is a neutral technology. However, its workings can indeed involve prejudices and abuses.

  • Various factors raise questions concerning the operational neutrality of algorithms: the quality of the training data, the characteristics of their developers, the uses to which they are put, conflicting definitions of equitability, the use (even unintentionally) of proxies and the subjective meaning of words, as well as the filtering tendencies of data and the power that entails.

  • These are not arguments against the use of AI, but they do indicate that we need to ask probing questions if we are to use this technology responsibly.

2.1.2 Artificial Intelligence Is More Rational Than the Human Mind

The perception that AI is neutral is closely linked to the notion that it is rational, or at least considerably more rational than humans. Neutrality suggests that outcomes are more equitable. Rationality suggests that AI draws on superior data and computing power, which enable it to identify patterns and relationships too complex for human brains.

That supposed rationality holds out great promise for many AI applications. Take healthcare, for example. When making a diagnosis, doctors compare the data at hand with the body of knowledge and experience they have acquired during their careers. Human capacity in this process is necessarily limited. Moreover, the sheer quantity of knowledge continues to grow at a rapid pace. According to some estimates, medical specialists need to spend most of their time reading research papers if they are to have any hope of keeping abreast of the latest developments in their field. This is an impossible task. Thus, rare genetic disorders found mainly in immigrant populations, for example, are difficult to diagnose. AI, on the other hand, can scan immense databases and can be constantly updated with the latest medical knowledge. That was also the promise when IBM’s Watson was first used in healthcare settings.

As we will see in other chapters, this is not as simple as it might seem. Nevertheless, the underlying logic seems clear: AI systems can process much more data than humans, and they have immense computing power at their disposal. Accordingly, the decisions they make can be seen as being more rational and more accurate than those made by humans. Even prominent AI researchers may harbour a (naïve) belief in the ability of this technology to introduce greater rationality into police work, for example, or into the operation of financial markets.Footnote 42

The caveats here fall into four categories. First, many AI systems measure correlation, which is not the same thing as causality. In practice these two relationships are often confused, even though the philosophy of science demonstrated their distinctness several centuries ago. The fact that two phenomena regularly occur together does not mean that one is the cause of the other. Much more complex causality may be involved, or the concurrence may simply be a matter of chance. An example of the former was a chess program that identified a pattern in which players who gave away their queen often went on to win the game. Accordingly, the program identified that as a good move. However, a queen sacrifice is very costly and is only used when there is the prospect of checkmate, a prize that more than makes up for the loss of that valuable piece.Footnote 43
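A toy simulation makes the distinction concrete. In the sketch below (all probabilities are invented), a hidden confounder – being in a winning position – drives both the queen sacrifice and the eventual victory, so the sacrifice is strongly correlated with winning even though, in this model, the move itself has no causal effect.

```python
# A hypothetical sketch of correlation without causation, loosely based on the
# queen-sacrifice example. All probabilities are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
winning_position = rng.random(n) < 0.3                 # hidden confounder
sacrifice = winning_position & (rng.random(n) < 0.4)   # players only sacrifice when already winning
win = np.where(winning_position,
               rng.random(n) < 0.9,                    # strong positions usually win...
               rng.random(n) < 0.2)                    # ...weak positions usually lose

print("P(win | sacrifice)    =", round(win[sacrifice].mean(), 2))   # high
print("P(win | no sacrifice) =", round(win[~sacrifice].mean(), 2))  # much lower
# The pattern 'sacrifice, then win' is real, but forcing a sacrifice in a random
# position would not raise the win rate: the confounder, not the move, does the work.
```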

Secondly, the supposed rationality of an AI system is often associated with the promotion of services and products that wrongfully depict human rationality in a bad light. Broussard provides an example from the world of autonomous vehicles, derived from the very commonly cited fact that every year 1.2 million people worldwide die in road accidents. Ninety-five per cent of these cases are due to human error. That sounds like a very good reason for automating mobility. But, as Broussard rightly points out, that 95% figure is a statement of the obvious, as almost all accidents are the result of human error. This is because every single car on the road is driven by a human. It would therefore be very odd if the data suggested otherwise.Footnote 44 Another example of an adverse comparison of human capabilities with AI concerns the terminology associated with data applications like IBM’s Watson. Information that has not yet been ‘digitally captured’ (made available in digital form) is often referred to as ‘dark data’ – a term that evokes a lack of control, disorder and subversion. By depicting existing practices as ‘dark’, digital solutions are thus portrayed as sources of transparency and rationality. They are claimed to help us by preventing the wastage of various types of data.Footnote 45 Such frames are not limited to AI alone; they are also associated with more wide-ranging ideological positions. Broussard refers to this as ‘technochauvinism’, while Zuboff calls it the rhetoric of ‘surveillance capitalism’. We return to this issue later in this chapter, when we discuss broader perceptions of technology. But at this point it is important to note that AI’s alleged superior rationality could be just another unrealistic idea about the world.

A third caveat concerning the notion that AI is rational involves a dynamic we have encountered in previous system technologies, namely the ability of words to deceive. As noted in Chap. 2, our understanding of AI tends to be couched in human terms. In other words, we try to anthropomorphize it. Remember Moravec’s paradox. People see the game of chess as something that requires a great deal of intelligence. So if a machine can play chess, we tend to see that achievement as an indication of more powerful intellectual abilities, even though there is no justification for doing so. Saying that machines can outperform us at chess is much the same as stating that horses can run faster than we can. But that does not mean that either machines or horses yet surpass us in other domains. It is important to acknowledge, therefore, how the feats achieved by machines differ from intelligent behaviour by humans.

Our fourth caveat merits special attention, as it concerns an increasingly common phenomenon with potentially harmful effects: views of AI based on pseudoscientific theories and applications. One of the most striking examples of this is the field of emotion detection. This is an aspect of facial recognition, in which it is claimed that people’s underlying emotions can be distilled from their facial expressions. The company Kairos, for example, claims to be able to identify anger, fear and sadness from images in video recordings. In 2019 Amazon announced that its Rekognition system was able to identify eight different emotions from facial expressions. One area in which this technique has found a market is recruitment; HireVue is one of several firms offering it for use in job interviews. In China emotion detection is deployed to check that students are paying attention in classes. The American company BrainCo is working on a similar application. Programs like Cogito and Empath use voice analysis to identify the emotions of people who phone call centres. Security agencies in the US and the UK believe that it can help them discern whether people are lying or hiding something. So, this particular application of AI is on the rise. Projections indicate that its value will grow from $12 billion in 2018 to $90 billion in 2024.Footnote 46

The strange thing is that, despite it having become a growth industry, there is no scientific basis for emotion detection. Its origins can be traced back to the work of the psychologist Paul Ekman in the 1960s. He developed a method to distinguish between 27 ‘action units’ in faces and concluded that there are six basic emotions. The entire field is based on his work. As yet, however, there is no proof of its veracity.Footnote 47 Indeed, there is reason to believe that the ways in which people experience and express their emotions vary between cultures and individuals, and even in a single individual over time. It is worrying that even though the whole notion is questionable, it is nevertheless being actively employed. Children are being punished for not paying attention,Footnote 48 job applicants are being rejected and others are suspected of lying.

Emotion recognition is just one example of the wider phenomenon of algorithm-based pseudoscience. Another is the online personality tests used to determine whether job applicants are suited for a specific job. In this area, too, there is no evidence that people can be clearly classified into personality types with predictive power in respect of their work skills.Footnote 49

Also falling into this category are various fitness trackers and wearables. There is considerable doubt as to whether movement, calories burned or the duration of someone’s sleep can be accurately measured. Yet many people see these applications as a ‘scientific’ way of tracking their health.

Similarly dubious is the use of facial recognition software to identify a person’s sexual orientation. Some researchers have claimed to be able to do this with great accuracy.Footnote 50 However, this can be very risky as homosexuality is a punishable offence in many countries. Even if the results generated by this software were accurate – which is very uncertain – it would pose a grave danger to many people if it were to fall into the hands of authoritarian regimes.

How can we account for the fact that, despite the enormous amounts of data and computing power involved, AI can still be used for purposes based on pseudoscientific theories like this? One reason is that we barely understand the theme in question, the complex nature of which makes it difficult to test or contradict. For example, how do I prove that I do not have an impatient personality? Or that I did not get enough sleep? That I was indeed paying attention in class? Matters like sexual orientation are very complex and simply cannot be captured completely in a binary distinction between heterosexual and homosexual, as demonstrated by the enormous diversity within the LGBTQIA (lesbian, gay, bisexual, transgender, queer, intersex, asexual) community. Analyses of this kind are thus unscientific simplifications.Footnote 51

Another aspect of pseudoscientific theories is the lack of feedback to determine whether a prediction was correct. We can never know for sure whether a job candidate who was rejected based on a personality test might have been suitable for the position after all. Someone whose asylum application was turned down because they lied often has no opportunity to prove their innocence. Technological applications of this kind do not just investigate a certain area, then, they also generate their own reality, often without being tested.

Key Points – The ‘AI Is More Rational Than the Human Mind’ Perception

  • More data, greater computing power and the ability to identify complex relationships suggest that AI is more rational than the human mind.

  • In reality, however, correlation is often confused with causality.

  • Some of the ways in which AI’s abilities are portrayed are designed to serve commercial purposes.

  • Anthropomorphizing AI creates the impression that it has greater intellectual capacity than is in fact the case.

  • Emotion detection, online personality testing, fitness trackers, sexual orientation analyses and certain approaches to poverty are in fact based on pseudoscientific theories and applications, and thus pose major societal risks.

2.1.3 Artificial Intelligence Is a Black Box

One commonly heard view of AI is that it is a ‘black box’. This term, popularized by early cybernetics experts, refers to systems we cannot properly fathom and understand. How exactly black boxes translate input into output remains a mystery, since we have no grasp of their inner workings.Footnote 52 As a result, AI is seen as being opaque, undefinable and almost impossible to regulate. This is particularly problematic in domains where transparency is important, such as a court’s reasons for imposing a particular sentence on a defendant. The black-box problem also features in other cases where legitimacy, legal certainty and legal equality are crucial – a concern reflected in demands from the Dutch civil courts and the Council of State for greater transparency in the use of algorithms.Footnote 53 While it is often argued that control and transparency are unnecessary in less ‘vital’ situations, neglecting them can still prove problematic in the long run.

The notion that AI is a black box has even prompted Frank Pasquale to express concerns about the rise of a ‘black-box society’, where unfathomable systems make a whole range of decisions in areas such as reputation management, online searches and the financial sector. Pasquale’s use of this term has another dimension, too; as well as the incomprehensibility of the systems involved, he is worried about the ‘black box’ as a universal recording device, analogous with its namesake found in aircraft.Footnote 54

Is the idea that AI is a black box just another myth? Not necessarily. But it is important to be more precise about what we actually mean here. The term ‘black box’ tends to be used in very different ways, with some of these variants presenting greater obstacles to be overcome than others. In order to formulate appropriate responses, therefore, we first need to distinguish clearly between the various definitions.

First, the concept of a black box may be used to indicate something so complex that certain people are unable to understand it. But that does not exclude the possibility that others do. In this sense, many aspects of modern society are black boxes for most people. When they step into a lift, they rely on it to operate properly without knowing exactly how it works. The same applies to other technologies and to many legal, political and administrative issues as well. However, this does not mean that these things are entirely beyond human comprehension. Some groups of people are skilled in the relevant areas and bear responsibility for them.

In the world of AI, the decision trees used by expert systems are analogous to this type of black box. People who have not studied that technique find it difficult to understand, but it can be readily explained by those who have. This form of black box poses few problems: we need only to ensure that there are enough people who understand and can explain the system, just as there must always be enough mechanics available to repair faulty lifts.
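The contrast can be made tangible with a small example. The sketch below trains a compact decision tree on a standard public dataset and prints its complete set of rules; to a lay reader the output may look cryptic, but someone trained in the technique can walk through every step. (A learned tree is used here purely as an illustration; classic expert systems used hand-built decision rules.)

```python
# A minimal sketch: a decision tree's full reasoning can be printed as explicit rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Every decision the model can ever make, written out as human-readable if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```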

A second type concerns situations in which we do not have access to the data and analyses used to generate certain outcomes. This could be due to a variety of factors. One possibility is that the data in question simply has not been maintained or stored. Another is that we lack the rights needed to view that information, as when we use the services of a company that considers its data and the workings of its algorithm to be trade secrets. Likewise, a government agency might not wish to make an algorithm public as this would undermine its purpose (combating fraud, for instance).

This variant is considered a black box since a particular interested party is given no opportunity to understand the system. In many cases this is for commercial or legal reasons – due to a confidentiality clause in a contract, say, or because of barriers imposed by intellectual property rights. Here too, the obstacles are not insurmountable.

Looking at the US, for example, Pasquale argues that we should critically review all the various legislation that has made it easier to classify things as trade secrets, especially since their effects now permeate society.Footnote 55 The American AI Now Institute, too, urges that we not accept that the workings of systems key to the functioning of society constitute a proprietary secret.Footnote 56 As well as rules relating to confidentiality, the institute is here also referring to contractual provisions that can be refused. A trickier permutation of this variant is when the data on which an outcome is based is derived from a range of very different sources.Footnote 57 One example is algorithms based on other algorithms, whose origin cannot be traced. This problem arises in chain decisions. Studies carried out in the Netherlands show how many government decisions are made by linking various systems.Footnote 58 Although the resulting outcome is not unfathomable in theory, in practice it is virtually impossible to trace how the final decision came about.

A third use of the term ‘black box’ is more technical in nature and is closely related to the recent rise of deep learning in AI. These are systems so advanced in terms of their complexity that the outcomes would be too difficult for people to understand. As an example, consider the process whereby a particular article is placed on someone’s Facebook timeline. This is an immensely complex, real-time operation involving millions of users at once, in which each person’s data interacts with that of other people. This issue is a cause of concern because, for example, it raises worries that elections might be influenced. The danger is that, given the level of complexity involved, it may no longer be possible to find out why a given message did or did not appear on someone’s timeline.Footnote 59 It is important to note that this does not necessarily mean that the process is fundamentally incomprehensible, but simply that it makes the task of finding out how a system arrives at a particular decision a very complex one for potential investigators.

Sometimes, though, a system’s processes are indeed too complex for a human to check. This is because, on occasions, the logic used by AI systems differs from that in our brains. Take pixel-level image recognition, for instance. We can certainly understand how just a part of a photograph is enough to recognize a face. However, deep learning identifies patterns at various deeper layers involving input at the level of individual pixels. Humans are unable to follow logical reasoning at that level. At Facebook, two computer programs are said to have developed a ‘language’ that enables them to communicate with one another in a way people are unable to understand. In all such cases, though, the question is whether the issue really is fundamentally incomprehensible or whether, given enough time, we would be able to understand the process concerned.

We do not intend here to explore the details of specific remedies for the various types of black box. Our aim is merely to show that this term covers a range of very different phenomena, which leads to confusion. Some of those phenomena present greater obstacles than others. Moreover, it is not just the nature of algorithms that can make black boxes unfathomable; property rights and complex social systems play their part as well. As for how to tackle this issue, the answer will be different for each of the four types we have described. But it is not impossible, so whenever the term ‘black box’ is used to indicate that something is beyond our understanding and that, as a result, we should not use it or cannot control it effectively, it is time to pause and dispel the myth.

Key Points – The ‘AI Is a Black Box’ Perception

  • The image of AI as a ‘black box’ can give rise to the notion that control or transparency is impossible.

  • However, the term ‘black box’ is used in very different ways. It may indicate complexity (which is not beyond the understanding of experts), a lack of access to a system’s inner workings (due to legal or other restrictions), the performance of huge numbers of calculations or something that is fundamentally incomprehensible.

  • The term ‘black box’ can refer to many different things, so it is important to have a clear understanding of what we mean by it in any given situation.

Many misunderstandings about how AI operates can be prevented by asking critical questions during the various steps involved in its application. The box on the next page provides some suggestions.

Questions to Ask About AI in Practice

  • Goal & planning

    • What selected slice of reality is being produced here?

    • Has current practice been properly analysed?

    • Which goal does the system optimise?

    • Does the application domain serve multiple purposes?

    • How has the system been influenced by its creators’ world view?

    • Can anyone explain how the algorithm works?

    • Are the databases and models used to train the algorithm accessible?

    • Are there any legal barriers to examination of the way an algorithm operates?

  • Data collection & training

    • How good is the training data?

    • Is the phenomenon measured directly or are proxies used?

    • Is the subject of the measurement really an objective phenomenon?

    • Is the model underpinned by any pseudoscientific theories?

  • System design

    • What definition of equitability is used?

    • Are any patterns involved that are incomprehensible to human brains?

    • Does the system use multiple data sources that might make it more difficult to understand?

  • Output & effects

    • Is a distinction drawn between correlation and causation?

    • Do any words used suggest a false analogy with human intelligence?

    • Does the algorithm influence the factors it measures?

2.2 Myths About the Impact of AI

2.2.1 Artificial Intelligence Will Soon Equal Humans

We have already discussed three myths about how AI operates – that the technology is neutral, rational and a black box. Next, we examine various myths concerning its expected implications in the near future. The story about Sophia the robot (see Box 5.1) is typical of these in that it implies that AI will soon rival humans and then far surpass us.Footnote 60 As we saw in Part I, people have been speculating about this type of artificial general intelligence (AGI) ever since the field first emerged.

Box 5.1: Robot Citizens

In 2016 Sophia the robot, developed by Hanson Robotics, was exhibited at the famous South by Southwest technology festival. A year later she appeared at the Future Investment Summit in Riyadh, where she was granted Saudi Arabian citizenship. She replied in person, saying, “I’m the first robot to be granted citizenship, it’s history in the making”. The move immediately triggered wide-ranging discussions. Did it mean that Sophia had the right to marry or to vote? And would switching her off now infringe her rights as a citizen?

The writer Vernor Vinge coined the term ‘singularity’ for the moment when smart machines would start relating to us the way we relate to animals, a moment he believed would come. Similarly, the mathematician I. J. Good described a future ‘intelligence explosion’.Footnote 61 Potential scenarios of this kind have come to the fore again in recent years. The futurist Ray Kurzweil, who works for Google, expects AGI to arrive in 2029 and the singularity to occur around 2045. DeepMind and various other companies (such as Vicarious, Kindred and Numenta) have issued mission statements expressly declaring that their goal is to create AGI.Footnote 62

The expectation that AI will soon be able to equal human capabilities has been fuelled by recent advances and by suggestions that various current breakthroughs are paving the way for AGI. In the field of autonomous mobility, Otto (a division of Uber) has succeeded in developing a vehicle able to drive itself from the east coast to the west coast of the US. Also referring to autonomous vehicles, President Obama noted in a 2016 interview that “the technology is essentially here”.Footnote 63

As we saw at the beginning of this chapter, the idea that fundamental breakthroughs are now taking place is also being stoked by publicity-generating competitions. The rhetoric used on those occasions tends to fan the flames of unjustified extrapolations. Every event is portrayed as yet another step towards the day when computers finally acquire the full range of human intellectual skills. Melanie Mitchell refers to this as one of the pitfalls we tend to fall into when thinking about AI: that ‘narrow intelligence’ and ‘general intelligence’ are two points on the same continuum.Footnote 64 While IBM’s Watson did indeed win Jeopardy!, that does not make the program a good doctor. In 2017 the MD Anderson Cancer Center at the University of Texas terminated its partnership with the Watson project on the grounds that some of the system’s recommendations were “unsafe and incorrect”.Footnote 65

The history of system technologies teaches us to be cautious concerning the expectations engendered by competitions and demonstrations. They attract attention and appeal to people’s imagination, but their primary purpose is to promote the technology – and so, in many cases, the controlled conditions under which they take place are glossed over. For example, Otto’s impressive road trip took place in a heavily managed environment. Since that first demonstration drive in 2016, several fatal accidents involving autonomous vehicles have occurred under more mundane and considerably less controlled conditions. As time goes on, an increasing number of major obstacles to autonomous vehicles are emerging. People can easily rotate and displace objects in their minds, for example, but these are difficult tasks for algorithms to emulate. So, a failure to recognize an orange traffic cone that has toppled over could lead to hazardous situations. In the next chapter we explore the current situation with regard to autonomous vehicles in greater depth.

Experts have been claiming since 2012 that autonomous vehicles will be here ‘within a few years’, but that timeline is constantly being pushed further and further into the future. This helps put expectations around AI into perspective. Marcus and Davis point out that we were hoping to get Rosie, the robot servant from the cartoon series The Jetsons, but instead we got Roomba, the autonomous vacuum cleaner.Footnote 66 Even the technology entrepreneur Peter Thiel has remarked, “We wanted flying cars. Instead we got 140 characters”, referring to the maximum length of a tweet.

So-called ‘hackathons’ are a very popular type of competition designed to drive innovation. From their beginnings in Silicon Valley, they have now spread all over the world. However, those familiar with the field say that the flashy publicity associated with these events should be taken with a grain of salt. Any developments to come out of them are too short-lived to deliver genuine progress towards viable products. For this reason, the products of hackathons are often jokingly referred to as ‘vapourware’ – great promises of innovations that will never appear.Footnote 67

While many recent breakthroughs are more relevant than this, they still need to be placed in the right context. With regard to Jeopardy!, for example, nearly 95% of the answers are the titles of Wikipedia pages.Footnote 68 Watson’s win demonstrated its ability to navigate through that material, not a mastery of the complexities of human language. According to the philosopher Daniel Dennett, moreover, the rules of the game were tightened up somewhat to enable Watson to take part.Footnote 69 As we have already noted, the defeat of a chess grandmaster was the result of a linear progression that can be traced back to the 1960s.Footnote 70 The game of Go requires enormous computing power, so AlphaGo’s win over Lee Sedol was impressive. At the same time, the algorithm’s achievement required the use of a combination of methods plus the input of knowledge gleaned from a large number of human experts.Footnote 71 Although far more complex, the basic challenge in Go is comparable with the game of noughts and crosses (tic-tac-toe) in that it involves filling a two-dimensional grid and the optimum outcome can be expressed as a function.Footnote 72 The victorious AlphaGo program has very few applications outside the context of these games.
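To make the comparison concrete, the following minimal Python sketch (our own illustration, not taken from the literature cited here) computes the optimum outcome of noughts and crosses as a function of the board state. Go poses the same kind of problem in principle, but its grid is so much larger that this brute-force approach is hopeless and methods like AlphaGo’s are required.

# Illustrative sketch: the optimum outcome of noughts and crosses expressed
# as a function of the board state (a 9-character string, '.' = empty).
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line has been completed, otherwise None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Optimal value for the player to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    return max(-value(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == '.')

# From the empty grid, perfect play by both sides ends in a draw.
print(value('.' * 9, 'X'))  # -> 0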

If we are to demystify AI, then we must tackle unrealistic expectations. Although important steps are being taken, the technology is still not close to equalling humans, to achieving AGI or to overshadowing us. On the other hand, some people tend to overly downplay the chances of achieving superhuman intelligence.

According to Andrew Ng, such concerns are “like worrying about overpopulation on Mars”.Footnote 73 Stuart Russell questions that argument, however, and rightly so. While we are not yet in the process of colonizing Mars, substantial investments are already being made in the development of AGI.Footnote 74 After all, this is the goal of the AI field. Russell feels that it is odd for people who are busily developing a train that is destined to plunge off a cliff to insist that there is no need to worry because we will have run out of fuel long before we reach the cliff-edge.

We therefore need to take the goal of equalling human intellectual abilities very seriously indeed. At the same time, we must put any announcements of breakthroughs into context. Given the current state of progress, after all, that goal is still far beyond our reach. Russell presents a very useful classification system for a range of variables we can use to assess AI applications. The nature of the environment may be entirely clear (like a chessboard) or much less so (like road traffic); actions can be discrete or continuous; other actors may or may not be involved; the outcomes of actions may or may not be predictable; the environment may or may not change dynamically; and the horizon against which the achievement of goals is measured can be near or distant.Footnote 75 These variables give rise to a huge set of assorted issues. While great progress is being made in tasks that are completely manageable, discrete and predictable, for example, solving the others remains a very distant prospect.
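By way of illustration, the sketch below records these variables as a simple data structure and applies them to two of the examples mentioned above. The field names and the example values are our own rough approximations, not Russell’s notation.

# Russell-style assessment variables as a data structure (field names and
# example values are rough illustrations of our own).
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    fully_observable: bool      # is the environment entirely clear, like a chessboard?
    discrete_actions: bool      # discrete moves rather than continuous control?
    no_other_actors: bool       # no other agents involved?
    predictable_outcomes: bool  # are the outcomes of actions predictable?
    static_environment: bool    # does the environment stay put between actions?
    short_horizon: bool         # are goals measured against a near horizon?

def hard_dimensions(env: TaskEnvironment) -> int:
    """Rough count of the dimensions on which a task is hard for current AI."""
    return sum(1 for easy in vars(env).values() if not easy)

chess = TaskEnvironment(True, True, False, True, True, True)
road_traffic = TaskEnvironment(False, False, False, False, False, True)

print(hard_dimensions(chess), hard_dimensions(road_traffic))  # 1 vs 5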

Many of the questions we have raised concerning predictions that AI will equal humans within a relatively short space of time are covered by the three elements of what Marcus and Davis describe as the ‘AI gap’ between expectation and reality. The first of these is our own credulity. We attribute human qualities to machines. While people require intelligence to perform certain tasks, though, that is not necessarily the case for machines. The second element concerns imaginary progress. Advances in solving simple problems (as in Jeopardy!) should not be confused with an improved ability to solve complex ones (such as understanding human language). Finally, say Marcus and Davis, there is a robustness gap. Compared with solutions already achieved or within our grasp, such as hands-free motorway driving, more complex tasks like autonomous inner-city driving involve an inordinately greater degree of difficulty. In metaphorical terms, you can climb taller and taller trees but that will never get you to the moon.Footnote 76 For that you must develop alternative methods. The idea of machines equalling humans should certainly not be dismissed out of hand, but it is still far beyond the reach of current methods. We will need to make further fundamental breakthroughs if we are to move any closer to that goal. As discussed in Chap. 3, today’s artificial intelligence is all ‘narrow AI’ – that is, systems focusing on specific tasks. They already surpass humans in a number of these, and for the time being we are much more likely to create more systems that outdo us in other narrow domains than we are to achieve AGI.

Key Points – The ‘AI Will Soon Equal Humans’ Perception

  • Recent developments and breakthroughs suggest that we are close to equalling human capabilities, what we call artificial general intelligence (AGI).

  • However, high-profile competitions and demonstrations largely gloss over the controlled conditions required for AI to be successful.

  • There is an ‘AI gap’ between expectation and reality. This is driven by projecting the way human intelligence operates onto machines, by the imaginary progress associated with the misrepresentation of milestones and by unjustifiably extrapolating from simple issues to complex ones.

2.2.2 Malign Artificial Intelligence Could Turn Against Humans

This is perhaps society’s greatest fear when it comes to AI, one further inflamed by imagery in popular culture. As we saw in the previous chapter, the term ‘robot’ was first used in a play about mechanical workers turning against humanity. Over the years, the same theme has featured in numerous movies.

These stories are based on a motif from the distant past, long before AI or computers were invented. In the first chapter we saw that myths about artificial forms of life date back to ancient Greece, and perhaps even earlier. Many of these include a dystopian element. Historically, the creation of artificial life has generally been viewed as a transgression of boundaries that warrants some form of punishment. The tales of Prometheus, Daedalus and Medea are in much the same vein. A more modern story in that same tradition is Frankenstein (subtitled ‘The Modern Prometheus’) by Mary Shelley. Dr. Frankenstein creates an artificial life form that eventually kills its creator. The modern fear of malign AI is just the latest chapter in a long tradition of disquieting imagery.

Another phenomenon that helps stoke this fear is known as the ‘uncanny valley’. This centres on our relationship with machines that display human characteristics or behaviour. We tend to feel sympathetic towards machines in human form, but that sympathy turns into fear and repugnance if they resemble us too closely. This is yet another phenomenon that inflames fears of malign AI, although the advent of machines truly indistinguishable from humans would make the ‘uncanny valley’ a thing of the past.

Researchers like Nick Bostrom and Max Tegmark have devoted several thought experiments to scenarios of this kind,Footnote 77 although they are keen to emphasise that their work is purely speculative. In movies, malign AI often assumes humanoid form as a robot or a talking computer. While that is certainly possible for ‘real’ AI as well, physical incarnations of this kind are not essential to its further development. Extremely powerful AI is more likely to take the form of intangible algorithms than actual machines.

Besides its form, the myth of malign AI also imbues the technology with other human qualities it cannot rationally be expected to develop, such as a lust for power, a desire for freedom, jealousy and a fear of death.

According to Steven Pinker, the scenario that robots will become superintelligent and enslave humans “makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle. The … fallacy is a confusion of intelligence with motivation – of beliefs with desires, inferences with goals, thinking with wanting. Even if we did invent superhumanly intelligent robots, why would they want to enslave their masters or take over the world? Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: being smart is not the same as wanting something.”Footnote 78

Yann LeCun points out that a desire to take over the world correlates not with intelligence but with testosterone.Footnote 79 A related objection is that malign AI scenarios assume that we have reached the level of AGI, whereas in fact – as noted above – there is currently no prospect of that.

Given the compelling nature of this myth, one key objection to it is that focusing on something so entirely speculative tends to distract us from more serious threats that are very real. For instance, the risks posed to human life by machines have nothing to do with intentions, malign or otherwise. A missile flying towards its target has no ill will at all, but it will kill people nonetheless. The problem, then, is not so much that AI may develop malign goals of its own but that it is very adept at achieving the goals people have built into it – which may be dangerous or ill-conceived.

This brings us to the issue of ‘value alignment’, which means designing AI with goals that coincide with our own – a concern prompted by the fact that an AI’s rigorous pursuit of certain goals can jeopardise others. Russell describes this as the ‘King Midas’ problem, after the legend of the monarch who was granted his wish that everything he touched turn to gold. This enabled him to achieve his aim of becoming enormously wealthy, but when his food and his relatives also turned to gold, he discovered that this goal conflicted with others.Footnote 80

An increasingly efficient AI that becomes destructive in the pursuit of certain preprogrammed goals thus poses more of a risk than malign AI. As Norbert Wiener put it, “…human impotence has shielded us from the full destructive impact of human folly”.Footnote 81 Now that we are able to make machines that can achieve goals by advanced means, we are confronted with our more ill-conceived aims. A well-known illustration of this problem is Nick Bostrom’s thought experiment about a paperclip machine. He proposes the idea of a highly intelligent machine whose goal is to manufacture as many paper clips as possible. To achieve that, it may first decide to wipe out humanity to ensure that it can transform any matter it finds into paper clips quietly and without resistance.Footnote 82 AI that single-mindedly pursues the goals it has been given, with no goals of its own, is a greater danger than AI with nefarious plans.

Russell provides a compelling example of the destructive effects of a simple algorithm designed to select content on social media. Its purpose is to maximize advertising revenue by increasing the number of click-throughs. If the algorithm starts by selecting the content people find most interesting, that seems relatively harmless. However, this algorithm achieved its goal in a different way: it changed people’s preferences in a way that made their behaviour more predictable. People with more extreme political views tend to have more predictable preferences, so the algorithm prompted users to become interested in more extreme content. Given the prevailing hostile political climate on social media platforms, this is a significant factor. Yet there is no malicious intent here; these actions are entirely in keeping with the pursuit of the original goal – maximizing advertising revenue.Footnote 83
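To see how such a dynamic can emerge without any ill intent, consider the toy simulation below. It is entirely our own construction – not Russell’s model or any real recommender system – and the numbers are arbitrary, but it shows the feedback loop in miniature: the code is rewarded only for clicks, yet the simulated user’s taste steadily drifts towards more extreme content.

# Toy model of the feedback loop described above (our own illustration).
import random

random.seed(1)

extremity = 0.1   # the user's current taste: 0 = moderate, 1 = extreme
clicks = 0

for _ in range(10_000):
    shown = min(1.0, extremity + 0.05)   # recommend content slightly more extreme than the current taste
    click_prob = 0.5 + 0.4 * shown       # toy assumption: more extreme content is clicked more reliably
    if random.random() < click_prob:
        clicks += 1
        # each click nudges the user's taste a little towards what was shown
        extremity = 0.99 * extremity + 0.01 * shown

print(f"clicks: {clicks}, final extremity: {extremity:.2f}")

Nothing in this loop ‘wants’ anything; it simply optimizes click-throughs, and the shift in preferences is a side effect of that objective.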

Similarly, when it comes to autonomous vehicles people tend to focus on speculative scenarios rather than acute issues. Much of the debate in this area centres around the so-called ‘trolley problem’ (named after a runaway tram, or ‘trolley’): what course of action should autonomous vehicles take when an accident is unavoidable and they have to decide who lives and who dies? The ‘appropriate’ values in this case are the subject of a great deal of speculation. Are these universal, and would people buy cars that might sacrifice the driver to save the lives of others? While this could be the topic of many interesting philosophical debates, there are other more acute challenges associated with autonomous vehicles. Furthermore, simpler forms of driver assistance are already commercially available. These have been implicated in cases of injury and death, so it would be better for us to focus on them.Footnote 84

Reports about AI and the frames used in communications on this topic can create the impression among the public that the technology is developing along harmful lines. That is a myth. Facebook’s programs were not plotting the overthrow of humanity, nor do autonomous weapons want to take over the world. Their actions are life-threatening, to be sure, but technically they are no different from chess computers in the sense that they calculate and execute moves with the goal of winning the game.

Key Points – The ‘Malign AI Could Turn Against Humans’ Perception

  • Triggered in part by popular media, there is now a widespread public fear of malign AI. This has deep historical roots. It is being further inflamed by the use of specific terminology like ‘killer robots’.

  • Disquieting imagery of this kind projects human characteristics and intentions onto AI, even though there is little reason to do so.

  • Malign AI also presupposes the existence of AGI, which is still only a very distant prospect.

  • Yet even without malicious intent, AI can still be dangerous by pursuing flawed goals or by achieving certain aims at the expense of others.

The view that AI is developing along harmful lines is just a myth. But this does not mean that we should downplay such perceptions. The history of system technologies teaches us that words, associations and disquieting imagery have often been highly influential. On occasions, they have even turned the public against certain technologies. So, demystification is vital if we as a society are ultimately to reap the benefits of a new system technology such as AI.

2.3 Generic Myths About Digital Technology

2.3.1 Technology Should Be Regulated as Little as Possible

The five perceptions described so far are specific to AI. Three are about how it operates and two about its future impact. But as a major new technological development, AI is also part of a wider environment. Leading platforms involved with previous digital technologies are now at the forefront of this one as well. Because of this interdependence, it is important to examine broader perceptions of technologies – and digital technologies in particular – with their origins in Silicon Valley. Demystifying them will help us to gain a better understanding of AI.

One of the first powerful perceptions of technology to arise in Silicon Valley was that it should be subject to as little regulation as possible. This view can be substantiated in various ways. It may follow from a techno-deterministic approach, for instance: the notion that technology operates autonomously and that the world simply has to adapt to it. Any society that fails to do so, that insists on curbing technology, will be left behind. The motto of the 1933 Chicago World’s Fair was ‘Science Finds, Industry Applies, Man Adapts’.Footnote 85 Few people would put it quite so forcefully these days, but many still embrace milder variants of techno-determinism.

We also see an instrumental approach to technology:Footnote 86 while it does not actually shape society, it is a tool whose uses will be decided by people themselves. A hammer, for example, can be used to build a house or to kill someone – that is up to the user.

There is a grain of truth in both of these approaches. The following quote is usually attributed to Marshall McLuhan, a renowned philosopher of technology: “First we shape our tools and thereafter our tools shape us.”Footnote 87 What McLuhan suggested in his work is that society and technology are inseparable – that they are deeply intertwined. The history of system technologies also teaches us that embedding a new technology in society requires a process of mutual adaptation, part of which involves setting standards and drawing up regulations.

Although they differ radically from one another, both the techno-deterministic and the instrumental approaches lead to the same conclusion: that technology should be subject to as little regulation as possible. In the former this is because regulation is futile, while the latter argues that we should focus on use rather than the technology as such. History also teaches us that each new system technology arouses ideologically motivated appeals that it be left to its own devices as far as possible, and that this approach always requires correction later.Footnote 88 When it comes to the technology of today, that correction now gradually seems to be taking place. In Chap. 8 we specifically address the overarching task of regulation. Here we first examine the origins of the myth that no rules are needed, then go on to explore how that frame is being applied with regard to today’s technology to legitimize a specific agenda that could potentially jeopardize civic values.

Jonathan Taplin has very effectively documented the philosophy of Silicon Valley. He describes how Facebook CEO Mark Zuckerberg seized the opportunity presented by the Arab Spring of 2011 to put forward a techno-deterministic line of reasoning. Zuckerberg praised the way in which technology had helped ordinary people overthrow dictators. He contrasted this with fears about information being gathered and shared: “You can’t isolate some things you like about the internet, and control other things you don’t.”Footnote 89 Google’s original slogan was ‘Don’t be evil’. The purpose of framing tech companies as a force serving the interests of society is to ensure that they remain as free as possible from all forms of control.Footnote 90

In addition to their desire to keep regulation to an absolute minimum, many Silicon Valley businesses oppose a variety of existing laws and standards. Not only did Uber launch its app in places where it was in clear breach of taxi regulations, it even developed a program called Greyball to determine the best way to evade enforcement checks.Footnote 91

Such clashes with established rules and conventions are deeply rooted in the culture of Silicon Valley. This dogma expresses itself in positive terms such as ‘disruption’ and has much in common with the hacker movement. Zuckerberg’s first letter to investors when his company went public was headed ‘The Hacker Way’. In it he stated that hacking had an unfairly negative connotation. Disrupting the existing order was an official corporate goal. Facebook’s internal motto until 2014 was: ‘Move fast and break things.’ In a 2009 interview, Zuckerberg stated that “Unless you’re breaking stuff, you’re not moving fast enough”.Footnote 92

Opposition to the existing order is expressed even more strongly in a book entitled Zero to One: Notes on Startups, or How to Build the Future by PayPal founder Peter Thiel. Here he proudly tells the tale of how four of the six people who started that business had built bombs in high school.Footnote 93 Peter Thiel is still a major investor in Silicon Valley. The people with whom he founded PayPal (also known as the ‘PayPal mafia’) went on to occupy important posts at a wide range of companies, including Tesla, YouTube, Facebook and Palantir (a software company that operates throughout the world, mainly in the security domain).

Taplin reveals that this mentality is rooted in libertarian beliefs that the size of government should be reduced to an absolute minimum, which can be traced back to the philosophy of Ayn Rand. She advocated the freest possible market, led by pre-eminent entrepreneurs. For them, “The question isn’t who is going to let me; it’s who is going to stop me.” In this view, there is no question of having to ask permission to innovate. Peter Thiel is known to be an adherent of Rand’s philosophy.Footnote 94

A distaste for government interference and regulation is evident in many Silicon Valley enterprises. This is characteristic of the ‘cypherpunk’ movement, whose goal is to render government interference impossible through technologies such as cryptography. Computer specialist Ryan Lackey moved to Sealand in 1999. This former wartime fort off the east coast of England has declared independence, although no established nation has recognised it.Footnote 95 Google founder Larry Page is known to have commissioned research into autonomous city states. A recent example of these efforts to move beyond the reach of governments is the Seasteading Institute, whose goal is to construct an artificial island without a government in international waters.

Another member of the cypherpunk movement, Timothy C. May, published the Crypto Anarchist Manifesto in 1988. In this he harked back to the old American frontier as a free and lawless territory until a single, apparently insignificant invention, barbed wire, enabled people to define boundaries and fence off private property. According to May, the internet is the new frontier. The ‘minor’ invention of cryptography would now be on the side of freedom, however, rendering online borders and possessions impossible.Footnote 96

The same metaphor was used by internet pioneer Stewart Brand in a 1990 article entitled Crime and Puzzlement: In Advance of the Law on the Electronic Frontier. This specifically compared cyberspace with the Wild West of nineteenth-century America. Following in Brand’s footsteps, author John Perry Barlow went on to found the Electronic Frontier Foundation and later published the Declaration of Independence of Cyberspace. In this he claims to be a representative of the future whose mission is to inform governments that they have no sovereignty in cyberspace.Footnote 97

Online piracy is yet another area that reflects the libertarian aversion to rules and regulations. Kim Dotcom, the founder and owner of Megaupload (a major music piracy site until it was closed down), wrote a rap song in which he portrays himself as a defender of free speech and compares himself to Martin Luther King.Footnote 98

These examples of Silicon Valley beliefs all reveal an uneven tug of war that favours a libertarian Wild West over private property, privacy and a strong state committed to such goals as the redistribution of wealth.

In a number of cases, this ideology impinges even further on key civic values. While that does not apply to Silicon Valley as a whole, of course, some influential individuals there question democracy itself. Thiel, for instance, has stated that he “no longer believes that freedom and democracy are compatible”. His personal preference is clearly for the former. In a text for the website of the Cato Institute, a right-wing economic think tank, he writes, “Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women – two constituencies that are notoriously tough for libertarians – have rendered the notion of ‘capitalist democracy’ into an oxymoron.”Footnote 99 In another piece on the same site, he adds, “In our time, the great task for libertarians is to find an escape from politics in all its forms – from the totalitarian and fundamentalist catastrophes to the unthinking demos … We are in a deadly race between politics and technology.”Footnote 100

These are extreme standpoints, of course, and many in Silicon Valley do not share them. In fact it is home to various schools of thought on this topic, including those now convinced of the need for government intervention. The above views do come from an influential figure, though, and are still being widely propagated – albeit in a diluted form – by large technology firms. Samuel Freeman argues that recent libertarian thinking can no longer be described as ‘liberal’; it seems instead to resemble a form of feudalism, which aims to replace a shared public space with individual bilateral contracts between companies and consumers.Footnote 101

Many of the above standpoints concerning non-interference with technology find specific reflections in the context of AI. Here too, it is often argued that regulation is unnecessary, impossible or even harmful, and that it works to the detriment of society. The problematic nature of this issue is discussed in Chap. 8. Here it is important to realise that, like previous technologies, AI is associated with a specific ideology that rejects any form of regulation, and that can be at odds with democracy. History shows that this can lead to all sorts of hazards and accidents. Moreover, rules and standards are not at odds with the development of technology; indeed, they can facilitate its use. When developing an appropriate form of regulation, it is helpful to be aware of the sources, impact and hazards of any myths that deny its usefulness.

Key Points – The ‘Technology Should Be Regulated as Little as Possible’ Perception

  • The techno-deterministic and instrumental approaches to technology argue that it should be subject to as little regulation as possible.

  • Its culture of disruption, hacking and libertarian beliefs often puts Silicon Valley at odds with the existing societal and political order.

  • Silicon Valley even features certain schools of thought and developments that cannot easily be reconciled with democratic control.

2.3.2 There Is No Alternative (TINA)

Our second general perception concerning the nature of technology is closely related to the previous one. As well as militating against the regulation of technology, techno-determinism also argues that society has to adapt to it. A kindred idea is the notion that the form and impact of today’s technology are inherent to the technology itself, so there is no alternative. In other words, huge corporations, the mass collection of data, advertising as a source of income, markets as the source of all innovation and other aspects are all unavoidable, not as by-products of the technology but as an integral part of it. So if we want to reap its benefits, we also have to accept every one of these.

Evgeny Morozov distinguishes the physical infrastructure from what he refers to as the ‘myth of the internet’. The latter is a complex repository for a wide range of wishes and projections, which he says has very little to do with the hardware. ‘The internet’ (in quotes, referring to the mythical variant) has no clear meaning and can encompass virtually everything that happens online, from business modelling to the struggle for net neutrality and a wide range of internet-related technologies.Footnote 102 ‘The internet’ in this sense is a rhetorical construction, a myth, which renders clear understanding and critical views impossible.

Of course, even this perception that there is no alternative does not rule out variety of all kinds within the technological framework. While the business models used by Google and Facebook revolve around advertising, for example, that is not the case for a company like Apple. There are also substantial differences between social media platforms. Nevertheless, in this perception the fundamental organization of today’s technology is immutable. Alternative models, such as not collecting data at all or allowing users (the source of that data) to own it themselves, are considered unrealistic.

We are not concerned here with whether specific alternatives are realistic. However, we have critically examined the notion that the current incarnation of the technology is essential and the only possible option. In a separate report entitled The public core of the internet: an international agenda for internet governance, the WRR distinguishes the core components and deeper layers of the internet from the superstructure used by large technology companies.Footnote 103

Various authors have in recent years questioned the presumptions that there is, of necessity, a link between technology and the free market and that private companies are the main sources of innovation. Mariana Mazzucato argues that much of today’s innovation in fact originates in the public rather than the private sector; the latter is good at commercializing the results, but innovation itself is the product of a lengthy process of fundamental research that is too risky for market parties and too focused on the long term – and so requires public funding. Foundational work in renewable energy, such as the development of solar panels, as well as a great deal of innovation in biotechnology and nanotechnology, is reliant on government support. Mazzucato also shows how many key components of the iPhone sprang from government-funded research. The same is true of the internet, touchscreens, GPS and even the voice assistant Siri, which was developed in the research laboratories at SRI International, an offshoot of Stanford University.Footnote 104 Mazzucato thus dispels the myth that only large technology companies can develop the wide-ranging innovation we see today, and goes on to ask whether it is right that the government – and, by extension, the public – should bear the risk associated with fundamental innovations while private companies appropriate all the profits.

Shoshana Zuboff shows that the structure of today’s technology can be traced back to specific decisions in the past, which means that any number of alternative designs are possible. She describes a 2000 Georgia Tech project entitled ‘Aware Home’. This involved early incarnations of today’s ‘smart home’ technology, such as smart thermostats and virtual assistants, but adopted a completely different model. Not least, this involved the residents retaining full ownership of their data.Footnote 105 In her comprehensive study, Zuboff reveals how, over time, technology has become intertwined with – and shaped by – other developments. In particular, she explains how the neoliberal market economy first became involved. After the events of ‘9/11’, governments began taking an interest in data collection and population surveillance. This required them to forge links with Silicon Valley companies that excel in these areas. Both the neoliberal market and data collection for surveillance purposes are external developments. As such, they are not inherent to the way in which technology itself operates. Zuboff’s criticism focuses not so much on technology itself as on its owners and the choices they make – or as she puts it, on the “puppet masters, not the puppet” (Box 5.2).Footnote 106

Box 5.2: Acceleration

Another kind of source that harks back to the historically more public nature of innovation is a 2013 publication by Alex Williams and Nick Srnicek, the #ACCELERATE MANIFESTO for an Accelerationist Politics. This offers a utopian vision of technology’s ability to solve a wide range of problems, an outlook we examine critically in our review of the next perception.

What is relevant at this point is that the authors draw attention to the discrepancy between the great promise associated with today’s technology and the way it is actually used – to create unnecessary gadgets and generate advertising revenue. They attribute this to the way technology has become welded to neoliberal ideology. Williams and Srnicek argue in favour of drawing inspiration from earlier periods, such as the 1960s, when the goals to which technology was put, like sending human beings to the moon, incorporated wider societal interests.

In this respect they align seamlessly with Mazzucato, who suggests that as well as spotlighting the pace of innovation, we should also focus on the direction it is taking. She too cites ‘man on the moon projects’ as a model for the use of technology for public purposes.

How can we translate this broad view of technology into a specific focus on AI? The government’s historically key role in technological development is certainly reflected in the emergence of AI. In particular, the US military and its research arm, DARPA, played a critical part there.Footnote 107 The organization of AI in China, Japan and South Korea also shows that even today this technology can be directed much more firmly by government. Whether this is desirable is another matter entirely, but what is relevant here is that its linkage exclusively to large private companies is not the only possible model.

The history of AI also shows that, besides its public orientation, the technology was also once associated with a different model. One of its creators was Douglas Engelbart, an inventor who was decades ahead of his time in proposing innovations like the mouse, windows, video conferencing and hypertext. In 1968 he gave a classic 100-minute presentation that has since come to be known as ‘the mother of all demos’. One particularly important aspect of this was that Engelbart placed computer technology in an entirely new context. The old mainframe was a piece of equipment used in large government offices and centralized organizations like IBM. Engelbart presented a vision of the personal computer as a device that could be used by individual citizens, something that would have a decentralizing effect.Footnote 108

The person who filmed that legendary demo was Stewart Brand, founder of the magazine Whole Earth Catalog and an inspiration to the first generation of internet pioneers. The magazine played a pivotal role in transferring the idealism of the hippie movement to computer technology.Footnote 109 In this way, what had previously been part of a ‘Cold War technocracy’ became part of a desire for personal development, collaboration and community.Footnote 110 Cyberspace was no longer restricted to military projects and space travel, it had entered the San Francisco ‘counterculture’.Footnote 111

Jonathan Taplin argues that libertarians disregard the fact that the internet was initially conceived and funded by the government, after which it was adopted by volunteers and academics who had no interest in profit. Impelled by idealism, early developers like Tim Berners-Lee (one of the founding fathers of the world wide web) wrote code free of charge.Footnote 112 He still regularly criticizes the form that the internet has now taken and is committed to supporting alternatives.Footnote 113

Over time, digital technology has become increasingly linked to a more libertarian and technocratic market model. The contrast between the different visions is nicely illustrated by a conversation between Engelbart and renowned AI pioneer Marvin Minsky, who stated that he intended to make machines intelligent and conscious. To which Engelbart replied, “You’re going to do all this for machines? What are you going to do for people?”Footnote 114

As we have shown above, modern technology is not necessarily chained to its various modern-day incarnations. It has already existed in at least two other forms. Many of the design features of today’s technology are therefore non-essential elements. Various schemes have been devised to harness technologies like AI for other purposes and in other contexts, and to make different choices about its design. In Chap. 7 we show how activists of all kinds are working to make AI more diverse and more democratic.

Key Points – The ‘There Is No Alternative’ Perception

  • The myth of ‘the internet’ equates technology with the overall structure of today’s internet.

  • From the technical point of view, numerous alternatives are possible. Various thinkers have shown how many of the factors that shaped today’s internet are exogenous in nature. In other words, they are not part and parcel of that structure.

  • Digital technology has taken on other forms in the past. During the 1960s it was shaped by the Cold War, while in later decades it was influenced by idealism.

  • There are widespread calls for a redesign of the internet.

2.3.3 Technology Is the Solution to All Society’s Problems

The final perception of AI we discuss here is the conviction that technology is a panacea for the great and difficult issues facing society. While it clearly can be (and indeed already is) a great help in resolving many problems, all too often people place excessive faith in technology – and that can be problematic in itself.

Different authors have looked at this perception in different ways. Meredith Broussard uses the term ‘technochauvinism’, meaning a belief that technology can solve any given problem.Footnote 115 Evgeny Morozov prefers the critical term ‘technological solutionism’ and begins his book on the subject with a quote from Google’s former CEO, Eric Schmidt: “In the future, people will spend less time making technology work … because it will function seamlessly. It will just be there. The Web will become everything, and it will also be nothing. It will be just like electricity … If we get this right, I believe we can solve all the world’s problems.”Footnote 116 One of the snags with this approach, says Morozov, is that it assumes that all kinds of issues are in fact problems, when that may not actually be the case. Solutionism takes the view that numerous inefficiencies, ambiguities and obscurities detract from an ideal reality. Yet obscurity is often essential for privacy, professional confidentiality or other matters we value, and a lack of efficiency provides scope for the experimentation and practice crucial to many human activities, such as cooking or learning a language. According to Morozov, the will to change things reformulates a diverse range of complex social situations into clearly defined problems with solutions that can be calculated, or into transparent and evident processes that are easy to optimise. Other domains are then forced to model themselves on the way in which technology operates. So, Wikipedia becomes the model for politics, for example, Facebook the model for citizenship and Google the model for all innovation.Footnote 117

Reformulating a wide range of societal domains as problems that can be solved by technical means is not without its hazards. For a start, it puts their key functions at risk by narrowing them down to such a great extent. Work, for example, is not just a matter of output. So, while an algorithm optimized for output might improve efficiency, at the same time it could easily undermine job satisfaction, another important aspect of work. We have already mentioned the example of medical care, which is not just about healing people but is often also a source of solace or even human contact. An algorithm could well improve the purely clinical aspect, but if it renders human contact superfluous it might cause the other functions of care to disappear entirely. This point is illustrated by the use of algorithms in the judicial system, where AI does indeed streamline certain proceedings – the processing of straightforward traffic fines, for instance. But even in such simple cases, this does not render human contact entirely irrelevant.Footnote 118 Then there are all the technologies designed to promote public safety and social harmony through surveillance and risk assessments. Although these may indeed reduce crime and improve behaviour, the social cost of continuous monitoring could include all manner of personal distress (Box 5.3).

Box 5.3: Covid Apps

In response to the Covid-19 pandemic, apps have been developed all around the world to chart the spread of the virus and facilitate contact tracing.Footnote 119 Their aim is to use monitoring so that targeted action – such as testing and quarantine – can be taken more quickly.

In April 2020 the Dutch government staged an ‘appathon’ as part of its development effort. This indicates that a form of solutionism had set in, with the authorities looking automatically to new technology to come up with answers even though it was far from certain that this would be the most productive approach. Alternative methods might well have better met the needs of the services responsible for tracking and tracing infections. New Zealand, for instance, adopted a low-tech policy: everyone in the country was simply asked to keep a diary of their contacts. In retrospect, the app eventually adopted in the Netherlands appears to have been of little help in fighting the pandemic. Also in April 2020, the WRR submitted a position paper to Dutch parliament cautioning against ‘technosolutionism’ in its response to the pandemic.

Broussard stresses the danger of extrapolating success in one domain into others. Prominent technology pioneers are often treated as gurus entitled to express opinions about anything and everything. But just because someone has achieved a breakthrough in mathematics or created a profitable business model, say, this does not automatically make them an expert in social issues or public policy. Indeed, an unswerving dedication to finding mathematical solutions, for example, can have a disastrous impact when this is applied to interactions between people (or the capacity for such interactions).Footnote 120 A final risk of technochauvinism or technosolutionism is that, in emphasizing revolutionary plans to build new things, it overlooks the potential benefits of maintaining and improving what we already have.

Key Points – The ‘Technology Is the Solution to All Society’s Problems’ Perception

  • Terms like technochauvinism and technosolutionism reflect a belief that technology can solve all the thorny issues in society.

  • Those who hold such beliefs tend to favour simplification or one-dimensional explanations, thus downplaying or disrupting other aspects of the social order.

  • Simple quantitative approaches or an emphasis on the latest technology tend to distract attention from more effective, non-technological approaches that can sometimes work much better.

3 In Conclusion

Myths have always been part of the human story and they always will be. The same applies to artificial intelligence. So, it is impossible to permanently dispel the mythology surrounding AI. Moreover, there is no such thing as a ‘realistic vision’ of AI – reality is far too complex and uncertain for that. But this does not mean that we are powerless to counter the genesis of myths. Indeed, it is certainly possible to deconstruct unrealistic perceptions. As the first of our overarching tasks, then, demystification is primarily about helping to improve understanding of what AI is and what it can do. This makes it crucial to our other tasks. Only with a better understanding of AI, after all, can we find appropriate ways of engaging with it – and even more importantly, remaining engaged. Unrealistic ideas about the technology will only foster a general aversion to it, which could cause us to miss out on genuine opportunities. On the other hand, a very limited understanding could result in excessive risks and unnecessary casualties. In short, there are plenty of reasons to demystify AI.

In this chapter we have explored people’s perceptions of AI as well as the importance of its demystification. We have seen how its generic and novel nature gives our imaginations free rein. This can lead to unreasonable perceptions, especially since AI is occasionally associated with existing sources of distrust. We have seen, too, how impressive demonstrations can fuel overblown expectations and how the use of certain terms and frames can shape the way we think about AI. We have also discussed eight specific and very diverse perceptions. Three of these concern the way AI operates: that the technology is neutral, rational and a black box. Two involve future expectations: matching human intelligence and the danger of malign AI. Finally, the remaining three are broader perceptions of technology that often resurface in the context of AI: that it should be unregulated, that it can only take a single form and that it is the solution to all society’s problems. We have found that while some myths are easy to dispel, that is not always the case – especially when it comes to perceptions rooted in the predominant ideology of Silicon Valley.

Finally, we have seen that a variety of actors are involved in this overarching task. To play its role, society needs to gain greater experience with the technology and become familiar with its use. Scientists, schools and the media have a particularly important function in this regard. Government, too, can help with the demystification process by investing in knowledge and in public campaigns, by setting up institutes or by supporting third parties capable of playing a key role of their own. We discuss the challenge this overarching task poses for governments at length in Part III. One example is the need to help multiple interest groups improve their knowledge of AI and familiarity with the technology by a variety of means. In the final chapter we fine-tune the discharge of this task by identifying current points of concern. We also put forward specific recommendations on how to start that process.