
Introduction

Abstract

For millennia, laws have ordered society, kept people safe and promoted commerce. But until now, laws have only had one subject: humans. Turner argues that the rise of artificial intelligence (AI) presents novel issues for which current legal systems are only partially equipped. These include questions of responsibility, rights and ethics. Defining AI is a difficult exercise but without this prerequisite, creating a theory of regulation is impossible. Turner adopts a functional and legal definition which focusses on AI’s ability to take decisions based on principles. Turner shows that AI is becoming increasingly important, and unless we act now it may become too late to shape its development.


Notes

  1.

    Fyodor Dostoyevsky, Crime and Punishment, translated by Constance Garnett (Urbana, IL: Project Gutenberg, 2006), Chapter VII.

  2.

    Isaac Asimov, “Runaround”, in I, Robot (London: HarperVoyager, 2013), 31. Runaround was originally published in Astounding Science Fiction (New York: Street & Smith, March 1942). Owing to the potential weaknesses in his first three laws, Asimov later added the Fourth or Zeroth law. See Isaac Asimov, “The Evitable Conflict”, Astounding Science Fiction (New York: Street & Smith, 1950).

  3.

    Isaac Asimov, “Interview with Isaac Asimov”, interview on Horizon, BBC, 1965, http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/archive/asimov/, accessed 1 June 2018. Asimov made a similar statement in the introduction to his collection The Rest of Robots: “[t]here was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the sixty-one words of the Three Laws”. Isaac Asimov, The Rest of Robots (New York: Doubleday, 1964), 43.

  4.

    As to data, see “Data Management and Use: Governance in the 21st Century”, a joint report by the British Academy and the Royal Society, June 2017, https://royalsociety.org/~/media/policy/projects/data-governance/data-management-governance.pdf, accessed 1 June 2018. As to unemployment, see Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin Programme on the Impacts of Future Technology Working Paper, September 2013, http://www.oxfordmartin.ox.ac.uk/downloads/academic/future-of-employment.pdf, accessed 1 June 2018. See also Richard Susskind and Daniel Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts (Oxford: Oxford University Press, 2015).

  5.

    See Nick Bostrom, Superintelligence (Oxford: Oxford University Press, 2014).

  6.

    See Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking Press, 2005).

  7.

    Several nineteenth-century thinkers including Charles Babbage and Ada Lovelace arguably predicted the advent of AI and even prepared designs for machines capable of carrying out intelligent tasks. There is some debate as to whether Babbage actually believed that such a machine was capable of cognition. See, for example, Christopher D. Green, “Charles Babbage, the Analytical Engine, and the Possibility of a 19th-Century Cognitive Science”, in The Transformation of Psychology, edited by Christopher D. Green, Thomas Teo, and Marlene Shore (Washington, DC: American Psychological Association Press, 2001), 133–152. See also Ada Lovelace, “Notes by the Translator”, reprinted in R.A. Hyman, ed., Science and Reform: Selected Works of Charles Babbage (Cambridge: Cambridge University Press, 1989), 267–311.

  8.

    What follows is by no means intended to be exhaustive. For a far more comprehensive survey of AI and robotics in popular culture, religion and science, see George Zarkadakis, In Our Image: Will Artificial Intelligence Save or Destroy Us? (London: Rider, 2015).

  9.

    T. Abusch, “Blood in Israel and Mesopotamia”, in Emanuel: Studies in the Hebrew Bible, the Septuagint, and the Dead Sea Scrolls in Honor of Emanuel Tov, edited by Shalom M. Paul, Robert A. Kraft, Eva Ben-David, Lawrence H. Schiffman, and Weston W. Fields (Leiden, The Netherlands: Brill, 2003), 675–684, especially at 682.

  10.

    New World Encyclopedia, Entry on Nuwa (quoting Qu Yuan (屈原), book: “Elegies of Chu” (楚辞, or Chuci), Chapter 3: “Asking Heaven” (天問)), http://www.newworldencyclopedia.org/entry/Nuwa, accessed 1 June 2018.

  11.

    Genesis 2:7, King James Bible.

  12.

    Homer, The Iliad, translated by Herbert Jordan (Norman, OK: University of Oklahoma Press, 2008), 352.

  13.

    Eden Dekel and David G. Gurley, “How the Golem Came to Prague”, The Jewish Quarterly Review, Vol. 103, No. 2 (Spring 2013), 241–258.

  14.

    The original Czech is “Rossumovi Univerzální Roboti”. Roboti translates roughly to “slaves”. We will return to this feature in Chapter 4.

  15.

    “Homepage”, Neuralink Website, https://www.neuralink.com/, accessed 1 June 2018; Chantal Da Silva, “Elon Musk Startup ‘to Spend £100m’ Linking Human Brains to Computers”, The Independent, 29 August 2017, http://www.independent.co.uk/news/world/americas/elon-musk-neuralink-brain-computer-startup-a7916891.html, accessed 1 June 2018. For commentary on Neuralink, see Tim Urban’s provocative blog post “Neuralink and the Brain’s Magical Future”, Wait But Why, 20 April 2017, https://waitbutwhy.com/2017/04/neuralink.html, accessed 1 June 2018.

  16.

    Tim Cross, “The Novelist Who Inspired Elon Musk”, 1843 Magazine, 31 March 2017, https://www.1843magazine.com/culture/the-daily/the-novelist-who-inspired-elon-musk, accessed 1 June 2018.

  17.

    Robert M. Geraci, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (New York: Oxford University Press, 2010), 147.

  18.

    For the distinction, see David Weinbaum and Viktoras Veitas, “Open Ended Intelligence: The Individuation of Intelligent Agents”, Journal of Experimental & Theoretical Artificial Intelligence, Vol. 29, No. 2 (2017), 371–396.

  19.

    See Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford: Oxford University Press, 1989). The number of sceptics may be shrinking. As Wallach and Allen comment: “pessimists tend to get weeded out of the profession”, Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford: Oxford University Press, 2009), 68. For instance, Margaret Boden was one of the best-known proponents of the sceptical view, although in her latest work, Margaret Boden, AI: Its Nature and Future (Oxford: Oxford University Press, 2016), 119 et seq, she acknowledges the potential for “real” artificial intelligence but maintains that “…no one knows for sure, whether [technology described as Artificial General Intelligence] could really be intelligent”.

  20.

    See further Chapter 3 at s. 2.1.2.

  21.

    As to AI systems developing the capacity to self-improve, see further FN 114 below and more generally Chapter 2 at s. 3.2.

  22.

    Our prediction that narrow AI will gradually approach general AI mirrors the process of evolution. Homo sapiens did not appear overnight as if by magic. Instead, we developed iteratively through a series of gradual upgrades to our hardware (bodies) and software (minds) on the basis of trial-and-error experiments, otherwise known as natural selection.

  23.

    Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know (New York: Oxford University Press, 2016), 1.

  24.

    Peter Stone et al., “Defining AI”, in “Artificial Intelligence and Life in 2030”. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel (Stanford, CA: Stanford University, September 2016), http://ai100.stanford.edu/2016-report, accessed 1 June 2018.

  25.

    Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (Natick, MA: A.K. Peters, 2004), 133.

  26.

    Peter Stone et al., “Defining AI”, in “Artificial Intelligence and Life in 2030”. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel (Stanford, CA: Stanford University, September 2016), http://ai100.stanford.edu/2016-report, accessed 1 June 2018. See also Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (Natick, MA: A.K. Peters, 2004), 204.

  27.

    The same observation might be made of law itself. See H.L.A. Hart, The Concept of Law (2nd edn. Oxford: Clarendon, 1997).

  28.

    Jacobellis v. Ohio, 378 U.S. 184 (1964), 197.

  29.

    Lon L. Fuller, The Morality of Law (New Haven, CT: Yale University Press, 1969).

  30.

    Ibid., 107.

  31.

    Franz Kafka, The Trial, translated by Idris Parry (London: Penguin Modern Classics, 2000).

  32.

    Stuart Russell and Peter Norvig divide definitions into four categories: (i) thinking like a human: AI systems adopt similar thought processes to human beings;

    (ii) acting like a human: AI systems are behaviourally equivalent to human beings;

    (iii) thinking rationally: AI systems have goals and reason their way towards achieving those goals;

    (iv) acting rationally: AI systems act in a manner that can be described as goal-directed and goal-achieving. Stuart Russell and Peter Norvig, Artificial Intelligence: International Version: A Modern Approach (Englewood Cliffs, NJ: Prentice Hall, 2010), para. 1.1 (hereafter “Russell and Norvig, Artificial Intelligence”). However, John Searle’s “Chinese Room” thought experiment demonstrates the difficulty of distinguishing between acts and thoughts. In short, the Chinese Room experiment suggests that we cannot distinguish between intelligence of Russell and Norvig’s types (i) and (ii), or types (iii) and (iv). John R. Searle, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, Vol. 3, No. 3 (1980), 417–457. Searle’s experiment has been met with numerous replies and criticisms, which are set out in the entry on the Chinese Room Argument, Stanford Encyclopedia of Philosophy, first published 19 March 2004; substantive revision 9 April 2014, https://plato.stanford.edu/entries/chinese-room/, accessed 1 June 2018.

  33.

    Alan M. Turing, “Computing Machinery and Intelligence”, Mind: A Quarterly Review of Psychology and Philosophy, Vol. 59, No. 236 (October 1950), 433–460, 460.

  34.

    Yuval Harari has offered the interesting explanation that the form of Turing’s Imitation Game resulted in part from Turing’s own need to suppress his homosexuality, to fool society and the authorities into thinking he was something that he was not. The focus on gender and subterfuge in the first iteration of the test is, perhaps, not accidental. Yuval Harari, Homo Deus (London: Harvill Secker, 2016), 120.

  35.

    See, for example, the website of The Loebner Prize in Artificial Intelligence, http://www.loebner.net/Prizef/loebner-prize.html, accessed 1 June 2018.

  36.

    José Hernández-Orallo, “Beyond the Turing Test”, Journal of Logic, Language and Information, Vol. 9, No. 4 (2000), 447–466.

  37.

    “Turing Test Transcripts Reveal How Chatbot ‘Eugene’ Duped the Judges”, Coventry University, 30 June 2015, http://www.coventry.ac.uk/primary-news/turing-test-transcripts-reveal-how-chatbot-eugene-duped-the-judges/, accessed 1 June 2018.

  38.

    Various competitions are now held around the world in an attempt to find a ‘chatbot’ (as conversational programs are known) able to pass the Imitation Game. In 2014, a chatbot called ‘Eugene Goostman’, which claimed to be a 13-year-old Ukrainian boy, convinced 33% of the judging panel that it was human, in a competition held by the University of Reading. Factors which assisted Goostman included that English (the language in which the test was held) was not his first language, his apparent immaturity, and answers designed to use humour to deflect the questioner’s attention from the accuracy of the response. Unsurprisingly, the result was not heralded as the dawn of a new age in AI design. For criticism of the Goostman ‘success’, see Celeste Biever, “No Skynet: Turing Test ‘Success’ Isn’t All It Seems”, The New Scientist, 9 June 2014, http://www.newscientist.com/article/dn25692-no-skynet-turing-test-success-isnt-all-it-seems.html, accessed 1 June 2018. The author Ian McDonald offers another objection: “Any AI smart enough to pass a Turing test is smart enough to know to fail it”. Ian McDonald, River of Gods (London: Simon & Schuster, 2004), 42.

  39.

    This definition is adapted from that used by the UK Department for Business, Energy and Industrial Strategy, Industrial Strategy: Building a Britain Fit for the Future (November 2017), 37, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/664563/industrial-strategy-white-paper-web-ready-version.pdf, accessed 1 June 2018.

  40.

    “What Is Artificial Intelligence?”, Website of John McCarthy, last modified 12 November 2007, http://www-formal.stanford.edu/jmc/whatisai/node1.html, accessed 1 June 2018.

  41.

    Ray Kurzweil, The Age of Intelligent Machines (Cambridge, MA: MIT Press, 1992), Chapter 1.

  42.

    Ibid.

  43.

    NV Rev Stat § 482A.020 (2011), https://law.justia.com/codes/nevada/2011/chapter-482a/statute-482a.020/, accessed 1 June 2018.

  44.

    For the new law, see NRS 482A.030: “Autonomous vehicle” now means a motor vehicle that is equipped with autonomous technology (added to NRS by 2011, 2876; amended 2013, 2010). NRS 482A.025: “Autonomous technology” means technology which is installed on a motor vehicle and which has the capability to drive the motor vehicle without the active control or monitoring of a human operator. The term does not include an active safety system or a system for driver assistance, including without limitation a system to provide electronic blind spot detection, crash avoidance, emergency braking, parking assistance, adaptive cruise control, lane keeping assistance, lane departure warning, or traffic jam and queuing assistance, unless any such system, alone or in combination with any other system, enables the vehicle on which the system is installed to be driven without the active control or monitoring of a human operator (added to NRS by 2013, 2009). See Chapter 482A—Autonomous Vehicles, https://www.leg.state.nv.us/NRS/NRS-482A.html, accessed 1 June 2018.

  45.

    Ryan Calo, “Nevada Bill Would Pave the Road to Autonomous Cars”, Center for Internet and Society Blog, 27 April 2011, http://cyberlaw.stanford.edu/blog/2011/04/nevada-bill-would-pave-road-autonomous-cars, accessed 1 June 2018.

  46.

    Will Knight, “Alpha Zero’s ‘Alien’ Chess Shows the Power, and the Peculiarity, of AI”, MIT Technology Review, https://www.technologyreview.com/s/609736/alpha-zeros-alien-chess-shows-the-power-and-the-peculiarity-of-ai/, accessed 1 June 2018. For the academic paper, see David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”, Cornell University Library Research Paper, 5 December 2017, https://arxiv.org/abs/1712.01815, accessed 1 June 2018. See also Cade Metz, “What the AI Behind AlphaGo Can Teach Us About Being Human”, Wired, 19 May 2016, https://www.wired.com/2016/05/google-alpha-go-ai/, accessed 1 June 2018.

  47.

    Russell and Norvig, Artificial Intelligence, para. 1.1.

  48.

    Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge, UK: Cambridge University Press, 2010), Preface. Similarly, Shane Legg (one of the co-founders of the leading AI company DeepMind), writing with his doctoral supervisor Professor Marcus Hutter, also supports a rationalist definition of intelligence: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments”. Shane Legg, “Machine Super Intelligence” (Doctoral Dissertation submitted to the Faculty of Informatics of the University of Lugano in partial fulfillment of the requirements for the degree of Doctor of Philosophy, June 2008).
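
    Legg’s dissertation gives this slogan a formal counterpart, which can be sketched roughly as follows: writing E for a class of computable environments, K(μ) for the Kolmogorov complexity of an environment μ, and V for the expected value an agent π achieves in μ, an agent’s universal intelligence is a complexity-weighted sum of its performance across all environments, so that simpler environments count for more:

```latex
% Sketch of Legg and Hutter's universal intelligence measure: E is a class
% of computable environments, K(\mu) the Kolmogorov complexity of the
% environment \mu, and V^{\pi}_{\mu} the expected value agent \pi achieves.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```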

  49.

    Another way of putting this is that rationalist definitions are appropriate for narrow AI but are less well suited to general AI.

  50.

    For a discussion of unsupervised machine learning, see Chapter 2 at s. 3.2.1.

  51.

    See, for example, Stuart Russell and Eric Wefald, Do the Right Thing: Studies in Limited Rationality (Cambridge, MA: MIT Press, 1991).

  52.

    Russell and Norvig, Artificial Intelligence, paras. 2.3, 35.

  53.

    Robert Sternberg, quoted in Richard Langton Gregory, The Oxford Companion to the Mind (Oxford: Oxford University Press, 2004), 472.

  54.

    Edwin G. Boring, “Intelligence as the Tests Test It”, New Republic, Vol. 36 (1923), 35–37.

  55.

    See, for example, Aharon Barak, Purposive Interpretation in Law, translated by Sari Bashi (Princeton, NJ: Princeton University Press, 2007).

  56.

    Elsewhere, the terms “robots” and “robotics” are sometimes used to describe any type of automation, whether involving AI or not (see, for example, the definition of “robot” in the Merriam-Webster Dictionary, https://www.merriam-webster.com/dictionary/robot, accessed 1 June 2018). This book’s definition is closer to the original meaning of the term “robot”—as intelligent servants—as used by Čapek (see FN 14 above). Others have taken a contrary view: that artificial intelligence cannot exist without physical embodiment. See Ryan Calo, “Robotics and the Lessons of Cyberlaw”, California Law Review, Vol. 103 (2015), 513–563, 529: “A robot in the strongest, fullest sense of the term exists in the world as a corporeal object with the capacity to exert itself physically”. See also Jean-Christophe Baillie, “Why AlphaGo Is Not AI”, IEEE Spectrum, 17 March 2016, https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai, accessed 1 June 2018.

  57.

    As to the unique nature of this aspect of AI, see further Chapter 2.

  58.

    The Society of Automotive Engineers (SAE) has provided a useful primer of six levels of driving automation for autonomous vehicles. These are as follows:

    Level 0—No Automation: the full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems.

    Level 1—Driver Assistance: the driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment, with the expectation that the human driver performs all remaining aspects of the dynamic driving task.

    Level 2—Partial Automation: the driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment, with the expectation that the human driver performs all remaining aspects of the dynamic driving task.

    Level 3—Conditional Automation: the driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, with the expectation that the human driver will respond appropriately to a request to intervene.

    Level 4—High Automation: the driving mode-specific performance by an Automated Driving System of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene.

    Level 5—Full Automation: the full-time performance by an Automated Driving System of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.

    To put this book’s definition into context, AI might be displayed even at Level 1—provided the system is making choices based on evaluative principles, even if this is only within a narrow sphere, and even if it is only providing advice to the human driver. Of course, the more potential for human oversight of the process, the less need there will be for a separate legal regime, but the same principles apply nonetheless. More difficult questions apply from Level 2 onwards, where power is actually delegated to the AI system (see the sketch following this note).

    See SAE International, J3016, https://www.sae.org/misc/pdfs/automated_driving.pdf, accessed 1 June 2018. This classification was adopted by the US Department of Transport in September 2016. SAE, “U.S. Department of Transportation’s New Policy on Automated Vehicles Adopts SAE International’s Levels of Automation for Defining Driving Automation in On-Road Motor Vehicles”, SAE Website, https://www.sae.org/news/3544/, accessed 1 June 2018.
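
    Although the levels are purely definitional, they can be encoded directly. The following minimal Python sketch (the names are ours, drawn neither from SAE J3016 nor from the book) captures the two thresholds identified above: evaluative choice becomes possible from Level 1, and delegation of driving power occurs from Level 2.

```python
# Illustrative encoding of the six SAE levels (identifiers are ours).
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def may_display_ai(level: SAELevel, makes_evaluative_choices: bool) -> bool:
    # On the book's definition, AI can appear from Level 1 upwards, provided
    # the system chooses on the basis of evaluative principles.
    return level >= SAELevel.DRIVER_ASSISTANCE and makes_evaluative_choices

def power_delegated(level: SAELevel) -> bool:
    # From Level 2 onwards, driving power is actually delegated to the
    # system, raising the harder regulatory questions described above.
    return level >= SAELevel.PARTIAL_AUTOMATION

print(may_display_ai(SAELevel.DRIVER_ASSISTANCE, True))  # True
print(power_delegated(SAELevel.DRIVER_ASSISTANCE))       # False
```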

  59.

    In his discussion of how robots are to be regulated, Bertolini has eschewed a definition of “robots”, calling this a pointless exercise, and instead focussed on autonomy as the relevant criterion justifying special legal treatment. However, in seeking to describe autonomy, Bertolini relies on undefined and highly debated concepts, including “self-awareness or self-consciousness, leading to free will and thus identifying a moral agent”, and “the ability to intelligently interact in the operating environment”. In so doing, Bertolini avoids the key question of what it is that should be regulated. Andrea Bertolini, “Robots as Products: The Case for a Realistic Analysis of Robotic Applications and Liability Rules”, Law Innovation and Technology, Vol. 5, No. 2 (2013), 214–247, 217–221.

  60.

    Ronald Dworkin, “The Model of Rules”, The University of Chicago Law Review, Vol. 35 (1967), 14–46, 25.

  61.

    Ibid., and see also Scott Shapiro, “The Hart-Dworkin Debate: A Short Guide for the Perplexed”, Working Paper No. 77, University of Michigan Law School, 9, https://law.yale.edu/system/files/documents/pdf/Faculty/Shapiro_Hart_Dworkin_Debate.pdf, accessed 1 June 2018.

  62.

    Another term for this technology is “classical AI”.

  63.

    Though not an exact match, programs described as classical or symbolic AI (sometimes referred to as “Good Old Fashioned AI”; see Margaret Boden, AI: Its Nature and Future (Oxford: Oxford University Press, 2016), 6–7) bear somewhat more resemblance to the decision tree format than do programs based on neural networks—the other main branch of AI technology.
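
    To illustrate the contrast with a hypothetical example (ours, not drawn from any of the works cited): a classical or symbolic program reaches its output by descending explicit, hand-written branches, so every decision can be traced to a rule chosen in advance.

```python
# A hypothetical decision procedure in the classical / symbolic style: the
# behaviour is exhausted by hand-written branches, with no learned weights.
def route_vehicle(obstacle_ahead: bool, distance_m: float) -> str:
    if not obstacle_ahead:
        return "continue"      # rule 1: the road is clear
    if distance_m > 50:
        return "slow down"     # rule 2: obstacle still far away
    return "brake"             # rule 3: obstacle close

print(route_vehicle(obstacle_ahead=True, distance_m=30.0))  # brake
```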

  64.

    For a discussion of the distinction between systems based on “Good Old Fashioned AI” versus neural networks, see Lefteri H. Tsoukalas and Robert E. Uhrig, Fuzzy and Neural Approaches in Engineering (New York, NY: Wiley, 1996).

  65.

    Originally, artificial neural networks were inspired by the functioning of biological brains.

  66.

    Song Han, Jeff Pool, John Tran, and William J. Dally, “Learning Both Weights and Connections for Efficient Neural Network”, Advances in Neural Information Processing Systems (2015), 1135–1143, http://papers.nips.cc/paper/5784-learning-both-weights-and-connections-for-efficient-neural-network.pdf, accessed 1 June 2018.

  67.

    Margaret Boden, “On Deep Learning, Artificial Neural Networks, Artificial Life, and Good Old-Fashioned AI”, Oxford University Press Website, 16 June 2016, https://blog.oup.com/2016/06/artificial-neural-networks-ai/, accessed 1 June 2018.

  68.

    David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, “Learning Representations by Back-Propagating Errors”, Nature, Vol. 323 (9 October 1986), 533–536.
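
    The core of the back-propagation idea can be shown in a few lines. The sketch below (illustrative values only; Rumelhart, Hinton and Williams describe the general multi-layer procedure) trains a single sigmoid unit on the logical OR function by propagating the output error back into the weights via gradient descent.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR function
w1, w2, b = 0.0, 0.0, 0.0  # weights and bias
lr = 0.5                   # learning rate (assumed for illustration)

for _ in range(5000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the squared error, propagated back through the sigmoid
        delta = (y - target) * y * (1.0 - y)
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta

for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b), 2), "target:", target)
```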

  69.

    Admittedly, setting up a hard distinction between symbolic AI and neural networks may be a false dichotomy, as there are systems which utilise both elements. In those situations, provided that the neural network, or other evaluative process, has a determinative effect on the choice made, then the entity as a whole will pass the test for intelligence under this book’s definition.

  70.

    Karnow adopts a similar distinction, describing “expert” versus “fluid” systems. The latter, he says, necessitate different legal treatment, based on their unpredictability. Curtis E.A. Karnow, “Liability for Distributed Artificial Intelligences”, Berkeley Technology Law Journal, Vol. 11 (1996), 147, http://scholarship.law.berkeley.edu/btlj/vol11/iss1/3, accessed 1 June 2018.

  71.

    The situation is slightly different with regard to “rights” for AI, which we discuss in Chapter 4. As we explain there, certain rights might best be reserved to AI which is indeed conscious and can suffer. However, the better way to account for this issue is not to say that an entity is not AI unless it can suffer, but rather to say that AI which can also suffer ought to be accorded an enhanced set of rights or legal status. See further Chapter 4 at s. 1.

  72.

    Indeed, an absence of features such as imagination, emotions or consciousness may contribute to situations in which an AI system is liable to act differently from humans. For instance, an AI system which lacks the ability to empathise with human suffering might present more danger to people than a human carrying out the same task. This phenomenon, of itself, is one reason why new rules are desirable to guide and constrain choices made by AI.

  73.

    For a famous application of this principle, see Lewis Carroll’s Through the Looking Glass: “When I use a word”, Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less”. “The question is”, said Alice, “whether you can make words mean so many different things”. “The question is”, said Humpty Dumpty, “which is to be master—that’s all”. Lewis Carroll, Through the Looking-Glass (Plain Label Books, 2007), 112 (originally published 1872). See also the UK House of Lords case Liversidge v Anderson [1942] A.C. 206, 245.

  74.

    H.L.A. Hart, “Positivism and the Separation of Law and Morals”, Harvard Law Review, Vol. 71 (1958), 593, 607.

  75.

    See, for example, Ann Seidman, Robert B. Seidman, and Nalin Abeyesekere, Legislative Drafting for Democratic Social Change (London: Kluwer Law International, 2001), 307.

  76.

    Those interpreting the core definition can use various tools so as to ascertain the proper scope of application of the provision in question. These might include the legislative history of the provision, the mischief to which it was directed or even shifting social norms. See Ronald Dworkin, “Law as Interpretation”, Texas Law Review, Vol. 60 (1982), 529.

  77.

    José Hernández-Orallo, The Measure of All Minds: Evaluating Natural and Artificial Intelligence (Cambridge: Cambridge University Press, 2017). See also José Hernández-Orallo and David L. Dowe, “Measuring Universal Intelligence: Towards an Anytime Intelligence Test”, Artificial Intelligence, Vol. 174 (2010), 1508–1539. For an important early examination of algorithmic information theory and universal distributions, see Ray Solomonoff, “A Formal Theory of Inductive Inference: Part I”, Information and Control, Vol. 7, No. 1 (1964), 1–22.
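
    For orientation, the “universal distribution” at the heart of this line of work can be sketched as follows (roughly, and in simplified notation): the algorithmic probability of a string is a sum over all programs whose output begins with that string, so shorter (simpler) programs dominate the prior.

```latex
% Sketch of the Solomonoff universal prior: U is a universal prefix machine,
% the sum ranges over programs p whose output begins with the string x, and
% \ell(p) is the length of p in bits.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```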

  78.

    See further Chapter 6 at s. 2.1 in which it is argued that there is a spectrum of intelligences between narrow and general artificial intelligence, using the increasing ability of programs to display compositionality as an example.

  79.

    Discussed in Gerald M. Levitt, The Turk, Chess Automaton (Jefferson, NC: McFarland & Co., 2007).

  80.

    John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”, 31 August 1955, full text available at: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html, accessed 1 June 2018.

  81.

    Jacob Poushter, “Smartphone Ownership and Internet Usage Continues to Climb in Emerging Economies”, Pew Research Center, 22 February 2016, http://www.pewglobal.org/2016/02/22/smartphone-ownership-and-internet-usage-continues-to-climb-in-emerging-economies/, accessed 1 June 2018. At the time of the poll, global median smartphone ownership stood at 43%, with the rate climbing fastest in developing countries.

  82.

    As Ariel Ezrachi and Maurice E. Stucke chart in their book, Virtual Competition (Oxford: Oxford University Press, 2016), Internet sites can use an increasingly sophisticated set of data including the time users spend hovering their mouse over a particular part of a page in order to predict, and shape, their preferences.

  83.

    Perhaps surprisingly, the idea of household appliances connected to the Internet has a fairly long history. In 1990, a toaster was reportedly connected to the then-fledgling Internet via TCP/IP networking. The power could be controlled remotely, allowing a user to determine how dark the toast should be, http://www.livinginternet.com/i/ia_myths_toast.htm, accessed 1 June 2018.

  84.

    David Schatsky, Navya Kumar, and Sourabh Bumb, “Intelligent IoT: Bringing the Power of AI to the Internet of Things”, Deloitte, 12 December 2017, https://www2.deloitte.com/insights/us/en/focus/signals-for-strategists/intelligent-iot-internet-of-things-artificial-intelligence.html, accessed 1 June 2018.

  85.

    Aatif Sulleyman, “Durham Police to Use AI to Predict Future Crimes of Suspects, Despite Racial Bias Concerns”, Independent, 12 May 2017, http://www.independent.co.uk/life-style/gadgets-and-tech/news/durham-police-ai-predict-crimes-artificial-intelligence-future-suspects-racial-bias-minority-report-a7732641.html, accessed 1 June 2018. For criticism of such technology and its tendency to adopt racial biases, see Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals: And It’s Biased Against Blacks”, ProPublica, May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 1 June 2018. We will return in Chapter 8 at s. 3 to the propensity for such decision-making AI to adopt human biases and the ways in which regulation might stop it.

  86.

    See, for example, the U.S. Department of Transportation, “Federal Automated Vehicles Policy”, September 2016, https://www.transportation.gov/AV, accessed 1 June 2018, as well as the UK House of Lords Science and Technology Select Committee, 2nd Report of Session 2016–2017, “Connected and Autonomous Vehicles: The Future?”, 15 March 2017, https://www.publications.parliament.uk/pa/ld201617/ldselect/ldsctech/115/115.pdf, accessed 1 June 2018.

  87.

    Gareth Corfield, “Tesla Death Smash Probe: Neither Driver nor Autopilot Saw the Truck”, The Register, 20 June 2017, https://www.theregister.co.uk/2017/06/20/tesla_death_crash_accident_report_ntsb/, accessed 1 June 2018.

  88.

    Sam Levin and Julia Carrie Wong, “Self-driving Uber Kills Arizona Woman in First Fatal Crash Involving Pedestrian”, The Guardian, 19 March 2018, https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe, accessed 1 June 2018.

  89.

    Department of Defense, “Defense Science Board, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Summer Study on Autonomy”, June 2016, http://web.archive.org/web/20170113220254/http://www.acq.osd.mil/dsb/reports/DSBSS15.pdf, accessed 1 June 2018.

  90.

    Mary L. Cummings, Artificial Intelligence and the Future of Warfare, Chatham House, 26 January 2017, https://www.chathamhouse.org/publication/artificial-intelligence-and-future-warfare, accessed 1 June 2018.

  91.

    Some reports cast doubt on whether the malfunction was a result of software or human error. See, for example, Tom Simonite, “‘Robotic Rampage’ Unlikely Reason for Deaths”, New Scientist, 19 October 2007, available at: https://www.newscientist.com/article/dn12812-robotic-rampage-unlikely-reason-for-deaths/, accessed 1 June 2018.

  92.

    An example of this is Elli.Q, a social care robot which has been designed to convey emotion through speech tones, light and movement or body language. See Darcie Thompson-Fields, “AI Companion Aims to Improve Life for the Elderly”, Access AI, 12 January 2017, http://www.access-ai.com/news/511/ai-companion-aims-to-improve-life-for-the-elderly/, accessed 1 June 2018.

  93.

    Daniela Hernandez, “Artificial Intelligence Is Now Telling Doctors How to Treat You”, Wired Business/Kaiser Health News, 2 June 2014, https://www.wired.com/2014/06/ai-healthcare/, accessed 1 June 2018. Alphabet’s DeepMind has been partnering with healthcare providers, including the NHS, on a variety of initiatives, including an app called Streams, which has the capability to analyse medical history and test results to alert doctors and nurses to potential dangers which might not otherwise have been spotted. See “DeepMind—Health”, https://deepmind.com/applied/deepmind-health/, accessed 1 June 2018.

  94.

    Rena S. Miller and Gary Shorter, “High-Frequency Trading: Overview of Recent Developments”, US Congressional Research Service, 4 April 2016, 1, https://fas.org/sgp/crs/misc/R44443.pdf, accessed 1 June 2018.

  95.

    Laura Noonan, “ING Launches Artificial Intelligence Bond Trading Tool Katana”, Financial Times, 12 December 2017, https://www.ft.com/content/1c63c498-de79-11e7-a8a4-0a1e63a52f9c, accessed 1 June 2018.

  96.

    Alex Marshall, “From Jingles to Pop Hits, A.I. Is Music to Some Ears”, New York Times, 22 January 2017, https://www.nytimes.com/2017/01/22/arts/music/jukedeck-artificial-intelligence-songwriting.html, accessed 1 June 2018.

  97.

    Bob Holmes, “Requiem for the Soul”, New Scientist, 9 August 1997, https://www.newscientist.com/article/mg15520945-100-requiem-for-the-soul/, accessed 1 June 2018. For criticism, see Bayan Northcott, “But Is It Mozart?”, Independent, 4 September 1997, http://www.independent.co.uk/arts-entertainment/music/but-is-it-mozart-1237509.html, accessed 1 June 2018.

  98.

    “Homepage”, Mubert Website, http://mubert.com/en/, accessed 1 June 2018.

  99.

    Hal 90210, “This Is What Happens When an AI-Written Screenplay Is Made into a Film”, The Guardian, 10 June 2016, https://www.theguardian.com/technology/2016/jun/10/artificial-intelligence-screenplay-sunspring-silicon-valley-thomas-middleditch-ai, accessed 1 June 2018.

  100.

    The process used to create such visualisations was first revealed in two blog posts of 17 June 2015 and 1 July 2015 by Alexander Mordvintsev, Christopher Olah, and Mike Tyka. See “Inceptionism: Going Deeper into Neural Networks”, Google Research Blog, 17 June 2015, https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html, accessed 1 June 2018. The name DeepDream was first used in the latter, https://web.archive.org/web/20150708233542/http://googleresearch.blogspot.co.uk/2015/07/deepdream-code-example-for-visualizing.html, accessed 1 June 2018. Like many scientific breakthroughs and innovations, the DeepDream generator was discovered as a by-product of other research into the use of neural networks. Its designers explained: “Two weeks ago we blogged about a visualization tool designed to help us understand how neural networks work and what each layer has learned. In addition to gaining some insight on how these networks carry out classification tasks, we found that this process also generated some beautiful art”. The program used to create the visualisations is now available online at: https://deepdreamgenerator.com/, accessed 1 June 2018. See also Cade Metz, “Google’s Artificial Brain Is Pumping Out Trippy—And Pricey—Art”, Wired, 29 February 2016, https://www.wired.com/2016/02/googles-artificial-intelligence-gets-first-art-show/, accessed 1 June 2018.

  101.

    Tencent, “Not Your Father’s AI: Artificial Intelligence Hits the Catwalk at NYFW 2017”, PR Newswire, http://www.prnewswire.com/news-releases/not-your-fathers-ai-artificial-intelligence-hits-the-catwalk-at-nyfw-2017-300407584.html, accessed 1 June 2018.

  102.

    For an in-depth treatment of love between humans and robots, see David Levy, Love and Sex with Robots (New York: Harper Perennial, 2007).

  103.

    See Chapter 4 at s. 4.4.

  104.

    John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”, 31 August 1955, full text available at: http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html, accessed 1 June 2018.

  105.

    Ian J. Good, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers, edited by F. Alt and M. Ruminoff, Vol. 6 (New York: Academic Press, 1965).

  106.

    Nick Bostrom, “How Long Before Superintelligence?”, International Journal of Future Studies, Vol. 2 (1998).

  107.

    The singularity was conceived of shortly after the advent of modern AI studies, having been introduced by John von Neumann in 1958 and then popularised by Vernor Vinge in “The Coming Technological Singularity: How to Survive in the Post-human Era” (1993), available at: https://edoras.sdsu.edu/~vinge/misc/singularity.html, accessed 22 June 2018, and subsequently by Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking Press, 2005).

  108.

    In 1968, the Scottish chess champion David Levy bet AI pioneer John McCarthy £500 that a computer would not be able to beat him by 1979. Levy won that wager (though he was eventually beaten by a computer in 1989). For an account, see Chris Baraniuk, “The Cyborg Chess Players That Can’t Be Beaten”, BBC Website, 4 December 2015, http://www.bbc.com/future/story/20151201-the-cyborg-chess-players-that-cant-be-beaten, accessed 1 June 2018.

  109.

    The situation is somewhat complicated in that Kasparov had held the Fédération Internationale des Échecs (FIDE) world title until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association.

  110.

    Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014), 16.

  111.

    In May 2017, a subsequent version of the program, “AlphaGo Master”, defeated the world champion Go player, Ke Jie, by three games to nil. See “AlphaGo at The Future of Go Summit, 23–27 May 2017”, DeepMind Website, https://deepmind.com/research/alphago/alphago-china/, accessed 16 August 2018. Perhaps as a control against accusations that top players were being beaten by the psychological pressure of facing an AI system rather than by its skill, DeepMind had initially deployed AlphaGo Master in secret, during which period it beat 50 of the world’s top players online, playing under the pseudonym “Master”. See “Explore the AlphaGo Master Series”, DeepMind Website, https://deepmind.com/research/alphago/match-archive/master/, accessed 16 August 2018. DeepMind promptly announced AlphaGo’s retirement from the game to pursue other interests. See Jon Russell, “After Beating the World’s Elite Go Players, Google’s AlphaGo AI Is Retiring”, Tech Crunch, 27 May 2017, https://techcrunch.com/2017/05/27/googles-alphago-ai-is-retiring/, accessed 1 June 2018. Rather like a champion boxer tempted out of retirement for one more fight, AlphaGo returned a year later to face a new challenger: AlphaGo Zero. This is discussed in Chapter 2 at s. 3.2.1, and FN 130 and 131.

  112.

    Cade Metz, “In Two Moves, AlphaGo and Lee Sedol Redefined the Future”, Wired, 16 March 2016, https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/, accessed 1 June 2018. In October 2017, DeepMind announced yet another breakthrough involving Go: a computer which was able to master the game without access to any data generated by human players. Instead, it was provided only with the rules and, within a number of hours, had mastered the game to such an extent that it was able to beat the previous version of AlphaGo by 100 games to 0. See “AlphaGo Zero: Learning from Scratch”, DeepMind Website, 18 October 2017, https://deepmind.com/blog/alphago-zero-learning-scratch/, accessed 1 June 2018. See also Chapter 2 at s. 3.2.1.

  113.

    For a helpful analysis of the barriers to the singularity, see Toby Walsh, Android Dreams (London: Hurst & Co., 2017), 89–136.

  114.

    Barret Zoph and Quoc V. Le, “Neural Architecture Search with Reinforcement Learning”, Cornell University Library Research Paper, 15 February 2017, https://arxiv.org/abs/1611.01578, accessed 1 June 2018. See also Tom Simonite, “AI Software Learns to Make AI Software”, MIT Technology Review, 17 January 2017, https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/, accessed 1 June 2018.

  115.

    Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel, “RL2: Fast Reinforcement Learning via Slow Reinforcement Learning”, Cornell University Library Research Paper, 10 November 2016, https://arxiv.org/abs/1611.02779, accessed 1 June 2018.

  116.

    Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar, “Designing Neural Network Architectures Using Reinforcement Learning”, Cornell University Library Research Paper, 22 March 2017, https://arxiv.org/abs/1611.02167, accessed 1 June 2018.

  117.

    Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick, “Learning to Reinforcement Learn”, Cornell University Library Research Paper, 23 January 2017, https://arxiv.org/abs/1611.05763, accessed 1 June 2018.

  118.

    It may be objected that this is a simplification, or even a caricature: many have at different times expressed sentiments which could fall under each of these categories, and in reality there are points on a spectrum rather than strict alternatives. Nonetheless, we think these labels provide a helpful summary of current attitudes.

  119.

    Ray Kurzweil, “Don’t Fear Artificial Intelligence”, Time, 19 December 2014, http://time.com/3641921/dont-fear-artificial-intelligence/, accessed 1 June 2018.

  120.

    Alan Winfield, “Artificial Intelligence Will Not Turn into a Frankenstein’s Monster”, The Guardian, 10 August 2014, https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield, accessed 1 June 2018.

  121.

    Nick Bostrom, Superintelligence (Oxford: Oxford University Press, 2014), 124–125.

  122.

    Elon Musk, as quoted in S. Gibbs, “Elon Musk: Artificial Intelligence Is Our Biggest Existential Threat”, The Guardian, 27 October 2014, https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat, accessed 1 June 2018.

  123.

    “Open Letter”, Future of Life Institute, https://futureoflife.org/ai-open-letter/, accessed 1 June 2018.

  124.

    Alex Hern, “Stephen Hawking: AI Will Be ‘Either Best or Worst Thing’ for Humanity”, The Guardian, 19 October 2016, https://www.theguardian.com/science/2016/oct/19/stephen-hawking-ai-best-or-worst-thing-for-humanity-cambridge, accessed 1 June 2018.

  125.

    See the Locomotives Act 1861, the Locomotive Act 1865 and the Highways and Locomotives (Amendment) Act 1878 (all UK legislation).

  126.

    See, for example, Steven E. Jones, Against Technology: From the Luddites to Neo-Luddism (London: Routledge, 2013).

  127.

    Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (New York: Allen Lane, 2015), 286.

  128.

    This was due in large part to the publication of: Gideon Lewis-Kraus, “The Great A.I. Awakening”, The New York Times Magazine, 14 December 2016, https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html, accessed 1 June 2018.

  129.

    The changing appearance of the Facebook interface over time is a good example of a technology company using small updates to make large changes over time. See Jenna Mullins, “This Is How Facebook Has Changed Over the Past 12 Years”, ENews, 4 February 2016, http://www.eonline.com/uk/news/736977/this-is-how-facebook-has-changed-over-the-past-12-years, accessed 1 June 2018.

  130.

    See the Kyoto Protocol to the United Nations Framework Convention on Climate Change, 1997.

  131.

    See the Paris Climate Agreement, 2016.

  132.

    Richard Dobbs, James Manyika, and Jonathan Woetzel, “No Ordinary Disruption: The Four Global Forces Breaking All the Trends”, McKinsey Global Institute, April 2015, https://www.mckinsey.com/mgi/no-ordinary-disruption, accessed 1 June 2018.

  133.

    The observation that law is not simply a command backed by a threat (such as “do not steal or you will be punished”) was made originally by H.L.A. Hart in The Concept of Law (2nd edn. Oxford: Clarendon, 1997). Hart observed that such models of the law do not fully account for law’s role in other social functions, such as making certain agreements legally binding. For the command theory of law, see John Austin, The Province of Jurisprudence Determined and the Uses of the Study of Jurisprudence (London: John Murray, 1832), vii.

  134.

    See Gerald Postema, “Coordination and Convention at the Foundations of Law”, Journal of Legal Studies, Vol. 11 (1982), 165, 172 et seq.

  135.

    As explained further in Chapter 6, without a new universal system to ensure that all AI vehicles adhere to the same rules, many of their potential advantages over human drivers in terms of safety and efficiency will be lost.

  136.

    In philosophical terms, the concept of according rights and obligations to an entity is sometimes referred to as “personhood”, but the preferred term in law is “legal personality”, and that will be used here. For discussion of what legal personality entails, see Chapter 5 at s. 2.1. For the avoidance of doubt, legal personality does not refer to the collection of psychological traits which characterise an individual.

Copyright information

© 2019 The Author(s)

Cite this chapter

Turner, J. (2019). Introduction. In: Robot Rules. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-96235-1_1

  • DOI: https://doi.org/10.1007/978-3-319-96235-1_1

  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-319-96234-4

  • Online ISBN: 978-3-319-96235-1
