Controlling the Creations

Chapter in Robot Rules

Abstract

Turner explains that implementing constraints directly into AI requires addressing both moral and technical questions: Which norms should be chosen? How can these be implemented? Potential basic laws for robots include: a law of identification, requiring that AI make its status clear; a law of explanation, requiring that at least some parts of AI’s reasoning be divulged; laws on avoiding bias; and a law setting out any limits on the areas in which AI can operate. Finally, a kill switch law might make it mandatory that AI systems include a mechanism for safely interrupting their processes or operations, either temporarily or permanently.

Notes

  1. 1.

    On the issue of value alignment see, for example, Ariel Conn, “How Do We Align Artificial Intelligence with Human Values?”, Future of Life Institute, 3 February 2017, https://futureoflife.org/2017/02/03/align-artificial-intelligence-with-human-values/?cn-reloaded=1, accessed 1 June 2018.

  2. 2.

    For an excellent introductory work on this topic, see Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford: Oxford University Press, 2009).

  3. 3.

    Numerous academics and organisations have tackled this issue. See Roman Yampolskiy and Joshua Fox, “Safety Engineering for Artificial General Intelligence” Topoi, Vol. 32, No. 2 (2013), 217–226; Stuart Russell, Daniel Dewey, and Max Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine, Vol. 36, No. 4 (2015), 105–114; James Babcock, János Kramár, and Roman V. Yampolskiy, “Guidelines for Artificial Intelligence Containment”, arXiv preprint arXiv:1707.08476 (2017); Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané, “Concrete Problems in AI Safety”, arXiv preprint arXiv:1606.06565 (2016); Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, and Andrew Critch, “Alignment for Advanced Machine Learning Systems”, Machine Intelligence Research Institute (2016); Smitha Milli, Dylan Hadfield-Menell, Anca Dragan, and Stuart Russell, “Should Robots Be Obedient?”, arXiv preprint arXiv:1705.09990 (2017); and Iyad Rahwan, “Society-in-the-Loop: Programming the Algorithmic Social Contract ”, Ethics and Information Technology, Vol. 20, No. 1 (2018), 5–14. See also the work of OpenAI , an NGO which focuses on achieving safe artificial general intelligence: “Homepage”, Website of OpenAI , https://openai.com/, accessed 1 June 2018. The blog of OpenAI and Future of Humanity Institute researcher Paul Christiano also contains many valuable resources and discussions on the topic: https://ai-alignment.com/, accessed 1 June 2018.

  4. 4.

    See, for example, the UK Locomotive Act 1865, s.3.

  5. 5.

    Toby Walsh, Android Dreams (London: Hurst & Company, 2017), 111. Walsh notes at 112 that the above is “not the law itself… but a summary of its intent”, and that an actual law will “require a precise definition of autonomous system”. See also Toby Walsh, “Turing’s Red Flag”, Communications of the ACM, Vol. 59, No. 7 (July 2016), 34–37. Walsh terms it the “Turing Red Flag Law”, named after UK regulations from the nineteenth century which required that a person walk in front of an automobile waving a flag, so as to warn other road users of the new technology. See further below at s. 4.1.

  6. 6.

    Ibid.

  7. 7.

    “Homepage”, Website of AI2, http://allenai.org/, accessed 1 June 2018.

  8. 8.

    Oren Etzioni, “How to Regulate Artificial Intelligence”, 1 September 2017, New York Times, https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html, accessed 1 June 2018.

  9. 9.

    For a similar formulation to Walsh see Tim Wu, “Please Prove You’re Not a Robot”, New York Times, 15 July 2017, https://www.nytimes.com/2017/07/15/opinion/sunday/please-prove-youre-not-a-robot.html, accessed 1 June 2018.

  10. 10.

    Toby Walsh, Android Dreams (London: Hurst & Company, 2017), 113–114.

  11. 11.

    Though a 2018 accident in Arizona, where a woman was killed after walking in front of a self-driving vehicle travelling at 40 miles per hour, suggests that—at least at the time of writing—autonomous vehicles remain imperfect in this regard. See, for the issue and a potential solution: Dave Gershgorn, “An AI-Powered Design Trick Could Help Prevent Accidents like Uber’s Self-Driving Car Crash”, Quartz, 30 March 2018, https://qz.com/1241119/accidents-like-ubers-self-driving-car-crash-could-be-prevented-with-this-ai-powered-design-trick/, accessed 1 June 2018.

  12. 12.

    For an example of a system which is designed to test whether AI has “common sense”, see the discussion of the AI2 Reasoning Challenge in Will Knight, “AI Assistants Say Dumb Things, and We’re About to Find Out Why”, MIT Technology Review, 14 March 2018, https://www.technologyreview.com/s/610521/ai-assistants-dont-have-the-common-sense-to-avoid-talking-gibberish/, accessed 1 June 2018. See also the “AI2 Reasoning Challenge Leaderboard”, AI2 Website, http://data.allenai.org/arc/, accessed 1 June 2018.

  13. 13.

    Walsh also makes this point: Toby Walsh, Android Dreams (London: Hurst & Company, 2017), 116. As to the proficiency of AI poker players, see Byron Spice, “Carnegie Mellon Artificial Intelligence Beats Top Poker Pros”, Carnegie Mellon University Website, https://www.cmu.edu/news/stories/archives/2017/january/AI-beats-poker-pros.html, accessed 1 June 2018.

  14. 14.

    Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, February 2018, https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf, accessed 1 June 2018.

  15. 15.

    In the USA, there is a specific head of product liability law called “Failure to Warn”. See further Chapter 3 at s. 2.2.

  16. 16.

    José Hernández-Orallo, “AI: Technology Without Measure”, Presentation to Judge Business School, Cambridge University, 26 January 2018.

  17. 17.

    Toby Walsh, The Future of AI Website, http://thefutureofai.blogspot.co.uk/2016/09/staysafe-committee-driverless-vehicles.html, accessed 1 June 2018.

  18. 18.

    “Driverless Vehicles and Road Safety in New South Wales”, 22 September 2016, Staysafe (Joint Standing Committee on Road Safety), 2, https://www.parliament.nsw.gov.au/committees/DBAssets/InquiryReport/ReportAcrobat/6075/Report%20-%20Driverless%20Vehicles%20and%20Road%20Safety%20in%20NSW.pdf, accessed 1 June 2018.

  19. 19.

    Adapted from Philip K. Dick, Do Androids Dream of Electric Sheep? (New York: Doubleday, 1968).

  20. 20.

    See, for example, Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council (“unfair commercial practices directive”), OJ L 149, 11 June 2005, 22–39.

  21. 21.

    Andrew D. Selbst and Julia Powles, “Meaningful Information and the Right to Explanation ”, International Data Privacy Law, Vol. 7, No. 4 (1 November 2017), 233–242, https://doi.org/10.1093/idpl/ipx022, accessed 1 June 2018.

  22. 22.

    “DARPA Website”, https://www.darpa.mil/, accessed 1 June 2018.

  23. 23.

    David Gunning, “Explainable Artificial Intelligence (XAI)”, DARPA Website, https://www.darpa.mil/program/explainable-artificial-intelligence, accessed 1 June 2018.

  24. 24.

    David Gunning, DARPA XAI Presentation, DARPA, https://www.cc.gatech.edu/~alanwags/DLAI2016/(Gunning)%20IJCAI-16%20DLAI%20WS.pdf, accessed 1 June 2018.

  25. 25.

    Will Knight, “The Dark Secret at the Heart of AI”, MIT Technology Review, 11 April 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/, accessed 1 June 2018.

  26. 26.

    Bryce Goodman and Seth Flaxman, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation ’,” arXiv:1606.08813v3 [stat.ML], 31 August 2016, https://arxiv.org/pdf/1606.08813.pdf, accessed 1 June 2018.

  27. 27.

    Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms”, Big Data & Society (January–June 2016), 1–12 (2).

  28. 28.

    Hui Cheng et al. “Multimedia Event Detection and Recounting”, SRI-Sarnoff AURORA at TRECVID 2014 (2014) http://www-nlpir.nist.gov/projects/tvpubs/tv14.papers/sri_aurora.pdf, accessed 1 June 2018.

  29. 29.

    Upol Ehsan, Brent Harrison, Larry Chan, and Mark Riedl, “Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations”, arXiv:1702.07826v2 [cs.AI], 19 Dec 2, https://arxiv.org/pdf/1702.07826.pdf, accessed 1 June 2018.

  30. 30.

    Daniel Whitenack, “Hold Your Machine Learning and AI Models Accountable”, Medium, 23 November 2017, https://medium.com/pachyderm-data/hold-your-machine-learning-and-ai-models-accountable-de887177174c, accessed 1 June 2018.

  31. 31.

    Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016], OJ L119/1 (GDPR).

  32. 32.

    See, for example, “Overview of the General Data Protection Regulation (GDPR)” (Information Commissioner’s Office 2016), 1.1, https://ico.org.uk/for-organisations/data-protection-reform/overview-of-the-gdpr/individuals-rights/rights-related-to-automated-decision-making-and-profiling/, accessed 1 June 2018; House of Commons Science and Technology Committee, ‘Robotics and Artificial Intelligence’ (House of Commons 2016) HC 145, http://www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf, accessed 1 June 2018.

  33. 33.

    GDPR, art. 83.

  34. 34.

    Ibid., art. 3.

  35. 35.

    Equivalent wording is found in art. 14(2)(g) and art. 15(1)(h).

  36. 36.

    “Profiling” is defined at art. 4(4) as “automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”. The profiling referred to at art. 22 is automated decision-making about a person which “produces legal effects concerning him or her or similarly significantly affects him or her”.

  37. 37.

    EU legislation is published in multiple languages, each of which is equally valid. Some light might perhaps be cast on the term “meaningful information” by the other language versions of the GDPR. The German text of the GDPR uses the word “aussagekräftige”, the French text refers to “informations utiles”, and the Dutch version uses “nuttige informatie”. Although Selbst and Powles contend that “These formulations variously invoke notions of utility, reliability, and understandability”, the overall effect of this provision under any version remains obscure. Andrew D. Selbst and Julia Powles, “Meaningful Information and the Right to Explanation”, International Data Privacy Law, Vol. 7, No. 4 (1 November 2017), 233–242, https://doi.org/10.1093/idpl/ipx022, accessed 1 June 2018.

  38. 38.

    Andrew D. Selbst and Julia Powles, “Meaningful Information and the Right to Explanation ”, International Data Privacy Law, Vol. 7, No. 4 (1 November 2017), 233–242, https://doi.org/10.1093/idpl/ipx022, accessed 1 June 2018.

  39. 39.

    Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

  40. 40.

    See, for example, Tadas Klimas and Jurate Vaiciukaite, “The Law of Recitals in European Community Legislation”, International Law Students Association Journal of International and Comparative Law, Vol. 15 (2009), 61, 92.

  41. 41.

    Ibid., 80.

  42. 42.

    Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law, Vol. 7, No. 2 (1 May 2017), 76–99 (91), https://doi.org/10.1093/idpl/ipx005, accessed 1 June 2018. See also Fred H. Cate, Christopher Kuner, Dan Svantesson, Orla Lynskey, and Christopher Millard, “Machine Learning with Personal Data: Is Data Protection Law Smart Enough to Meet the Challenge?”, International Data Privacy Law, Vol. 7, No. 1 (2017); Ricardo Blanco-Vega, José Hernández-Orallo, and María José Ramírez-Quintana, “Analysing the Trade-Off Between Comprehensibility and Accuracy in Mimetic Models”, in International Conference on Discovery Science (Berlin, Heidelberg: Springer, 2004), 338–346.

  43. 43.

    Douwe Korff, “New Challenges to Data Protection Study-Working Paper No. 2”, European Commission DG Justice, Freedom and Security Report 86, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1638949, accessed 1 June 2018.

  44. 44.

    See the discussion of the difference between Directives and Regulations in Chapter 6 at s. 7.3.

  45. 45.

    Ibid.

  46. 46.

    “Glossary”, Website of the European Data Protection Supervisor, https://edps.europa.eu/data-protection/data-protection/glossary/a_en, accessed 1 June 2018.

  47. 47.

    Art. 29 Working Party, “Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679”, adopted on 3 October 2017, 17/EN WP 251.

  48. 48.

    Ibid.

  49. 49.

    See, for example, Mangold v. Helm (2005) C-144/04, or, more recently, the development of a “right to be forgotten” by the Court of Justice of the EU in relation to the ability of individuals to demand their removal from web search engine results—despite this not being specifically provided for in the relevant legislation at the time: Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja González (2014) C-131/12.

  50. 50.

    Dong Huk Park et al., “Attentive Explanations: Justifying Decisions and Pointing to the Evidence”, arXiv:1612.04757v1 [cs.CV], 14 December 2016, https://arxiv.org/pdf/1612.04757v1.pdf, accessed 1 June 2018.

  51. 51.

    “AI finds novel way to beat classic Q*bert Atari video game”, BBC Website, 1 March 2018, http://www.bbc.co.uk/news/technology-43241936, accessed 1 June 2018.

  52. 52.

    “For Artificial Intelligence to Thrive, It Must Explain Itself”, The Economist, 15 February 2018.

  53. 53.

    Lilian Edwards and Michael Veale, “Slave to the Algorithm? Why a ‘Right to an Explanation ’ Is Probably Not the Remedy You Are Looking For” Duke Law and Technology Review, Vol. 16, No. 1 (2017), 1–65 (43).

  54. 54.

    Vijay Pande, “Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear”, New York Times, 25 January 2018, https://www.nytimes.com/2018/01/25/opinion/artificial-intelligence-black-box.html, accessed 1 June 2018.

  55. 55.

    See Daniel Kahneman and Jason Riis, “Living, and Thinking About It: Two Perspectives on Life”, in The Science of Well-Being, Vol. 1 (2005). See also Daniel Kahneman, Thinking, Fast and Slow (London: Penguin, 2011).

  56. 56.

    Indeed, the latter is so powerful that the UK Government created a specialist body, the Behavioural Insights Team (popularly known as the Nudge Unit), designed to influence people’s behaviour without their realising it. Website of the Behavioural Insights Team, http://www.behaviouralinsights.co.uk/, accessed 1 June 2018.

  57. 57.

    Campolo et al., AI Now Institute 2017 Report, https://assets.contentful.com/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf, accessed 1 June 2018.

  58. 58.

    For an example of a functional approach to explainable AI, see Todd Kulesza, Margaret M. Burnett, Weng-Keen Wong and Simone Stumpf, “Principles of Explanatory Debugging to Personalize Interactive Machine Learning”, IUI 2015, Proceedings of the 20th International Conference on Intelligent User Interfaces (2015), 126–137.

  59. 59.

    David Weinberger, “Don’t Make AI Artificially Stupid in the Name of Transparency”, Wired, 28 January 2018, https://www.wired.com/story/dont-make-ai-artificially-stupid-in-the-name-of-transparency/, accessed 1 June 2018. See also David Weinberger, “Optimization Over Explanation: Maximizing the Benefits of Machine Learning Without Sacrificing Its Intelligence”, Berkman Klein Centre, 28 January 2018, https://medium.com/berkman-klein-center/optimization-over-explanation-41ecb135763d, accessed 1 June 2018. See also, for example, Sandra Wachter, Brent Mittelstadt, and Chris Russell, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR” (6 October 2017), Harvard Journal of Law & Technology, Forthcoming, https://ssrn.com/abstract=3063289 or http://dx.doi.org/10.2139/ssrn.3063289, accessed 1 June 2018.
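
    As a rough illustration of what such a counterfactual explanation involves (a paraphrase of the general approach, not the authors’ exact formulation): given a model f, an input x and a desired alternative outcome y', one looks for the closest admissible input x' whose prediction is y', for example

        x^{*} = \arg\min_{x'} d(x, x') \quad \text{subject to} \quad f(x') = y',

    often relaxed in practice to minimising \ell(f(x'), y') + \lambda \, d(x, x') for a chosen distance d and trade-off parameter \lambda. The resulting x^{*} supports statements of the form “had these inputs been different in this specific way, the decision would have changed”, without requiring the model’s inner workings to be disclosed.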

  60. 60.

    Entry on Elizabeth I, The Oxford Dictionary of Quotations (Oxford: Oxford University Press, 2001), 297.

  61. 61.

    A person’s mental state in terms of knowledge or intent may well be important, but it rarely has legal consequences unless it is accompanied by some form of culpable action or omission: people are not usually penalised for “having bad thoughts”.

  62. 62.

    Ben Dickson, “Why It’s So Hard to Create Unbiased Artificial Intelligence”, Tech Crunch, 7 November 2016, https://techcrunch.com/2016/11/07/why-its-so-hard-to-create-unbiased-artificial-intelligence/, accessed 1 June 2018.

  63. 63.

    Sam Levin, “A Beauty Contest Was Judged by AI and the Robots Didn’t Like Dark Skin”, The Guardian, https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people, accessed 1 June 2018.

  64. 64.

    Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias ”, ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 1 June 2018.

  65. 65.

    Marvin Minsky, The Emotion Machine (London: Simon & Schuster, 2015), 113.

  66. 66.

    See, for example, the Entry on Bias in the Cambridge Dictionary: “… the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment”, Cambridge Dictionary, https://dictionary.cambridge.org/dictionary/english/bias, accessed 1 June 2018.

  67. 67.

    Nora Gherbi, “Artificial Intelligence and the Age of Empathy”, Conscious Magazine, http://consciousmagazine.co/artificial-intelligence-age-empathy/, accessed 1 June 2018.

  68. 68.

    The programs tested were those of IBM, Microsoft and Face++. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” (Conference on Fairness, Accountability, and Transparency, February 2018), http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf, accessed 1 June 2018.

  69. 69.

    Ibid.

  70. 70.

    “Mitigating Bias in AI Models”, IBM Website, https://www.ibm.com/blogs/research/2018/02/mitigating-bias-ai-models/, accessed 1 June 2018. “Computer Programs Recognise White Men Better Than Black Women”, The Economist, 15 February 2018.

  71. 71.

    Ibid.

  72. 72.

    Using the definition above, Tay’s behaviour displayed a form of bias inasmuch as the implicit aim of Microsoft was to create a chatbot which could engage in civil conversation, but it was influenced by user inputs which were incompatible with polite discourse.

  73. 73.

    Sarah Perez, “Microsoft Silences Its New A.I. Bot Tay, after Twitter Users Teach It Racism”, Tech Crunch, 24 March 2016, https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/, accessed 1 June 2018.

  74. 74.

    John West, “Microsoft’s Disastrous Tay Experiment Shows the Hidden Dangers of AI”, Quartz, 2 April 2016, https://qz.com/653084/microsofts-disastrous-tay-experiment-shows-the-hidden-dangers-of-ai/, accessed 1 June 2018.

  75. 75.

    Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus, “Intriguing Properties of Neural Networks”, arXiv preprint (2013), https://arxiv.org/abs/1312.6199, accessed 1 June 2018.

  76. 76.

    “CleverHans”, GitHub, https://github.com/tensorflow/cleverhans, accessed 1 June 2018.

  77. 77.

    Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases”, Science, Vol. 356, No. 6334 (2017), 183–186.

  78. 78.

    “Biased Bots: Human Prejudices Sneak into AI Systems”, Bath University Website, 13 April 2017, http://www.bath.ac.uk/news/2017/04/13/biased-bots-artificial-intelligence/, accessed 1 June 2018.

  79. 79.

    Matthew Hutson, “Even Artificial Intelligence Can Acquire Biases Against Race and Gender”, Science Magazine, 13 April 2017, http://www.sciencemag.org/news/2017/04/even-artificial-intelligence-can-acquire-biases-against-race-and-gender, accessed 1 June 2018.

  80. 80.

    881 N.W.2d 749 (2016).

  81. 81.

    State of Wisconsin, Plaintiff-Respondent, v. Eric L. LOOMIS, Defendant-Appellant, 881 N.W.2d 749 (2016), 2016 WI 68, https://www.leagle.com/decision/inwico20160713i48, accessed 1 June 2018.

  82. 82.

    It is well established in US law that “[a] defendant has a constitutionally protected due process right to be sentenced upon accurate information” Travis, 347 Wis.2d 142, 17, 832 N.W.2d 491.

  83. 83.

    State of Wisconsin, Plaintiff-Respondent, v. Eric L. LOOMIS, Defendant-Appellant, 881 N.W.2d 749 (2016), 2016 WI 68, 65–66, https://www.leagle.com/decision/inwico20160713i48, accessed 1 June 2018.

  84. 84.

    Ibid., 54.

  85. 85.

    Ibid., 72. In State of Wisconsin v. Curtis E. Gallion, the Wisconsin Supreme Court explained that circuit courts “have an enhanced need for more complete information upfront, at the time of sentencing” 270 Wis.2d 535, 34, 678 N.W.2d 197.

  86. 86.

    “State v. Loomis, Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing”, 10 March 2017, 130 Harvard Law Review 1530, 1534.

  87. 87.

    Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias ”, ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 1 June 2018.

  88. 88.

    A and others v. United Kingdom [2009] ECHR 301; applied by the UK Supreme Court in AF [2009] UKHL 28.

  89. 89.

    However, it is not clear whether courts in Europe will treat the protection of intellectual property in an algorithm with as much importance as they do national security.

  90. 90.

    Under the Fourteenth Amendment to the US Constitution.

  91. 91.

    Houston Federation of Teachers Local 2415 et al. v. Houston Independent School District, Case 4:14-cv-01189, 17, https://www.gpo.gov/fdsys/pkg/USCOURTS-txsd-4_14-cv-01189/pdf/USCOURTS-txsd-4_14-cv-01189-0.pdf, accessed 1 June 2018.

  92. 92.

    Ibid., 18.

  93. 93.

    John D. Harden and Shelby Webb, “Houston ISD Settles with Union Over Controversial Teacher Evaluations”, Chron, 12 October 2017, https://www.chron.com/news/education/article/Houston-ISD-settles-with-union-over-teacher-12267893.php, accessed 1 June 2018.

  94. 94.

    Interestingly, Loomis was not directly considered by the District Court in the Houston Teachers case, even though the latter reached the opposite conclusion on constitutionality; its only mention was a passing reference in a footnote, recording that “Courts are beginning to confront similar due process issues about government use of proprietary algorithms in other contexts”.

  95. 95.

    “Sampling Methods for Political Polling”, American Association for Public Opinion Research, https://www.aapor.org/Education-Resources/Election-Polling-Resources/Sampling-Methods-for-Political-Polling.aspx, accessed 1 June 2018.

  96. 96.

    Kate Crawford, “Artificial Intelligence’s White Guy Problem”, New York Times, 25 June 2016, https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html, accessed 1 June 2018.

  97. 97.

    See, for instance, Ivana Bartoletti, “Women Must Act Now, or Male-Designed Robots Will Take Over Our Lives”, The Guardian, 13 March 2018, https://www.theguardian.com/commentisfree/2018/mar/13/women-robots-ai-male-artificial-intelligence-automation, accessed 1 June 2018.

  98. 98.

    See, for example, the proposals in Michael Veale and Reuben Binns, “Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data”, Big Data & Society, Vol. 4, No. 2 (2017), 2053951717743530.

  99. 99.

    “Laws Enforced by EEOC”, Website of the U.S. Equal Employment Opportunity Commission, https://www.eeoc.gov/laws/statutes/, accessed 1 June 2018.

  100. 100.

    It may be the case that researchers wish to assess an otherwise protected characteristic as part of a scientific experiment or poll. For instance, it would be legitimate for a program to discriminate on grounds of race if it was being used in an experiment to map the prevalence of genetic diseases which are commonly found in one particular race. In this situation, the use of a protected characteristic would not meet the definition of bias outlined above, because it would be relevant to the task in question.

  101. 101.

    Silvia Chiappa and Thomas P.S. Gillam, “Path-Specific Counterfactual Fairness”, arXiv:1802.08139v1 [stat.ML], 22 February 2018.

  102. 102.

    Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva, “Counterfactual Fairness”, Advances in Neural Information Processing Systems, Vol. 30 (2017), 4069–4079.
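
    For readers unfamiliar with the criterion, the definition in Kusner et al. can be paraphrased roughly as follows (a sketch in their causal-model notation, where U denotes latent background variables): a predictor \hat{Y} is counterfactually fair with respect to a protected attribute A if, for an individual with observed features X = x and attribute A = a, and for all outcomes y and alternative values a',

        P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a).

    Informally, the prediction the individual actually receives should be the same as the prediction they would have received had their protected attribute been different, in the counterfactual sense defined by the causal model.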

  103. 103.

    Silvia Chiappa and Thomas P.S. Gillam, “Path-Specific Counterfactual Fairness”, arXiv:1802.08139v1 [stat.ML], 22 February 2018.

  104. 104.

    Samiulla Shaikh, Harit Vishwakarma, Sameep Mehta, Kush R. Varshney, Karthikeyan Natesan Ramamurthy, and Dennis Wei, “An End-To-End Machine Learning Pipeline That Ensures Fairness Policies”, arXiv:1710.06876v1 [cs.CY], 18 October 2017.

  105. 105.

    Ibid.

  106. 106.

    In addition to the papers cited above, see also B. Srivastava and F. Rossi, “Towards Composable Bias Rating of AI Services”, AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, New Orleans, LA, February 2018; F.P. Calmon, D. Wei, B. Vinzamuri, K.N. Ramamurthy, and K.R. Varshney, “Optimized Pre-Processing for Discrimination Prevention”, Advances in Neural Information Processing Systems, Long Beach, CA, December 2017; and R. Nabi and I. Shpitser, “Fair Inference on Outcomes”, Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

  107. 107.

    Will Knight, “The Dark Secret at the Heart of AI”, MIT Technology Review, 11 April 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/, accessed 1 June 2018.

  108. 108.

    Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi, “The Ethics of Algorithms: Mapping the Debate”, Big Data & Society, Vol. 3, No. 2 (2016), http://journals.sagepub.com/doi/full/10.1177/2053951716679679, accessed 1 June 2018.

  109. 109.

    Marc Bennetts, “Soviet Officer Who Averted Cold War Nuclear Disaster Dies Aged 77”, The Guardian, 18 September 2017, https://www.theguardian.com/world/2017/sep/18/soviet-officer-who-averted-cold-war-nuclear-disaster-dies-aged-77, accessed 1 June 2018.

  110. 110.

    Benjamin Bidder, “Forgotten Hero: The Man Who Prevented the Third World War”, Der Spiegel, 21 April 2010, http://www.spiegel.de/einestages/vergessener-held-a-948852.html, accessed 1 June 2018.

  111. 111.

    See, for instance, George Dvorsky, “Why Banning Killer AI is Easier Said Than Done”, 9 July 2017, Gizmodo, https://gizmodo.com/why-banning-killer-ai-is-easier-said-than-done-1800981342, accessed 1 June 2018.

  112. 112.

    This appears to be the approach taken by the UK military with regard to automated and autonomous weapons: “Current UK policy is that the operation of our weapons will always be under human control as an absolute guarantee of human oversight and authority and of accountability for weapon usage. This information has been put on record a number of times, both in parliament and international forums. Although a limited number of defensive systems can currently operate in automatic mode, there is always a person involved in setting the parameters of any such mode”. UK Ministry of Defence, “Joint Doctrine Publication 0-30.2 Unmanned Aircraft Systems”, Development, Concepts and Doctrine Centre, August 2017, 42, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/673940/doctrine_uk_uas_jdp_0_30_2.pdf, accessed 1 June 2018.

  113. 113.

    Art. 29 Data Protection Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, adopted 3 October 2017, 17/EN WP 251, 10.

  114. 114.

    Eduardo Ustaran and Victoria Hordern, “Automated Decision-Making Under the GDPR—A Right for Individuals or A Prohibition for Controllers?”, Hogan Lovells, 20 October 2017, https://www.hldataprotection.com/2017/10/articles/international-eu-privacy/automated-decision-making-under-the-gdpr-a-right-for-individuals-or-a-prohibition-for-controllers/, accessed 1 June 2018.

  115. 115.

    Art. 29 Data Protection Working Party, “Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679”, adopted 3 October 2017, 17/EN WP 251, 10.

  116. 116.

    Ibid., 11.

  117. 117.

    Eduardo Ustaran and Victoria Hordern, “Automated Decision-Making Under the GDPR—A Right for Individuals or A Prohibition for Controllers?”, Hogan Lovells, 20 October 2017, https://www.hldataprotection.com/2017/10/articles/international-eu-privacy/automated-decision-making-under-the-gdpr-a-right-for-individuals-or-a-prohibition-for-controllers/, accessed 1 June 2018.

  118. 118.

    Ibid.

  119. 119.

    See, for example, Richa Bhatia, “Is Deep Learning Going to Be Illegal in Europe?”, Analytics India Magazine, 30 January 2018, https://analyticsindiamag.com/deep-learning-going-illegal-europe/; Rand Hindi, “Will Artificial Intelligence Be Illegal in Europe Next Year?”, Entrepreneur, 9 August 2017, https://www.entrepreneur.com/article/298394, both accessed 1 June 2018.

  120. 120.

    “Media Advisory: Campaign to Ban Killer Robots Launch in London”, art. 36, 11 April 2013, http://www.article36.org/press-releases/media-advisory-campaign-to-ban-killer-robots-launch-in-london/, accessed 1 June 2018.

  121. 121.

    Samuel Gibbs, “Elon Musk Leads 116 Experts Calling for Outright Ban of Killer Robots”, The Guardian, 20 August 2017, https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war, accessed 1 June 2018. See also “2018 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)”, United Nations Office at Geneva, https://www.unog.ch/80256EE600585943/(httpPages)/7C335E71DFCB29D1C1258243003E8724?OpenDocument, accessed 1 June 2018.

  122. 122.

    Ian Steadman, “IBM’s Watson Is Better at Diagnosing Cancer Than Human Doctors”, Wired, 11 February 2013, http://www.wired.co.uk/article/ibm-watson-medical-doctor, accessed 1 June 2018.

  123. 123.

    International Committee of the Red Cross, What Is International Humanitarian Law? (Geneva: ICRC, July 2004), https://www.icrc.org/eng/assets/files/other/what_is_ihl.pdf, accessed 1 June 2018.

  124. 124.

    Loes Witschge, “Should We Be Worried About ‘Killer Robots’?”, Al Jazeera, 9 April 2018, https://www.aljazeera.com/indepth/features/worried-killer-robots-180409061422106.html, accessed 1 June 2018.

  125. 125.

    Protocol IV of the 1980 Convention on Certain Conventional Weapons (Protocol on Blinding Laser Weapons).

  126. 126.

    Ottawa Treaty 1997. To date there are 164 signatories, but 32 UN states are non-signatories; these include powerful and important parties such as the US, Russia, China and India.

  127. 127.

    Nadia Whitehead, “Face Recognition Algorithm Finally Beats Humans”, Science, 23 April 2014, http://www.sciencemag.org/news/2014/04/face-recognition-algorithm-finally-beats-humans, accessed 1 June 2018.

  128. 128.

    Loes Witschge, “Should We Be Worried About ‘Killer Robots’?”, Al Jazeera, 9 April 2018, https://www.aljazeera.com/indepth/features/worried-killer-robots-180409061422106.html, accessed 1 June 2018.

  129. 129.

    H.L.A. Hart, Punishment and Responsibility: Essays in the Philosophy of Law (Oxford: Clarendon Press, 1978).

  130. 130.

    Carlsmith and Darley, “Psychological Aspects of Retributive Justice”, in Advances in Experimental Social Psychology, edited by Mark Zanna (San Diego, CA: Elsevier, 2008).

  131. 131.

    In evidence to the Royal Commission on Capital Punishment, Cmd. 8932, para. 53 (1953).

  132. 132.

    Exodus 21:24, King James Bible.

  133. 133.

    John Danaher, “Robots, Law and the Retribution Gap”, Ethics and Information Technology, Vol. 18, No. 4 (December 2016), 299–309.

  134. 134.

    Recent experiments conducted by Zachary Mainen involving the use of the hormone serotonin on biological systems may provide one avenue for future AI to experience emotions in a similar manner to humans. See Matthew Hutson, “Could Artificial Intelligence Get Depressed and Have Hallucinations?”, Science Magazine, 9 April 2018, http://www.sciencemag.org/news/2018/04/could-artificial-intelligence-get-depressed-and-have-hallucinations, accessed 1 June 2018.

  135. 135.

    In a gruesome example of public retribution being exacted against insensate “perpetrators”, in 1661, following the restoration of the English monarchy after the English Civil War and the republican Protectorate, three of the already deceased regicides who had participated in the execution of Charles I were disinterred from their graves and tried for treason. Having been found “guilty”, the corpses’ heads were removed and set on stakes above Westminster Hall. This may sound ridiculous, but arguably it answered a societal need: justice was seen to have been done. See Jonathan Fitzgibbons, Cromwell’s Head (London: Bloomsbury Academic, 2008), 27–47. See also Chapter 2 at s. 2.1.3.

  136. 136.

    H.L.A. Hart, Punishment and Responsibility: Essays in the Philosophy of Law (Oxford: Clarendon Press, 1978).

  137. 137.

    Robert Lowe and Tom Ziemke, “Exploring the Relationship of Reward and Punishment in Reinforcement Learning: Evolving Action Meta-Learning Functions in Goal Navigation”, in 2013 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL) (IEEE, 2013), 140–147.

  138. 138.

    Stephen M. Omohundro, “The Basic AI Drives”, in Proceedings of the First Conference on Artificial General Intelligence, 2008.

  139. 139.

    Stuart Russell, “Should We Fear Supersmart Robots?”, Scientific American, Vol. 314 (June 2016), 58–59.

  140. 140.

    Nate Soares and Benja Fallenstein, “Aligning Superintelligence with Human Interests: A Technical Research Agenda”, in The Technological Singularity (Berlin and Heidelberg: Springer, 2017), 103–125. See also Stephen M. Omohundro, “The Basic AI Drives”, in Proceedings of the First Conference on Artificial General Intelligence, 2008.

  141. 141.

    Ibid.

  142. 142.

    Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014), Chapter 9.

  143. 143.

    See John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior (Princeton, NJ: Princeton University Press, 1944).

  144. 144.

    Nate Soares and Benja Fallenstein, “Toward Idealized Decision Theory”, Technical Report 2014–7 (Berkeley, CA: Machine Intelligence Research Institute, 2014), https://arxiv.org/abs/1507.01986, accessed 1 June 2018.

  145. 145.

    See, for example, Thomas Harris, The Silence of the Lambs (London: St. Martin’s Press, 1998).

  146. 146.

    Jon Bird and Paul Layzell, “The Evolved Radio and Its Implications for Modelling the Evolution of Novel Sensors”, in Proceedings of the 2002 Congress on Evolutionary Computation (CEC’02), Vol. 2 (IEEE, 2002), 1836–1841.

  147. 147.

    Laurent Orseau and Stuart Armstrong, “Safely Interruptible Agents” (London and Berkeley, CA: DeepMind / MIRI, 28 October 2016), http://intelligence.org/files/Interruptibility.pdf, accessed 1 June 2018.

  148. 148.

    Ibid.

  149. 149.

    Ibid.

  150. 150.

    Nate Soares, Benja Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong, “Corrigibility”, in Artificial Intelligence and Ethics, edited by Toby Walsh, AAAI Technical Report WS-15-02 (Palo Alto, CA: AAAI Press, 2015), 75, https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124/10136, accessed 1 June 2018.

  151. 151.

    We addressed this proposal in Chapter 4 at s. 4 when discussing the extent to which an AI system might exhibit some aspects of consciousness .

  152. 152.

    Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell, “The Off-Switch Game”, arXiv preprint arXiv:1611.08219 (2016), 1.

  153. 153.

    Jessica Taylor, Eliezer Yudkowsky, Patrick LaVictoire, and Andrew Critch, “Alignment for Advanced Machine Learning Systems”, Machine Intelligence Research Institute (2016). For a proposal building (and arguably improving) on the work of Orseau and Armstrong, see El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, and Alexandre Maure, “Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning”, EPFL Working Paper (2017), No. EPFL-WORKING-229332 (EPFL, 2017). Whereas Orseau and Armstrong address safe interruptibility for single agent AI, El Mhamdi et al. “precisely define and address the question of safe interruptibility in the case of several agents, which is known to be more complex than the single agent problem. In short, the main results and theorems for single agent reinforcement learning rely on the Markovian assumption that the future environment only depends on the current state. This is not true when there are several agents which can co-adapt”.
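
    To make the single-agent idea concrete, the following is a minimal illustrative sketch in Python of an interruptible learning loop: an operator can override the agent at any step, and overridden steps are simply excluded from the learning update so that the interruption mechanism does not shape the learned policy. This is a toy simplification for illustration only; it is not the construction in Orseau and Armstrong’s paper, and the environment, parameters and 10% interruption rate are invented for the example.

        import random
        from collections import defaultdict

        # Toy corridor: cells 0..4, reward for reaching the right-hand end.
        N_STATES = 5
        ACTIONS = [-1, +1]                         # move left or move right
        EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

        q = defaultdict(float)                     # Q-values keyed by (state, action)

        def step(state, action):
            nxt = min(max(state + action, 0), N_STATES - 1)
            return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

        def choose(state):
            if random.random() < EPSILON:          # epsilon-greedy exploration
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: q[(state, a)])

        for episode in range(200):
            state = 0
            for _ in range(20):
                interrupted = random.random() < 0.1            # operator pulls the "off switch"
                action = -1 if interrupted else choose(state)  # override with a safe action
                nxt, reward = step(state, action)
                if not interrupted:
                    # Ordinary Q-learning update; interrupted steps are excluded so the
                    # interruption mechanism does not feed back into what the agent learns.
                    best_next = max(q[(nxt, a)] for a in ACTIONS)
                    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
                state = nxt
                if state == N_STATES - 1:
                    break

        print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})

    The multi-agent setting discussed by El Mhamdi et al. is harder precisely because each agent’s environment includes the other, co-adapting agents, so a simple per-agent exclusion of interrupted steps of this kind no longer suffices.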

  154. 154.

    Gonzalo Torres, “What Is a Computer Virus?”, AVG Website, 18 December 2017, https://www.avg.com/en/signal/what-is-a-computer-virus, accessed 1 June 2018.

  155. 155.

    See also Chapter 3 at s. 2.6.4.

  156. 156.

    Nate Soares, Benja Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong, “Corrigibility”, in Artificial Intelligence and Ethics, edited by Toby Walsh, AAAI Technical Report WS-15-02 (Palo Alto, CA: AAAI Press, 2015), https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124/10136, accessed 1 June 2018.

Author information

Correspondence to Jacob Turner.

Copyright information

© 2019 The Author(s)

About this chapter

Cite this chapter

Turner, J. (2019). Controlling the Creations. In: Robot Rules. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-319-96235-1_8

  • DOI: https://doi.org/10.1007/978-3-319-96235-1_8

  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-319-96234-4

  • Online ISBN: 978-3-319-96235-1
