Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach

Abstract

In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions of how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: (a) the development of a ‘good AI society’; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. To help fill this gap, in the conclusion we suggest a two-pronged approach.

Notes

  1. Furlow (2016).

  2. Fleury (2015).

  3. National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee (2016). On the IT-friendly trend see Floridi (2014).

  4. Mittelstadt et al. (2016).

  5. Our focus is solely on the initial reports coming out in the fall and winter of 2016. This choice was made to ensure that the comparison would focus on the specifics of the first round of reports by these governments, as opposed to the ensuing responses and follow-up reports.

  6. We also mention the US R&D compendium and the adjoining Economic Report, as they are integral to the initial US report.

  7. Floridi (2016a).

  8. Executive Office of the President National Science and Technology Council Committee on Technology (2016).

  9. Felten and Lyons (2016).

  10. Request for Information on Artificial Intelligence (2016).

  11. The OSTP report states that: “Developing and studying machine intelligence can help us better understand and appreciate our human intelligence. Used thoughtfully, AI can augment our intelligence, helping us chart a better and wiser path forward.” (2016, pp. 14, 49).

  12. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 1).

  13. The OSTP report states that in certain cases this can be achieved by working together with public institutes, or supported by public funding: “Private and public institutions are encouraged to examine whether and how they can responsibly leverage AI and machine learning in ways that will benefit society. Social justice and public policy institutions that do not typically engage with advanced technologies and data science in their work should consider partnerships with AI researchers and practitioners that can help apply AI tactics to the broad social problems these institutions already address in other ways.” (2016, pp. 14, 40).

  14. Finley (2016).

  15. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 17).

  16. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 17).

  17. The OSTP report, for instance, mentions the approach of evolving regulatory frameworks on the basis of ongoing experimentation: “The Department of Transportation (DOT) is using an approach to evolving the relevant regulations that is based on building expertise in the Department, creating safe spaces and test-beds for experimentation, and working with industry and civil society to evolve performance-based regulations that will enable more uses as evidence of safe operation accumulates.” (2016, p. 1).

  18. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 20).

  19. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 4).

  20. The OSTP report makes the following recommendation: “Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.” (2016, pp. 26, 41).

  21. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 2).

  22. Executive Office of the President National Science and Technology Council Committee on Technology (2016, pp. 2, 29).

  23. The OSTP report states that: “Public policy can [also] ensure that the economic benefits created by AI are shared broadly, and assure that AI responsibly ushers in a new age in the global economy.” (2016, p. 2).

  24. Executive Office of the President (2016), Artificial Intelligence, Automation and the Economy. Hereafter referred to as Executive Office of the President (2016).

  25. Executive Office of the President (2016, p. 3).

  26. Executive Office of the President (2016, p. 27).

  27. The OSTP’s companion document, entitled the “National Artificial Intelligence Research and Development Strategic Plan”, details how R&D investments can be used to advance policies that have a positive long-term impact on society and the world (2016, pp. 7–10). The plan is available at: https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf. Hereafter referred to as Networking and Information Technology Research and Development Subcommittee (2016).

  28. Networking and Information Technology Research and Development Subcommittee (2016, p. 7).

  29. Executive Office of the President (2016, p. 3).

  30. Executive Office of the President (2016, pp. 32–34).

  31. Executive Office of the President (2016, p. 35).

  32. The OSTP report states: “Agencies across the U.S. Government are working to develop a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.” (2016, p. 3).

  33. Taddeo (2016a, b).

  34. Libicki (2009), Quackenbush (2011), Floridi (2016a, b).

  35. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 38).

  36. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 2).

  37. The OSTP report defines transparency as consisting of two parts: “The data and algorithms involved, and the potential to have some form of explanation for any AI-based determination” (2016, p. 2).

  38. Transparency is covered in greater detail in the R&D strategy compendium of the OSTP report: “A key research challenge is increasing the “explainability” or “transparency” of AI. Many algorithms, including those based on deep learning, are opaque to users, with few existing mechanisms for explaining their results. This is especially problematic for domains such as healthcare, where doctors need explanations to justify a particular diagnosis or a course of treatment. AI techniques such as decision-tree induction provide built-in explanations but are generally less accurate. Thus, researchers must develop systems that are transparent, and intrinsically capable of explaining the reasons for their results to users.” See Networking and Information Technology Research and Development Subcommittee (2016, p. 28).

  39. United States Standards Strategy Committee (2015).

  40. Networking and Information Technology Research and Development Subcommittee (2016, pp. 14, 26).

  41. Kroll et al. (2017), Annany and Crawford (2016).

  42. Crawford and Calo (2016).

  43. Wachter et al. (Forthcoming).

  44. Calo (2014).

  45. Tutt (2016).

  46. Scherer (2016).

  47. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 27); Executive Office of the President (2016, pp. 3, 28–29); Executive Office of the President National Science and Technology Council Committee on Technology (2016, pp. 35–36).

  48. Crawford (2016).

  49. The OSTP report emphasises the problem of the lack of quality data, especially in the context of the criminal justice system (2016, p. 30).

  50. Executive Office of the President National Science and Technology Council Committee on Technology (2016, p. 32).

  51. National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee (2016).

  52. The OSTP report specifically mentions that: “Ethical training for AI practitioners and students is a necessary part of the solution. Ideally, every student learning AI, computer science, or data science would be exposed to curriculum and discussion on related ethics and security topics. However, ethics alone is not sufficient. Ethics can help practitioners understand their responsibilities to all stakeholders, but ethical training needs to be augmented with the technical capability to put good intentions into practice by taking technical precautions as a system is built and tested.” (2016, p. 32).

  53. The OSTP report states on this topic: “AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias. It is important that anyone using AI in the criminal justice context is aware of the limitations of current data.” (2016, p. 30). The R&D compendium focuses on the need for establishing “AI technology benchmarks” and ensuring coordination between the different partners in the AI community. It warns that current examples are sector-specific and that many questions remain unanswered surrounding the development, use and availability of datasets that produce reliable outcomes (2016, pp. 30–33).

  54. Executive Office of the President (2016, p. 29).

  55. Partnership on AI (2016).

  56. In the OSTP report it is stated that: “The general consensus of the RFI commenters was that broad regulation of AI research or practice would be inadvisable at this time. Instead, commenters said that the goals and structure of existing regulations were sufficient, and commenters called for existing regulation to be adapted as necessary to account for the effects of AI. For example, commenters suggested that motor vehicle regulation should evolve to account for the anticipated arrival of autonomous vehicles, and that the necessary evolution could be carried out within the current structure of vehicle safety regulation. In doing so, agencies must remain mindful of the fundamental purposes and goals of regulation to safeguard the public good, while creating space for innovation and growth in AI.” (2016, p. 17).

  57. The OSTP report mentions the example of the Department of Transportation (DOT), which: “[Is] using an approach to evolving the relevant regulations that is based on building expertise in the Department (…).” (2016, p. 1).

  58. Felten (2016).

  59. There is a need for further investment in research and the development of systems to make algorithms more transparent and understandable. Networking and Information Technology Research and Development Subcommittee (2016, p. 7).

  60. Executive Office of the President (2016), Introduction.

  61. European Parliament Committee on Legal Affairs (2016).

  62. For further information on the history, e.g. the working group established by the committee and its members, see European Parliament Committee on Legal Affairs (2016, p. 20).

  63. This report was adopted in a modified form by the European Parliament on 16 February 2017. http://www.europarl.europa.eu/sides/getDoc.do?type=TA&reference=P8-TA-2017-0051&format=XML&language=EN.

  64. The EP report specifically “Asks for the establishment of committees on robot ethics in hospitals and other health care institutions tasked with considering and assisting in resolving unusual, complicated ethical problems involving issues that affect the care and treatment of patients” European Parliament Committee on Legal Affairs (2016, pp. 8–9).

  65. European Parliament Committee on Legal Affairs (2016, p. 22).

  66. The European focus on robotics can best be understood by taking into account the RoboLaw project (Palmerini et al. 2016) and the Green Paper on legal issues in robotics by Leroux and Labruto (2013). This research played a crucial role in defining the framing and focus of the European debate.

  67. European Parliament Committee on Legal Affairs (2016, pp. 3, 5, 10ff, 22).

  68. European Parliament Committee on Legal Affairs (2016, pp. 11, 21).

  69. Schafer (2016).

  70. European Parliament Committee on Legal Affairs (2016, pp. 3, 9–10, 22).

  71. European Parliament Committee on Legal Affairs (2016, p. 10).

  72. Ibid.

  73. Ibid.

  74. European Parliament Committee on Legal Affairs (2016, p. 14).

  75. European Parliament Committee on Legal Affairs (2016, p. 7ff).

  76. European Parliament Committee on Legal Affairs (2016, p. 13).

  77. European Parliament Committee on Legal Affairs (2016, pp. 5, 10ff, 14).

  78. European Parliament Committee on Legal Affairs (2016, p. 8).

  79. European Parliament Committee on Legal Affairs (2016, p. 4).

  80. European Parliament Committee on Legal Affairs (2016, p. 8).

  81. European Parliament Committee on Legal Affairs (2016, pp. 12, 22).

  82. European Parliament Committee on Legal Affairs (2016, p. 11ff).

  83. European Parliament Committee on Legal Affairs (2016, p. 8).

  84. Ibid.

  85. European Parliament Committee on Legal Affairs (2016, pp. 10–11).

  86. European Parliament Committee on Legal Affairs (2016, p. 7; in more depth, p. 14).

  87. European Parliament Committee on Legal Affairs (2016, p. 7).

  88. European Parliament Committee on Legal Affairs (2016, p. 14).

  89. European Parliament Committee on Legal Affairs (2016, p. 14).

  90. E.g. European Parliament Committee on Legal Affairs (2016, p. 18). On the relation to licences for designers: “You should develop tracing tools at the robot’s design stage. These tools will facilitate accounting and explanation of robotic behaviour, even if limited, at the various levels intended for experts, operators and users.”

  91. European Parliament Committee on Legal Affairs (2016, p. 14).

  92. European Parliament Committee on Legal Affairs (2016, pp. 14–15).

  93. European Parliament Committee on Legal Affairs (2016, p. 16).

  94. European Parliament Committee on Legal Affairs (2016, p. 17f).

  95. European Parliament Committee on Legal Affairs (2016, p. 8).

  96. This was mentioned in the context of possibly assigning electronic personhood to robots. See European Parliament Committee on Legal Affairs (2016, p. 12).

  97. European Parliament Committee on Legal Affairs (2016, p. 10ff).

  98. European Parliament Committee on Legal Affairs (2016, p. 7).

  99. European Parliament Committee on Legal Affairs (2016, p. 7).

  100. European Parliament Committee on Legal Affairs (2016, p. 8).

  101. However, despite its narrow focus, the report does cover a wider set of issues, which makes it comparable to its British and American equivalents.

  102. The EU report’s specific focus on civil liability rules stems from the EU’s competence to regulate this particular area, whereas in some of the other recommendation areas it may or may not be able to promote the proposals made. Focusing on what is clearly a rather narrow competence also allows the report to venture into more ambitious proposals. Yet the EU does not have the flexibility of its counterparts to suggest broader, more generic approaches that deal with all aspects of AI.

  103. European Parliament Committee on Legal Affairs (2016, p. 18).

  104. House of Commons Science and Technology Committee (2016a).

  105. Ibid., p. 7.

  106. The report states that: “There is not a Government strategy for developing the skills, and securing the critical investment, that is needed to create future growth in robotics and AI. Nor is there any sign of the Government delivering on its promise to establish a ‘RAS Leadership Council’ to provide much needed coordination and direction. Without a Government strategy for the sector, the productivity gains that could be achieved through greater uptake of the technologies across the UK will remain unrealised. (Paragraph 98)” (2016, p. 37).

  107. The report states that: “While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.” (2016, pp. 25, 36).

  108. “DeepMind” (2016).

  109. House of Commons Science and Technology Committee (2016b, pp. 34–35).

  110. House of Commons Science and Technology Committee (2016b, p. 36).

  111. The report holds: “Though some of the more transformational impacts of AI might still be decades away, others—like driverless cars and supercomputers that assist with cancer prediction and prognosis—have already arrived. The ethical and legal issues discussed in this chapter, however, are cross-cutting and will arise in other areas as AI is applied in more and more fields. For these reasons, witnesses were clear that the ethical and legal matters raised by AI deserved attention now and that suitable governance frameworks were needed.” (2016, p. 22).

  112. House of Commons Science and Technology Committee (2016b, pp. 3, 26, 36).

  113. The Alan Turing Institute (2016).

  114. The report holds that: “Membership of the Commission should be broad and include those with expertise in law, social science and philosophy, as well as computer scientists, natural scientists, mathematicians and engineers. Members drawn from industry, NGOs and the public, should also be included and a programme of wide ranging public dialogue instituted.” (2016, p. 37).

  115. House of Commons Science and Technology Committee (2016b, pp. 7, 11, 12).

  116. House of Commons Science and Technology Committee (2016b, pp. 26, 36).

  117. Floridi and Taddeo (2016).

  118. Ingold and Soper (2016).

  119. The report states that: “Advances in robotics and AI hold the potential to reshape fundamentally the way we live and work. While we cannot yet foresee exactly how this ‘fourth industrial revolution’ will play out, we know that gains in productivity and efficiency, new services and jobs, and improved support in existing roles are all on the horizon, alongside the potential loss of well-established occupations. Such transitions will be challenging.” (2016, p. 15).

  120. House of Commons Science and Technology Committee (2016b, pp. 5, 13, 36).

  121. House of Commons Science and Technology Committee (2016b, p. 18).

  122. House of Commons Science and Technology Committee (2016b, p. 21).

  123. Ibid.

  124. Ibid.

  125. House of Commons Science and Technology Committee (2016b, p. 22).

  126. House of Commons Science and Technology Committee (2016b, p. 13).

  127. European Union (2016).

  128. House of Commons Science and Technology Committee (2016b, p. 18).

  129. House of Commons Science and Technology Committee (2016b, p. 22).

  130. Disclosure: please note that ATI.

  131. UK Government Office for Science (2016).

  132. House of Commons Science and Technology Committee (2016b, p. 3).

  133. Executive Office of the President National Science and Technology Council Committee on Technology (2016, pp. 2, 30–32).

  134. The report’s companion document, entitled the “National Artificial Intelligence Research and Development Strategic Plan”, details how AI should ideally impact various sectors (pp. 8–10).

  135. As said before, the vision laid out in the R&D report cannot be seen as indicative of the Government’s approach in the same way that the general report can, as the R&D report focuses specifically on: “defining a high-level framework that can be used to identify scientific and technological gaps in AI and track the Federal R&D investments that are designed to fill those gaps. The AI R&D Strategic Plan identifies strategic priorities for both near-term and long-term support of AI that address important technical and societal challenges. The AI R&D Strategic Plan, however, does not define specific research agendas for individual Federal agencies. Instead, it sets objectives for the Executive Branch, within which agencies may pursue priorities consistent with their missions, capabilities, authorities, and budgets, so that the overall research portfolio is consistent with the AI R&D Strategic Plan. The AI R&D Strategic Plan also does not set policy on the research or use of AI technologies nor does it explore the broader concerns about the potential influence of AI on jobs and the economy.” (2016, p. 7).

  136. Executive Office of the President (2016, p. 22).

  137. European Parliament Committee on Legal Affairs (2016, p. 4).

  138. European Parliament Committee on Legal Affairs (2016, p. 7).

  139. House of Commons Science and Technology Committee (2016b, pp. 26, 36).

  140. House of Commons Science and Technology Committee (2016b, pp. 25, 36).

  141. This approach is based on the work on digital ethics developed at the University of Oxford and at The Alan Turing Institute by our research group.

  142. European Union (2016).

  143. http://fra.europa.eu/en/charterpedia/article/1-human-dignity.

  144. The importance of such foresight has been elaborately described by one of us: “The development of ICT has not only brought enormous benefits and opportunities but also greatly outpaced our understanding of its conceptual nature and implications, while raising problems whose complexity and global dimensions are rapidly expanding, evolving and becoming increasingly serious. A simple analogy may help to make sense of the current situation. Our technological tree has been growing its far-reaching branches much more widely, rapidly and chaotically than its conceptual, ethical and cultural roots. (…) The risk is that, like a tree with weak roots, further and healthier growth at the top might be impaired by a fragile foundation at the bottom.” He also states that: “as a consequence, today, any advanced information society faces the pressing task of equipping itself with a viable philosophy and ethics of information”. We argue that this argument needs to be extended to the realm of governance, which equally needs a clear vision to root the tree of AI. See Floridi (2010).

References

  1. Annany, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 1–17. http://journals.sagepub.com/doi/pdf/10.1177/1461444816676645.

  2. Calo, R. (2014). The case for a federal robotics commission|Brookings Institution. Retrieved from https://www.brookings.edu/research/the-case-for-a-federal-robotics-commission/.

  3. Crawford, K. (2016). Artificial intelligence’s white guy problem. Retrieved from http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=1.

  4. Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature News, 538(7625), 311. doi:10.1038/538311a.

  5. DeepMind. (2016). DeepMind. Retrieved November 15, 2016, from https://deepmind.com/about/.

  6. European Parliament Committee on Legal Affairs. (2016). Civil law rules on robotics (2015/2103 (INL)). Brussels, Belgium: European Parliament. Retrieved from http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN.

  7. European Union. (2016). European Union (EU) General Data Protection Regulation 2016/679. Brussels, Belgium. Retrieved from http://ec.europa.eu/justice/data-protection/reform/files/regulation_oj_en.pdf.

  8. Executive Office of the President. (2016). Artificial intelligence, automation and the economy. Washington, DC, USA. Retrieved from https://www.whitehouse.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF.

  9. Executive Office of the President National Science and Technology Council Committee on Technology. (2016). Preparing for the future of artificial intelligence. Washington, DC, USA. Retrieved from https://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.

  10. Felten, E. W. (2016). Preparing for the future of artificial intelligence. White House Website Blog. Retrieved from https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence.

  11. Felten, E. W., & Lyons, T. (2016). Public input and next steps on the future of artificial intelligence. Medium. Retrieved from https://medium.com/@USCTO/public-input-and-next-steps-on-the-future-of-artificial-intelligence-458b82059fc3#.fj949abr5.

  12. Finley, K. (2016). Obama wants to help the government to develop AI. Retrieved from https://www.wired.com/2016/10/obama-envisions-ai-new-apollo-program/.

  13. Fleury, M. (2015). How artificial intelligence is transforming the financial industry. Retrieved from http://www.bbc.co.uk/news/business-34264380.

  14. Floridi, L. (2010). Ethics after the information revolution. In L. Floridi (Ed.), The Cambridge handbook of information and computer ethics (pp. 3–19). Cambridge: Cambridge University Press. Retrieved from http://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521888981.

  15. Floridi, L. (2013). Infraethics. Philosophers’ Magazine, 60(1), 26–27.

  16. Floridi, L. (2014). The fourth revolution. How the infosphere is reshaping human reality. Oxford, UK: Oxford University Press.

  17. Floridi, L. (2016a). Mature information societies—A matter of expectations. Philosophy and Technology, 29(1), 1–4. doi:10.1007/s13347-016-0214-6.

  18. Floridi, L. (2016b). On human dignity as a foundation for the right to privacy. Philosophy and Technology, 29(4), 307–312.

  19. Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society, 374(2083), 1–4. doi:10.1098/rsta.2016.0360.

  20. Furlow, B. (2016). IBM Watson collaboration aims to improve oncology decision support tools. Retrieved from http://www.cancernetwork.com/mbcc-2016/ibm-watson-collaboration-aims-improve-oncology-decision-support-tools.

  21. Hart, A. (1961). The concept of law. Oxford: Clarendon.

  22. House of Commons Science and Technology Committee. (2016a). Robotics and artificial intelligence (No. Fifth Report of Session 2016-17). London, UK. Retrieved from http://www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf.

  23. House of Commons Science and Technology Committee. (2016b). The Big Data dilemma: Government response to the Committee’s fourth report of session 2015–16. http://www.publications.parliament.uk/pa/cm201516/cmselect/cmsctech/992/99204.htm.

  24. Ingold, D., & Soper, S. (2016). Amazon doesn’t consider the race of its customers. Should It? Retrieved from http://www.bloomberg.com/graphics/2016-amazon-same-day/.

  25. Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165, 1. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2765268.

  26. Leroux, C., & Labruto, R. (2013). A green paper on legal issues in robotics. ResearchGate. Retrieved from https://www.researchgate.net/publication/310167745_A_green_paper_on_legal_issues_in_robotics.

  27. Libicki, M. C. (2009). Cyberdeterrence and cyberwar. The RAND Corporation. Retrieved from http://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG877.pdf.

  28. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society. doi:10.1177/2053951716679679.

  29. National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee. (2016). The national artificial intelligence research and development strategic plan. Washington, DC, USA. Retrieved from https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf.

  30. Pagallo, U. (2016a). Three lessons learned for intelligent transport systems that abide by the law. Jusletter IT, 24.

  31. Pagallo, U. (2016b). Even angels need the rules: AI, roboethics, and the law. ECAI, 258, 209–215.

  32. Palmerini, E., Bertolini, A., Battaglia, F., Koops, B.-J., Carnevale, A., & Salvini, P. (2016). RoboLaw: Towards a European framework for robotics regulation. Robotics and Autonomous Systems, 86, 78–85. doi:10.1016/j.robot.2016.08.026.

  33. Partnership on AI. (2016). Retrieved from https://www.partnershiponai.org/.

  34. Quackenbush, S. L. (2011). Deterrence theory: Where do we stand? Review of International Studies, 37(2), 741–762.

  35. Request for Information on Artificial Intelligence. (2016). Science and technology policy office. Retrieved from https://www.federalregister.gov/documents/2016/06/27/2016-15082/request-for-information-on-artificial-intelligence.

  36. Schafer, B. (2016). Closing Pandora’s box? The EU proposal on the regulation of robots. Pandora’s Box: Law and Technology, 55–67.

  37. Scherer, M. U. (2016). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law and Technology, 29(2), 372. http://dx.doi.org/10.2139/ssrn.2609777.

  38. Taddeo, M. (2016a). Just information warfare. Topoi, 35(1), 213–224.

  39. Taddeo, M. (2016b). On the risks of relying on analogies to understand cyber conflicts. Minds and Machines, 26(4), 317–321.

  40. The Alan Turing Institute. (2016). Retrieved September 1, 2016, from https://www.turing.ac.uk/.

  41. Tutt, A. (2016). An FDA for algorithms. Administrative Law Review, 67, 18. Available at SSRN: https://ssrn.com/abstract=2747994.

  42. UK Government Office for Science. (2016). Artificial intelligence: An overview for policy-makers. Retrieved from https://www.gov.uk/government/publications/artificial-intelligence-an-overview-for-policy-makers.

  43. United States Standards Strategy Committee. (2015). United States Standards Strategy. Retrieved from https://share.ansi.org/shared%20documents/Standards%20Activities/NSSC/USSS_Third_edition/ANSI_USSS_2015.pdf.

  44. Wachter, S., Mittelstadt, B. D., & Floridi, L. (Forthcoming). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Available at SSRN: https://ssrn.com/abstract=2903469.

Acknowledgements

We discussed multiple versions of this article at various conferences and on mailing lists. Specifically, the first author discussed some of the ideas included in this article at the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems conferences in Brussels. We are deeply indebted for the feedback we received from these various communities and audiences. In particular, we wish to thank the three anonymous reviewers, whose comments greatly improved the final version. We also want to thank John Havens, Greg Adamson and Inez De Beaufort for their insightful comments and for the time they put into discussing the ideas presented in this article.

Author information

Corresponding author

Correspondence to Corinne Cath.

About this article

Cite this article

Cath, C., Wachter, S., Mittelstadt, B. et al. Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach. Sci Eng Ethics 24, 505–528 (2018). https://doi.org/10.1007/s11948-017-9901-7

Keywords

  • Algorithms
  • Artificial intelligence
  • Data ethics
  • Good society
  • Human dignity