Mind the gap: responsible robotics and the problem of responsibility

  • David J. Gunkel
Original Paper

Abstract

The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. (1) It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (2) It then considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. The innovations considered in this section include: autonomous technology, machine learning, and social robots. (3) The essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in the face of these difficulties, in an effort to map out the opportunities and challenges of and for responsible robotics.

Keywords

Robot · Robotics · Ethics · Machine ethics · Technology · Responsibility · Philosophy

Copyright information

© Springer Science+Business Media B.V. 2017

Authors and Affiliations

  1. Northern Illinois University, DeKalb, USA