HRI ethics and type-token ambiguity: what kind of robotic identity is most responsible?

Abstract

This paper addresses ethical challenges posed by a robot acting as both a general type of system and a discrete, particular machine. Using the philosophical distinction between “type” and “token,” we locate type-token ambiguity within a larger field of indefinite robotic identity, which can include networked systems or multiple bodies under a single control system. The paper explores three specific areas where the type-token tension might affect human–robot interaction: how a robot demonstrates highly personalized recounting of information, how a robot makes moral appeals and justifies its decisions, and how the possible need to replace a particular robot shapes its ongoing role (including how its programming could transfer to a new body platform). We also consider how a robot might regard itself as a replaceable token of a general robotic type and take extraordinary actions on that basis. For human–robot interaction, robotic type-token identity is not an ontological problem with a single solution but a range of possible interactions that responsible design must take into account, given how people stand to gain and lose from the shifting identities social robots will present.

Notes

  1.

    Note that we specifically refrain from engaging in the ongoing philosophical debate as to whether robots can or could be truly moral agents; all we intend here is the interactant’s construal of the robot as norm-following or moral.

Author information

Corresponding author

Correspondence to Thomas Arnold.

About this article

Cite this article

Arnold, T., Scheutz, M. HRI ethics and type-token ambiguity: what kind of robotic identity is most responsible?. Ethics Inf Technol 22, 357–366 (2020). https://doi.org/10.1007/s10676-018-9485-1

Keywords

  • Robot ethics
  • Artificial moral agents
  • Human–robot interaction
  • Robotic design