Robot Betrayal: a guide to the ethics of robotic deception

Abstract

If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception—superficial state deception—is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.

Notes

  1.

    The philosophical obsession with consciously intended deception has been criticized by others. Robert Trivers, in his natural history of deception, points out that “If by deception we mean only consciously propagated deception—outright lies—then we miss the much larger category of unconscious deception, including active self-deception” (Trivers 2011, p. 3). This seems right and is the more appropriate approach to take when looking at robotic deception.

  2.

    This paper is not the first to defend this idea, though the theory has not always been explicitly named as it is in the main text. For similar arguments, see Neely (2014), Schwitzgebel and Garza (2015), and Danaher (2019a, b). Each of these authors suggests, either directly or indirectly, that the superficial signals of a robot should be taken seriously from an ethical perspective, at least under certain conditions.

  3.

    Ethical behaviourism is also consistent with, but distinct from, the science of machine behaviour that Rahwan et al. (2019) advocate. Their article makes a plea for scientists to study how machines behave and interact with human beings using the tools of behavioural science. In making this plea, they highlight a tendency in the current literature to focus too much on engineering details (how the robot/AI was designed and programmed). The result is that an important aspect of how machines work, namely the behavioural patterns they exhibit, is being overlooked. I fully support their programme, and the stance advocated in this article agrees with them insofar as (a) the behavioural perspective on robots does seem to be overlooked or downplayed in the current debate and (b) machine behaviour has a significance that is independent of the mechanical and computational details of the machines' operation. Nevertheless, despite my sympathy for their programme, I would emphasize that ethical behaviourism is not intended to be part of a science of machine behaviour. It is a claim about the kinds of evidence we can use to warrant our ethical attitudes toward machines.

  4.

    Easily overridden by other moral considerations, that is. They might still legally amount to a breach of contract.

  5.

    He also might not. He doesn’t discuss the issue at all.

References

  1. Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.

  2. Damiano, L., & Dumouchel, P. (2018). Anthropomorphism in human-robot co-evolution. Frontiers in Psychology, 9, 468.

  3. Danaher, J. (2019a). The philosophical case for robot friendship. The Journal of Posthuman Studies, 3(1), 5–24.

  4. Danaher, J. (2019b). Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00119-x.

  5. Elder, A. (2015). False friends and false coinage: A tool for navigating the ethics of sociable robots. SIGCAS Computers and Society, 45(3), 248–254.

  6. Elder, A. (2017). Robot friends for autistic children: Monopoly money or counterfeit currency? In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford: Oxford University Press.

  7. EU High Level Expert Group on AI. (2019). Ethics Guidelines for Trustworthy AI. Brussels: European Commission. https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top

  8. Graham, G. (2015). Behaviorism. In E. Zalta (Ed.), Stanford Encyclopedia of Philosophy. Retrieved July 10, 2018 from https://plato.stanford.edu/entries/behaviorism/.

  9. Grice, P. H. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Speech acts (pp. 41–58). New York: Academic Press.

  10. Gunkel, D. (2018). Robot rights. Cambridge, MA: MIT Press.

  11. Häggström, O. (2019). Challenges to the Omohundro-Bostrom framework for AI motivations. Foresight, 21(1), 153–166.

  12. Isaac, A. M. C., & Bridewell, W. (2017). White lies and silver tongues: Why robots need to deceive (and how). In P. Lin, R. Jenkins, & K. Abney (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence. Oxford: Oxford University Press.

  13. Kaminski, M., Rueben, M., Smart, W., & Grimm, C. (2017). Averting robot eyes. Maryland Law Review, 76, 983.

  14. Leong, B., & Selinger, E. (2019). Robot eyes wide shut: Understanding dishonest anthropomorphism. In FAT* Conference 2019. https://doi.org/10.1145/3287560.3287591

  15. Mahon, J. E. (2015). The definition of lying and deception. In E. Zalta (Ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/lying-definition/

  16. Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 117–124).

  17. Margalit, A. (2017). On betrayal. Cambridge: Harvard University Press.

  18. Neely, E. L. (2014). Machines and the moral community. Philosophy & Technology, 27(1), 97–111. https://doi.org/10.1007/s13347-013-0114-y.

  19. Omohundro, S. (2008). The basic AI drives. In P. Wang, B. Goertzel, & S. Franklin (Eds.), Proceedings of the First AGI Conference, Artificial General Intelligence 2008 (pp. 483–492). Amsterdam: IOS Press.

  20. Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., et al. (2019). Machine behaviour. Nature, 568, 477–486.

  21. Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 89–119. https://doi.org/10.1111/misp.12032.

  22. Sebo, J. (2018). The moral problem of other minds. The Harvard Review of Philosophy. https://doi.org/10.5840/harvardreview20185913.

  23. Sharkey, A., & Sharkey, N. (2010). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.

  24. Shaw, K. (2015). Experiment on Human Robot Deception. http://katarinashaw.com/project/experiment-on-human-robot-deception/

  25. Shim, J., & Arkin, R. (2016). Other-oriented robot deception: How can a robot’s deceptive feedback help humans in HRI? In International Conference on Social Robotics. https://doi.org/10.1007/978-3-319-47437-3_22, https://www.cc.gatech.edu/ai/robot-lab/online-publications/ICSR2016_JS_camera_ready.pdf

  26. Simler, K., & Hanson, R. (2018). The elephant in the brain. Oxford: Oxford University Press.

  27. Trivers, R. (2011). The folly of fools. New York: Basic Books.

  28. Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  29. Turkle, S. (2007). Authenticity in the age of digital companions. Interaction Studies, 8, 501–507.

  30. Turkle, S. (2010). In Good Company. In Y. Wilks (Ed.), Close engagements with artificial companions. Amsterdam: John Benjamins Publishing.

  31. Voiklis, J., Kim, B., Cusimano, C., & Malle, B. F. (2016). Moral judgments of human vs. robot agents. In 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 775–780). IEEE.

  32. Wagner, A. (2016). Lies and deception: Robots that use falsehood as a social strategy. In J. Markowitz (Ed.), Robots that talk and listen: Technology and social impact. De Gruyter. https://doi.org/10.1515/9781614514404

  33. Wagner, A., & Arkin, R. (2011). Acting deceptively: Providing robots with the capacity for deception. International Journal of Social Robotics, 3(1), 5–26.

  34. Zawieska, K. (2015). Deception and manipulation in social robotics. Workshop paper at "The Emerging Policy and Ethics of Human-Robot Interaction", 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2015). https://www.researchgate.net/publication/272474319_Deception_and_Manipulation_in_Social_Robotics

Author information

Corresponding author

Correspondence to John Danaher.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Danaher, J. Robot Betrayal: a guide to the ethics of robotic deception. Ethics Inf Technol 22, 117–128 (2020). https://doi.org/10.1007/s10676-019-09520-3

Keywords

  • Robotics
  • Deception
  • Anthropomorphism
  • Dishonesty
  • Betrayal
  • Loyalty