Robot Betrayal: a guide to the ethics of robotic deception


If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception—superficial state deception—is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.


  1. The philosophical obsession with consciously intended deception has been criticized by others. Robert Trivers, in his natural history of deception, points out that “If by deception we mean only consciously propagated deception—outright lies—then we miss the much larger category of unconscious deception, including active self-deception” (Trivers 2011, p. 3). This seems right and is the more appropriate approach to take when looking at robotic deception.

  2. This paper is not the first to defend this idea, though the theory has not always been explicitly named as it is in the main text. For similar arguments, see Neely (2014), Schwitzgebel and Garza (2015), and Danaher (2019a, b). Each of these authors suggests, either directly or indirectly, that the superficial signals of a robot should be taken seriously from an ethical perspective, at least under certain conditions.

  3. Ethical behaviourism is also consistent with, but distinct from, the science of machine behaviour that Rahwan et al. (2019) advocate. Rahwan et al.’s article makes a plea for scientists to study how machines behave and interact with human beings using the tools of behavioural science. In making this plea, they highlight a tendency in the current literature to focus too much on the engineering details (how the robot/AI was designed and programmed). The result is that an important aspect of how machines work and the behavioural patterns they exhibit is being overlooked. I fully support their programme, and the stance advocated in this article agrees with them insofar as (a) the behavioural perspective on robots does seem to be overlooked or downplayed in the current debate and (b) there is a significance to machine behaviour that is independent of the mechanical and computational details of their operation. Nevertheless, despite my sympathy for their programme, I would emphasize that ethical behaviourism is not intended to be part of a science of machine behaviour. It is a claim about the kinds of evidence we can use to warrant our ethical attitudes toward machines.

  4. Easily overridden by other moral considerations, that is. They might still legally amount to a breach of contract.

  5. He also might not. He doesn’t discuss the issue at all.


Author information


Corresponding author

Correspondence to John Danaher.




Cite this article

Danaher, J. Robot Betrayal: a guide to the ethics of robotic deception. Ethics Inf Technol 22, 117–128 (2020).



Keywords

  • Robotics
  • Deception
  • Anthropomorphism
  • Dishonesty
  • Betrayal
  • Loyalty