
Perceptions of People’s Dishonesty Towards Robots

Part of the Lecture Notes in Computer Science book series (LNAI, volume 12483)

Abstract

Dishonest behavior is an issue in human-human interactions, and the same may happen in human-robot interactions. To ascertain people's perceptions of dishonesty, we asked participants to evaluate five scenarios in which someone was dishonest towards either a human or a robot, varying the level of autonomy the robot presented. We also asked how guilty they would feel about being dishonest towards a robot, and why they think people would be dishonest with robots. Regardless of whether the target was a human or a robot at any autonomy level, participants always evaluated dishonesty as wrong. They reported feeling little guilt towards a robot, and they attributed people's dishonesty with robots mostly to the robot lacking the capabilities to prevent dishonesty, to its absence of presence, and to a human tendency towards dishonesty. These results have implications for the development of autonomous robots in the future.

Keywords

  • Human-robot interaction
  • Dishonesty
  • Unethical behavior

This work was supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) with reference UIDB/50021/2020. Sofia Petisca acknowledges an FCT grant (Ref. SFRH/BD/118013/2016).


Notes

  1. For the complete scenarios, please contact the first author.


Acknowledgments

The authors thank Iolanda Leite for reviewing a first draft of the manuscript.

Author information

Corresponding author

Correspondence to Sofia Petisca.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Petisca, S., Paiva, A., Esteves, F. (2020). Perceptions of People's Dishonesty Towards Robots. In: et al. Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_12


  • DOI: https://doi.org/10.1007/978-3-030-62056-1_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62055-4

  • Online ISBN: 978-3-030-62056-1

  • eBook Packages: Computer Science, Computer Science (R0)