
Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues

  • Maha Salem (corresponding author)
  • Gabriella Lakatos
  • Farshid Amirabdollahian
  • Kerstin Dautenhahn
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9388)

Abstract

As robots are increasingly developed to assist humans socially with everyday tasks in home and healthcare settings, questions regarding the robot’s safety and trustworthiness need to be addressed. The present work investigates the practical and ethical challenges in designing and evaluating social robots that aim to be perceived as safe and can win their human users’ trust. With particular focus on collaborative scenarios in which humans are required to accept information provided by the robot and follow its suggestions, trust plays a crucial role and is strongly linked to persuasiveness. Accordingly, human-robot trust can directly affect people’s willingness to cooperate with the robot, while under- or overreliance may have severe or even dangerous consequences. Problematically, investigating trust and human perceptions of safety in HRI experiments proves challenging in light of numerous ethical concerns and risks, which this paper aims to highlight and discuss based on experiences from HRI practice.

Keywords

Socially assistive robots · Safety and trust in HRI · Roboethics



Copyright information

© Springer International Publishing Switzerland 2015

Open Access This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Authors and Affiliations

  • Maha Salem (corresponding author)
  • Gabriella Lakatos
  • Farshid Amirabdollahian
  • Kerstin Dautenhahn

All authors: Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK
