Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues
As robots are increasingly developed to assist humans socially with everyday tasks in home and healthcare settings, questions regarding the robot's safety and trustworthiness need to be addressed. The present work investigates the practical and ethical challenges of designing and evaluating social robots that are meant to be perceived as safe and to win their human users' trust. Trust plays a crucial role and is strongly linked to persuasiveness, particularly in collaborative scenarios in which humans are expected to accept information provided by the robot and follow its suggestions. Accordingly, human-robot trust can directly affect people's willingness to cooperate with the robot, while under- or over-reliance may have severe or even dangerous consequences. Problematically, investigating trust and human perceptions of safety in HRI experiments proves challenging in light of numerous ethical concerns and risks, which this paper highlights and discusses based on experiences from HRI practice.
Keywords: Socially assistive robots · Safety and trust in HRI · Roboethics
Open Access This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.