Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues

  • Conference paper, Social Robotics (ICSR 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9388)

Abstract

As robots are increasingly developed to assist humans socially with everyday tasks in home and healthcare settings, questions regarding the robot’s safety and trustworthiness need to be addressed. The present work investigates the practical and ethical challenges in designing and evaluating social robots that aim to be perceived as safe and to win their human users’ trust. With particular focus on collaborative scenarios in which humans are required to accept information provided by the robot and follow its suggestions, trust plays a crucial role and is strongly linked to persuasiveness. Accordingly, human-robot trust can directly affect people’s willingness to cooperate with the robot, while under- or overreliance may have severe or even dangerous consequences. Problematically, investigating trust and human perceptions of safety in HRI experiments proves challenging in light of numerous ethical concerns and risks, which this paper aims to highlight and discuss based on experiences from HRI practice.
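The under- and overreliance the abstract warns about is commonly operationalized in HRI trust studies by logging, per trial, whether the robot's suggestion was correct and whether the participant followed it. The sketch below is a hypothetical illustration of that bookkeeping, not a method from the paper; the trial encoding and function name are assumptions for the example.

```python
# Hypothetical sketch: quantifying reliance in an HRI trust study.
# Each trial records whether the robot's suggestion was correct and
# whether the participant followed it. Overreliance = following an
# incorrect suggestion; underreliance = rejecting a correct one.

def reliance_rates(trials):
    """trials: list of (robot_correct: bool, user_followed: bool) pairs."""
    over = sum(1 for correct, followed in trials
               if followed and not correct)   # complied with bad advice
    under = sum(1 for correct, followed in trials
                if correct and not followed)  # ignored good advice
    n = len(trials)
    return over / n, under / n

# Four example trials: one case of each failure mode.
trials = [(True, True), (True, False), (False, True), (True, True)]
over, under = reliance_rates(trials)
print(over, under)  # 0.25 0.25
```

Separating the two rates matters because, as the abstract notes, they have different consequences: overreliance exposes users to a faulty robot's errors, while underreliance wastes the benefit of a competent one.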



Author information


Corresponding author

Correspondence to Maha Salem.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K. (2015). Towards Safe and Trustworthy Social Robots: Ethical Challenges and Practical Issues. In: Tapus, A., André, E., Martin, J.-C., Ferland, F., Ammi, M. (eds) Social Robotics. ICSR 2015. Lecture Notes in Computer Science, vol 9388. Springer, Cham. https://doi.org/10.1007/978-3-319-25554-5_58

  • DOI: https://doi.org/10.1007/978-3-319-25554-5_58

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-25553-8

  • Online ISBN: 978-3-319-25554-5

  • eBook Packages: Computer Science, Computer Science (R0)
