
Anthropomorphizing and Trusting Social Robots

Challenges of the Technological Mind

Part of the book series: New Directions in Philosophy and Cognitive Science ((NDPCS))

Abstract

The chapter explores the challenges posed by the proliferation of robots in society, particularly humanoid robots, focusing on cognitive constraints, ergonomic concerns, cross-cultural issues, and the emergence of an artificial morality in human–robot interactions. It also examines the distinction between physical and intentional trust, highlighting the role of deference, both biologically grounded and epistemic, in human–robot relationships. The concept of selective deference is introduced as a means of navigating interactions with robots, emphasizing the importance of balancing trust and deference according to the context and the technology involved. Overall, the analysis underscores the need for a nuanced understanding of trust and deference in the evolving landscape of human–robot interaction.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Perconti, P., Plebe, A. (2024). Anthropomorphizing and Trusting Social Robots. In: Alexandre e Castro, P. (eds) Challenges of the Technological Mind. New Directions in Philosophy and Cognitive Science. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-55333-2_3

