Trust and Trustworthiness in Social Robots

Strengthening Human-Robot Trust Relationships with Explainable Artificial Intelligence

  • Chapter in: Soziale Roboter

Abstract

This chapter examines the trust relationship between humans and social robots, since trust is a key component of the acceptance of social robots. Starting from the characteristics of social interactions between humans and robots, it gives an overview of various definitions of trust in this context. It also outlines theoretical models of trust and practical approaches to measuring it, and considers the loss of trust that follows from robot errors. The chapter then explores how Explainable Artificial Intelligence can help make the interaction between robot and human transparent and thereby (re)establish trust in social robots. Particular attention is paid to the design options and challenges involved in using explanations in robotics. The chapter closes with the effect that robot explanations have on users' mental models.

Trust in me, just in me, shut your eyes and trust in me!

(Kaa, the snake in Walt Disney's The Jungle Book)


Notes

  1. The English terms are used here because the German translation "Misstrauen" does not distinguish sharply between the subtle differences of distrust, untrust, and mistrust.


Author information

Correspondence to Katharina Weitz.


Copyright information

© 2021 The Editor(s) and Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter


Cite this chapter

Weitz, K. (2021). Vertrauen und Vertrauenswürdigkeit bei sozialen Robotern. In: Bendel, O. (eds) Soziale Roboter. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-31114-8_16


  • DOI: https://doi.org/10.1007/978-3-658-31114-8_16

  • Publisher Name: Springer Gabler, Wiesbaden

  • Print ISBN: 978-3-658-31113-1

  • Online ISBN: 978-3-658-31114-8

  • eBook Packages: Business and Economics (German Language)
