Justification and Transparency Explanations in Dialogue Systems to Maintain Human-Computer Trust
This paper describes a web-based study testing the effects of different explanation types on the human-computer trust relationship. Human-computer trust has been shown to be crucial for keeping the user motivated and cooperative in a human-computer interaction. Unexpected or incomprehensible system behavior in particular may decrease trust and thereby change the way users interact with a technical system. Analogous to human-human interaction, providing explanations in such situations can help to remedy these negative effects. However, selecting the appropriate explanation based on the user's human-computer trust is an unprecedented approach, because existing studies treat trust as a one-dimensional concept. In this study we try to find a mapping between the bases of trust and the different goals of explanations. Our results show that transparency explanations seem to be the best way to influence the user's perceived understandability and reliability.
This work was supported by the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” which is funded by the German Research Foundation (DFG).