Justification and Transparency Explanations in Dialogue Systems to Maintain Human-Computer Trust

Chapter
Part of the Signals and Communication Technology book series (SCT)

Abstract

This paper describes a web-based study testing the effects of different explanations on the human-computer trust relationship. Human-computer trust has been shown to be crucial for keeping the user motivated and cooperative in a human-computer interaction. Unexpected or incomprehensible situations in particular may decrease trust and thereby change the way a user interacts with a technical system. Analogous to human-human interaction, providing explanations in these situations can help to remedy such negative effects. However, selecting the appropriate explanation based on the user's human-computer trust is an unprecedented approach, because existing studies treat trust as a one-dimensional concept. In this study we try to find a mapping between the bases of trust and the different goals of explanations. Our results show that transparency explanations seem to be the best way to influence the user's perceived understandability and perceived reliability of the system.
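To make the idea of mapping trust bases to explanation goals concrete, here is a minimal, purely illustrative Python sketch. The dimension names loosely follow Madsen and Gregor's human-computer trust model; the scores, threshold, and selection rule are hypothetical assumptions for illustration, not the study's actual instrument or procedure.

```python
# Illustrative sketch only: selecting an explanation type based on which
# base of trust appears weakened. All values and rules are hypothetical.

# Hypothetical trust-base scores, e.g. from a 1-5 questionnaire scale.
trust_scores = {
    "perceived_understandability": 2.1,
    "perceived_reliability": 2.8,
    "perceived_technical_competence": 4.0,
}

# Hypothetical mapping from trust bases to explanation goals. Transparency
# explanations target understandability and reliability (per the abstract's
# result); justification explanations are assumed here for competence.
EXPLANATION_FOR_BASE = {
    "perceived_understandability": "transparency",    # explain how the system works
    "perceived_reliability": "transparency",          # explain why behavior is consistent
    "perceived_technical_competence": "justification" # explain why an action is reasonable
}

def select_explanation(scores, threshold=3.0):
    """Pick an explanation type for the weakest trust base below the threshold."""
    weak = [(score, base) for base, score in scores.items() if score < threshold]
    if not weak:
        return None  # trust appears intact; no remedial explanation needed
    _, weakest_base = min(weak)
    return EXPLANATION_FOR_BASE[weakest_base]

print(select_explanation(trust_scores))  # -> "transparency"
```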

Acknowledgments

This work was supported by the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” which is funded by the German Research Foundation (DFG).


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Institute of Communications Engineering, Ulm, Germany
