
Do I Trust a Machine? Differences in User Trust Based on System Performance

  • Kun Yu
  • Shlomo Berkovsky
  • Dan Conway
  • Ronnie Taib
  • Jianlong Zhou
  • Fang Chen
Chapter
Part of the Human–Computer Interaction Series (HCIS) book series

Abstract

Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system’s output serves as one of the inputs to the users’ decision-making processes. In this chapter, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system, as a function of system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly. The results also show notable differences between two groups of users and indicate a possible threshold in the acceptance of the system. Designers of practical systems can leverage these findings to sustain the desired level of user trust.
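To make the accuracy–trust relationship concrete, the sketch below simulates how a user’s trust might evolve while interacting with a system of a given accuracy. It is a minimal illustration only: the additive update rule, the gain/loss asymmetry, and all parameter values are assumptions made for exposition, not the trust model or data reported in this chapter.

```python
import random

def simulate_trust(accuracy, n_trials=100, trust0=0.5,
                   gain=0.02, loss=0.05, seed=42):
    """Simulate a user's trust in a system with a given accuracy.

    Trust rises slightly after each correct output and drops more
    sharply after each error, reflecting the common finding that
    errors damage trust more than successes build it. All parameter
    values are illustrative, not taken from the chapter's study.
    """
    rng = random.Random(seed)
    trust = trust0
    trajectory = [trust]
    for _ in range(n_trials):
        correct = rng.random() < accuracy  # system output is correct with prob. `accuracy`
        trust += gain if correct else -loss
        trust = min(1.0, max(0.0, trust))  # keep trust within [0, 1]
        trajectory.append(trust)
    return trajectory

# Compare final trust across accuracy levels; a sharp drop below some
# accuracy would be consistent with an acceptance threshold.
for acc in (0.9, 0.8, 0.7, 0.6):
    print(f"accuracy={acc:.1f} -> final trust={simulate_trust(acc)[-1]:.2f}")
```

Under this toy update rule, trust converges only when correct outputs are frequent enough to offset the heavier penalty of errors, which is one simple way an apparent acceptance threshold can emerge.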


Acknowledgements

This work was supported in part by AOARD under grant No. FA2386-14-1-0022 AOARD 134131.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Kun Yu¹ (corresponding author)
  • Shlomo Berkovsky¹
  • Dan Conway¹
  • Ronnie Taib¹
  • Jianlong Zhou¹
  • Fang Chen¹

  1. Data61, CSIRO, Eveleigh, Australia
