
Do I Trust a Machine? Differences in User Trust Based on System Performance

Part of the book series: Human–Computer Interaction Series (HCIS)

Abstract

Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system's output serves as one of the inputs for the users' decision-making processes. In this chapter, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system as a function of system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly. The results also show notable differences between two groups of users and indicate a possible threshold in the acceptance of the system. These findings can be leveraged by designers of practical systems to sustain the desired level of user trust.



Acknowledgements

This work was supported in part by AOARD under grant No. FA2386-14-1-0022 AOARD 134131.

Author information


Correspondence to Kun Yu.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter


Cite this chapter

Yu, K., Berkovsky, S., Conway, D., Taib, R., Zhou, J., Chen, F. (2018). Do I Trust a Machine? Differences in User Trust Based on System Performance. In: Zhou, J., Chen, F. (eds) Human and Machine Learning. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-90403-0_12


  • DOI: https://doi.org/10.1007/978-3-319-90403-0_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-90402-3

  • Online ISBN: 978-3-319-90403-0

  • eBook Packages: Computer Science, Computer Science (R0)
