Why Are People’s Decisions Sometimes Worse with Computer Support?

Conference paper · Computer Safety, Reliability, and Security (SAFECOMP 2009)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5775)

Abstract

In many applications of computerised decision support, a recognised source of undesired outcomes is operators’ apparent over-reliance on automation. For instance, an operator may fail to react to a potentially dangerous situation because a computer fails to generate an alarm. However, the very use of terms like “over-reliance” betrays possible misunderstandings of these phenomena and their causes, which may lead to ineffective corrective action (e.g. training or procedures that do not counteract all the causes of the apparently “over-reliant” behaviour). We review relevant literature in the area of “automation bias” and describe the diverse mechanisms that may be involved in human errors when using computer support. We discuss these mechanisms with reference to errors of omission when using “alerting systems”, drawing on novel, counterintuitive findings from our case study in a health care application as well as on other examples from the literature.
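The abstract’s mention of errors of omission with “alerting systems” can be made concrete with a toy signal-detection-style model. The Python sketch below is a minimal illustration, not taken from the paper: all probabilities, the “easy”/“hard” case split, and the suppression factor are hypothetical assumptions. It shows one counterintuitive mechanism of the kind the abstract alludes to: an aid whose misses concentrate on the hazards that humans also find hard can raise average detection while lowering it exactly where unaided vigilance matters most.

```python
# Toy Monte Carlo of "errors of omission" with an imperfect alerting system.
# Every number below is a hypothetical assumption chosen for illustration.
import random

random.seed(0)
N = 200_000  # simulated hazardous cases per case type

# Case mix: the aid detects "easy" hazards reliably but tends to miss the
# "hard" ones, which are the same ones the human finds difficult unaided.
CASES = {            # kind: (P(alarm fires | hazard), P(unaided human detects))
    "easy": (0.95, 0.90),
    "hard": (0.30, 0.60),
}
SUPPRESSION = 0.5    # with no alarm, a reliant human checks at half the unaided rate

def miss_rates(reliant: bool) -> dict:
    """Per-case-type miss rates for an unaided vs an alarm-reliant human."""
    rates = {}
    for kind, (p_alarm, p_human) in CASES.items():
        missed = 0
        for _ in range(N):
            alarm = random.random() < p_alarm
            if reliant:
                # Simplification: a reliant human always acts on a true alarm,
                # and searches less thoroughly when no alarm is given.
                p_detect = 1.0 if alarm else SUPPRESSION * p_human
            else:
                p_detect = p_human
            if random.random() >= p_detect:
                missed += 1
        rates[kind] = missed / N
    return rates

for label, reliant in (("unaided", False), ("reliant", True)):
    print(label, {k: round(v, 3) for k, v in miss_rates(reliant).items()})
```

With these made-up numbers, the reliant operator misses roughly 3% of easy hazards versus 10% unaided, but 49% of hard hazards versus 40% unaided. If easy cases predominate (say 70/30), an evaluation that averages over all cases would report a net benefit and overlook the harm to the difficult subgroup.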



Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Alberdi, E., Strigini, L., Povyakalo, A.A., Ayton, P. (2009). Why Are People’s Decisions Sometimes Worse with Computer Support? In: Buth, B., Rabe, G., Seyfarth, T. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2009. Lecture Notes in Computer Science, vol 5775. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04468-7_3

  • DOI: https://doi.org/10.1007/978-3-642-04468-7_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04467-0

  • Online ISBN: 978-3-642-04468-7
