Four Perspectives on Human Bias in Visual Analytics

  • Emily Wall
  • Leslie M. Blaha
  • Celeste Lyn Paul
  • Kristin Cook
  • Alex Endert

Abstract

Visual analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this chapter we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.
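
To make the abstract's core mechanism concrete, the sketch below shows one way a mixed-initiative system might raise the weight of data attributes that a user's interactions emphasize, and why a biased attention pattern (such as anchoring on a single attribute) flows directly into the steered model. It is illustrative only and not drawn from the chapter; the log format, attribute names, and the infer_weights function are all hypothetical.

```python
# Minimal sketch of implicit-feedback model steering (hypothetical, for
# illustration): the system counts which attributes the user's clicks or
# hovers emphasize and nudges model weights toward them.
from collections import Counter

# Hypothetical interaction log: each entry names the data attribute
# emphasized by one user interaction. Note the skew toward "price".
interaction_log = ["price", "price", "rating", "price", "price", "price"]

def infer_weights(log, attributes, learning_rate=0.1):
    """Boost the weights of frequently interacted-with attributes.

    Because the update trusts the user's attention pattern, any human
    bias in that pattern (e.g., anchoring) propagates into the model.
    """
    weights = {a: 1.0 / len(attributes) for a in attributes}
    for attr, count in Counter(log).items():
        weights[attr] += learning_rate * count
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}  # renormalize

print(infer_weights(interaction_log, ["price", "rating", "distance"]))
# "price" dominates the resulting weight vector: the model now mirrors
# the user's (possibly biased) focus rather than the data's structure.
```

A real system would use richer interaction provenance and safeguards before trusting such inferences; the point here is only how directly implicit feedback can inject a user's bias into the underlying model.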

Acknowledgements

The research described in this document was sponsored by the U.S. Department of Defense. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Emily Wall (corresponding author), Georgia Institute of Technology, Atlanta, USA
  • Leslie M. Blaha, Pacific Northwest National Laboratory, Richland, USA
  • Celeste Lyn Paul, U.S. Department of Defense, Washington, D.C., USA
  • Kristin Cook, Pacific Northwest National Laboratory, Richland, USA
  • Alex Endert, Georgia Institute of Technology, Atlanta, USA
