Modeling Human Comprehension of Data Visualizations

  • Michael J. Haass
  • Andrew T. Wilson
  • Laura E. Matzen
  • Kristin M. Divis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9740)

Abstract

A critical challenge in data science is conveying the meaning of data to human decision makers. While working with visualizations, decision makers are engaged in a visual search for information to support their reasoning process. As sensors proliferate and high-performance computing becomes increasingly accessible, the volume of data decision makers must contend with is growing continuously, driving the need for more efficient and effective data visualizations. Consequently, researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles to assess the effectiveness of data visualizations. In this paper, we compare the performance of three different saliency models across a common set of data visualizations. This comparison establishes a performance baseline for the assessment of new data visualization saliency models.
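To illustrate the kind of model being compared, the following is a minimal sketch of a bottom-up, center-surround saliency map in the spirit of classic biologically inspired models. It is not one of the three models evaluated in the paper (which are not named in this excerpt); the window radii and normalization are illustrative assumptions.

```python
import numpy as np

def box_blur(img, radius):
    """Local mean over a (2r+1) x (2r+1) window, computed via an integral image."""
    padded = np.pad(img, radius, mode="edge")
    # Integral image with a leading zero row/column for easy window sums.
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    k = 2 * radius + 1
    h, w = img.shape
    # Sum over each k x k window, then divide by the window area.
    s = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
         - ii[k:k + h, :w] + ii[:h, :w])
    return s / (k * k)

def center_surround_saliency(img, scales=(2, 4, 8)):
    """Accumulate |center - surround| contrast across scales; normalize to [0, 1]."""
    img = img.astype(float)
    sal = np.zeros_like(img)
    for r in scales:
        sal += np.abs(img - box_blur(img, r))
    peak = sal.max()
    return sal / peak if peak > 0 else sal
```

A map like this is then compared against human fixation data (e.g., with the metrics surveyed in the references) to score how well the model predicts where viewers actually look in a visualization.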

Keywords

Visual saliency · Visualization · Modeling · Visual search

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Michael J. Haass (corresponding author)
  • Andrew T. Wilson
  • Laura E. Matzen
  • Kristin M. Divis

  Sandia National Laboratories, Albuquerque, USA