Center bias outperforms image salience but not semantics in accounting for attention during scene viewing

  • Taylor R. Hayes
  • John M. Henderson


How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is ‘pulled’ to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewing tasks were used to evaluate whether image saliency models account for variance in scene fixation density based primarily on scene-dependent, low-level feature contrast, or on their scene-independent spatial biases. For comparison, fixation density was also compared to semantic feature maps (Meaning Maps; Henderson & Hayes, Nature Human Behaviour, 1, 743–747, 2017) that were generated using human ratings of isolated scene patches. The squared correlations (R²) between scene fixation density and each image saliency model’s center bias, each full image saliency model, and meaning maps were computed. The results showed that in tasks that produced observer center bias, the image saliency models on average explained 23% less variance in scene fixation density than their center biases alone. In comparison, meaning maps explained on average 10% more variance than center bias alone. We conclude that image saliency theory generalizes poorly to real-world scenes.
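The analysis described above reduces to a simple computation: flatten each map and take the squared Pearson correlation between the fixation density map and each predictor map (center bias, full saliency, or meaning map). A minimal sketch of that comparison, using a hypothetical Gaussian center-bias predictor and simulated fixation data (all arrays here are illustrative, not the study's stimuli):

```python
import numpy as np

def map_r2(fixation_density, prediction_map):
    """Squared linear correlation (R^2) between a scene fixation density
    map and a predictor map (e.g., center bias, saliency, or meaning map)."""
    x = np.asarray(fixation_density, dtype=float).ravel()
    y = np.asarray(prediction_map, dtype=float).ravel()
    r = np.corrcoef(x, y)[0, 1]  # Pearson r computed over all pixels
    return r ** 2

# Hypothetical example: a centered 2-D Gaussian serves as a center-bias
# predictor for a simulated, center-biased fixation density map.
yy, xx = np.mgrid[0:64, 0:64]
center_bias = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 12.0 ** 2))
rng = np.random.default_rng(0)
density = center_bias + 0.1 * rng.random((64, 64))  # noisy "fixations"
print(map_r2(density, center_bias))  # high R^2: center bias alone predicts well
```

The study's key comparison is then whether a full saliency model's map yields a higher or lower R² than its center-bias component alone, evaluated in the same way.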


Scene perception · Center bias · Saliency · Semantics · Meaning map



This research was supported by the National Eye Institute of the National Institutes of Health under award number R01EY027792. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.


  1. Allman, J., Miezin, F. M., & McGuinness, E. (1985). Stimulus specific responses from beyond the classical receptive field: Neurophysiological mechanisms for local-global comparisons in visual neurons. Annual Review of Neuroscience, 8, 407–430.
  2. Anderson, N. C., Donk, M., & Meeter, M. (2016). The influence of a scene preview on eye movement behavior in natural scenes. Psychonomic Bulletin & Review, 23(6), 1794–1801.
  3. Antes, J. R. (1974). The time course of picture viewing. Journal of Experimental Psychology, 103(1), 62–70.
  4. Borji, A., Parks, D., & Itti, L. (2014). Complementary effects of gaze direction and early saliency in guiding fixations during free viewing. Journal of Vision, 14(13), 1–32.
  5. Borji, A., Sihite, D. N., & Itti, L. (2013). Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE Transactions on Image Processing, 22(1), 55–69.
  6. Bruce, N. D., & Tsotsos, J. K. (2009). Saliency, attention, and visual search: An information theoretic approach. Journal of Vision, 9(3), 1–24.
  7. Bruce, N. D., Wloka, C., Frosst, N., Rahman, S., & Tsotsos, J. K. (2015). On computational modeling of visual saliency: Examining what’s right and what’s left. Vision Research, 116, 95–112.
  8. de Haas, B., Iakovidis, A. L., Schwarzkopf, D. S., & Gegenfurtner, K. R. (2019). Individual differences in visual salience vary along semantic dimensions. Proceedings of the National Academy of Sciences, 116(24), 11687–11692.
  9. Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222.
  10. Desimone, R., Schein, S. J., Moran, J. P., & Ungerleider, L. G. (1985). Contour, color and shape analysis beyond the striate cortex. Vision Research, 25, 441–452.
  11. Findlay, J. M., & Gilchrist, I. D. (2003). Active vision: The psychology of looking and seeing. Oxford: Oxford University Press.
  12. Harel, J., Koch, C., & Perona, P. (2006). Graph-based visual saliency. In Advances in neural information processing systems (pp. 1–8).
  13. Hayes, T. R., & Henderson, J. M. (2017). Scan patterns during real-world scene viewing predict individual differences in cognitive capacity. Journal of Vision, 17(5), 1–17.
  14. Hayes, T. R., & Henderson, J. M. (2018). Scan patterns during scene viewing predict individual differences in clinical traits in a normative sample. PLoS ONE, 13(5), 1–16.
  15. Hayhoe, M. M., & Ballard, D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9(4), 188–194.
  16. Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498–504.
  17. Henderson, J. M. (2007). Regarding scenes. Current Directions in Psychological Science, 16, 219–222.
  18. Henderson, J. M., & Hayes, T. R. (2017). Meaning-based guidance of attention in scenes revealed by meaning maps. Nature Human Behaviour, 1, 743–747.
  19. Henderson, J. M., & Hayes, T. R. (2018). Meaning guides attention in real-world scene images: Evidence from eye movements and meaning maps. Journal of Vision, 18(6:10), 1–18.
  20. Henderson, J. M., Hayes, T. R., Rehrig, G., & Ferreira, F. (2018). Meaning guides attention during real-world scene description. Scientific Reports, 8, 1–9.
  21. Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50, 243–271.
  22. Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & van de Weijer, J. (2015). Eye tracking: A comprehensive guide to methods and measures. Oxford: Oxford University Press.
  23. Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.
  24. Itti, L., & Koch, C. (2001). Computational modeling of visual attention. Nature Reviews Neuroscience, 2, 194–203.
  25. Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254–1259.
  26. Judd, T., Durand, F., & Torralba, A. (2012). A benchmark of computational models of saliency to predict human fixations. MIT Technical Report.
  27. Judd, T., Ehinger, K. A., Durand, F., & Torralba, A. (2009). Learning to predict where humans look. In 2009 IEEE 12th International Conference on Computer Vision (pp. 2106–2113).
  28. Klein, R. M. (2000). Inhibition of return. Trends in Cognitive Sciences, 4, 138–147.
  29. Knierim, J. J., & Van Essen, D. C. (1992). Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. Journal of Neurophysiology, 67(4), 961–980.
  30. Koch, C., & Ullman, S. (1985). Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology, 4, 219–227.
  31. Kümmerer, M., Wallis, T. S., & Bethge, M. (2015). Information-theoretic model comparison unifies saliency metrics. Proceedings of the National Academy of Sciences of the United States of America, 112(52), 16054–16059.
  32. Mackworth, N. H., & Morandi, A. J. (1967). The gaze selects informative details within pictures. Perception & Psychophysics, 2(11), 547–552.
  33. Nuthmann, A., Einhäuser, W., & Schütz, I. (2017). How well can saliency models predict fixation selection in scenes beyond central bias? A new approach to model evaluation using generalized linear mixed models. Frontiers in Human Neuroscience, 11, 491.
  34. O’Connell, T. P., & Walther, D. B. (2015). Dissociation of salience-driven and content-driven spatial attention to scene category with predictive decoding of gaze patterns. Journal of Vision, 15(5), 1–13.
  35. Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42, 107–123.
  36. Peacock, C. E., Hayes, T. R., & Henderson, J. M. (2019). Meaning guides attention during scene viewing even when it is irrelevant. Attention, Perception, & Psychophysics, 81, 20–34.
  37. Rahman, S., & Bruce, N. (2015). Visual saliency prediction and evaluation across different perceptual tasks. PLoS ONE, 10(9), e0138053.
  38. SR Research (2010a). Experiment Builder user’s manual. Mississauga, ON: SR Research Ltd.
  39. SR Research (2010b). EyeLink 1000 user’s manual, version 1.5.2. Mississauga, ON: SR Research Ltd.
  40. Tatler, B. W. (2007). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision, 7(14), 1–17.
  41. Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 766–786.
  42. Treisman, A., & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97–136.
  43. Tsotsos, J. K. (1991). Is complexity theory appropriate for analysing biological systems? Behavioral and Brain Sciences, 14(4), 770–773.
  44. Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1(2), 202–238.
  45. Wolfe, J. M., Cave, K. R., & Franzel, S. (1989). Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15(3), 419–433.
  46. Wolfe, J. M., & Horowitz, T. S. (2017). Five factors that guide attention in visual search. Nature Human Behaviour, 1, 1–8.

Copyright information

© The Psychonomic Society, Inc. 2019

Authors and Affiliations

  1. Center for Mind and Brain, University of California, Davis, USA
  2. Department of Psychology, University of California, Davis, USA
