Psychonomic Bulletin & Review, Volume 26, Issue 5, pp 1683–1689

Scene semantics involuntarily guide attention during visual search

  • Taylor R. Hayes
  • John M. Henderson
Brief Report


During scene viewing, is attention primarily guided by low-level image salience or by high-level semantics? Recent evidence suggests that overt attention in scenes is primarily guided by semantic features. Here we examined whether the attentional priority given to meaningful scene regions is involuntary. Participants completed a scene-independent visual search task in which they searched for superimposed letter targets whose locations were orthogonal to both the underlying scene semantics and image salience. Critically, the analyzed scenes contained no targets, and participants were unaware of this manipulation. We then directly compared how well the distribution of semantic features and image salience accounted for the overall distribution of overt attention. The results showed that even when the task was completely independent from the scene semantics and image salience, semantics explained significantly more variance in attention than image salience and more than expected by chance. This suggests that salient image features were effectively suppressed in favor of task goals, but semantic features were not suppressed. The semantic bias was present from the very first fixation and increased non-monotonically over the course of viewing. These findings suggest that overt attention in scenes is involuntarily guided by scene semantics.
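The map-level analysis described above — asking how much of the spatial distribution of attention each predictor accounts for — can be sketched as a squared linear correlation (R²) between a fixation-density map and each predictor map. The sketch below is illustrative only: the map sizes, the synthetic "meaning" and "saliency" maps, and the use of pixelwise Pearson correlation are assumptions for demonstration, not the paper's exact pipeline.

```python
import numpy as np

def variance_explained(attention_map, predictor_map):
    """Squared linear correlation (R^2) between a fixation-density
    (attention) map and a predictor map (meaning or saliency),
    computed over all pixels."""
    a = np.asarray(attention_map, dtype=float).ravel()
    p = np.asarray(predictor_map, dtype=float).ravel()
    r = np.corrcoef(a, p)[0, 1]
    return r ** 2

# Toy data (hypothetical): attention constructed to track the
# "meaning" map more closely than an unrelated "saliency" map.
rng = np.random.default_rng(0)
meaning = rng.random((32, 32))
saliency = rng.random((32, 32))
attention = 0.8 * meaning + 0.2 * rng.random((32, 32))

r2_meaning = variance_explained(attention, meaning)
r2_saliency = variance_explained(attention, saliency)
```

With attention built mostly from the meaning map, `r2_meaning` comes out well above `r2_saliency`, mirroring the pattern of results reported in the abstract.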


Keywords: Scene perception · Attention · Semantics · Salience · Visual search



This research was supported by the National Eye Institute of the National Institutes of Health under award number R01EY027792. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Supplementary material

13423_2019_1642_MOESM1_ESM.pdf (PDF 7.27 MB)



Copyright information

© The Psychonomic Society, Inc. 2019

Authors and Affiliations

  1. Center for Mind and Brain, University of California, Davis, Davis, USA
  2. Department of Psychology, University of California, Davis, Davis, USA
