Experimental Brain Research (2011) 214:131

Task relevance predicts gaze in videos of real moving scenes

  • Christina J. Howard
  • Iain D. Gilchrist
  • Tom Troscianko
  • Ardhendu Behera
  • David C. Hogg
Research Article

Abstract

Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382–390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search, where stimuli are in constant motion and where the ‘target’ of the search is abstract and semantic in nature. Here, we investigate this issue by having participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice as much variance in gaze likelihood as did the amount of low-level visual change over time in the video stimuli.
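
The authors' analysis code is not part of this record; the following is a minimal sketch, in Python, of the kind of variance comparison the abstract reports, assuming the three quantities have already been extracted as aligned per-frame series. All variable names and the synthetic data below are hypothetical stand-ins, not the study's data.

    import numpy as np

    # Hypothetical stand-ins for the three aligned per-frame series the
    # abstract describes: joystick-reported suspiciousness (task relevance),
    # low-level visual change (e.g. mean absolute inter-frame pixel
    # difference), and gaze likelihood for a screen region.
    rng = np.random.default_rng(0)
    n_frames = 5000
    suspiciousness = rng.random(n_frames)
    visual_change = rng.random(n_frames)
    gaze_likelihood = (0.6 * suspiciousness
                       + 0.3 * visual_change
                       + 0.4 * rng.random(n_frames))

    def variance_explained(predictor, outcome):
        # Squared Pearson correlation: the proportion of variance in
        # `outcome` accounted for by a single linear predictor.
        r = np.corrcoef(predictor, outcome)[0, 1]
        return r ** 2

    print("R^2, task relevance:   %.3f"
          % variance_explained(suspiciousness, gaze_likelihood))
    print("R^2, low-level change: %.3f"
          % variance_explained(visual_change, gaze_likelihood))

Squared Pearson correlation is only one way to operationalise "variance accounted for"; the paper itself may have used a different model of gaze likelihood.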

Keywords

Visual search · Scene perception · Eye movements · Attention

Acknowledgments

This work was supported by an EPSRC Cognitive Systems Foresight grant and by the Wellcome Trust. We thank Manchester City Council for their invaluable assistance in providing CCTV images for use in these studies. We thank Filipe Cristino for assistance with video analysis.

References

  1. Becic E, Kramer AF, Boot WR (2007) Age-related differences in visual search in dynamic displays. Psychol Aging 22(1):67–74
  2. Berg DJ, Boehnke SE, Marino RA, Munoz DP, Itti L (2009) Free viewing of dynamic stimuli by humans and monkeys. J Vis 9(5):1–15
  3. Chen X, Zelinsky GJ (2006) Real-world visual search is dominated by top-down guidance. Vis Res 46:4118–4133
  4. Cristino F, Baddeley R (2009) The nature of the visual representations involved in eye movements when walking down the street. Vis Cogn 17(6/7):880–903
  5. De Graef P, De Troy A, d'Ydewalle G (1992) Local and global contextual constraints on the identification of objects in scenes. Can J Psychol 46:489–508
  6. Ehinger KA, Hidalgo-Sotelo B, Torralba A, Oliva A (2009) Modelling search for people in 900 scenes: a combined source model of eye guidance. Vis Cogn 17(6/7):945–978
  7. Eimer M, Kiss M (2010) Top-down search strategies determine attentional capture in visual search: behavioral and electrophysiological evidence. Atten Percept Psychophys 72(4):951–962
  8. Fecteau JH, Munoz DP (2006) Salience, relevance and firing: a priority map for target selection. Trends Cogn Sci 10(8):382–390
  9. Furneaux S, Land MF (1999) The effects of skill on the eye–hand span during musical sight-reading. Proc Roy Soc Lond B 266:2435–2440
  10. Henderson JM, Brockmole JR, Castelhano MS, Mack ML (2007) Visual saliency does not account for eye movements during visual search in real-world scenes. In: van Gompel R, Fischer M, Murray W, Hill RW (eds) Eye movements: a window on mind and brain. Elsevier, Amsterdam, pp 537–562
  11. Henderson JM, Malcolm GL, Schandl C (2009) Searching in the dark: cognitive relevance versus visual salience during search for non-salient objects in real-world scenes. Psychon B Rev 16:850–856
  12. Itti L (2005) Quantifying the contribution of low-level saliency to human eye movements in dynamic scenes. Vis Cogn 12(6):1093–1123
  13. Itti L, Koch C (2000) A saliency-based search mechanism for overt and covert shifts of visual attention. Vis Res 40:1489–1506
  14. Land MF (1996) The time it takes to process visual information when steering a vehicle. Invest Ophth Vis Sci 37:S525
  15. Land MF, Lee DN (1994) Where we look when we steer. Nature 369(6483):742–744
  16. Land MF, Mennie N, Rusted J (1999) The roles of vision and eye movements in the control of activities of daily living. Perception 28:1311–1328
  17. Le Meur O, Le Callet P, Barba D (2007) Predicting visual fixations on video based on low-level visual features. Vis Res 47:2483–2498
  18. Ling S, Carrasco M (2006) Sustained and transient covert attention enhance the signal via different contrast response functions. Vis Res 46(8–9):1210–1220
  19. Malcolm GL, Henderson JM (2010) Combining top-down processes to guide eye movements during real-world scene search. J Vis 10(2):1–11
  20. Neider MB, Zelinsky GJ (2006) Scene context guides eye movements during visual search. Vis Res 46:614–621
  21. Parkhurst DJ, Niebur E (2003) Scene content selected by active vision. Spatial Vis 16(2):125–154
  22. Parkhurst D, Law K, Niebur E (2002) Modeling the role of salience in the allocation of overt visual attention. Vis Res 42:107–123
  23. Peters RJ, Itti L (2007) Beyond bottom-up: incorporating task-dependent influences into a computational model of spatial attention. In: Proceedings of the IEEE conference on computer vision and pattern recognition
  24. Peters RJ, Iyer A, Itti L, Koch C (2005) Components of bottom-up gaze allocation in natural images. Vis Res 45:2397–2416
  25. Pylyshyn ZW, Storm RW (1988) Tracking multiple independent targets: evidence for a parallel tracking mechanism. Spatial Vis 3(3):179–197
  26. Schmidt J, Zelinsky GJ (2009) Search guidance is proportional to the categorical specificity of a target cue. Q J Exp Psychol 62(10):1904–1914
  27. Tatler BW, Baddeley RJ, Gilchrist ID (2005) Visual correlates of fixation selection: effects of scale and time. Vis Res 45:643–659
  28. Wischnewski M, Steil JJ, Kehrer L, Schneider WX (2009) Integrating inhomogeneous processing and proto-object formation in a computational model of visual attention. In: Proceedings of Human Centered Robotic Systems (HCRS), pp 93–102
  29. Wischnewski M, Belardinelli A, Schneider WX, Steil JJ (2010) Where to look next? Combining static and dynamic proto-objects in a TVA-based model of visual attention. Cogn Comput 2:326–343
  30. Wolfe JM (1994) Guided search 2.0: a revised model of visual search. Psychon B Rev 1(2):202–238
  31. Yang H, Zelinsky GJ (2009) Visual search is guided to categorically-defined targets. Vis Res 49(16):2095–2103
  32. Yarbus AL (1967) Eye movements and vision (trans: Haigh B). Plenum Press, New York

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Christina J. Howard (1, 3)
  • Iain D. Gilchrist (1)
  • Tom Troscianko (1)
  • Ardhendu Behera (2)
  • David C. Hogg (2)

  1. Department of Experimental Psychology, University of Bristol, Bristol, UK
  2. School of Computing, University of Leeds, Leeds, UK
  3. Psychology Division, Nottingham Trent University, Nottingham, UK
