Task relevance predicts gaze in videos of real moving scenes
Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382–390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search, where stimuli are in constant motion and where the 'target' of the search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice as much variance in gaze likelihood as the amount of low-level visual change over time in the video stimuli.
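The abstract compares reported suspiciousness against the amount of low-level visual change over time in the video stimuli. The paper's exact change metric is not given here; a minimal sketch of one common proxy for low-level visual change, mean absolute inter-frame intensity difference, might look like the following (the function name `frame_change` and the toy frames are illustrative assumptions, not the authors' code):

```python
import numpy as np

def frame_change(frames):
    """Mean absolute per-pixel change between consecutive frames.

    This is an assumed proxy metric, not necessarily the one used in the paper.
    frames: array of shape (T, H, W) holding grayscale intensities.
    Returns an array of length T-1, one change score per frame transition.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

# Toy example: three 2x2 "frames"; a large change, then no change
frames = np.array([
    [[0, 0], [0, 0]],
    [[10, 10], [10, 10]],
    [[10, 10], [10, 10]],
])
print(frame_change(frames))  # [10.  0.]
```

Scores like these, computed per screen region, could then be entered alongside the joystick-reported suspiciousness as predictors of gaze likelihood.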
Keywords: Visual search · Scene perception · Eye movements · Attention
This work was supported by an EPSRC Cognitive Systems Foresight grant and by the Wellcome Trust. We thank Manchester City Council for their invaluable assistance in providing CCTV images for use in these studies. We thank Filipe Cristino for assistance with video analysis.