Cognitive Computation, Volume 2, Issue 4, pp 326–343

Where to Look Next? Combining Static and Dynamic Proto-objects in a TVA-based Model of Visual Attention

  • Marco Wischnewski
  • Anna Belardinelli
  • Werner X. Schneider
  • Jochen J. Steil

Abstract

To decide “Where to look next?” is a central function of the attention system of humans, animals and robots. Control of attention depends on three factors: low-level static and dynamic visual features of the environment (bottom-up), medium-level visual features of proto-objects, and the task (top-down). We present a novel integrated computational model that includes all three factors in a coherent architecture based on findings and constraints from the primate visual system. The model combines spatially inhomogeneous processing of static features, spatio-temporal motion features, and task-dependent priority control in the form of the first computational implementation of saliency computation as specified by the “Theory of Visual Attention” (TVA, [7]). Importantly, the static and dynamic processing streams are fused at the level of visual proto-objects, that is, ellipsoidal visual units that carry the additional medium-level features of position, size, shape and orientation of the principal axis. Proto-objects serve as input to the TVA process, which combines top-down and bottom-up information to compute attentional priorities, so that relatively complex search tasks can be implemented. To this end, separately computed static and dynamic proto-objects are filtered and subsequently merged into one combined map of proto-objects. For each proto-object, attentional priorities in the form of attentional weights are computed according to TVA. The target of the next saccade is the center of gravity of the proto-object with the highest weight according to the task. We illustrate the approach by applying it to several real-world image sequences and show that it is robust to parameter variations.
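The selection step summarized above can be made concrete with a short sketch. The following Python fragment is not the authors' implementation; it is a minimal illustration of TVA-style weighting (Bundesen [7]), in which each proto-object x receives an attentional weight w_x = Σ_j η(x, j)·π_j, where η(x, j) is the sensory evidence that x has feature j and π_j is the task-dependent pertinence of feature j, and the next saccade targets the center of gravity of the highest-weighted proto-object. All class names, field names, and feature values are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ProtoObject:
    """Hypothetical stand-in for an ellipsoidal proto-object.

    center   -- (x, y) center of gravity in image coordinates
    features -- feature label j -> sensory evidence eta(x, j)
    """
    center: np.ndarray
    features: dict

def attentional_weight(obj: ProtoObject, pertinence: dict) -> float:
    # TVA attentional weight: w_x = sum_j eta(x, j) * pi_j
    return sum(evidence * pertinence.get(feature, 0.0)
               for feature, evidence in obj.features.items())

def next_saccade_target(proto_objects, pertinence) -> np.ndarray:
    # The next saccade goes to the center of gravity of the
    # proto-object with the highest attentional weight.
    best = max(proto_objects, key=lambda o: attentional_weight(o, pertinence))
    return best.center

# Toy task "look for something red and moving": the task raises the
# pertinence pi_j of those features; evidence values are bottom-up.
objects = [
    ProtoObject(center=np.array([120.0, 80.0]),
                features={"red": 0.9, "moving": 0.1}),
    ProtoObject(center=np.array([300.0, 200.0]),
                features={"red": 0.7, "moving": 0.8}),
]
pertinence = {"red": 1.0, "moving": 1.0}
print(next_saccade_target(objects, pertinence))  # -> [300. 200.]
```

Note that this sketch omits the filtering and merging of static and dynamic proto-object maps that precedes the weighting step in the full model.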

Keywords

Modeling visual attention · TVA · Proto-objects · Static and dynamic features · Inhomogeneity · Natural scenes · Top-down control

References

  1. Adelson EH, Bergen JR. Spatiotemporal energy models for the perception of motion. J Opt Soc Am A. 1985;2(2):284–99.
  2. Ali S, Shah M. A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis. In: IEEE conference on computer vision and pattern recognition (CVPR ’07). 2007. p. 1–6.
  3. Aziz M, Mertsching B. Fast and robust generation of feature maps for region-based visual attention. IEEE Trans Image Process. 2008;17(5):633–44.
  4. Belardinelli A, Pirri F, Carbone A. Motion saliency maps from spatiotemporal filtering. In: Attention in cognitive systems. 2009. p. 112–23.
  5. Breazeal C, Scassellati B. A context-dependent attention system for a social robot. In: IJCAI ’99. San Francisco: Morgan Kaufmann Publishers Inc.; 1999. p. 1146–53.
  6. Bruce NDB, Tsotsos JK. Saliency, attention, and visual search: an information theoretic approach. J Vis. 2009;9(3):1–24.
  7. Bundesen C. A theory of visual attention. Psychol Rev. 1990;97(4):523–47.
  8. Bundesen C, Habekost T. Principles of visual attention: linking mind and brain. Oxford: Oxford University Press; 2008.
  9. Bundesen C, Habekost T, Kyllingsbaek S. A neural theory of visual attention: bridging cognition and neurophysiology. Psychol Rev. 2005;112(2):291–328.
  10. Carbone E, Schneider WX. Gaze is special: the control of stimulus-driven saccades is not subject to central, but visual attention limitations. Atten Percept Psychophys. (in press).
  11. Clark A. Feature-placing and proto-objects. Philos Psychol. 2004;17(4):443+.
  12. Comaniciu D, Meer P. Mean shift: a robust approach toward feature space analysis. IEEE Trans Pattern Anal Mach Intell. 2002;24(5):603–19.
  13. Daugman JG. Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J Opt Soc Am A. 1985;2(7):1160–9.
  14. De Monasterio FM, Gouras P. Functional properties of ganglion cells of the rhesus monkey retina. J Physiol. 1975;251(1):167–95.
  15. DeAngelis GC, Ohzawa I, Freeman RD. Spatiotemporal organization of simple-cell receptive fields in the cat’s striate cortex. I. General characteristics and postnatal development. J Neurophysiol. 1993;69(4):1091–117.
  16. Deubel H, Schneider WX. Saccade target selection and object recognition: evidence for a common attentional mechanism. Vis Res. 1996;36(12):1827–37.
  17. Domijan D, Šetić M. A feedback model of figure-ground assignment. J Vis. 2008;8(7):1–27.
  18. Dosil R, Fdez-Vidal XR, Pardo XM. Motion representation using composite energy features. Pattern Recognit. 2008;41(3):1110–23.
  19. Driscoll JA, Peters RA II, Cave KR. A visual attention network for a humanoid robot. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems. 1998. p. 12–6.
  20. Findlay JM. Global visual processing for saccadic eye movements. Vis Res. 1982;22(8):1033–45.
  21. Forssén PE. Low and medium level vision using channel representations. Ph.D. thesis, Linköping University, Linköping, Sweden; 2004. Dissertation No. 858, ISBN 91-7373-876-X.
  22. Frey HP, Konig P, Einhauser W. The role of first- and second-order stimulus features for human overt attention. Percept Psychophys. 2007;69(2):153–61.
  23. Frintrop S, Klodt M, Rome E. A real-time visual attention system using integral images. In: Proceedings of the 5th international conference on computer vision systems. 2007.
  24. Frintrop S, Rome E, Christensen HI. Computational visual attention systems and their cognitive foundations: a survey. ACM Trans Appl Percept. 2010;7(1):1–39.
  25. Geisler WS, Albrecht DG. Visual cortex neurons in monkeys and cats: detection, discrimination, and identification. Vis Neurosci. 1997;14:897–919.
  26. Goodale MA, Milner AD. Separate visual pathways for perception and action. Trends Neurosci. 1992;15(1):20–5.
  27. Goodale MA, Westwood DA. An evolving view of duplex vision: separate but interacting cortical pathways for perception and action. Curr Opin Neurobiol. 2004;14(2):203–11.
  28. van Hateren JH, Ruderman DL. Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proc Biol Sci. 1998;265(1412):2315–20.
  29. Heeger DJ. Optical flow using spatiotemporal filters. Int J Comput Vis. 1988;1(4):279–302.
  30. Itti L, Baldi P. Bayesian surprise attracts human attention. Vis Res. 2009;49(10):1295–306.
  31. Itti L, Koch C. Feature combination strategies for saliency-based visual attention systems. J Electron Imag. 2001;10(1):161–9.
  32. Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell. 1998;20(11):1254–9.
  33. Kehrer L, Meinecke C. A space-variant filter model of texture segregation: parameter adjustment guided by psychophysical data. Biol Cybern. 2003;88(3):183–200.
  34. Koch C, Ullman S. Shifts in selective visual attention: towards the underlying neural circuitry. Hum Neurobiol. 1985;4(4):219–27.
  35. Land M, Tatler B. Looking and acting: vision and eye movements in natural behaviour. Oxford: Oxford University Press; 2009.
  36. Le Meur O, Le Callet P, Barba D. Predicting visual fixations on video based on low-level visual features. Vis Res. 2007;47(19):2483–98.
  37. Mahadevan V, Vasconcelos N. Spatiotemporal saliency in dynamic scenes. IEEE Trans Pattern Anal Mach Intell. 2009;32:171–7.
  38. Marat S, Ho Phuoc T, Granjon L, Guyader N, Pellerin D, Guérin-Dugué A. Modelling spatio-temporal saliency to predict gaze direction for short videos. Int J Comput Vis. 2009;82(3):231–43.
  39. Moren J, Ude A, Koene A, Cheng G. Biologically based top-down attention modulation for humanoid interactions. Int J Humanoid Robot. 2008;5(1):3–24.
  40. Morrone MC, Burr DC. Feature detection in human vision: a phase-dependent energy model. Proc R Soc Lond B Biol Sci. 1988;235(1280):221–45.
  41. Nagai Y. From bottom-up visual attention to robot action learning. In: Proceedings of the 8th IEEE international conference on development and learning. IEEE Press; 2009.
  42. Nagai Y, Hosoda K, Morita A, Asada M. A constructive model for the development of joint attention. Conn Sci. 2003;15(4):211–29.
  43. Navalpakkam V, Itti L. An integrated model of top-down and bottom-up attention for optimal object detection. In: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), New York, NY. 2006. p. 2049–56.
  44. Navalpakkam V, Itti L. A goal oriented attention guidance model. In: Biologically motivated computer vision. Springer; 2010. p. 81–118.
  45. Nothdurft H. The role of features in preattentive vision: comparison of orientation, motion and color cues. Vis Res. 1993;33(14):1937–58.
  46. Ölveczky BP, Baccus SA, Meister M. Segregation of object and background motion in the retina. Nature. 2003;423:401–8.
  47. Orabona F, Metta G, Sandini G. A proto-object based visual attention model. In: Attention in cognitive systems. Theories and systems from an interdisciplinary viewpoint. 2008. p. 198–215.
  48. Palmer SE. Vision science. Cambridge: MIT; 1999.
  49. Park S, Shin J, Lee M. Biologically inspired saliency map model for bottom-up visual attention. In: Biologically motivated computer vision. Springer; 2010. p. 113–45.
  50. Riesenhuber M, Poggio T. Hierarchical models of object recognition in cortex. Nat Neurosci. 1999;2(11):1019–25.
  51. Rosenholtz R. A simple saliency model predicts a number of motion popout phenomena. Vis Res. 1999;39(19):3157–63.
  52. Ruesch J, Lopes M, Bernardino A, Hornstein J, Santos-Victor J, Pfeifer R. Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub. In: International conference on robotics and automation, Pasadena, CA, USA. 2008. p. 962–7.
  53. Schaefer G, Stich M. UCID: an uncompressed colour image database. In: Storage and retrieval methods and applications for multimedia 2004. Proceedings of SPIE, vol. 5307. 2004. p. 472–80.
  54. Schneider WX. VAM: a neuro-cognitive model for visual attention control of segmentation, object recognition, and space-based motor action. Vis Cogn. 1995;2(2–3):331–76.
  55. Scholl BJ. Objects and attention: the state of the art. Cognition. 2001;80(1–2):1–46.
  56. Steil JJ, Heidemann G, Jockusch J, Rae R, Jungclaus N, Ritter H. Guiding attention for grasping tasks by gestural instruction: the GRAVIS-robot architecture. In: Proceedings of IROS 2001. IEEE; 2001. p. 1570–7.
  57. Sun Y, Fisher R, Wang F, Gomes HM. A computer vision model for visual-object-based attention and eye movements. Comput Vis Image Underst. 2008;112(2):126–42.
  58. Tatler B. Current understanding of eye guidance. Vis Cogn. 2009;17(6–7):777–89.
  59. Torralba A, Oliva A, Castelhano MS, Henderson JM. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychol Rev. 2006;113(4):766–86.
  60. Treisman A. The binding problem. Curr Opin Neurobiol. 1996;6(2):171–8.
  61. Treisman AM, Gelade G. A feature-integration theory of attention. Cogn Psychol. 1980;12(1):97–136.
  62. Tsotsos JK, Culhane SM, Wai WYK, Lai Y, Davis N, Nuflo F. Modeling visual attention via selective tuning. Artif Intell. 1995;78(1–2):507–45.
  63. Van Essen D, Anderson C. Information processing strategies and pathways in the primate visual system. In: Zornetzer S, Davis J, Lau C, McKenna T, editors. An introduction to neural and electronic networks. New York: Academic Press; 1995. p. 45–76.
  64. Walther D, Itti L, Riesenhuber M, Poggio T, Koch C. Attentional selection for object recognition—a gentle way. In: Biologically motivated computer vision. Springer; 2002. p. 251–67.
  65. Walther D, Koch C. Modeling attention to salient proto-objects. Neural Netw. 2006;19(9):1395–407.
  66. Watson AB. Detection and recognition of simple spatial forms. Technical report, NASA Ames Research Center; 1983.
  67. Watson AB, Ahumada AJ Jr. Model of human visual-motion sensing. J Opt Soc Am A. 1985;2(2):322–41.
  68. Wildes RP, Bergen JR. Qualitative spatiotemporal analysis using an oriented energy representation. In: ECCV ’00: Proceedings of the 6th European conference on computer vision, part II. 2000. p. 768–84.
  69. Wischnewski M, Steil JJ, Kehrer L, Schneider WX. Integrating inhomogeneous processing and proto-object formation in a computational model of visual attention. In: Human centered robot systems. 2009. p. 93–102.
  70. Wolfe JM, Horowitz TS. What attributes guide the deployment of visual attention and how do they do it? Nat Rev Neurosci. 2004;5(6):495–501.

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Marco Wischnewski (1)
  • Anna Belardinelli (1)
  • Werner X. Schneider (1)
  • Jochen J. Steil (2)

  1. Center of Excellence - Cognitive Interaction Technology (CITEC) and Neuro-cognitive Psychology, Bielefeld University, Bielefeld, Northrhine-Westphalia, Germany
  2. Research Institute for Cognition and Robotics (CoR-Lab) & Faculty of Technology, Bielefeld University, Bielefeld, Northrhine-Westphalia, Germany
