Biologically Inspired Computational Models of Visual Attention for Personalized Autonomous Agents: A Survey

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 107)

Abstract

Perception is an essential capability for personalized autonomous agents, which must understand their environment on their own, acting like their users without user intervention, much as a human being does. In humans, visual perception plays a major role in interacting with objects and entities in the environment by interpreting visual sensory information. The major technical obstacle for visual perception is processing the enormous amount of visual stimuli efficiently and in real time. Computational models of visual attention, which decide where to focus in a scene, have therefore been proposed to reduce the visual processing load by mimicking the human visual system. This chapter provides background on the cognitive theories on which these models are founded and analyzes the computational models needed to build a personalized autonomous agent that acts like a specific person as well as like a typical human being.
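As a concrete illustration of how such models cut the processing load, the sketch below computes a simple bottom-up saliency map from intensity contrast alone, loosely in the spirit of the center-surround saliency architecture of Itti, Koch, and Niebur that this survey discusses. It is a minimal sketch, assuming OpenCV and NumPy are available; the function name, parameters, and single-feature design are illustrative, not taken from any of the surveyed systems.

    # Minimal bottom-up saliency sketch (intensity contrast only), loosely in
    # the spirit of Itti, Koch, and Niebur's center-surround architecture.
    # Assumes OpenCV (cv2) and NumPy; all names here are illustrative.
    import cv2
    import numpy as np

    def saliency_map(bgr_image, center=2, surround=4, levels=6):
        # Intensity channel scaled to [0, 1].
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        # Gaussian pyramid: level 0 is the full-resolution channel.
        pyr = [gray]
        for _ in range(levels - 1):
            pyr.append(cv2.pyrDown(pyr[-1]))
        # Center-surround difference: fine scale minus upsampled coarse scale.
        h, w = pyr[center].shape
        coarse = cv2.resize(pyr[surround], (w, h), interpolation=cv2.INTER_LINEAR)
        fmap = cv2.absdiff(pyr[center], coarse)
        # Normalize to [0, 1] and return at the input resolution.
        fmap = (fmap - fmap.min()) / (fmap.max() - fmap.min() + 1e-8)
        return cv2.resize(fmap, (bgr_image.shape[1], bgr_image.shape[0]))

    if __name__ == "__main__":
        img = cv2.imread("scene.jpg")  # any test image
        cv2.imwrite("saliency.png", (saliency_map(img) * 255).astype(np.uint8))

A full model of this kind would add color-opponency and orientation channels across several center-surround scale pairs, normalize and sum them into conspicuity maps, and select fixations with a winner-take-all network; the single intensity channel here only demonstrates the center-surround principle.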

Keywords

Visual attention, Personalized autonomous agent

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MEST) (NRF-M1AXA003-20100029793).


Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

1. Electronics and Telecommunications Research Institute, Daejeon, Korea
