Biologically Inspired Computational Models of Visual Attention for Personalized Autonomous Agents: A Survey
Perception is an essential capability for personalized autonomous agents, which act on behalf of their users without intervention and must understand the environment for themselves, much as a human being does. In humans, visual perception plays a major role in interacting with objects and entities in the environment by interpreting visual sensory information. The major technical obstacle to visual perception is processing the enormous amount of visual stimuli efficiently in real time. Computational models of visual attention, which decide where to focus in a scene, have therefore been proposed to reduce the visual processing load by mimicking the human visual system. This chapter provides background on the cognitive theories on which these models were founded and analyzes the computational models needed to build a personalized autonomous agent that behaves like a specific person as well as like a typical human being.
Keywords: Visual attention · Personalized autonomous agent
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MEST) (NRF-M1AXA003-20100029793).
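The bottom-up saliency models this chapter surveys typically score each image location by its local feature contrast and direct the focus of attention to the highest-scoring point. A minimal sketch of that idea is below; the function names, filter sizes, and single intensity channel are assumptions for illustration, not taken from the chapter, and real models add color and orientation channels, multi-scale pyramids, and inhibition of return.

```python
import numpy as np

def box_blur(img, k):
    """Average each pixel over a k x k neighborhood (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):          # sum the k*k shifted copies
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def saliency_map(img, center=3, surround=9):
    """Center-surround contrast: |fine-scale mean - coarse-scale mean|."""
    c = box_blur(img, center)
    s = box_blur(img, surround)
    sal = np.abs(c - s)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / (rng + 1e-12)   # normalize to [0, 1]

def focus_of_attention(img):
    """Return the (row, col) of the most salient location."""
    sal = saliency_map(img)
    return np.unravel_index(np.argmax(sal), sal.shape)
```

On a uniform background, a bright blob pops out: `focus_of_attention(img)` returns coordinates near the blob's center, which is the behavior an agent would use to decide where to look next before any costly object-level processing.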