
A biologically inspired object-based visual attention model

Published in: Artificial Intelligence Review

Abstract

This paper proposes a biologically inspired object-based visual attention model comprising a training phase and an attention phase. In the training phase, all training targets are fused into a target class and all training backgrounds into a background class; for each feature, a weight is computed as the ratio of the mean target-class saliency to the mean background-class saliency. In the attention phase, the feature maps of an attended scene are combined into a top-down salience map using the weight vector via a hierarchical method. The top-down and bottom-up salience maps are then fused into a global salience map that guides visual attention. Finally, the size of each salient region is determined by maximizing entropy. The merit of our model is that it can attend to any target object of a class appearing against the corresponding background class. Experimental results indicate that when the attended target object does not always appear against the same background as in the training images, the proposed model outperforms Navalpakkam’s model and the top-down approach of VOCUS.
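The two core computations the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the linear map combination, and the mixing parameter `alpha` are assumptions, and the paper's hierarchical combination method is not reproduced here.

```python
import numpy as np

def feature_weights(target_saliency, background_saliency):
    """Training phase (sketch): for each feature, the weight is the ratio of
    the mean saliency over the fused target class to the mean saliency over
    the fused background class, as described in the abstract."""
    return np.asarray(target_saliency, dtype=float) / np.asarray(background_saliency, dtype=float)

def global_salience(feature_maps, weights, bottom_up, alpha=0.5):
    """Attention phase (sketch): combine the scene's feature maps into a
    top-down salience map with the learned weights (shown here as a weighted
    sum; the paper uses a hierarchical method), then fuse with the bottom-up
    salience map. `alpha` is an assumed mixing parameter."""
    top_down = sum(w * m for w, m in zip(weights, feature_maps))
    top_down = top_down / top_down.max()      # normalize to [0, 1]
    bottom_up = bottom_up / bottom_up.max()   # normalize to [0, 1]
    return alpha * top_down + (1 - alpha) * bottom_up
```

A target feature that is, say, twice as salient on the target class as on the background class receives weight 2, so its feature map dominates the top-down salience map of a new scene.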


References

  • Frintrop S (2006) VOCUS: a visual attention system for object detection and goal-directed search. Lecture Notes in Artificial Intelligence (LNAI), 3899

  • Henderson JM (2003) Human gaze control during real-world scene perception. Trends Cogn Sci 7: 498–504

  • Hollingworth A (2004) Constructing visual representations of natural scenes: the roles of short- and long-term visual memory. J Exp Psychol Hum Percept Perform 30: 519–537

  • Hollingworth A, Henderson JM (2002) Accurate visual memory for previously attended objects in natural scenes. J Exp Psychol Hum Percept Perform 28: 113–136

  • Hollingworth A, Williams CC, Henderson JM (2001) To see and remember: visually specific information is retained in memory from previously attended objects in natural scenes. Psychon Bull Rev 8: 761–768

  • Itti L (2000) Models of bottom-up and top-down visual attention. Ph.D. thesis, California Institute of Technology. Pasadena, CA, January

  • Itti L, Koch C (2001) Feature combination strategies for saliency-based visual attention systems. J Electron Imaging 10(1): 161–169

  • Itti L, Koch C, Niebur E (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell 20(11): 1254–1259

  • Kadir T, Brady M (2001) Saliency, scale and image description. Int J Comput Vis 45(2): 83–105

  • Le Meur O, Le Callet P, Barba D (2006) A coherent computational approach to model bottom-up visual attention. IEEE Trans Pattern Anal Mach Intell 28: 802–817

  • Navalpakkam V, Itti L (2002) A goal oriented attention guidance model. Lect Notes Comput Sci 2525: 453–461

  • Navalpakkam V, Itti L (2005) Modeling the influence of task on attention. Vis Res 45: 205–231

  • Navalpakkam V, Itti L (2006) An integrated model of top-down and bottom-up attention for optimal object detection speed. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp 2049–2056

  • Palmer SE (1999) Vision science, photons to phenomenology. MIT Press, Cambridge, MA

  • Rensink RA (2000) The dynamic representation of scenes. Vis Cogn 7: 17–42

  • Rensink RA (2002) Change detection. Ann Rev Psychol 53: 245–277

  • Shi H, Yang Y (2007) A computational model of visual attention based on saliency maps. Appl Math Comput 188: 1671–1677

  • Treisman AM, Gelade G (1980) A feature-integration theory of attention. Cogn Psychol 12: 97–136

  • Watanabe K (2003) Differential effect of distractor timing on localizing versus identifying visual changes. Cognition 88(2): 243–257

  • Yu Y, Mann GKI, Gosine RG (2009) Modeling of top-down object-based attention using probabilistic neural network. IEEE Canadian conference on electrical and computer engineering, pp 533–536

Author information

Corresponding author

Correspondence to Longsheng Wei.

Cite this article

Wei, L., Sang, N. & Wang, Y. A biologically inspired object-based visual attention model. Artif Intell Rev 34, 109–119 (2010). https://doi.org/10.1007/s10462-010-9162-1
