
Cost-sensitive learning of top-down modulation for attentional control

  • Original Paper
  • Machine Vision and Applications

Abstract

In this work, a biologically inspired model of visual attention, known as the basic saliency model, is biased for object detection. The model can be made faster by inhibiting the computation of features or scales that are less important for detecting a given object. To this end, we revise the model by implementing a new scale-wise surround inhibition. Each feature channel and scale is associated with a weight and a processing cost, and a global optimization algorithm is then used to find a weight vector with maximum detection rate and minimum processing cost. This makes it possible to achieve the highest object detection rate in real-time tasks where the available processing time is limited. A heuristic is also proposed for learning top-down spatial attention control to limit the saliency computation further. Averaged over five objects, our approach reaches 85.4 and 92.2% detection rates with and without the cost term, respectively, both above the 80% of the basic saliency model. Its average processing cost is 33.3, compared with 52 for the basic model. It also needs fewer hits on average than the NVT attentional system, though slightly more than VOCUS.
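To make the optimization step concrete, the sketch below shows one way the abstract's idea could be expressed: every feature channel carries a weight and a processing cost, channels with zero weight are never computed, and a candidate weight vector is scored by its detection rate minus a cost penalty. This is a hedged illustration, not the paper's code; the helpers `compute_channel` and `detection_rate`, the channel names, the cost values, and the linear trade-off are all assumptions.

```python
"""
Minimal sketch (not the authors' implementation) of cost-sensitive
channel biasing: skip inhibited channels, penalize active-channel cost.
"""
import numpy as np

# Hypothetical per-channel processing costs (e.g. relative CPU time).
CHANNELS = ["intensity", "red-green", "blue-yellow",
            "ori-0", "ori-45", "ori-90", "ori-135"]
COSTS = np.array([4.0, 6.0, 6.0, 9.0, 9.0, 9.0, 9.0])


def biased_saliency(image, weights, compute_channel):
    """Weighted combination of conspicuity maps.

    Channels with zero (or negative) weight are skipped entirely,
    which is where the processing-time saving comes from.
    """
    saliency = None
    for name, w in zip(CHANNELS, weights):
        if w <= 0.0:
            continue  # inhibited channel: never computed
        cmap = compute_channel(image, name)  # user-supplied channel map
        saliency = w * cmap if saliency is None else saliency + w * cmap
    return saliency


def fitness(weights, detection_rate, cost_tradeoff=0.01):
    """Score a candidate weight vector: reward detection rate, penalize
    the summed cost of active channels.  The linear trade-off with
    `cost_tradeoff` is an assumption, not the paper's exact objective."""
    active_cost = float(np.sum(COSTS[np.asarray(weights) > 0.0]))
    return detection_rate(weights) - cost_tradeoff * active_cost
```

A black-box global optimizer, of the kind the abstract refers to, would then search for the weight vector that maximizes `fitness` on a labelled training set.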



Author information

Corresponding author

Correspondence to Ali Borji.


About this article

Cite this article

Borji, A., Ahmadabadi, M.N. & Araabi, B.N. Cost-sensitive learning of top-down modulation for attentional control. Machine Vision and Applications 22, 61–76 (2011). https://doi.org/10.1007/s00138-009-0192-0
