Bottom-Up Visual Attention for Still Images: A Global View

From Human Attention to Computational Attention

Part of the book series: Springer Series in Cognitive and Neural Systems (SSCNS, volume 10)

Abstract

Studies in neuroscience suggest that human visual attention is enhanced through a process of competing interactions among neurons representing all of the stimuli present in the visual field. This chapter explores current avenues of research into models of visual attention that reflect human behaviour. The approaches are categorised broadly into feature-based and structural methods, exposing the advantages and disadvantages of both.

The potential benefits of a model of attention are manifold, with applications including visual inspection in manufacturing processes, medical diagnosis, detection of security breaches, removal of redundancy in data, various targeting tasks, and many others.
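As a concrete illustration of the feature-based category mentioned above, the sketch below computes a bottom-up saliency map using the well-known spectral-residual idea, in which statistically rare content in the log-amplitude spectrum is treated as salient. This is a minimal sketch, not the chapter's own method; the function name, filter sizes, and smoothing parameter are illustrative assumptions.

```python
# Minimal sketch of a feature-based, bottom-up saliency map in the spirit of
# the spectral-residual approach. Parameter values here are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray: np.ndarray, sigma: float = 2.5) -> np.ndarray:
    """Return a saliency map in [0, 1] for a 2-D grayscale image."""
    # Fourier transform of the image, split into log-amplitude and phase.
    spectrum = np.fft.fft2(gray.astype(np.float64))
    log_amplitude = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)

    # The "residual" is the log-amplitude minus its local average: it keeps
    # only the statistically unexpected frequency content.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)

    # Back to the image domain: rare spectral content maps to salient regions.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=sigma)

    # Normalise to [0, 1] so the map can be thresholded or displayed.
    saliency -= saliency.min()
    return saliency / (saliency.max() + 1e-12)
```

In practice the input image is usually downsampled (for example to around 64 × 64 pixels) before the transform, so the resulting map highlights coarse, object-scale structure rather than fine texture.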

Author information

Corresponding author

Correspondence to Fred Stentiford.

Copyright information

© 2016 Springer Science+Business Media New York

About this chapter

Cite this chapter

Stentiford, F. (2016). Bottom-Up Visual Attention for Still Images: A Global View. In: Mancas, M., Ferrera, V., Riche, N., Taylor, J. (eds) From Human Attention to Computational Attention. Springer Series in Cognitive and Neural Systems, vol 10. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-3435-5_8
