
Attention in Image Sequences: Biology, Computational Models, and Applications

  • Mariofanna Milanova
  • Engin Mendi
Chapter
Part of the Intelligent Systems Reference Library book series (ISRL, volume 29)

Abstract

The ability to automatically detect visually interesting regions in images and video has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems.
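As a concrete illustration of what detecting such regions involves, the short sketch below computes a bottom-up saliency map with the spectral-residual method of Hou and Zhang (CVPR 2007), one simple frequency-domain approach from this literature. It is only a minimal sketch, not the model developed in this chapter, and it assumes NumPy and SciPy are available.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def spectral_residual_saliency(gray, spectrum_filter=3, blur=9):
        """Return a saliency map in [0, 1] for a 2-D grayscale image array (illustrative only)."""
        f = np.fft.fft2(gray.astype(np.float64))
        log_amplitude = np.log(np.abs(f) + 1e-8)
        phase = np.angle(f)  # the phase spectrum is kept unchanged
        # Spectral residual = log-amplitude spectrum minus its local average.
        residual = log_amplitude - uniform_filter(log_amplitude, size=spectrum_filter, mode="nearest")
        # Back to the spatial domain: locations with atypical spectral energy stand out.
        saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        saliency = uniform_filter(saliency, size=blur, mode="nearest")  # smooth for display
        return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-12)

In the original formulation the image is first downsampled to roughly 64 pixels on a side, and the resulting map is thresholded to mark salient "proto-object" regions; for video, such a map can be computed per frame or extended with motion features.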

Keywords

Independent Component Analysis · Visual Attention · Salient Object · Salient Region · Saliency Detection

Copyright information

© Springer Berlin Heidelberg 2012

Authors and Affiliations

  1. Department of Computer Science, University of Arkansas at Little Rock, Little Rock, USA
