Selective Motion Analysis Based on Dynamic Visual Saliency Map Model

  • Inwon Lee
  • Sang-Woo Ban
  • Kunihiko Fukushima
  • Minho Lee
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4029)


Abstract

We propose a biologically motivated motion analysis model that combines a dynamic bottom-up saliency map model with a neural network for motion analysis whose input is an optical flow. The dynamic bottom-up saliency map model can generate a human-like visual scan path by considering the dynamics of continuous input scenes as well as the saliency of the primitive features of a static input scene. The neural network for motion analysis responds selectively to rotation, expansion, contraction, and planar motion of the optical flow in a selected area. Experimental results show that the proposed model produces effective motion analysis while examining only an interesting area rather than the whole input scene, which yields a faster analysis mechanism for dynamic input scenes.
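The pipeline can be pictured as two stages: a bottom-up saliency stage that selects where to look, and a motion-analysis stage that labels the optical flow inside the selected area. Below is a minimal sketch of that idea, assuming a crude contrast-based saliency measure and a divergence/curl decomposition of the flow; the function names and the saliency score are illustrative assumptions, not the authors' actual model.

```python
# Illustrative sketch only: salient-patch selection followed by
# MST-like labelling of the flow (expansion / contraction / rotation /
# planar), loosely following the pipeline described in the abstract.
import numpy as np

def salient_patch(frame, size=32):
    """Return (row, col) of the patch with the highest local contrast."""
    best, best_rc = -np.inf, (0, 0)
    for r in range(0, frame.shape[0] - size + 1, size):
        for c in range(0, frame.shape[1] - size + 1, size):
            score = frame[r:r+size, c:c+size].std()  # crude bottom-up saliency
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

def classify_motion(u, v):
    """Label an optical-flow patch (u, v) by its dominant component.

    Positive divergence -> expansion, negative -> contraction,
    dominant curl -> rotation, dominant mean flow -> planar motion.
    """
    div = np.gradient(u, axis=1).mean() + np.gradient(v, axis=0).mean()
    curl = np.gradient(v, axis=1).mean() - np.gradient(u, axis=0).mean()
    planar = np.hypot(u.mean(), v.mean())
    mag = {"expansion": max(div, 0.0), "contraction": max(-div, 0.0),
           "rotation": abs(curl), "planar": planar}
    return max(mag, key=mag.get)

# Toy usage: a synthetic radially expanding flow field.
yy, xx = np.mgrid[-16:16, -16:16].astype(float)
u, v = 0.1 * xx, 0.1 * yy             # expanding flow
frame = np.random.rand(128, 128)      # stand-in input frame
r, c = salient_patch(frame)
print(f"analysing patch at ({r}, {c}):", classify_motion(u, v))  # -> expansion
```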







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Inwon Lee (1)
  • Sang-Woo Ban (2)
  • Kunihiko Fukushima (3)
  • Minho Lee (1)
  1. School of Electrical Engineering and Computer Science, Kyungpook National University, Taegu, Korea
  2. Dept. of Information and Communication Engineering, Dongguk University, Gyeongbuk, Korea
  3. Graduate School of Informatics, Kansai University, Osaka, Japan
