
Building Visual Surveillance Systems with Neural Networks

  • J. García-Rodríguez
  • A. Angelopoulou
  • F. J. Mora-Gimeno
  • A. Psarrou
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 394)

Abstract

Self-organising neural networks have shown promise in a variety of application areas. Their massive, intrinsic parallelism makes these networks suitable for hard problems in image analysis and computer vision, especially in non-stationary environments. Moreover, this kind of network preserves the topology of the input space through its inherent competitive learning property. In this work we use a particular self-organising network, the Growing Neural Gas, to solve several computer vision tasks arising in visual surveillance systems. The neural network is also modified to accelerate its learning algorithm so that applications with temporal constraints can be supported. This feature has been used to build a system that tracks image features in video sequences. The system automatically maintains the correspondence of features across frames of the sequence using its own structure. The information gathered during tracking and stored in the neural network can also be used to analyse the motion of the objects.
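
For readers unfamiliar with the underlying algorithm, the following is a minimal sketch of a single Growing Neural Gas adaptation step (after Fritzke, 1995), not taken from the chapter itself; the class layout, parameter values and names such as `GNG.adapt` are illustrative assumptions, and periodic node insertion and removal are omitted.

```python
# Minimal sketch of one Growing Neural Gas adaptation step, assuming 2-D
# feature samples; structure and parameter values are illustrative only.
import numpy as np

class GNG:
    def __init__(self, eps_b=0.2, eps_n=0.006, age_max=50):
        self.w = [np.random.rand(2), np.random.rand(2)]  # reference vectors
        self.error = [0.0, 0.0]                          # accumulated error per unit
        self.edges = {}                                  # {frozenset({i, j}): age}
        self.eps_b, self.eps_n, self.age_max = eps_b, eps_n, age_max

    def adapt(self, x):
        # 1. Find the nearest (s1) and second-nearest (s2) units to the input x.
        d = [np.linalg.norm(x - w) for w in self.w]
        s1, s2 = np.argsort(d)[:2]

        # 2. Accumulate the squared distance as the local error of the winner.
        self.error[s1] += d[s1] ** 2

        # 3. Move the winner and its topological neighbours towards x,
        #    ageing the edges incident to the winner.
        self.w[s1] += self.eps_b * (x - self.w[s1])
        for e in list(self.edges):
            if s1 in e:
                n = (e - {s1}).pop()
                self.w[n] += self.eps_n * (x - self.w[n])
                self.edges[e] += 1

        # 4. Connect s1 and s2 (or reset their edge age) and prune old edges.
        self.edges[frozenset({s1, s2})] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.age_max}
```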

Keywords

Input Space · Previous Frame · Reference Vector · Hand Gesture Recognition · Visual Surveillance



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • J. García-Rodríguez (1)
  • A. Angelopoulou (2)
  • F. J. Mora-Gimeno (1)
  • A. Psarrou (2)
  1. Dept. of Computer Technology, University of Alicante, Alicante, Spain
  2. Dept. of Computer Science and Software Engineering, University of Westminster, London, UK
