
Modeling Crowd Flow for Video Analysis of Crowded Scenes

  • Ko Nishino
  • Louis Kratz
Chapter
Part of The International Series in Video Computing book series (VICO, volume 11)

Abstract

In this chapter, we describe a comprehensive framework for modeling and exploiting the crowd flow to analyze videos of densely crowded scenes. Our key insight is to model the characteristic patterns of motion that arise within local space-time regions of the video and then to identify and encode the statistical and temporal variation of those motion patterns to characterize the latent, collective movements of the people in the scene. We show that this statistical crowd flow model can be used to achieve critical analysis tasks for surveillance videos of extremely crowded scenes such as unusual event detection and pedestrian tracking. These results demonstrate the effectiveness of crowd flow modeling in video analysis and point to its use in related fields including simulation and behavioral analysis of people in dense crowds.
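To make the idea of local motion-pattern modeling concrete, the following Python sketch pools dense optical flow into local space-time cuboids and summarizes each one with a magnitude-weighted orientation histogram. This is an illustrative sketch only, not the authors' implementation: the cell size, temporal window length, bin count, and the use of OpenCV's Farneback optical flow are all assumptions made here for clarity. The statistical and temporal variation of such local descriptors is what the chapter's framework (e.g., via hidden Markov models) would then capture.

# Illustrative sketch (not the chapter's implementation): per-cuboid
# motion-pattern descriptors obtained by pooling dense optical flow
# over local space-time regions. Requires OpenCV and NumPy.
import cv2
import numpy as np

CELL = 20     # spatial cell size in pixels (assumed)
WINDOW = 10   # temporal extent of each cuboid in frames (assumed)
BINS = 8      # orientation bins for the per-cuboid flow histogram (assumed)

def cuboid_descriptors(video_path):
    """Yield, for each temporal window, a (rows, cols, BINS) array of
    magnitude-weighted flow-orientation histograms, one per spatial cell."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rows, cols = prev.shape[0] // CELL, prev.shape[1] // CELL
    hist = np.zeros((rows, cols, BINS), dtype=np.float64)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback).
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        prev = gray
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        bins = (ang / (2 * np.pi) * BINS).astype(int) % BINS
        # Accumulate a magnitude-weighted orientation histogram per cell.
        for r in range(rows):
            for c in range(cols):
                m = mag[r*CELL:(r+1)*CELL, c*CELL:(c+1)*CELL]
                b = bins[r*CELL:(r+1)*CELL, c*CELL:(c+1)*CELL]
                hist[r, c] += np.bincount(b.ravel(), weights=m.ravel(),
                                          minlength=BINS)
        count += 1
        if count == WINDOW:
            yield hist / max(hist.sum(), 1e-9)  # normalized descriptors
            hist = np.zeros_like(hist)
            count = 0
    cap.release()

In a full system along the lines described above, a temporal model such as a hidden Markov model would be trained per spatial location over the sequence of these descriptors, and deviations from the learned dynamics would flag unusual events or constrain a pedestrian tracker.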

Keywords

Hidden Markov Model · Optical Flow · Training Video · Crowded Scene · Query Video

Notes

Acknowledgements

This work was supported in part by National Science Foundation grants IIS-0746717 and IIS-0803670, and Nippon Telegraph and Telephone Corporation. The authors thank Nippon Telegraph and Telephone Corporation for providing the train station videos.


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Department of Computer Science, Drexel University, Philadelphia, USA
