
Multimedia Tools and Applications, Volume 67, Issue 1, pp 311–335

Fast moving object detection with non-stationary background

  • Jiman Kim
  • Xiaofei Wang
  • Hai Wang
  • Chunsheng Zhu
  • Daijin Kim

Abstract

The detection of moving objects under a freely moving camera is a difficult problem because the camera and object motions are mixed together and the objects are often detected as fragmented, separate components. To tackle this problem, we propose a fast moving object detection method based on optical flow clustering and Delaunay triangulation, as follows. First, we extract corner feature points using the Harris corner detector and compute optical flow vectors at the extracted feature points. Second, we cluster the optical flow vectors using the K-means clustering method and reject outlier feature points using the Random Sample Consensus (RANSAC) algorithm. Third, we classify each cluster as camera motion or object motion according to the scatteredness of its optical flow vectors. Fourth, we compensate for the camera motion using a multi-resolution block-based motion propagation method and detect the objects by background subtraction between the previous frame and the motion-compensated current frame. Finally, we merge the separately detected object components using Delaunay triangulation. Experimental results on the Carnegie Mellon University database show that the proposed moving object detection method outperforms other existing methods in terms of both detection accuracy and processing time.
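
The pipeline described above maps naturally onto standard OpenCV primitives. The sketch below is a minimal, hypothetical illustration of the first four steps, assuming Python with OpenCV: it substitutes a RANSAC-estimated homography warp for the paper's multi-resolution block-based motion propagation, uses the spatial variance of a cluster's feature points as a stand-in for the scatteredness measure, and omits the final Delaunay-triangulation merging step. The function name, thresholds, and cluster count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the abstract's pipeline (assumptions: homography warp instead of
# block-based motion propagation, spatial variance as the "scatteredness" criterion).
import cv2
import numpy as np

def detect_moving_regions(prev_bgr, curr_bgr, k_clusters=4, diff_thresh=25):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    h, w = prev_gray.shape

    # 1) Harris corner features and pyramidal Lucas-Kanade optical flow.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                   minDistance=7, useHarrisDetector=True, k=0.04)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None,
                                               winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    p0, p1 = pts0.reshape(-1, 2)[ok], pts1.reshape(-1, 2)[ok]
    flow = p1 - p0

    # 2) K-means clustering of the optical flow vectors.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.1)
    _, labels, _ = cv2.kmeans(flow.astype(np.float32), k_clusters, None,
                              criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
    labels = labels.ravel()

    # 3) Classify clusters: here the cluster whose feature points are spread most
    #    widely over the frame is assumed to carry the camera (background) motion.
    spread = [np.var(p0[labels == c], axis=0).sum() if np.any(labels == c) else 0.0
              for c in range(k_clusters)]
    bg = int(np.argmax(spread))
    bg0, bg1 = p0[labels == bg], p1[labels == bg]  # assumes >= 4 background points

    # 4) Camera-motion compensation: RANSAC rejects outlier background points while
    #    estimating a homography that aligns the current frame to the previous one.
    H, _ = cv2.findHomography(bg1, bg0, cv2.RANSAC, 3.0)
    warped_curr = cv2.warpPerspective(curr_gray, H, (w, h))

    # Background subtraction between the previous frame and the compensated frame.
    diff = cv2.absdiff(prev_gray, warped_curr)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```

The returned binary mask marks candidate moving pixels; in the full method, the separately detected components in such a mask would then be merged via Delaunay triangulation over the component locations.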

Keywords

Harris corner detection · K-means optical flow clustering · Scatteredness · Motion compensation · Delaunay triangulation

Notes

Acknowledgements

This work was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the Core Technology Development for Breakthrough of Robot Vision Research support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2010-C7000-1001-0006). This research was also supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (No. 2011-0027953).

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Jiman Kim (1)
  • Xiaofei Wang (2)
  • Hai Wang (1)
  • Chunsheng Zhu (3)
  • Daijin Kim (1)

  1. Computer Science and Engineering, Pohang University of Science and Technology, Pohang, South Korea
  2. School of Computer Science and Engineering, Seoul National University, Seoul, South Korea
  3. Department of Computer Science, St. Francis Xavier University, Antigonish, Canada
