Wireless Personal Communications, Volume 87, Issue 3, pp 629–643

Improved Gaussian Mixture Models for Adaptive Foreground Segmentation

  • Nikolaos Katsarakis
  • Aristodemos Pnevmatikakis
  • Zheng-Hua Tan
  • Ramjee Prasad


Abstract

Adaptive foreground segmentation is traditionally performed using Stauffer and Grimson's algorithm, which models every pixel of the frame by a mixture of Gaussian distributions with continuously adapted parameters. In this paper we enhance the baseline algorithm with two dynamic elements: the learning rate can change across space and time, and Gaussian distributions can be merged when the adaptation process renders them similar. We quantify the importance of our enhancements and the effect of parameter tuning using an annotated outdoor sequence.
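The two enhancements described above can be illustrated with a minimal single-pixel NumPy sketch of a Stauffer–Grimson-style mixture update. The class name `PixelGMM`, the matching rule (2.5 standard deviations), and all numeric thresholds are illustrative assumptions, not the authors' tuned parameters; in particular, the caller is free to pass a different `alpha` per pixel and per frame (the first enhancement), and `_merge_similar` pools components whose means drift close together (the second).

```python
import numpy as np

class PixelGMM:
    """Illustrative single-pixel Gaussian mixture (grayscale values).

    Sketch only: parameter values are assumptions, not the paper's
    tuned settings.
    """

    def __init__(self, k=3, init_var=30.0, merge_thresh=5.0):
        self.w = np.full(k, 1.0 / k)       # component weights
        self.mu = np.linspace(0, 255, k)   # component means
        self.var = np.full(k, init_var)    # component variances
        self.merge_thresh = merge_thresh   # mean-distance merge gate

    def update(self, x, alpha):
        """Update with pixel value x and learning rate alpha.

        alpha may vary per pixel and per frame, mirroring the paper's
        space- and time-varying learning rate.
        """
        d = np.abs(x - self.mu)
        match = d < 2.5 * np.sqrt(self.var)           # matched components
        self.w = (1 - alpha) * self.w + alpha * match
        if match.any():
            i = int(np.argmax(match * self.w))        # strongest match
            rho = min(1.0, alpha / max(self.w[i], 1e-6))
            self.mu[i] += rho * (x - self.mu[i])
            self.var[i] += rho * ((x - self.mu[i]) ** 2 - self.var[i])
            self.var[i] = max(self.var[i], 1.0)       # variance floor
        else:
            j = int(np.argmin(self.w))                # replace weakest
            self.mu[j], self.var[j], self.w[j] = x, 30.0, 0.05
        self.w /= self.w.sum()
        self._merge_similar()

    def _merge_similar(self):
        """Merge components whose means have drifted close together,
        pooling weights and weight-averaging means and variances."""
        order = np.argsort(-self.w)
        keep = []
        for i in order:
            if self.w[i] < 1e-8:
                continue
            for j in keep:
                if abs(self.mu[i] - self.mu[j]) < self.merge_thresh:
                    wt = self.w[i] + self.w[j]
                    self.mu[j] = (self.w[j] * self.mu[j]
                                  + self.w[i] * self.mu[i]) / wt
                    self.var[j] = (self.w[j] * self.var[j]
                                   + self.w[i] * self.var[i]) / wt
                    self.w[j], self.w[i] = wt, 0.0
                    break
            else:
                keep.append(i)
```

Feeding a stable background value repeatedly makes one component dominate at that value, so a new foreground value would fail the match test and be flagged; OpenCV's `BackgroundSubtractorMOG2` offers a production-grade variant of this per-pixel mixture idea.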


Keywords: Adaptive foreground segmentation · Adaptive background mixture models · Gaussian mixture models · Background subtraction


References

  1. Bradski, G. (2000). The OpenCV library. Dr. Dobb's Journal of Software Tools, 25(11), 120, 122–125.
  2. Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer vision with the OpenCV library. Sebastopol: O'Reilly Media, Inc.
  3. Chen, Z., & Ellis, T. (2014). A self-adaptive Gaussian mixture model. Computer Vision and Image Understanding, 122, 35–46. doi:10.1016/j.cviu.2014.01.004.
  4. Friedman, N., & Russell, S. (1997). Image segmentation in video sequences: A probabilistic approach. In Proceedings of the thirteenth conference on uncertainty in artificial intelligence (UAI'97) (pp. 175–181). San Francisco, CA: Morgan Kaufmann Publishers Inc.
  5. Pnevmatikakis, A., & Polymenakos, L. (2006). Robust estimation of background for fixed cameras. In 15th international conference on computing (CIC '06) (pp. 37–42). Mexico City, Mexico.
  6. Powers, D. M. W. (2011). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. International Journal of Machine Learning Technology, 2(1), 37–63.
  7. Stauffer, C., & Grimson, W. E. L. (2000). Learning patterns of activity using real-time tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8), 747–757.
  8. Vacavant, A., Chateau, T., Wilhelm, A., & Lequivre, L. (2013). A benchmark dataset for outdoor foreground/background extraction. In J. I. Park & J. Kim (Eds.), Computer vision—ACCV 2012 workshops, Lecture notes in computer science (Vol. 7728, pp. 291–300). Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-37410-4_25.
  9. Xu, L., Landabaso, J., & Pardas, M. (2005). Shadow removal with blob-based morphological reconstruction for error correction. In IEEE international conference on acoustics, speech, and signal processing (ICASSP 2005). Philadelphia, PA.
  10. Zivkovic, Z., & van der Heijden, F. (2006). Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recognition Letters, 27(7), 773–780. doi:10.1016/j.patrec.2005.11.005.

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Nikolaos Katsarakis (1)
  • Aristodemos Pnevmatikakis (2)
  • Zheng-Hua Tan (1)
  • Ramjee Prasad (1)

  1. Center for TeleInFrastruktur, Aalborg University, Aalborg, Denmark
  2. Athens Information Technology, Peania, Athens, Greece
