
Background subtraction via incremental maximum margin criterion: a discriminative subspace approach

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

Background subtraction is one of the basic low-level operations in video analysis. The aim is to separate the static information, called "background", from the moving objects, called "foreground". The background must be modeled and updated over time to allow robust foreground detection. Recently, reconstructive subspace learning models such as principal component analysis (PCA) have been used to model the background while significantly reducing the data's dimension. This approach assumes that the main information in the training sequence is the background, i.e., that the foreground contributes little; the assumption holds only when the moving objects are small or far from the camera. Furthermore, reconstructive representations strive to be as informative as possible in terms of approximating the original data well. Their objective is mainly to capture the variability of the training data, so they devote more effort to modeling the background in an unsupervised manner than to precisely classifying pixels as foreground or background during foreground detection. Discriminative methods, on the other hand, are usually less suited to reconstructing the data, but they are spatially and computationally much more efficient and often give better classification results than reconstructive methods. Based on this observation, we propose a discriminative subspace learning model called incremental maximum margin criterion (IMMC). The objective is, first, a robust supervised initialization of the background and, second, a robust classification of pixels as background or foreground. IMMC also allows an incremental update of the eigenvectors and eigenvalues. Experimental results on several datasets demonstrate the performance of the proposed approach in the presence of illumination changes.
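To make the discriminative criterion concrete, the following is a minimal batch sketch of the maximum margin criterion (MMC) that underlies IMMC: the projection directions are the top eigenvectors of the between-class minus within-class scatter, S_b − S_w. This is only the batch criterion; the paper's contribution is an incremental eigen-update, which is not reproduced here. The function name, the toy two-class data, and all sizes below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mmc_projection(X, y, k=1):
    """Batch MMC sketch: return the k directions maximizing
    tr(W^T (S_b - S_w) W), i.e. the top eigenvectors of S_b - S_w."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        p = len(Xc) / len(X)                  # class prior
        diff = Xc.mean(axis=0) - mean
        Sb += p * np.outer(diff, diff)        # between-class scatter
        Sw += p * np.cov(Xc.T, bias=True)     # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)      # symmetric eigendecomposition
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    return vecs[:, order]

# Toy usage: two pixel "classes" (background vs. foreground features in R^2)
# separated along the first axis; MMC should recover that axis.
rng = np.random.default_rng(0)
bg = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
fg = rng.normal([5.0, 0.0], 0.5, size=(200, 2))
X = np.vstack([bg, fg])
y = np.array([0] * 200 + [1] * 200)
w = mmc_projection(X, y, k=1)[:, 0]
print(abs(w[0]))   # close to 1: the discriminative direction is axis 0
```

Unlike Fisher's LDA, the MMC objective needs no inversion of S_w, which is what makes an incremental eigen-update of a single symmetric matrix tractable.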




Author information

Correspondence to Thierry Bouwmans.


Cite this article

Farcas, D., Marghes, C. & Bouwmans, T. Background subtraction via incremental maximum margin criterion: a discriminative subspace approach. Machine Vision and Applications 23, 1083–1101 (2012). https://doi.org/10.1007/s00138-012-0421-9

