Machine Vision and Applications

Volume 25, Issue 5, pp 1101–1103

Special issue on background modeling for foreground detection in real-world dynamic scenes

  • Thierry Bouwmans
  • Jordi Gonzàlez
  • Caifeng Shan
  • Massimo Piccardi
  • Larry Davis

Although background modeling and foreground detection are not mandatory steps for computer vision applications, they are often useful: they separate the objects of primary interest, usually called the "foreground", from the remaining part of the scene, called the "background", and thus permit different algorithmic treatment of the two in video processing applications such as video surveillance, optical motion capture, multimedia, teleconferencing, and human–computer interfaces. Conventional background modeling methods exploit the temporal variation of each pixel to model the background, and foreground detection is then performed by change detection. The last decade witnessed a large number of publications on background modeling, but new applications in which the background is not static, such as recordings taken from mobile devices or Internet videos, require new developments to detect moving objects robustly in challenging environments. Effective methods are therefore needed that are robust to both dynamic backgrounds and illumination changes in real scenes, with fixed cameras or mobile devices; different strategies may be used to this end, such as automatic feature selection, model selection, or hierarchical models. A further requirement is that even advanced background models must run in real time with low memory consumption, so algorithms may need to be redesigned to meet these constraints. In this special issue, the reader will find (1) new methods to model the background, (2) recent strategies to improve foreground detection in the presence of challenges such as dynamic backgrounds and illumination changes, and (3) adaptive and incremental algorithms that achieve real-time performance.
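The conventional per-pixel scheme described above can be illustrated with a minimal sketch (a running-average background model with thresholded change detection; this is a generic baseline, not any of the methods in this issue, and the parameter names are illustrative):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exploit each pixel's temporal variation: blend the new frame
    into the background model with learning rate alpha."""
    return (1.0 - alpha) * bg + alpha * frame

def detect_foreground(bg, frame, threshold=25.0):
    """Change detection: pixels that deviate strongly from the
    background model are labeled foreground."""
    return np.abs(frame - bg) > threshold

# Toy example: a flat background of intensity 100 and a single
# "moving object" pixel appearing in the current frame.
bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[1, 2] = 200.0          # object appears at pixel (1, 2)
mask = detect_foreground(bg, frame)
bg = update_background(bg, frame)
```

Real systems replace the single running average with richer per-pixel statistics (e.g., the mixture of Gaussians [12]) precisely because a single mode cannot absorb dynamic backgrounds or illumination changes.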

First, Shah et al. [10] adopt the mixture of Gaussians (MOG) [12] as the basic framework of their complete system. A new online, self-adaptive method permits automatic selection of the GMM parameters. Second, they introduce several new solutions to key challenges such as sudden illumination changes and ghosts: a novel hierarchical SURF feature matching algorithm suppresses ghosts in the foreground mask, and a voting-based scheme exploits spatial and temporal information to refine the mask. Finally, the temporal and spatial history of foreground blobs is used to detect and handle paused objects. The proposed model shows significant robustness in the presence of illumination changes and ghosts.

Shimada et al. [11] propose a novel framework for the GMM that reduces memory requirements without loss of accuracy. This "case-based background modeling" creates or removes a background model only when necessary. Furthermore, a case-by-case model is shared by some of the pixels. Finally, pixel features are divided into two groups, one for model selection and the other for modeling. The complete approach yields a low-cost, highly accurate background model: memory usage and computational cost are reduced to about half those of the traditional GMM, with better accuracy.

Alvar et al. [1] present an algorithm called the mixture of merged Gaussian algorithm (MMGA), which drastically reduces execution time to reach a real-time implementation without sacrificing reliability or accuracy. The algorithm combines the probabilistic model of the MOG [12] with the learning process of the real-time dynamic ellipsoidal neural network (RTDENN) model. Results show that the MMGA achieves a very significant reduction in execution time compared to the MOG, with a higher degree of robustness against noise and illumination changes.

Modeling the background with a Gaussian mixture rests on the assumption that the background and foreground distributions are Gaussian, which is not the case in many environments. Furthermore, such a model is unable to distinguish between moving shadows and moving objects. In this context, Elguebaly and Bouguila [4] propose a mixture of asymmetric Gaussians to enhance the robustness and flexibility of mixture modeling, together with a shadow detection scheme to remove unwanted shadows from the scene.
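An asymmetric Gaussian relaxes the symmetry assumption by allowing different standard deviations on each side of the mode. The density below uses one common parameterization of this idea (the exact form and estimation procedure in [4] may differ; this is only an illustrative sketch):

```python
import math

def asymmetric_gaussian_pdf(x, mu, sigma_l, sigma_r):
    """Asymmetric Gaussian: left spread sigma_l below the mode mu,
    right spread sigma_r above it. The normalization constant
    sqrt(2/pi) / (sigma_l + sigma_r) makes the density integrate to 1."""
    sigma = sigma_l if x < mu else sigma_r
    norm = math.sqrt(2.0 / math.pi) / (sigma_l + sigma_r)
    return norm * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# With sigma_l == sigma_r the density reduces to the ordinary Gaussian;
# with sigma_l > sigma_r, mass is skewed toward values below the mode.
p_symmetric = asymmetric_gaussian_pdf(0.0, 0.0, 1.0, 1.0)
p_left = asymmetric_gaussian_pdf(-1.0, 0.0, 2.0, 1.0)
p_right = asymmetric_gaussian_pdf(1.0, 0.0, 2.0, 1.0)
```

A mixture of such components can fit the skewed pixel-intensity histograms that a symmetric Gaussian mixture models poorly.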

Narayana and Learned-Miller [8] use Bayes' rule to classify pixels as background or foreground. Their approach combines a background likelihood, a foreground likelihood, and a prior at each pixel. The likelihood model is built not only from the past observations at a given pixel location but also from observations in a spatial neighborhood around that location, which allows them to model the influence between neighboring pixels. Although similar in spirit to the joint domain-range model, their model overcomes certain deficiencies of that model.
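The per-pixel Bayesian decision itself is compact: given the two likelihoods and a foreground prior at a pixel, Bayes' rule yields the posterior foreground probability. This sketch shows only that generic combination step, not the neighborhood-based likelihood construction of [8]:

```python
def posterior_foreground(fg_likelihood, bg_likelihood, prior_fg):
    """Bayes' rule at one pixel:
    P(fg | x) = p(x | fg) P(fg) / (p(x | fg) P(fg) + p(x | bg) P(bg))."""
    num = fg_likelihood * prior_fg
    den = num + bg_likelihood * (1.0 - prior_fg)
    return num / den

# Example: the observation is much better explained by the foreground
# model (0.8 vs. 0.1), but the prior favors background (P(fg) = 0.2).
p = posterior_foreground(0.8, 0.1, 0.2)
label = "foreground" if p > 0.5 else "background"
```

In [8] the likelihoods would come from kernel estimates over past observations at the pixel and its spatial neighborhood; here they are simply given numbers.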

Hernandez-Lopez and Rivera [7] adopt a change detection method designed for real-time performance. The approach implements a probabilistic segmentation based on the quadratic Markov measure fields model. This framework regularizes the likelihood of each pixel belonging to each of the classes, background or foreground; the likelihood accounts for two cases. In the first, the background is static and the foreground may be static or moving; in the second, the background is unstable and the foreground is moving. Moreover, this likelihood is robust to illumination changes, cast shadows, and camouflage situations. Finally, the algorithm was implemented in CUDA on an NVIDIA graphics processing unit (GPU) to fulfill the real-time execution requirement.

Camplani et al. [3] develop a Bayesian framework that is able to accurately segment foreground objects in RGB-D imagery. In particular, the final segmentation is obtained by considering a prediction of the foreground regions, carried out by a novel Bayesian network with a depth-based dynamic model, and by considering two independent depth and color-based GMM background models. As a result, more compact segmentations and refined foreground object silhouettes are obtained.

In a related direction, Fernandez-Sanchez et al. [5] propose a depth-extended codebook model that fuses range and color information, together with a post-processing mask fusion stage to get the best of each feature. Results are presented on a complete dataset of stereo images.

Seidel et al. [9] adopt a robust PCA (RPCA) model to separate sparse foreground objects from the background. While many RPCA algorithms use the \(l_1\)-norm as a convex relaxation, their approach uses a smoothed \(l_p\)-quasi-norm for robust online subspace tracking. The algorithm is based on alternating minimization on manifolds. The implementation on a GPU achieves real-time performance at a resolution of \(160 \times 120\). Experimental results show that the method succeeds under a variety of challenges such as camera jitter and dynamic backgrounds.
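The RPCA decomposition idea, in its common \(l_1\)-relaxed form, can be sketched with a naive alternating scheme: shrink singular values to obtain a low-rank background estimate, then shrink entries of the residual to obtain a sparse foreground estimate. This is a toy illustration of the generic \(l_1\) relaxation, not the smoothed \(l_p\) manifold method of [9], and the thresholds are arbitrary:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: elementwise shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svd_threshold(m, tau):
    """Singular value shrinkage: proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(soft_threshold(s, tau)) @ vt

def rpca_l1(d, tau_l=2.0, tau_s=1.0, iters=50):
    """Naive alternating scheme for D ~ L + S with L low-rank
    (background) and S sparse (foreground)."""
    s = np.zeros_like(d)
    for _ in range(iters):
        l = svd_threshold(d - s, tau_l)      # low-rank background step
        s = soft_threshold(d - l, tau_s)     # sparse foreground step
    return l, s

# Toy data: a rank-1 "static background" matrix plus one bright
# sparse "object" entry at position (2, 2).
background = np.outer(np.ones(5), np.arange(1.0, 6.0))
frames = background.copy()
frames[2, 2] += 10.0
low_rank, sparse = rpca_l1(frames)
```

Production RPCA solvers (e.g., principal component pursuit via ADMM) use principled penalty weights and convergence criteria; the point here is only the low-rank/sparse split that underlies background/foreground separation.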

Hagege [6] describes a scene appearance model as a function of the behavior of static illumination sources, within or beyond the scene, and arbitrary three-dimensional configurations of patches and their reflectance distributions. A spatial prediction technique was then developed to predict the appearance of the scene, given a few measurements within it. The scene appearance model and the prediction technique were developed analytically and tested empirically. Results show that this model permits detecting, at the resolution of single pixels, changes that are not the result of illumination changes, despite sudden and complex illumination changes, and does so independently of the texture of the region in the neighborhood of the pixel.

The maritime environment represents a challenging application domain due to the complexity of the observed scene (waves on the water surface, boat wakes, weather issues). In this context, Bloisi et al. [2] present a method for creating a discretization of an unknown distribution that can model highly dynamic backgrounds such as water under varying light and weather conditions. A quantitative evaluation carried out on the recent MAR datasets demonstrates the effectiveness of this approach.

Zeng et al. [13] propose an effective mosaic algorithm that combines SIFT and dynamic programming; image mosaicking is a useful preprocessing step for background subtraction in videos recorded by a moving camera. To deal with ghosting and mosaic failures, the algorithm uses an improved optimal seam searching criterion that protects moving objects via an edge-enhanced weighted intensity difference operator, and it further addresses the ghosting and incompleteness artifacts induced by moving objects. Experimental results show its effectiveness in the presence of large exposure differences and large parallax between adjacent images.



We thank all the reviewers for their valuable comments, which ensured the high quality of this special issue, and all the contributing authors for their interesting and innovative work. We would also like to thank the Editor-in-Chief, Prof. Mubarak Shah, for sharing our vision and providing guidance. The editorial staff of MVA, especially Cherry Place and Shradha Menon, have been extremely supportive, helpful, and patient throughout the entire process.


  1. Alvar, M., Rodriguez-Calvo, A., Sanchez-Miralles, A., Arranz, A.: Mixture of merged Gaussian algorithm using RTDENN. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0550-9
  2. Bloisi, D., Pennisi, A., Iocchi, L.: Background modelling in the maritime domain. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0554-5
  3. Camplani, M., Del Blanco, C., Salgado, L., García, N., Jaureguizar, F.: Advanced background modeling with RGB-D sensors through classifiers combination and inter-frame foreground prediction. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0557-2
  4. Elguebaly, T., Bouguila, N.: Background subtraction using finite mixtures of asymmetric Gaussian distributions and shadow detection. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0568-7
  5. Fernandez-Sanchez, E., Diaz, J., Ros, E.: Background subtraction model based on color and depth cues. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0562-5
  6. Hagege, R.: Scene appearance model based on spatial prediction. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0565-2
  7. Hernandez-Lopez, F., Rivera, M.: Change detection by probabilistic segmentation from monocular view. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0564-3
  8. Narayana, M., Learned-Miller, E.: Background subtraction—separating the modeling and the inference. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0569-y
  9. Seidel, F., Hage, C., Kleinsteuber, M.: pROST: a smoothed lp-norm robust online subspace tracking method for background subtraction in video. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0555-4
  10. Shah, M., Deng, J.D., Woodford, B.J.: Video background modeling: recent approaches, issues and our proposed techniques. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0552-7
  11. Shimada, A., Nonaka, Y., Nagahara, H., Taniguchi, R.: Case-based background modelling: associative background database towards low-cost and high-performance change detection. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0563-4
  12. Stauffer, C., Grimson, E.: Adaptive background mixture models for real-time tracking. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 246–252 (1999)
  13. Zeng, L., Zhang, S., Zhang, Y.: Dynamic image mosaic via SIFT and dynamic programming. Mach. Vis. Appl. (2013). doi:10.1007/s00138-013-0551-8

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Thierry Bouwmans (1)
  • Jordi Gonzàlez (2)
  • Caifeng Shan (3)
  • Massimo Piccardi (4)
  • Larry Davis (5)

  1. Laboratory of MIA, University of La Rochelle, La Rochelle, France
  2. Computer Vision Center, Universitat Autònoma de Barcelona, Barcelona, Spain
  3. Philips Research, Eindhoven, The Netherlands
  4. University of Technology, Sydney, Australia
  5. CV Lab, University of Maryland, College Park, USA
