Exploiting Color and Depth for Background Subtraction

  • Lucia Maddalena
  • Alfredo Petrosino
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10590)


Background subtraction from color and depth data is a fundamental task for indoor video surveillance applications that use data acquired by RGBD sensors. This paper proposes a method based on two background models, one for color and one for depth information, exploiting a self-organizing neural background model previously adopted for RGB videos. The resulting color and depth detection masks are combined, not only to produce the final results, but also to better guide the selective model update procedure. Experimental evaluation on the SBM-RGBD dataset shows that exploiting depth information achieves much higher performance than using color alone, while accurately handling color and depth background maintenance challenges.
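The pipeline the abstract describes — two per-modality background models, a fused detection mask, and a selective update gated by that mask — can be sketched as follows. This is a minimal illustration, not the paper's method: the authors use a self-organizing neural (SOBS-style) background model and their own fusion rule, whereas here a simple per-pixel distance threshold stands in for the model and a logical OR stands in for the fusion; all function names and parameters are hypothetical.

```python
import numpy as np

def subtract_background(frame, bg_model, threshold):
    """Per-pixel subtraction: a pixel is foreground when its distance
    from the background model exceeds a threshold. (A simple stand-in
    for the paper's self-organizing neural background model.)"""
    dist = np.abs(frame.astype(np.float32) - bg_model)
    if dist.ndim == 3:                       # color frame: reduce over channels
        dist = dist.max(axis=2)
    return dist > threshold                  # boolean foreground mask

def combine_masks(color_mask, depth_mask):
    """Hypothetical fusion rule: foreground if either cue fires
    (logical OR). The paper's actual combination may differ."""
    return color_mask | depth_mask

def selective_update(bg_model, frame, fg_mask, alpha=0.05):
    """Selective maintenance: blend the new frame into the model only
    at pixels the combined mask labels as background, so foreground
    objects are not absorbed into the model."""
    bg = ~fg_mask
    if bg_model.ndim == 3:                   # broadcast mask over color channels
        bg = bg[..., None]
    return np.where(bg, (1 - alpha) * bg_model + alpha * frame, bg_model)
```

Gating the update with the *combined* mask is the point the abstract stresses: a pixel detected by either cue is excluded from maintenance, so the color model is protected where only depth sees the object, and vice versa.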


Keywords: Background subtraction · Color and depth data · RGBD



L. Maddalena wishes to acknowledge the GNCS (Gruppo Nazionale di Calcolo Scientifico) and the INTEROMICS Flagship Project funded by MIUR, Italy. A. Petrosino acknowledges Project VIRTUALOG Horizon 2020-PON 2014/2020.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. National Research Council, Naples, Italy
  2. University of Naples Parthenope, Naples, Italy
