A Benchmarking Framework for Background Subtraction in RGBD Videos

  • Massimo Camplani
  • Lucia Maddalena
  • Gabriel Moyá Alcover
  • Alfredo Petrosino
  • Luis Salgado
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10590)


The complementary nature of the synchronized color and depth information acquired by low-cost RGBD sensors poses new challenges and design opportunities in several applications and research areas. Here, we focus on background subtraction for moving object detection, a building block for many computer vision applications and the first relevant step for subsequent recognition, classification, and activity analysis tasks. The aim of this paper is to describe a novel benchmarking framework that we set up and made publicly available in order to evaluate and compare scene background modeling methods for moving object detection on RGBD videos. The proposed framework involves the largest RGBD video dataset ever made for this specific purpose. The 33 videos span seven categories, selected to include diverse scene background modeling challenges for moving object detection. Seven evaluation metrics, chosen among the most widely used, are adopted to evaluate the results against a wide set of pixel-wise ground truths. Moreover, we present a preliminary analysis of results, assessing the extent to which the various background modeling challenges affect background subtraction methods exploiting color and depth information.
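The abstract does not enumerate the seven metrics here; as a hedged illustration, the sketch below computes the pixel-wise metrics commonly used in change detection benchmarks (recall, specificity, false positive rate, false negative rate, percentage of wrong classifications, precision, and F-measure), all derived from per-pixel true/false positive and negative counts between a detected binary foreground mask and a ground-truth mask. The function name and toy masks are illustrative, not taken from the paper's framework.

```python
# Hedged sketch: pixel-wise evaluation of a binary foreground mask against a
# ground-truth mask, using metrics common in change detection benchmarks.
import numpy as np

def evaluate_mask(pred, gt):
    """pred, gt: boolean arrays of equal shape (True = foreground).
    Returns a dict of pixel-wise evaluation metrics."""
    tp = int(np.sum(pred & gt))     # foreground correctly detected
    fp = int(np.sum(pred & ~gt))    # background wrongly marked foreground
    fn = int(np.sum(~pred & gt))    # foreground missed
    tn = int(np.sum(~pred & ~gt))   # background correctly left out
    recall      = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    fpr         = fp / (fp + tn) if fp + tn else 0.0
    fnr         = fn / (fn + tp) if fn + tp else 0.0
    pwc         = 100.0 * (fp + fn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp) if tp + fp else 0.0
    f_measure   = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return {"Recall": recall, "Specificity": specificity, "FPR": fpr,
            "FNR": fnr, "PWC": pwc, "Precision": precision,
            "F-Measure": f_measure}

# Toy 4x4 example: ground truth has a 2x2 foreground square; the detector
# finds all of it plus a 2x1 strip of false positives.
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
m = evaluate_mask(pred, gt)  # e.g. Recall = 1.0, Precision = 4/6, F-Measure = 0.8
```

In a full benchmark run these per-frame counts would typically be accumulated over all evaluated frames of a video (and averaged per category) before computing the ratios, rather than averaging per-frame metrics directly.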


Keywords: Background subtraction · Color and depth data · RGBD



Acknowledgments

We would like to thank all the authors who submitted their results to the SBM-RGBD Challenge, which will serve as a reference for future generation methods. L. Maddalena wishes to acknowledge the GNCS (Gruppo Nazionale di Calcolo Scientifico) and the INTEROMICS Flagship Project funded by MIUR, Italy. A. Petrosino wishes to acknowledge Project VIRTUALOG Horizon 2020-PON 2014/2020. L. Salgado wishes to acknowledge projects TEC2013-48453 (MR-UHDTV) and TEC2016-75981 (IVME) funded by the Ministerio de Economía, Industria y Competitividad (AEI/FEDER) of the Spanish Government.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Massimo Camplani (1)
  • Lucia Maddalena (2, corresponding author)
  • Gabriel Moyá Alcover (3)
  • Alfredo Petrosino (4)
  • Luis Salgado (5, 6)

  1. University of Bristol, Bristol, UK
  2. National Research Council, Naples, Italy
  3. Universitat de les Illes Balears, Palma, Spain
  4. University of Naples Parthenope, Naples, Italy
  5. Universidad Politécnica de Madrid, Madrid, Spain
  6. Universidad Autónoma de Madrid, Madrid, Spain
