
Machine Vision and Applications, Volume 25, Issue 5, pp 1197–1210

Advanced background modeling with RGB-D sensors through classifiers combination and inter-frame foreground prediction

  • Massimo Camplani
  • Carlos Roberto del Blanco
  • Luis Salgado
  • Fernando Jaureguizar
  • Narciso García
Special Issue Paper

Abstract

This paper presents a novel background modeling technique that accurately segments foreground regions in RGB-D imagery (RGB plus depth). The technique is based on a Bayesian framework that efficiently fuses different sources of information to segment the foreground. In particular, the final segmentation is obtained by combining a prediction of the foreground regions, carried out by a novel Bayesian Network with a depth-based dynamic model, with two independent depth- and color-based mixture-of-Gaussians background models. The efficient Bayesian combination of all these data reduces the noise and uncertainty introduced by the color and depth features and their corresponding models. As a result, more compact segmentations and more refined foreground object silhouettes are obtained. Experimental results on different databases suggest that the proposed technique outperforms existing state-of-the-art algorithms.
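The core idea of fusing independent per-pixel cues can be illustrated with a minimal sketch. The snippet below is not the authors' exact formulation; it assumes a naive-Bayes combination in which the color-based model, the depth-based model, and the inter-frame foreground prediction each supply a per-pixel foreground probability, and the posterior is obtained by multiplying their likelihood-ratio odds with a prior. The function name and the example probability maps are illustrative.

```python
import numpy as np

def fuse_foreground_probabilities(p_color, p_depth, p_pred, prior=0.5):
    """Naive-Bayes-style fusion of per-pixel foreground probabilities.

    p_color, p_depth : probabilities from independent color- and
                       depth-based mixture-of-Gaussians background models
    p_pred           : probability from an inter-frame foreground prediction
    prior            : prior probability that a pixel belongs to the foreground
    """
    eps = 1e-6  # avoid division by zero at probabilities 0 and 1

    def odds(p):
        p = np.clip(p, eps, 1.0 - eps)
        return p / (1.0 - p)

    # Under a conditional-independence assumption, the posterior odds are
    # the product of the per-cue likelihood-ratio odds and the prior odds.
    posterior_odds = odds(p_color) * odds(p_depth) * odds(p_pred) * (prior / (1.0 - prior))
    return posterior_odds / (1.0 + posterior_odds)

# Example: three cues over a 2x2 "image"; the fused posterior is
# thresholded at 0.5 to obtain the final foreground mask.
p_c = np.array([[0.9, 0.2], [0.5, 0.1]])
p_d = np.array([[0.8, 0.3], [0.5, 0.2]])
p_p = np.array([[0.7, 0.4], [0.5, 0.3]])
mask = fuse_foreground_probabilities(p_c, p_d, p_p) > 0.5
```

Because each cue enters only through its odds ratio, a confident cue (probability near 0 or 1) dominates the product, while uncertain cues (near 0.5) contribute little, which is one reason such combinations suppress the noise of any single model.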

Keywords

Background modeling · Foreground prediction · Mixture of Gaussians · RGB-D cameras · Microsoft Kinect · Classifier combination

Notes

Acknowledgments

This work has been partially supported by the Ministerio de Economía y Competitividad of the Spanish Government under the project TEC2010-20412 (Enhanced 3DTV). M. Camplani would like to acknowledge the European Union and the Universidad Politécnica de Madrid (UPM) for supporting his activities through the Marie Curie-Cofund research grant.


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Massimo Camplani (1)
  • Carlos Roberto del Blanco (1)
  • Luis Salgado (1, 2)
  • Fernando Jaureguizar (1)
  • Narciso García (1)
  1. Grupo de Tratamiento de Imágenes, E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, Madrid, Spain
  2. Video Processing and Understanding Laboratory, Universidad Autónoma de Madrid, Madrid, Spain
