Fast and accurate extraction of moving object silhouette for personalized Virtual Reality Studio @ Home

Original Research Paper

Abstract

Accurate segmentation of moving objects in real-time video is essential for silhouette extraction in vision-based interactive systems. The inherent difficulty of moving object segmentation based on background subtraction, however, is distinguishing genuine object changes from disturbing background effects such as noise, shadows and illumination changes. This paper proposes a hybrid background subtraction method that preserves the boundary of the moving object and is robust against noise and illumination changes. In the proposed method, object regions are identified by fusing the results of a background difference criterion with a motion-based change detection criterion. Shadows and highlights are detected using the normalized luminance together with the background difference in the Hue and Saturation components. The paper also introduces a novel connected component analysis procedure for separating the object blob from noise blobs, and a robust pixel-based background update scheme for adapting to dynamic changes in the background. The computational complexity of the proposed algorithm is analyzed, and the method is evaluated with respect to segmentation quality and frame rate. The method is shown to extract the moving object silhouette successfully while remaining robust against the disturbing effects, and has been tested on the VR@Home platform.
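The following is a minimal sketch of the pipeline described above, written with OpenCV and NumPy. All thresholds (T_BG, T_MOTION, T_H, T_S), the shadow luminance band SHADOW_V, the learning rate ALPHA, and the function name segment are illustrative assumptions rather than the paper's actual values or interface, and the paper's novel connected component analysis procedure is replaced here by a simple largest-blob heuristic.

import cv2
import numpy as np

T_BG, T_MOTION = 30, 15      # background / inter-frame difference thresholds (assumed)
ALPHA = 0.05                 # per-pixel background learning rate (assumed)
SHADOW_V = (0.5, 0.95)       # normalized-luminance band treated as shadow (assumed)
T_H, T_S = 10, 25            # Hue/Saturation similarity bounds for shadows (assumed)

def segment(frame, prev_frame, bg):
    """Return a foreground mask and the updated background model (all BGR uint8)."""
    # 1. Fuse background difference with motion-based change detection.
    diff_bg = cv2.absdiff(frame, bg).max(axis=2)
    diff_mo = cv2.absdiff(frame, prev_frame).max(axis=2)
    changed = (diff_bg > T_BG) | (diff_mo > T_MOTION)

    # 2. Shadow/highlight suppression: pixels whose luminance drops into the
    #    shadow band but whose Hue and Saturation stay close to the background
    #    are relabelled as background.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.int16)
    hsv_bg = cv2.cvtColor(bg, cv2.COLOR_BGR2HSV).astype(np.int16)
    ratio = hsv[..., 2].astype(np.float32) / (hsv_bg[..., 2] + 1.0)
    shadow = ((ratio > SHADOW_V[0]) & (ratio < SHADOW_V[1])
              & (np.abs(hsv[..., 0] - hsv_bg[..., 0]) < T_H)
              & (np.abs(hsv[..., 1] - hsv_bg[..., 1]) < T_S))
    fg = changed & ~shadow

    # 3. Connected component analysis: keep the largest blob as the object
    #    and discard the remaining (noise) blobs.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg.astype(np.uint8))
    if n > 1:
        fg = labels == (1 + stats[1:, cv2.CC_STAT_AREA].argmax())

    # 4. Pixel-based background update: adapt only where no object was found,
    #    so the model follows gradual illumination changes.
    upd = ~fg
    bg[upd] = ((1 - ALPHA) * bg[upd] + ALPHA * frame[upd]).astype(bg.dtype)
    return fg, bg

A caller would initialize bg from an object-free frame and feed consecutive frames through segment; updating the background only at non-object pixels prevents a slowly moving silhouette from being absorbed into the model.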

Keywords

Object silhouette · Background subtraction · Interactive systems · Connected component analysis · Change detection · Shadow elimination


Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  1. Imaging Informatics Group, Bioinformatics Institute, Singapore, Singapore
  2. U-VR Lab, GIST, Kwangju, South Korea
