Hybrid Silhouette Extraction Method for Detecting and Tracking the Human Motion

  • Moon Hwan Kim
  • Jin Bae Park
  • In Ho Ra
  • Young Hoon Joo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4222)


Human motion analysis is an important research subject in human-robot interaction (HRI). Before the human motion can be analyzed, however, the silhouette of the human body must be extracted from sequential images obtained by a CCD camera. An intelligent robot system requires a particularly robust silhouette extraction method because of its internal vibration and low image resolution. In this paper, we discuss a hybrid silhouette extraction method for detecting and tracking human motion. The proposed method combines and optimizes temporal and spatial gradient information. We also propose compensation methods that recover silhouette information that would otherwise be lost in poor-quality images. Finally, we demonstrate the effectiveness and feasibility of the proposed method through experiments.
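The abstract does not give implementation details, but the core idea of fusing temporal and spatial gradients can be illustrated with a minimal sketch. The following is not the authors' actual algorithm: it uses simple frame differencing for the temporal gradient, central differences for the spatial gradient, and hypothetical thresholds (`t_thresh`, `s_thresh`) chosen only for illustration.

```python
import numpy as np

def temporal_gradient(prev, curr):
    """Absolute frame difference: responds to moving regions."""
    return np.abs(curr.astype(float) - prev.astype(float))

def spatial_gradient(frame):
    """Gradient magnitude via central differences: responds to intensity edges."""
    f = frame.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / 2.0
    gy[1:-1, :] = (f[2:, :] - f[:-2, :]) / 2.0
    return np.hypot(gx, gy)

def hybrid_silhouette(prev, curr, t_thresh=15.0, s_thresh=10.0):
    """Mark pixels that are both moving (temporal cue) and lie on an
    intensity edge (spatial cue) -- a crude hybrid of the two gradients."""
    moving = temporal_gradient(prev, curr) > t_thresh
    edges = spatial_gradient(curr) > s_thresh
    return moving & edges

# Synthetic example: a bright square shifts by two pixels between frames.
prev = np.zeros((64, 64)); prev[10:30, 10:30] = 200
curr = np.zeros((64, 64)); curr[12:32, 12:32] = 200
mask = hybrid_silhouette(prev, curr)
```

Intersecting the two cues suppresses static background edges (no temporal response) and the flat interior of a moving body (no spatial response), leaving the moving boundary, which is the silhouette outline.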







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Moon Hwan Kim (1)
  • Jin Bae Park (1)
  • In Ho Ra (2)
  • Young Hoon Joo (2)
  1. Yonsei University, Seoul, Korea
  2. Kunsan National University, Kunsan, Korea
