Multiple camera based human motion estimation

  • Akira Utsumi
  • Hiroki Mori
  • Jun Ohya
  • Masahiko Yachida
Poster Session III
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1352)

Abstract

We propose a human motion detection method using multiple-viewpoint images. We employ a simple elliptic model and a small number of reliable image features detected in the multiple-viewpoint images to estimate the pose (position and normal axis) of a human body; feature extraction is based on a distance transformation. The COG (center of gravity) position and its distance value are extracted in this process. These features are robust against changes in human shape caused by hand/leg bending and produce stable pose estimation results. After pose estimation, a "best view" is selected based on the estimation result, and further processing, including body-side detection and gesture recognition, is performed in the 2D image of the selected view. This viewpoint selection approach can overcome the problem of self-occlusion. We confirmed the stability of the system through experiments.
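The abstract's feature extraction step can be illustrated with a short sketch: apply a distance transform to a binary human silhouette, then extract a COG position and its distance value. The brute-force transform and the distance-weighted centroid below are illustrative assumptions about the approach, not the paper's exact implementation; the intuition shown is that thin limb regions carry low distance values, so the extracted features change little when arms or legs bend.

```python
# Hypothetical sketch of distance-transform-based feature extraction from a
# binary silhouette, loosely following the abstract's description. The exact
# weighting scheme is an assumption made for illustration.
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform: for each foreground pixel,
    the distance to the nearest background pixel (0 on background)."""
    dist = np.zeros(mask.shape)
    bg = np.argwhere(mask == 0)
    for y, x in np.argwhere(mask == 1):
        dist[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return dist

def cog_features(mask):
    """Return (COG row, COG col, distance value at the COG).
    Weighting pixels by their distance value suppresses thin limb regions,
    one plausible reason the features are robust to hand/leg bending."""
    dist = distance_transform(mask)
    ys, xs = np.nonzero(mask)
    w = dist[ys, xs]
    cy = (ys * w).sum() / w.sum()
    cx = (xs * w).sum() / w.sum()
    return cy, cx, dist[int(round(cy)), int(round(cx))]

# Toy silhouette: a 3x3 "torso" blob with a one-pixel-wide "arm".
mask = np.zeros((7, 9), dtype=int)
mask[2:5, 2:5] = 1   # torso
mask[3, 5:8] = 1     # arm
cy, cx, dval = cog_features(mask)

# The plain (unweighted) centroid is pulled toward the arm; the
# distance-weighted COG stays closer to the torso center.
plain_cx = np.nonzero(mask)[1].mean()
```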

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Akira Utsumi (1)
  • Hiroki Mori (2)
  • Jun Ohya (1)
  • Masahiko Yachida (2)
  1. ATR Media Integration & Communications Research Laboratories, Soraku-gun, Kyoto, Japan
  2. Faculty of Engineering Science, Osaka University, Toyonaka-shi, Osaka, Japan