Computer Vision — ECCV 2002

Volume 2352 of the series Lecture Notes in Computer Science pp 666-680


Estimating Human Body Configurations Using Shape Context Matching

  • Greg Mori (Computer Science Division, University of California at Berkeley)
  • Jitendra Malik (Computer Science Division, University of California at Berkeley)




The problem we consider in this paper is to take a single two-dimensional image containing a human body, locate the joint positions, and use these to estimate the body configuration and pose in three-dimensional space. The basic approach is to store a number of exemplar 2D views of the human body in a variety of different configurations and viewpoints with respect to the camera. On each of these stored views, the locations of the body joints (left elbow, right knee, etc.) are manually marked and labelled for future use. The test shape is then matched to each stored view, using the technique of shape context matching in conjunction with a kinematic chain-based deformation model. Assuming that there is a stored view sufficiently similar in configuration and pose, the correspondence process will succeed. The locations of the body joints are then transferred from the exemplar view to the test shape. Given the joint locations, the 3D body configuration and pose are then estimated. We can apply this technique to video by treating each frame independently - tracking just becomes repeated recognition! We present results on a variety of datasets.
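The shape context descriptor at the heart of the matching step assigns each sampled contour point a log-polar histogram of the relative positions of all other points, and compares histograms with a chi-squared cost. The following is a minimal illustrative sketch of that idea in NumPy, not the authors' implementation; the bin counts, radial range, and function names are assumptions chosen for clarity (the paper's deformation model and exemplar search are not shown).

```python
import numpy as np

def shape_context(points, n_radial=5, n_angular=12):
    """Log-polar histogram descriptor for each sampled contour point.

    points: (N, 2) array of 2D contour samples.
    Returns an (N, n_radial * n_angular) array of normalized histograms.
    Bin counts and the radial range below are illustrative choices.
    """
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]   # (N, N, 2) pairwise offsets
    r = np.hypot(diff[..., 0], diff[..., 1])
    theta = np.arctan2(diff[..., 1], diff[..., 0])   # angles in [-pi, pi)

    # Normalize radii by the mean pairwise distance for scale invariance,
    # then bin them on a log scale (points far outside the range are dropped).
    mean_dist = r[r > 0].mean()
    r_norm = r / mean_dist
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_radial + 1)
    r_bin = np.digitize(r_norm, r_edges) - 1         # -1 / n_radial = out of range
    a_bin = np.floor((theta + np.pi) / (2 * np.pi) * n_angular).astype(int) % n_angular

    descriptors = np.zeros((n, n_radial * n_angular))
    for i in range(n):
        valid = (np.arange(n) != i) & (r_bin[i] >= 0) & (r_bin[i] < n_radial)
        flat = r_bin[i, valid] * n_angular + a_bin[i, valid]
        np.add.at(descriptors[i], flat, 1.0)
        total = descriptors[i].sum()
        if total > 0:
            descriptors[i] /= total                  # normalize to a distribution
    return descriptors

def chi2_cost(h1, h2, eps=1e-10):
    """Chi-squared matching cost between two sets of histograms.

    Returns an (N1, N2) matrix of point-to-point matching costs, which a
    correspondence solver (e.g. bipartite matching) would then minimize.
    """
    num = (h1[:, None, :] - h2[None, :, :]) ** 2
    den = h1[:, None, :] + h2[None, :, :] + eps
    return 0.5 * (num / den).sum(axis=-1)
```

In the paper's pipeline, a cost matrix like the one returned by `chi2_cost` would be computed between the test shape and each stored exemplar, with the kinematic chain-based deformation model constraining which correspondences are admissible.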