Body-Part Templates for Recovery of 2D Human Poses under Occlusion

  • Ronald Poppe
  • Mannes Poel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5098)

Abstract

Detection of humans and estimation of their 2D poses from a single image are challenging tasks. This is especially true when part of the observation is occluded. However, for a limited class of movements, poses can be recovered from the visible body parts alone. To this end, we propose a novel template representation in which the body is divided into five body parts. Given a match, we estimate not only the joints within the matched body part, but all joints in the body. Quantitative evaluation on a HumanEva walking sequence shows mean 2D errors of approximately 27.5 pixels. For simulated occlusion of the head and arms, similar results are obtained, while occlusion of the legs increases this error by 6 pixels.
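
The paper itself contains no code; the sketch below is a minimal, hypothetical illustration (Python with NumPy) of the idea summarized above: each body-part template stores the full-body joint configuration, so a match on any visible part yields an estimate of all joints, and the mean 2D joint error is the pixel metric quoted for the HumanEva evaluation. All names here (BodyPartTemplate, match_score, estimate_pose, mean_2d_error) and the SSD matching score are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical per-body-part template. Besides an image patch for one of the
# five body parts, each template stores the FULL-body 2D joint configuration
# observed when the template was created, so a match on a single visible part
# can predict all joints, including those of occluded parts.
class BodyPartTemplate:
    def __init__(self, part_name, patch, full_pose_2d):
        self.part_name = part_name        # e.g. "torso", "left arm", ...
        self.patch = patch                # (h, w) array, e.g. an edge map
        self.full_pose_2d = full_pose_2d  # (n_joints, 2) joint coordinates,
                                          # relative to the patch origin

def match_score(patch, window):
    """Toy similarity: negative sum of squared differences.
    A real system would use a more robust measure."""
    return -float(np.sum((patch.astype(float) - window.astype(float)) ** 2))

def estimate_pose(image, templates, stride=8):
    """Scan the image with every part template; the best-scoring
    (template, location) pair determines the full-body pose. Occluded
    parts simply never produce the best score, so the pose is still
    recovered from whichever parts remain visible."""
    best_score, best_pose = -np.inf, None
    for t in templates:
        h, w = t.patch.shape
        for y in range(0, image.shape[0] - h + 1, stride):
            for x in range(0, image.shape[1] - w + 1, stride):
                s = match_score(t.patch, image[y:y + h, x:x + w])
                if s > best_score:
                    best_score = s
                    # Anchor the stored full-body pose at the detection location.
                    best_pose = t.full_pose_2d + np.array([x, y], dtype=float)
    return best_pose

def mean_2d_error(estimated, ground_truth):
    """Mean Euclidean joint error in pixels, the kind of metric behind the
    ~27.5 px figure quoted in the abstract."""
    return float(np.mean(np.linalg.norm(estimated - ground_truth, axis=1)))
```

In this sketch, simulated occlusion amounts to masking part of `image`; a template for a still-visible part then determines the full-body estimate.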

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Ronald Poppe (1)
  • Mannes Poel (1)

  1. Human Media Interaction Group, Dept. of Computer Science, University of Twente, Enschede, The Netherlands
