Human Animation from 2D Correspondence Based on Motion Trend Prediction

  • Li Zhang
  • Ling Li
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4035)


A model-based method is proposed in this paper for 3D human motion recovery, taking uncalibrated monocular data as input. The method is designed to recover smooth human motions efficiently, and its output is guaranteed not only to resemble the original motion from the viewpoint from which the sequence was captured, but also to look natural and plausible from any other viewpoint. The proposed method is called "Motion Trend Prediction" (MTP). To evaluate its accuracy, MTP is first tested on synthesized input; experiments are then conducted on real video data, demonstrating that the method can recover smooth human motions from their 2D image features with high accuracy.
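The abstract does not detail how the motion trend itself is modeled. A minimal sketch of one plausible interpretation, extrapolating each joint rotation under an assumed near-constant rotational acceleration over recent frames, might look like the following (the function name, the scalar joint-angle representation, and the finite-difference scheme are illustrative assumptions, not the paper's actual MTP formulation):

```python
def predict_rotation(theta_prev2: float, theta_prev1: float, theta_curr: float) -> float:
    """Extrapolate the next joint rotation angle (radians) from three
    consecutive frames, assuming roughly constant rotational
    acceleration. This is a hypothetical second-order finite-difference
    sketch, not the paper's published method."""
    velocity = theta_curr - theta_prev1                     # first difference
    acceleration = velocity - (theta_prev1 - theta_prev2)   # second difference
    # Second-order Taylor-style extrapolation one frame ahead
    return theta_curr + velocity + 0.5 * acceleration

# Three observed frames of a single joint angle (radians)
history = [0.10, 0.18, 0.30]
prediction = predict_rotation(*history)
print(prediction)  # accelerating joint: predicted angle exceeds 0.30 + last step
```

A prediction of this kind could then serve as a prior when lifting the next frame's 2D correspondences to 3D, biasing the recovery toward temporally smooth postures.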


Keywords: Human Motion · Computer Animation · Human Posture · Rotational Acceleration · Rotation Function





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Li Zhang (1)
  • Ling Li (1)
  1. Department of Computing, Curtin University of Technology, Perth, Australia
