Motion Composition of 3D Video

  • Jianfeng Xu
  • Toshihiko Yamasaki
  • Kiyoharu Aizawa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4261)


3D video, which consists of a sequence of mesh models and offers the user interactivity, is attracting increasing attention from many research groups. However, generating 3D video sequences is time-consuming and expensive. In this paper, a motion composition method is proposed to edit 3D video according to the user's requirements so that existing 3D video can be re-used. By analyzing feature vectors, the hierarchical motion structure is parsed, and a motion database is then built by selecting representative motions. A motion graph is constructed to organize the motion database by finding the possible motion transitions. After the user selects the desired motions in the database, which are called key motions in this paper, the best path through the graph is found by a modified Dijkstra algorithm based on a proposed cost function. Our experimental results show that the edited 3D video sequences look natural and realistic.
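The path-search step can be illustrated with a minimal sketch: a motion graph as a weighted digraph whose edges are possible motion transitions, with plain Dijkstra chained between consecutive user-selected key motions. The motion names and edge costs below are hypothetical, and the paper's actual cost function and Dijkstra modification are not reproduced here; this only shows the general technique.

```python
import heapq

def dijkstra(graph, start, goal):
    """Minimum-cost path from start to goal in a weighted digraph.
    graph maps each node to a list of (neighbor, transition_cost) pairs."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = [u]
            while u != start:
                u = prev[u]
                path.append(u)
            return path[::-1], d
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None, float("inf")

def compose(graph, key_motions):
    """Chain shortest transition paths through consecutive key motions."""
    full_path = [key_motions[0]]
    total_cost = 0.0
    for a, b in zip(key_motions, key_motions[1:]):
        path, cost = dijkstra(graph, a, b)
        if path is None:
            raise ValueError(f"no transition path from {a} to {b}")
        full_path.extend(path[1:])
        total_cost += cost
    return full_path, total_cost

# Hypothetical motion graph: edge weights stand in for transition costs.
motion_graph = {
    "walk": [("turn", 1.0), ("run", 3.0)],
    "turn": [("run", 1.0), ("jump", 4.0)],
    "run":  [("jump", 1.0)],
    "jump": [("walk", 2.0)],
}
```

With these costs, `compose(motion_graph, ["walk", "jump"])` prefers the cheaper walk, turn, run, jump chain (total cost 3.0) over the direct walk, run, jump route (cost 4.0), mirroring how intermediate transitions can make an edited sequence smoother.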







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jianfeng Xu (1)
  • Toshihiko Yamasaki (2)
  • Kiyoharu Aizawa (1, 2)

  1. Dept. of Electronics Engineering
  2. Dept. of Frontier Informatics, The University of Tokyo, Chiba, Japan
