Automatic Character Motion Style Transfer via Autoencoder Generative Model and Spatio-Temporal Correlation Mining

  • Dong Hu
  • Xin Liu
  • Shujuan Peng
  • Bineng Zhong
  • Jixiang Du
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 771)

Abstract

The style of motion is essential to virtual character animation, and generating stylized motion efficiently is an important problem in computer animation. In this paper, we present an efficient approach that automatically transfers motion style using an autoencoder generative model and spatio-temporal correlation mining, allowing users to transform an input motion into a new style while preserving its original content. To this end, we introduce a history vector of previous motion frames into the autoencoder generative network and extract the spatio-temporal features of the input motion. Accordingly, the spatio-temporal correlation within motions can be represented by the correlated hidden units of this network. Subsequently, we establish Gram-matrix constraints in this feature space to produce the transferred motion with the pre-trained generative model. As a result, various motions with a particular semantic can be automatically transferred from one style to another, and extensive experiments demonstrate its outstanding performance.
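To make the Gram-matrix constraint concrete, below is a minimal sketch of the style-transfer optimization the abstract describes, in PyTorch. The `encode` callable stands in for the paper's pre-trained autoencoder (with its history-vector input); its interface, the feature layout `(units, frames)`, and the loss weights are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch, assuming a pre-trained encoder `encode` that maps a
# motion clip to hidden features of shape (units, frames). The Gram matrix
# of these features captures the correlation between hidden units over
# time, i.e. the spatio-temporal correlation used as the style statistic.
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Correlation of hidden units across frames; features is (units, frames)."""
    return features @ features.t() / features.shape[1]

def transfer_style(encode, content_motion, style_motion,
                   content_weight=0.1, style_weight=1.0, steps=500, lr=0.1):
    """Optimize a motion so its hidden features stay close to the content
    motion while its Gram matrix matches that of the style motion."""
    with torch.no_grad():
        content_feat = encode(content_motion)          # content target
        style_gram = gram_matrix(encode(style_motion)) # style target
    x = content_motion.clone().requires_grad_(True)    # initialize from content
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat = encode(x)
        loss = (content_weight * torch.mean((feat - content_feat) ** 2)
                + style_weight * torch.mean((gram_matrix(feat) - style_gram) ** 2))
        loss.backward()
        opt.step()
    return x.detach()
```

This mirrors the Gatys-style formulation cited by the paper: content is matched directly in feature space, while style is matched through feature correlations, so the output keeps the semantics of the input motion but adopts the temporal texture of the style example.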

Keywords

Deep autoencoder · Style transfer · Character animation · Spatio-temporal correlation · History vector

Notes

Acknowledgment

The work described in this paper was supported by the National Natural Science Foundation of China (Nos. 61673185, 61572205, 61673186), the Natural Science Foundation of Fujian Province (Nos. 2015J01656, 2017J01112), the Promotion Program for Young and Middle-aged Teachers in Science and Technology Research (No. ZQN-PY309), and the Promotion Program for Graduate Students in Scientific Research and Innovation Ability of Huaqiao University (No. 1611414006).

Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  • Dong Hu 1,2
  • Xin Liu 1,2
  • Shujuan Peng 1,2
  • Bineng Zhong 1,2
  • Jixiang Du 1,2

  1. Department of Computer Science, Huaqiao University, Xiamen, China
  2. Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen, China