Efficient Human Motion Transition via Hybrid Deep Neural Network and Reliable Motion Graph Mining

  • Bing Zhou
  • Xin Liu
  • Shujuan Peng (email author)
  • Bineng Zhong
  • Jixiang Du
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 771)


Abstract

Skeletal motion transition is crucial to simulation in interactive environments. In this paper, we propose a hybrid deep learning framework that enables flexible and efficient human motion transition from motion capture (mocap) data while optimally satisfying diverse user-specified paths. We integrate a convolutional restricted Boltzmann machine with a deep belief network to detect appropriate transition points. Subsequently, a quadruple-like data structure is exploited to build the motion graph, which significantly benefits motion splitting and indexing. As a result, various motion clips can be retrieved and transitioned to fulfill the user inputs while preserving the smoothness of the original data. Experiments show that the proposed transition approach performs favorably against state-of-the-art competing approaches.
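The abstract does not spell out the quadruple-like data structure, so the sketch below is a hypothetical illustration: each entry records a source clip, its frame span, and a transition frame (as would be flagged by the CRBM+DBN detector), and a small index keyed by clip supports the splitting and retrieval the paper describes. All names here (`MotionQuadruple`, `MotionGraph`, the field layout) are assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MotionQuadruple:
    """Hypothetical quadruple: (clip, start, end, transition point)."""
    clip_id: str           # source mocap clip
    start_frame: int       # first frame of the segment
    end_frame: int         # last frame of the segment
    transition_frame: int  # frame flagged as a good transition point

class MotionGraph:
    """Index segments by clip id for fast splitting and retrieval."""

    def __init__(self):
        self.index = {}

    def add(self, q: MotionQuadruple):
        self.index.setdefault(q.clip_id, []).append(q)

    def segments(self, clip_id: str):
        # Segments sorted by start frame, ready for stitching a path.
        return sorted(self.index.get(clip_id, []),
                      key=lambda q: q.start_frame)

g = MotionGraph()
g.add(MotionQuadruple("walk", 120, 180, 150))
g.add(MotionQuadruple("walk", 0, 60, 30))
print([q.start_frame for q in g.segments("walk")])  # [0, 120]
```

Keying the index by clip id keeps lookup independent of the total number of segments in the database, which matters when a user-specified path must be matched against many candidate clips.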


Keywords: Skeletal motion transition · Hybrid deep learning · Convolutional restricted Boltzmann machine · Quadruples-like data structure



Acknowledgments

The work described in this paper was supported by the National Science Foundation of China (Nos. 61673185, 61572205, 61673186), the National Science Foundation of Fujian Province (Nos. 2015J01656, 2017J01112), the Promotion Program for Young and Middle-aged Teachers in Science and Technology Research (No. ZQN-PY309), and the Promotion Program for Graduate Students in Scientific Research and Innovation Ability of Huaqiao University (No. 1511414012).



Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  • Bing Zhou (1, 2)
  • Xin Liu (1, 2)
  • Shujuan Peng (1, 2) — email author
  • Bineng Zhong (1, 2)
  • Jixiang Du (1, 2)
  1. Department of Computer Science, Huaqiao University, Xiamen, China
  2. Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen, China
