An improved calculation system for phase-functioned neural networks and implementation in Unreal Engine

  • Ping Kuang
  • Dingli Luo
  • Haoshuang Wang
  • Lixue Zhang
Article

Abstract

This paper addresses the problem of optimizing the phase-functioned neural network (PFNN) so that it can generate character animation in real time within a game engine. The approach uses CUDA and parallel programming to accelerate the large matrix calculations required for prediction. The results include a four-layer PFNN prediction framework, a CUDA-based calculation solution, and a showcase integration in Unreal Engine. With these optimizations, the PFNN calculation has been sped up from 1.8 ms to 1.0–1.1 ms, and performance tests demonstrate the method's practical utility in real game development.

Keywords

Computing methodologies · Procedural animation · Artificial intelligence

Notes

Acknowledgements

This work is supported by Sichuan Sci-Tech Support Plan, Item Numbers: 2017GZ0025, 2017GZ0321 and 2016GZ0313.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Ping Kuang (1)
  • Dingli Luo (1)
  • Haoshuang Wang (1)
  • Lixue Zhang (2)
  1. Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
  2. Chengdu Productivity Promotion Center, Chengdu, China
