
WalkNet: A Neural-Network-Based Interactive Walking Controller

  • Omid Alemi
  • Philippe Pasquier
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)

Abstract

We present WalkNet, an interactive agent walking movement controller based on neural networks. WalkNet controls the agent’s walking movements through semantically meaningful high-level factors, providing an interface between the agent and its movements so that movement characteristics can be determined directly by the agent’s internal state. The controlling factors are defined across the dimensions of planning, affect expression, and personal movement signature. WalkNet employs Factored Conditional Restricted Boltzmann Machines to learn and generate movements. We train the model on a corpus of motion capture data that contains movements from multiple human subjects, multiple affect expressions, and multiple walking trajectories. The generation process runs in real time and is not memory intensive. WalkNet can be used both in interactive scenarios in which it is controlled by a human user and in scenarios in which it is driven by another AI component.
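The abstract's core mechanism, a Factored Conditional Restricted Boltzmann Machine whose control factors (affect, signature) gate the generation of pose data, can be illustrated with a toy sketch. This is not the paper's implementation: all dimensions, weight matrices, and the one-hot style label below are hypothetical, and the sketch shows only the factored gating idea (a context label multiplicatively modulates the visible–hidden interaction), omitting training and the conditioning on past frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: pose features, hidden units, factors, style labels
n_vis, n_hid, n_fac, n_sty = 6, 8, 4, 3

# Factored weights: instead of one dense visible-hidden matrix, three
# smaller matrices project visibles, hiddens, and the style label onto
# a shared set of multiplicative factors.
Wv = rng.normal(0, 0.1, (n_vis, n_fac))  # visible-to-factor
Wh = rng.normal(0, 0.1, (n_hid, n_fac))  # hidden-to-factor
Ws = rng.normal(0, 0.1, (n_sty, n_fac))  # style-to-factor (the gate)

def hidden_activation(v, style):
    # Each factor's input is the product of the pose projection and the
    # style projection, so the style label gates the interaction.
    f = (v @ Wv) * (style @ Ws)          # shape (n_fac,)
    return sigmoid(f @ Wh.T)             # hidden unit probabilities

def reconstruct(h, style):
    # Symmetric pass back to real-valued pose features.
    f = (h @ Wh) * (style @ Ws)
    return f @ Wv.T

style = np.eye(n_sty)[0]                 # one-hot affect label, e.g. "happy"
pose = rng.normal(size=n_vis)            # current pose frame
h = hidden_activation(pose, style)
next_pose = reconstruct(h, style)
```

Changing the one-hot `style` vector while keeping the pose fixed yields a different reconstruction, which is the sense in which high-level factors steer the generated movement.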

Keywords

Agent movement, Machine learning, Movement animation, Affective agents



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. School of Interactive Arts + Technology, Simon Fraser University, Surrey, Canada
