The Visual Computer, Volume 32, Issue 2, pp 191–203

Animating with style: defining expressive semantics of motion


Original Article

Abstract

Actions performed by a virtual character can be controlled with verbal commands such as ‘walk five steps forward’. Similar control of the motion style, that is, how the actions are performed, is complicated by the ambiguity of describing individual motions with phrases such as ‘aggressive walking’. In this paper, we present a method for controlling motion style with relative commands such as ‘do the same, but more sadly’. Based on acted example motions, comparative annotations, and a set of calculated motion features, relative styles can be defined as vectors in the feature space. We present a new method for creating these style vectors by identifying which features are essential for a style to be perceived and eliminating those that show only incidental correlations with the style. We show with a user study that our feature selection procedure is more accurate than earlier methods for creating style vectors, and that the style definitions generalize across different actors and annotators. We also present a tool enabling interactive control of parametric motion synthesis by verbal commands. As the control method is independent of the motion generation itself, it can be applied to virtually any parametric synthesis method.
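To make the notion of a style vector concrete, the minimal sketch below shows one plausible way such a direction in motion-feature space could be learned from comparative annotations (e.g. ‘motion A looks more aggressive than motion B’), using a pairwise-ranking formulation in the spirit of Joachims [9], which the paper cites. This is an illustrative approximation under stated assumptions, not the authors’ exact feature selection procedure; the function names, toy data, and parameters are hypothetical.

```python
# Illustrative sketch only (not the authors' method): derive a linear
# "style vector" in feature space from pairwise comparative annotations,
# following a ranking-SVM style formulation as in Joachims [9].
import numpy as np
from sklearn.svm import LinearSVC

def learn_style_vector(features, comparisons):
    """features: (n_motions, n_features) array of per-motion descriptors.
    comparisons: (i, j) pairs meaning 'motion i shows the style more than j'."""
    diffs, labels = [], []
    for i, j in comparisons:
        d = features[i] - features[j]
        # Each comparison yields one positive and one mirrored negative
        # example, keeping the two classes balanced.
        diffs.extend([d, -d])
        labels.extend([1, -1])
    svm = LinearSVC(C=1.0, fit_intercept=False)
    svm.fit(np.asarray(diffs), np.asarray(labels))
    w = svm.coef_.ravel()
    return w / np.linalg.norm(w)  # unit-length style direction

# Toy usage: four motions described by three features, three annotated pairs.
rng = np.random.default_rng(0)
motion_features = rng.normal(size=(4, 3))
style = learn_style_vector(motion_features, [(0, 1), (0, 2), (3, 2)])
print("style vector:", style)
```

A relative command such as ‘more sadly’ could then be interpreted as moving a motion’s feature vector along the learned direction, which is the role the abstract assigns to style vectors in controlling parametric synthesis.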

Keywords

Computer animation · Human motion · Motion style · Motion synthesis · Style vector · Feature extraction · Feature selection · Verbal description of motion style

Acknowledgments

This work has been supported by the HeCSE graduate school and the project Multimodally grounded language technology (254104) funded by the Academy of Finland. The Mocap toolbox by Neil Lawrence [13] was used in this research.

Supplementary material

Supplementary material 1 (mp4 23047 KB)

References

  1. Aviezer, H., Hassin, R.R., Ryan, J., Grady, C., Susskind, J., Anderson, A., Moscovitch, M., Bentin, S.: Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19(7), 724–732 (2008)
  2. Bruderlin, A., Williams, L.: Motion signal processing. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ‘95, pp. 97–104. ACM, New York (1995)
  3. Chi, D., Costa, M., Zhao, L., Badler, N.: The EMOTE model for effort and shape. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ‘00, pp. 173–182. ACM Press/Addison-Wesley Publishing Co., New York (2000)
  4. Cho, K., Chen, X.: Classifying and visualizing motion capture sequences using deep neural networks. In: Proceedings of the 9th International Conference on Computer Vision Theory and Applications, VISAPP 2014. SciTePress (2014)
  5. Clavel, C., Plessier, J., Martin, J.C., Ach, L., Morel, B.: Combining facial and postural expressions of emotions in a virtual character. In: Ruttkay, Z., Kipp, M., Nijholt, A., Vilhjálmsson, H. (eds.) Intelligent Virtual Agents. Lecture Notes in Computer Science, vol. 5773, pp. 287–300. Springer, Berlin (2009)
  6. Förger, K., Honkela, T., Takala, T.: Impact of varying vocabularies on controlling motion of a virtual actor. In: Aylett, R., Krenn, B., Pelachaud, C., Shimodaira, H. (eds.) Intelligent Virtual Agents. Lecture Notes in Computer Science, vol. 8108, pp. 239–248. Springer, Berlin (2013)
  7. Gleicher, M.: Retargetting motion to new characters. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ‘98, pp. 33–42. ACM, New York (1998)
  8. Hsu, E., Pulli, K., Popović, J.: Style translation for human motion. ACM Trans. Graph. 24(3), 1082–1089 (2005)
  9. Joachims, T.: Optimizing search engines using clickthrough data. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ‘02, pp. 133–142. ACM, New York (2002)
  10. Johnson, K.L., McKay, L.S., Pollick, F.E.: He throws like a girl (but only when he’s sad): emotion affects sex-decoding of biological motion displays. Cognition 119(2), 265–280 (2011)
  11. Kleinsmith, A., Bianchi-Berthouze, N.: Affective body expression perception and recognition: a survey. IEEE Trans. Affect. Comput. 4(1), 15–33 (2013)
  12. Kovar, L., Gleicher, M., Pighin, F.: Motion graphs. In: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ‘02, pp. 473–482. ACM, New York (2002)
  13. Lawrence, N.: MoCap toolbox for MATLAB. Available online at http://staffwww.dcs.shef.ac.uk/people/N.Lawrence/mocap/ (2011). Accessed 9 Feb 2015
  14. Min, J., Chai, J.: Motion graphs++: a compact generative model for semantic motion analysis and synthesis. ACM Trans. Graph. 31(6), 153:1–153:12 (2012)
  15. Mukai, T., Kuriyama, S.: Geostatistical motion interpolation. In: ACM SIGGRAPH 2005 Papers, SIGGRAPH ‘05, pp. 1062–1070. ACM, New York (2005)
  16. Poppe, R.: A survey on vision-based human action recognition. Image Vis. Comput. 28(6), 976–990 (2010)
  17. Rose, C., Cohen, M., Bodenheimer, B.: Verbs and adverbs: multidimensional motion interpolation. IEEE Comput. Graph. Appl. 18(5), 32–40 (1998)
  18. Shapiro, A., Cao, Y., Faloutsos, P.: Style components. In: Proceedings of Graphics Interface 2006, pp. 33–39. Canadian Information Processing Society, Toronto (2006)
  19. Shoemake, K.: Animating rotation with quaternion curves. SIGGRAPH Comput. Graph. 19(3), 245–254 (1985)
  20. Troje, N.F.: Decomposing biological motion: a framework for analysis and synthesis of human gait patterns. J. Vis. 2(5), 371–387 (2002)
  21. Troje, N.F.: Retrieving information from human movement patterns. In: Shipley, T.F., Zacks, J.M. (eds.) Understanding Events: How Humans See, Represent, and Act on Events, pp. 308–334. Oxford University Press, New York (2008)
  22. Unuma, M., Anjyo, K., Takeuchi, R.: Fourier principles for emotion-based human figure animation. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ‘95, pp. 91–96. ACM, New York (1995)
  23. Urtasun, R., Glardon, P., Boulic, R., Thalmann, D., Fua, P.: Style-based motion synthesis. Comput. Graph. Forum 23(4), 799–812 (2004)
  24. Wang, X., Jia, J., Cai, L.: Affective image adjustment with a single word. Vis. Comput. 29(11), 1121–1133 (2013)
  25. Wu, J., Hu, D., Chen, F.: Action recognition by hidden temporal models. Vis. Comput. 30(12), 1395–1404 (2014)
  26. Yoo, I., Vanek, J., Nizovtseva, M., Adamo-Villani, N., Benes, B.: Sketching human character animations by composing sequences from large motion database. Vis. Comput. 30(2), 213–227 (2014)
  27. Zhuang, Y., Pan, Y., Xiao, J.: Automatic synthesis and editing of motion styles. In: A Modern Approach to Intelligent Animation: Theory and Practice, pp. 255–265. Springer, Berlin (2008)

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Department of Computer Science, Aalto University, Espoo, Finland
