Automating the Transfer of a Generic Set of Behaviors onto a Virtual Character

  • Conference paper
Motion in Games (MIG 2012)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7660)

Abstract

Humanoid 3D models can be easily acquired from a variety of sources, including online repositories. Using such a model within a game or simulation environment, however, requires human input and intervention to associate the model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline in which humanoid 3D models can be incorporated within seconds into an animation system and infused with a wide range of capabilities, such as locomotion, object manipulation, gazing, speech synthesis, and lip syncing. We offer a set of heuristics that associate arbitrary joint names with canonical ones, and we describe a fast retargeting algorithm that enables us to instill a set of behaviors onto an arbitrary humanoid skeleton. We believe that such a system will vastly increase the use of interactive 3D characters due to the ease with which new models can be animated.
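
The joint-name mapping step mentioned above can be illustrated with a short sketch. The Python example below is a minimal, hypothetical rendering of such heuristics; the canonical joint set, keyword lists, and function names are assumptions made for illustration, not the authors' implementation. Joint names are normalized and matched against keyword patterns to associate them with canonical joints.

```python
# Minimal illustrative sketch (hypothetical, not the authors' implementation):
# map arbitrary skeleton joint names onto a small canonical joint set using
# keyword heuristics, in the spirit of the name-mapping step described above.

# Canonical joints and the keywords that suggest them. Both the canonical set
# and the keyword lists are assumptions made for this example.
CANONICAL_KEYWORDS = {
    "hips":      ["hips", "pelvis", "root"],
    "spine":     ["spine", "chest", "torso"],
    "head":      ["head"],
    "left_arm":  ["leftarm", "leftshoulder"],
    "right_arm": ["rightarm", "rightshoulder"],
    "left_leg":  ["leftupleg", "leftthigh"],
    "right_leg": ["rightupleg", "rightthigh"],
}


def normalize(name: str) -> str:
    """Lowercase a joint name and strip separators so naming variants compare equal."""
    return name.lower().replace(" ", "").replace("_", "").replace("-", "").replace(":", "")


def map_joints(skeleton_joints):
    """Return a dict {canonical_name: original_joint_name} for the first keyword matches."""
    mapping = {}
    for canonical, keywords in CANONICAL_KEYWORDS.items():
        for joint in skeleton_joints:
            if any(kw in normalize(joint) for kw in keywords):
                mapping[canonical] = joint  # take the first joint that matches
                break
    return mapping


if __name__ == "__main__":
    # Joint names as they might appear in a downloaded humanoid model.
    joints = ["mixamorig:Hips", "mixamorig:Spine", "mixamorig:Head",
              "mixamorig:LeftArm", "mixamorig:RightArm",
              "mixamorig:LeftUpLeg", "mixamorig:RightUpLeg"]
    print(map_joints(joints))
```

A full system would also need to handle ambiguous or partial skeletons and distinguish fingers, twist joints, and end effectors; the retargeting step itself is beyond the scope of this sketch.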

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Feng, A., Huang, Y., Xu, Y., Shapiro, A. (2012). Automating the Transfer of a Generic Set of Behaviors onto a Virtual Character. In: Kallmann, M., Bekris, K. (eds) Motion in Games. MIG 2012. Lecture Notes in Computer Science, vol 7660. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34710-8_13

  • DOI: https://doi.org/10.1007/978-3-642-34710-8_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34709-2

  • Online ISBN: 978-3-642-34710-8

  • eBook Packages: Computer Science, Computer Science (R0)
