Keyframe-based multi-contact motion synthesis

  • Original article
  • The Visual Computer

Abstract

Most human daily activities involve acyclic multi-contact motions. Yet generating such motions is challenging because of the high-dimensional, nonlinear solution space formed by combinations of individual movements of body parts. In this paper, we present a novel keyframe-based framework that automatically generates multi-contact character motions. Our system consists of two components: key-pose planning and interpolation. Given initial and goal poses in which each contact can be repositioned at most once during the transition, the key-pose planning step generates intermediate key-poses that represent contact changes, taking into account a set of principles for goal-directed movements. Next, the key-poses of each joint are independently interpolated to produce an acyclic multi-contact motion. We demonstrate that our framework can synthesize plausible interaction motions with a number of man-made objects, such as chairs and bicycles, without using any motion data. In addition, we show the scalability of our method by creating a long-term motion of climbing a ladder.

References

  1. Agrawal, S., van de Panne, M.: Task-based locomotion. ACM Trans. Graph. 35(4), 82:1–82:11 (2016). https://doi.org/10.1145/2897824.2925893

  2. Al-Asqhar, R.A., Komura, T., Choi, M.G.: Relationship descriptors for interactive motion adaptation. In: Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA ’13, pp. 45–53. ACM, New York, NY, USA (2013). https://doi.org/10.1145/2485895.2485905

  3. Boulic, R., Thalmann, N.M., Thalmann, D.: A global human walking model with real-time kinematic personification. Vis. Comput. 6(6), 344–358 (1990). https://doi.org/10.1007/BF01901021

  4. Bruderlin, A., Calvert, T.W.: Goal-directed, dynamic animation of human walking. SIGGRAPH Comput. Graph. 23(3), 233–242 (1989). https://doi.org/10.1145/74334.74357

  5. Carpentier, J., Tonneau, S., Naveau, M., Stasse, O., Mansard, N.: A versatile and efficient pattern generator for generalized legged locomotion. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 3555–3561 (2016). https://doi.org/10.1109/ICRA.2016.7487538

  6. Coleman, P., Bibliowicz, J., Singh, K., Gleicher, M.: Staggered poses: a character motion representation for detail-preserving editing of pose and coordinated timing. In: Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA ’08, pp. 137–146. Eurographics Association, Aire-la-Ville, Switzerland (2008). http://dl.acm.org/citation.cfm?id=1632592.1632612

  7. Escande, A., Kheddar, A., Miossec, S.: Planning contact points for humanoid robots. Robot. Autonom. Syst. 61(5), 428–442 (2013). https://doi.org/10.1016/j.robot.2013.01.008

  8. Ha, D., Han, J.: Motion synthesis with decoupled parameterization. Vis. Comput. 24(7–9), 587–594 (2008). https://doi.org/10.1007/s00371-008-0239-7

  9. Hämäläinen, P., Rajamäki, J., Liu, C.K.: Online control of simulated humanoids using particle belief propagation. ACM Trans. Graph. (2015). https://doi.org/10.1145/2767002

  10. Hauser, K., Bretl, T., Harada, K., Latombe, J.C.: Using motion primitives in probabilistic sample-based planning for humanoid robots. In: Workshop on the Algorithmic Foundations of Robotics (WAFR), pp. 507–522 (2006)

  11. Hauser, K., Bretl, T., Latombe, J.: Non-gaited humanoid locomotion planning. In: 5th IEEE-RAS International Conference on Humanoid Robots, 2005, pp. 7–12 (2005). https://doi.org/10.1109/ICHR.2005.1573537

  12. Heess, N., TB, D., Sriram, S., Lemmon, J., Merel, J., Wayne, G., Tassa, Y., Erez, T., Wang, Z., Eslami, S.M.A., Riedmiller, M.A., Silver, D.: Emergence of locomotion behaviours in rich environments. CoRR (2017). http://arxiv.org/abs/1707.02286

  13. Ho, E.S.L., Komura, T., Tai, C.L.: Spatial relationship preserving character motion adaptation. ACM Trans. Graph. 29(4), 33:1–33:8 (2010). https://doi.org/10.1145/1778765.1778770

  14. Holden, D., Komura, T., Saito, J.: Phase-functioned neural networks for character control. ACM Trans. Graph. 36(4), 42:1–42:13 (2017). https://doi.org/10.1145/3072959.3073663

  15. Igarashi, T., Moscovich, T., Hughes, J.F.: Spatial keyframing for performance-driven animation. In: ACM SIGGRAPH 2007 Courses, SIGGRAPH ’07. ACM, New York, NY, USA (2007). https://doi.org/10.1145/1281500.1281536

  16. Kandel, E.R., Mack, S.: Principles of Neural Science. McGraw-Hill Medical, New York (2014)

  17. Kang, C., Lee, S.H.: Environment-adaptive contact poses for virtual characters. Comput. Graph. Forum 33(7), 1–10 (2014). https://doi.org/10.1111/cgf.12468

  18. Kang, C., Lee, S.H.: Multi-contact locomotion using a contact graph with feasibility predictors. ACM Trans. Graph. 36(2), 22:1–22:14 (2017). https://doi.org/10.1145/2983619

  19. Kang, C., Lee, S.H.: Scene reconstruction and analysis from motion. Graph. Models 94, 25–37 (2017). https://doi.org/10.1016/j.gmod.2017.10.002

  20. Kim, V.G., Chaudhuri, S., Guibas, L., Funkhouser, T.: Shape2pose: human-centric shape analysis. ACM Trans. Graph. 33(4), 120:1–120:12 (2014). https://doi.org/10.1145/2601097.2601117

  21. Kim, Y., Park, H., Bang, S., Lee, S.H.: Retargeting human-object interaction to virtual avatars. IEEE Trans. Vis. Comput. Graph. 22(11), 2405–2412 (2016). https://doi.org/10.1109/TVCG.2016.2593780

  22. Kitagawa, N., Ogihara, N.: Estimation of foot trajectory during human walking by a wearable inertial measurement unit mounted to the foot. Gait & Posture 45, 110–114 (2016). https://doi.org/10.1016/j.gaitpost.2016.01.014

  23. Koyama, Y., Goto, M.: Precomputed optimal one-hop motion transition for responsive character animation. Vis. Comput. 35(6–8), 1131–1142 (2019). https://doi.org/10.1007/s00371-019-01693-8

  24. Lee, B., Jin, T., Lee, S.H., Saakes, D.: Smartmanikin: virtual humans with agency for design tools. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, p. 584. ACM (2019)

  25. Lee, J., Chai, J., Reitsma, P.S.A., Hodgins, J.K., Pollard, N.S.: Interactive control of avatars animated with human motion data. In: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’02, pp. 491–500. ACM, New York, NY, USA (2002). https://doi.org/10.1145/566570.566607

  26. Lee, K.H., Choi, M.G., Lee, J.: Motion patches: building blocks for virtual environments annotated with motion data. ACM Trans. Graph. 25(3), 898–906 (2006). https://doi.org/10.1145/1141911.1141972

  27. Merel, J., Tassa, Y., TB, D., Srinivasan, S., Lemmon, J., Wang, Z., Wayne, G., Heess, N.: Learning human behaviors from motion capture by adversarial imitation. CoRR (2017). http://arxiv.org/abs/1707.02201

  28. Mixamo. https://www.mixamo.com/

  29. Naderi, K., Rajamäki, J., Hämäläinen, P.: Discovering and synthesizing humanoid climbing movements. ACM Trans. Graph. (2017). https://doi.org/10.1145/3072959.3073707

  30. Peng, X.B., Berseth, G., Yin, K., Van De Panne, M.: Deeploco: dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Trans. Graph. 36(4), 41:1–41:13 (2017). https://doi.org/10.1145/3072959.3073602

  31. Roberts, R., Lewis, J.P., Anjyo, K., Seo, J., Seol, Y.: Optimal and interactive keyframe selection for motion capture. Comput. Vis. Media 5(2), 171–191 (2019). https://doi.org/10.1007/s41095-019-0138-z

  32. Savva, M., Chang, A.X., Hanrahan, P., Fisher, M., Nießner, M.: Pigraphs: learning interaction snapshots from observations. ACM Trans. Graph. 35(4), 139:1–139:12 (2016). https://doi.org/10.1145/2897824.2925867

  33. Terra, S.C.L., Metoyer, R.A.: Performance timing for keyframe animation. In: Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA ’04, pp. 253–258. Eurographics Association, Goslar, Germany (2004). https://doi.org/10.1145/1028523.1028556

  34. Tonneau, S., Al-Ashqar, R.A., Pettré, J., Komura, T., Mansard, N.: In: Proceedings of the 37th Annual Conference of the European Association for Computer Graphics, EG ’16, pp. 127–138. Eurographics Association, Goslar, Germany (2016). https://doi.org/10.1111/cgf.12817

  35. Tonneau, S., Del Prete, A., Pettré, J., Park, C., Manocha, D., Mansard, N.: An efficient acyclic contact planner for multiped robots. IEEE Trans. Robot. 34(3), 586–601 (2018). https://doi.org/10.1109/TRO.2018.2819658

  36. Tonneau, S., Fernbach, P., Prete, A.D., Pettré, J., Mansard, N.: 2pac: two-point attractors for center of mass trajectories in multi-contact scenarios. ACM Trans. Graph. 37(5), 176:1–176:14 (2018). https://doi.org/10.1145/3213773

  37. Wang, Q., Artières, T., Chen, M., Denoyer, L.: Adversarial learning for modeling human motion. Vis. Comput. 36(1), 141–160 (2018). https://doi.org/10.1007/s00371-018-1594-7

  38. Wang, Y., Che, W., Xu, B.: Encoder-decoder recurrent network model for interactive character animation generation. Vis. Comput. 33(6–8), 971–980 (2017). https://doi.org/10.1007/s00371-017-1378-5

  39. Wu, J.C., Popović, Z.: Terrain-adaptive bipedal locomotion control. ACM Trans. Graph. 29(4), 72:1–72:10 (2010). https://doi.org/10.1145/1778765.1778809

  40. Yoo, I., Vanek, J., Nizovtseva, M., Adamo-Villani, N., Benes, B.: Sketching human character animations by composing sequences from large motion database. Vis. Comput. 30(2), 213–227 (2013). https://doi.org/10.1007/s00371-013-0797-1


Funding

Funding was provided by the National Research Foundation of Korea (Grant No. NRF-2020R1A2C2011541) and by the Korea Creative Content Agency (Grant No. R2020040211).

Author information

Correspondence to Sung-Hee Lee.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 36372 KB)

Supplementary material 2 (mp4 57278 KB)

Appendices

Appendix A: Height of the pelvis

Let \(\bar{y}\) denote the larger of the pelvis heights of the previous key-pose and the goal pose, which serves goal-directed movement and mobility. The pelvis height is then determined as \(\min (\bar{y}, y^{LF}, y^{RF})\), where \(y^{*F}\) is the maximum pelvis height reachable from the given foot position and can be measured from either the heel position or the toe position; the height from the toe position corresponds to the tip-toe pose. Each height is computed in closed form because the heel or toe position is given by the contact position, and the 2D position of the pelvis and the relevant segment lengths are known. A support foot uses the maximum height from the heel so that the foot sole stays in contact with the ground, and a swing foot uses the maximum height from the toe to produce a tip-toe pose at the moment of contact engagement. In a transit key-pose, the swing foot is out of contact, so only the height of the support foot is considered.
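The rule above can be sketched as follows; the names (`max_pelvis_height`, `chain_len`, the `(anchor_xz, chain_len)` foot tuples) are illustrative assumptions rather than identifiers from the paper, and the heel/toe distinction is folded into the per-foot chain length supplied by the caller:

```python
import math

# Sketch of the Appendix A pelvis-height rule. Each foot contributes the
# maximum pelvis height reachable from its anchor (heel for a support foot,
# toe for a foot engaging contact): the vertical leg of a right triangle
# whose hypotenuse is the kinematic chain length and whose horizontal leg
# is the 2D pelvis-anchor distance.

def max_pelvis_height(pelvis_xz, anchor_xz, chain_len):
    dx = pelvis_xz[0] - anchor_xz[0]
    dz = pelvis_xz[1] - anchor_xz[1]
    d2 = dx * dx + dz * dz
    return math.sqrt(max(chain_len * chain_len - d2, 0.0))

def pelvis_height(y_bar, pelvis_xz, left, right, swing=None):
    """y_bar: larger of the previous key-pose and goal-pose pelvis heights.
    left/right: (anchor_xz, chain_len) per foot.
    swing: 'L' or 'R' when that foot is out of contact (transit key-pose),
    in which case only the support foot constrains the height."""
    heights = [y_bar]
    if swing != 'L':
        heights.append(max_pelvis_height(pelvis_xz, *left))
    if swing != 'R':
        heights.append(max_pelvis_height(pelvis_xz, *right))
    return min(heights)
```

For example, with both feet directly under the pelvis and unit chain lengths, \(\bar{y}=0.9\) is the binding constraint and the pelvis stays at 0.9; as the pelvis moves horizontally away from a contact, the leg-reach term takes over.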

Appendix B: Swing foot at transit key-pose

In an obstacle-free environment, only the preparation and touchdown key-poses are relevant to the path of the swing foot. We therefore use these two poses to generate a simple cubic Bezier curve for the swing-foot path. As shown in Fig. 4, the tangent at each end is computed from the vector from the foot to the hip at the corresponding key-pose. In addition, at the touchdown key-pose, the vector from the touchdown foot position to the foot position at the preparation pose is added to the tangent:

$$\begin{aligned} \begin{aligned} v_0&=\alpha \Big (p^{\mathrm{prep}}_{\mathrm{hip}}-p^{\mathrm{prep}}_{\mathrm{foot}}\Big )\\ v_1&=\beta \Big (p^{td}_{\mathrm{hip}}-p^{td}_{\mathrm{foot}}\Big )+\alpha \Big (p^{\mathrm{prep}}_{\mathrm{foot}}-p^{td}_{\mathrm{foot}}\Big ) \end{aligned} \end{aligned}$$
(9)

where \(\alpha \in (0,1)\) is a small value and \(\beta \in (0,1)\) is set inversely proportional to \(\Vert p^{\mathrm{prep}}_{\mathrm{foot}}-p^{td}_{\mathrm{foot}} \Vert\). A larger distance between the feet, i.e., a smaller \(\beta\), produces a curve similar to the foot trajectory in walking [22, 39]. From the Bezier curve, the extreme point is chosen as the swing foot position at the transit key-pose. Given the configuration of the pelvis and the swing foot position, we can easily determine the pose of the swing leg. Because goal-directed motion moves monotonically from the initial pose to the goal pose unless there are external constraints, the rotation of the swing foot is determined by spherical linear interpolation of the two end poses, \(Slerp(R_f^{\mathrm{prep}}, R_f^{td};t_{sw})\), where \(R_f\) denotes the rotation of the foot at the key-poses. We use the arc-length ratio as the parameter, \(t_{sw}=l_{\mathrm{ext}}/L_{\mathrm{arc}}\), where \(l_{\mathrm{ext}}\) is the arc length at the extreme point and \(L_{\mathrm{arc}}\) is the total arc length of the curve. Given the facing direction of the leg and the extreme point, determining the leg pose reduces to finding the knee angle from three known lengths in a plane. The rotations of the ankle and toe are set to zero in the body frame, giving a neutral pose.
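A minimal sketch of this construction in the 2D sagittal plane, assuming illustrative values for \(\alpha\), for the \(\beta\) proportionality constant, and for the sampling density used to approximate arc length (none of these constants come from the paper):

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(s, a): return (s * a[0], s * a[1])

def swing_foot_curve(p_foot_prep, p_hip_prep, p_foot_td, p_hip_td,
                     alpha=0.3, n=200):
    """Return (transit foot position, t_sw) from a cubic Bezier swing path."""
    # Eq. (9): each end tangent points from the foot toward the hip; the
    # touchdown tangent additionally pulls back toward the preparation foot.
    dist = math.dist(p_foot_prep, p_foot_td)
    beta = min(1.0, 0.3 / max(dist, 1e-6))  # inversely proportional to distance
    v0 = scale(alpha, sub(p_hip_prep, p_foot_prep))
    v1 = add(scale(beta, sub(p_hip_td, p_foot_td)),
             scale(alpha, sub(p_foot_prep, p_foot_td)))
    # Cubic Bezier whose inner control points realize the end tangents.
    c0, c3 = p_foot_prep, p_foot_td
    c1, c2 = add(c0, v0), add(c3, v1)

    def bezier(t):
        u = 1.0 - t
        return add(add(scale(u * u * u, c0), scale(3 * u * u * t, c1)),
                   add(scale(3 * u * t * t, c2), scale(t * t * t, c3)))

    pts = [bezier(i / n) for i in range(n + 1)]
    # Highest sampled point becomes the transit key-pose foot position;
    # t_sw is its arc-length ratio, used as the Slerp parameter.
    seg = [math.dist(pts[i], pts[i + 1]) for i in range(n)]
    k = max(range(n + 1), key=lambda i: pts[i][1])
    t_sw = sum(seg[:k]) / sum(seg)
    return pts[k], t_sw
```

With a symmetric step of unit length and unit hip height, the curve lifts the foot between the two contacts and yields a parameter strictly inside (0, 1), as the Slerp usage requires.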

Cite this article

Kim, Y., Lee, SH. Keyframe-based multi-contact motion synthesis. Vis Comput 37, 1949–1963 (2021). https://doi.org/10.1007/s00371-020-01956-9
