AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing

  • Conference paper
  • Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Today’s Mixed Reality head-mounted displays track the user’s head pose in world space as well as the user’s hands for interaction in both Augmented Reality and Virtual Reality scenarios. While this is adequate to support user input, it unfortunately limits users’ virtual representations to just their upper bodies. Current systems thus resort to floating avatars, whose limitation is particularly evident in collaborative settings. To estimate full-body poses from the sparse input sources, prior work has incorporated additional trackers and sensors at the pelvis or lower body, which increases setup complexity and limits practical application in mobile settings. In this paper, we present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user’s head and hands. Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations to guide pose estimation. To obtain accurate full-body motions that resemble motion capture animations, we refine the arm joints’ positions using an optimization routine with inverse kinematics to match the original tracking input. In our evaluation, AvatarPoser achieved new state-of-the-art results on large motion capture datasets (AMASS). At the same time, our method’s inference speed supports real-time operation, providing a practical interface for holistic avatar control and representation in Metaverse applications.
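
The pipeline the abstract outlines lends itself to a compact sketch: a Transformer encoder consumes a short window of head- and hand-tracking signals and predicts local joint rotations, with the global root orientation decoupled into a separate output head. The code below is a minimal illustration under assumed dimensions and names (54 input features per frame, 22 SMPL body joints, the 6D rotation representation), not the authors' released implementation; it also omits the inverse-kinematics refinement of the arm joints.

```python
import torch
import torch.nn as nn


class SparseToFullPose(nn.Module):
    """Maps a window of sparse tracking signals to full-body joint rotations.

    Assumed input layout per frame (54 features): for each of the three
    tracked points (head, left hand, right hand), a 6D rotation, a 6D
    rotational velocity, a 3D position, and a 3D positional velocity.
    Outputs 6D local rotations for 22 SMPL body joints plus a separate
    global (root) orientation, mirroring the decoupling of global motion
    from local joint rotations described in the abstract.
    """

    def __init__(self, in_dim=54, d_model=256, nhead=8, num_layers=4,
                 num_joints=22):
        super().__init__()
        self.num_joints = num_joints
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.local_head = nn.Linear(d_model, num_joints * 6)  # local joint rotations
        self.global_head = nn.Linear(d_model, 6)              # global root orientation

    def forward(self, x):
        # x: (batch, window, in_dim) of past tracking frames
        h = self.encoder(self.embed(x))
        h_last = h[:, -1]  # predict the pose of the most recent frame
        local = self.local_head(h_last).view(-1, self.num_joints, 6)
        root = self.global_head(h_last)
        return local, root


# Smoke test on a random 40-frame window of head/hand signals.
model = SparseToFullPose()
local_rots, root_rot = model(torch.randn(2, 40, 54))
print(local_rots.shape, root_rot.shape)  # (2, 22, 6) and (2, 6)
```

In a full system, the predicted rotations would drive an SMPL body model to obtain joint positions, after which the arm chains would be refined with inverse kinematics to match the tracked hand poses, as the abstract describes.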

Acknowledgments

We thank Christian Knieling for his early explorations of learning-based methods for pose estimation with us at ETH Zürich. We thank Zhi Li, Xianghui Xie, and Dengxin Dai from the Max Planck Institute for Informatics for their helpful discussions. We also thank Olga Sorkine-Hornung and Alexander Sorkine-Hornung for early discussions.

Author information

Corresponding author

Correspondence to Christian Holz.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Jiang, J. et al. (2022). AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13665. Springer, Cham. https://doi.org/10.1007/978-3-031-20065-6_26

  • DOI: https://doi.org/10.1007/978-3-031-20065-6_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20064-9

  • Online ISBN: 978-3-031-20065-6

  • eBook Packages: Computer Science, Computer Science (R0)
