Human Motion Reconstruction

Abstract

This chapter presents a set of techniques for reconstructing and understanding human motions measured with current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, aided by models based on knowledge from anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics; the two fields share the goal of converting continuous, noisy signals into discrete symbols, so it is natural to apply similar techniques. Finally, we introduce application examples of human motion data and models, ranging from simulated human control to humanoid robot motion synthesis.
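The conversion of a continuous, noisy motion signal into discrete symbols mentioned above can be illustrated with a hidden Markov model. The sketch below is not the chapter's method: the two states, their observation means, and the transition probabilities are made-up toy values, and the single scalar observation stands in for a full-body feature vector. It decodes a 1-D "joint velocity" trace into the most likely symbol sequence with the Viterbi algorithm.

```python
import math

# Hypothetical two-symbol model; all numbers are illustrative only.
STATES = ["rest", "move"]            # discrete motion symbols
MEANS = {"rest": 0.0, "move": 1.0}   # expected joint velocity per state
SIGMA = 0.3                          # shared Gaussian observation noise
TRANS = {("rest", "rest"): 0.9, ("rest", "move"): 0.1,
         ("move", "rest"): 0.1, ("move", "move"): 0.9}

def log_gauss(x, mu, sigma):
    """Log density of a 1-D Gaussian observation model."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def viterbi(signal):
    """Most likely state sequence for a 1-D observation sequence."""
    # Initialize with uniform state priors.
    score = {s: math.log(0.5) + log_gauss(signal[0], MEANS[s], SIGMA)
             for s in STATES}
    back = []
    for x in signal[1:]:
        new_score, ptr = {}, {}
        for s in STATES:
            # Best predecessor state for s at this time step.
            prev = max(STATES, key=lambda p: score[p] + math.log(TRANS[(p, s)]))
            new_score[s] = (score[prev] + math.log(TRANS[(prev, s)])
                            + log_gauss(x, MEANS[s], SIGMA))
            ptr[s] = prev
        back.append(ptr)
        score = new_score
    # Backtrack from the best final state.
    state = max(STATES, key=lambda s: score[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

# A noisy trace: still, then moving, then still again.
trace = [0.02, -0.1, 0.05, 0.9, 1.1, 0.95, 1.05, 0.1, 0.0]
print(viterbi(trace))
```

The transition matrix's self-loop bias (0.9) is what makes the decoding a segmentation rather than a per-frame classification: isolated noisy samples are absorbed into the surrounding segment unless the observation evidence outweighs the cost of switching states.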

Abbreviations

3-D    three-dimensional
6-D    six-dimensional
CAD    computer-aided design
CHMM   coupled hidden Markov model
DMP    dynamic movement primitive
DOF    degree of freedom
EM     expectation maximization
EMG    electromyography
HHMM   hierarchical hidden Markov model
HMM    hidden Markov model
RNN    recurrent neural network
SAI    simulation and active interfaces


Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Disney Research, Pittsburgh, USA
  2. Department of Mechano-Informatics, University of Tokyo, Tokyo, Japan
