# Towards real-time whole-body human dynamics estimation through probabilistic sensor fusion algorithms


## Abstract

Physical human–robot interaction is receiving growing attention from the scientific community. One of the main challenges is to understand the principles governing the mutual behaviour during collaborative interactions between humans. In this context, the knowledge of human whole-body motion and forces plays a pivotal role. Current state-of-the-art methods, however, do not allow for reliable estimation of the human dynamics during physical human–robot interaction. This paper builds upon our former work on human dynamics estimation by proposing a probabilistic framework and an estimation tool for online monitoring of the human dynamics during human–robot collaboration tasks. The soundness of the proposed approach is verified in human–robot collaboration experiments, and the results show that our probabilistic framework is able to estimate the human dynamics, thereby laying the foundation for more complex collaboration scenarios.

## Keywords

Real-time human dynamics estimation, Human–robot physical collaboration, Probabilistic sensor fusion algorithm

## 1 Introduction

In the last two decades, the scientific community has shown great interest in understanding and shaping the interaction mechanisms between humans and robots. Human–robot interaction, in fact, implies communication between two agents in a dyad and, if the interaction is ‘physical’, the communication is represented by the set of mutually exchanged forces. Within this context, quantifying these forces becomes of pivotal importance for understanding the interaction mechanisms. At the current scientific stage, classical robots are built to act for humans, but in order to adapt their functionality to the current technological demand, the new generation of robots is required to collaborate with humans. This implies that robots will be endowed with the capability to control physical collaboration through intentional interaction with humans. Future robots have to be aware of the human partner's kinematics and dynamics (contact and exchanged forces, internal forces, joint torques). While several well-established technologies nowadays make the real-time estimation of human motion possible (e.g., marker-based motion capture such as Vicon, or marker-less systems such as Xsens and Microsoft Kinect), the estimation of the forces exchanged with the external environment is still an outstanding problem. Most studies on physical human–robot interaction (pHRI) draw on the intrinsic mutual adaptation that automatically occurs when two humans cooperate to accomplish a common task. Building on the pioneering study of the minimum-jerk model for human manipulation tasks (Flash and Hogan 1985), Maeda et al. (2001) used jerk minimization as a suitable approximation for estimating the human partner's motions during a human–robot cooperative manipulation. There, the authors attempt to incorporate human characteristics into the control strategy of the robot.
The weakness of this type of approach lies in the pre-determination of the task and in the role that the robot has to play in the task execution. Furthermore, the reliability of the minimum-jerk model decreases considerably if the human partner applies non-scheduled trajectory changes during the task (Miossec and Kheddar 2009).

Another suitable route for pHRI is the imitation learning approach, justified by a wide variety of computational algorithms that have been recently developed [see an exhaustive overview in Schaal et al. (2003)]. This approach leads to the *teacher–student* concept and consists in (*i*) collecting movements of a human agent (i.e., the teacher in the imitation process), gathered by motion capture techniques and clustered in motion databases (Guerra-Filho and Biswas 2011; Kuehne et al. 2011; Mandery et al. 2015; Wojtusch and Stryk 2015); (*ii*) classifying motions as a set of primitive actions (i.e., motion primitives) for model abstraction (Kulic et al. 2008); (*iii*) mapping human models onto a robot platform, i.e., the student in the imitation process. The retargeting implies the development of robot control algorithms capable of learning motion models from these primitives (Amor et al. 2014; Terlemez et al. 2014; Mandery et al. 2016) in order to emulate human-like movements. In general, although task pre-determination no longer represents a drawback in the imitation approach, a relevant issue (termed the correspondence problem) concerns the comparability of the models (teacher vs. student). Human motion primitives are exploitable only if their knowledge can be properly mapped onto a compatible robot kinematic/dynamic model. This inevitably narrows down the potential student-role candidates to anthropomorphic robots and requires an accurate analysis of the assumptions underlying the imitation mechanism.

The paper is built on the theoretical framework described in our previous work (Latella et al. 2016) from which it inherits both the notation and formulation. This paper represents the real-time evolution of Latella et al. (2016) and a first attempt at advancing the current state of the art in pHRI by designing an estimation tool for monitoring the dynamics of the human while physically interacting with a robot, in real-time.

The paper is structured as follows. Section 2 introduces the theoretical framework on which the paper is based. Section 3 presents the modelling of the human body as an articulated multi-body system. In Sect. 4 the adopted Gaussian probabilistic domain for the sensor fusion methodology is briefly recalled. Section 5 outlines the experimental setup of the framework, followed by the offline algorithm validation in Sect. 6. In Sect. 7 the methodology for the real-time estimation is defined. Conclusions, current limitations and several considerations on the pivotal role of further control and estimation developments are discussed in Sect. 8.

## 2 Background

The coupled human–robot system is described by three sets of equations:

(*i*) the motion of the human, \(\varvec{\mathrm {M}}(\varvec{q})\dot{\varvec{\nu }} + \varvec{\mathrm {C}}(\varvec{q}, \varvec{\nu })\varvec{\nu } + \varvec{\mathrm {G}}(\varvec{q}) = \begin{bmatrix} \varvec{0} \\ \varvec{\tau } \end{bmatrix} + \varvec{\mathrm {J}}^{\top }(\varvec{q})\,\varvec{\mathrm {f}}\);

(*ii*) the motion of the robot, \(\mathbb {M}(\bar{\varvec{q}})\dot{\bar{\varvec{\nu }}} + \mathbb {C}(\bar{\varvec{q}}, \bar{\varvec{\nu }})\bar{\varvec{\nu }} + \mathbb {G}(\bar{\varvec{q}}) = \begin{bmatrix} \varvec{0} \\ \bar{\varvec{\tau }} \end{bmatrix} - \mathbb {J}^{\top }(\bar{\varvec{q}})\,\varvec{\mathrm {f}}\);

(*iii*) the linking equations characterizing the contacts between the systems, \(\varvec{\mathrm {J}}(\varvec{q})\varvec{\nu } + \mathbb {J}(\bar{\varvec{q}})\bar{\varvec{\nu }} = \varvec{0}\).

Equations (*i*) and (*ii*) are floating-base representations of the dynamics of the human and robot models, respectively. In general, a floating-base system is defined by its configuration \({\varvec{q}} = \left( {^{\mathcal I}}\varvec{H}_{\mathcal B},\varvec{q}_j\right) \in SE(3)\times \mathbb R^{n}\) and its velocity \({\varvec{\nu }} = \left( {^{\mathcal I}}\varvec{v}_{\mathcal B},\dot{\varvec{q}}_j\right) \in \mathbb R^{n+6}\), where *n* is the (internal) number of Degrees of Freedom (DoFs) of the system, \(\mathcal I\) is an inertial frame and \(\mathcal B\) the base frame. \({^{\mathcal I}}\varvec{H}_{\mathcal B}\) is the homogeneous transformation from \(\mathcal B\) to \(\mathcal I\) and \({^{\mathcal I}}\varvec{v}_{\mathcal B}\) is the base velocity with respect to (w.r.t.) \(\mathcal I\). The configuration and the velocity of the internal DoFs are denoted by \(\varvec{q}_j\) and \(\dot{\varvec{q}}_j\), respectively. The matrices \(\varvec{\mathrm {M}}\), \(\varvec{\mathrm {C}}\), \(\varvec{\mathrm {G}}\) and \(\mathbb {M}\), \(\mathbb {C}\), \(\mathbb {G}\) denote the mass matrix, the Coriolis matrix and the gravity bias term for the human and the robot systems, respectively. The forces exchanged between the two systems are denoted by the vector \(\varvec{\mathrm {f}} \in \mathbb R^{6k}\), where *k* is the number of wrenches^{1} exchanged during the interaction.^{2} It is worth remarking that, while all the other variables differ between the two systems, the exchanged wrenches \(\varvec{\mathrm {f}}\) are mandatorily the same quantities. Matrices \(\varvec{\mathrm {J}}(\varvec{q})\) and \(\mathbb {J}(\bar{\varvec{q}})\) are the proper Jacobians for the human and the robot, respectively. Equation (*iii*) follows from the assumption of rigid contacts between the two systems. Since in this paper the human performs the experiments with the feet fixed on the force plates, we consider the human system to be fixed base, i.e., \({^{\mathcal I}}\varvec{H}_{\mathcal B} = const\) (known a priori) and \({^{\mathcal I}}\varvec{v}_{\mathcal B}=\varvec{0}\). This choice implies \(\varvec{q} = \varvec{q}_j\) and remarkably reduces the framework complexity.

## 3 Human body modelling

The human body is modelled as an articulated multi-body system composed of \(N_B\) rigid links coupled by joints, with *n* internal DoFs. Note that \(n = \sum _{i = 1}^{N_B} n_i\) is the total number of DoFs of the system. The generic *i*th link and its parent are coupled by the *i*th joint, following the topological Denavit–Hartenberg convention for joint numbering (Denavit and Hartenberg 1955). The origin of each link is located at the parent joint origin (i.e., the joint that connects the link to its parent). We model the links with simple geometric shapes (parallelepiped, cylinder, sphere) whose dimensions are estimated from the readings coming from the motion capture acquisition (Fig. 2).

The dynamic properties, such as the center of mass (CoM) and the inertia tensor of each link, are not embedded in the Xsens output data, since they are usually computed in a post-processing phase. Since our aim is to move towards a real-time estimation of the human dynamic variables, the knowledge of the dynamic properties during the acquisition phase is mandatory (Drillis et al. 1964). Since it is impractical to retrieve these quantities in vivo for humans, we relied on the available anthropometric data concerning the relation between the total body mass and the mass of each body segment (Winter 2009; Herman 2007), under the assumption of geometric approximations and homogeneous density for the rigid bodies (Hanavan 1964; Yeadon 1990).
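The anthropometric scaling above can be sketched as follows. A minimal Python example, where the ratios are an illustrative subset of the segment-to-total-mass fractions tabulated in Winter (2009) (per single segment; bilateral segments would be counted twice):

```python
# Illustrative sketch: scaling link masses from the subject's total body
# mass with anthropometric ratios (subset of values from Winter 2009).
SEGMENT_MASS_RATIOS = {
    "head_neck": 0.081,
    "trunk":     0.497,
    "upper_arm": 0.028,
    "forearm":   0.016,
    "hand":      0.006,
    "thigh":     0.100,
    "shank":     0.0465,
    "foot":      0.0145,
}

def segment_masses(total_mass_kg):
    # Each link mass is a fixed fraction of the total body mass.
    return {seg: r * total_mass_kg for seg, r in SEGMENT_MASS_RATIOS.items()}

masses = segment_masses(61.14)  # mean subject mass from Sect. 5
```

The same table-lookup idea extends to segment lengths, CoM offsets and radii of gyration, from which the link inertia tensors follow under the homogeneous-density assumption.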

## 4 Probabilistic sensor fusion algorithm

In this section we briefly recall the probabilistic method for estimating dynamic variables of an articulated mechanical system by exploiting the so-called sensor fusion information, already presented in our previous work [the reader should refer to Latella et al. (2016) for a more thorough description]. The following description is applied within the context of the fixed-base representation of equations in Sect. 2.

The vector of dynamic variables \(\varvec{d}\) collects, for the generic *i*th link, the *i*th body spatial acceleration, the net wrench \(\varvec{f_i}^B\), the internal wrench \(\varvec{f}_i\) exchanged from the parent link to the *i*th link through the *i*th joint, and \( \tau _i ,~\ddot{q}_{j_i} \in \mathbb R^{n_i}\), the torque and the acceleration at the *i*th joint, respectively. The system can interact with the surrounding environment, and the result of this interaction is reflected in the presence of the external wrenches \(\varvec{f_i}^x\) acting on the *i*th link.

The dynamics of the articulated system is described through the Newton–Euler equations^{3} (Featherstone 2008), whose matrix form yields the following linear system of equations in the variable \(\varvec{d}\):

\(\varvec{D}(\varvec{q}, \dot{\varvec{q}})\,\varvec{d} + \varvec{b}_D(\varvec{q}, \dot{\varvec{q}}) = \varvec{0}\),

with the matrix \(\varvec{D}\)^{4} \(\in \mathbb R^{(18 N_B+n) \times d}\) and the bias vector \(\varvec{b}_D \in \mathbb R^{18 N_B+n}\). We now consider the presence of \(N_S\) measurements coming from different sensors (e.g., accelerometers, force/torque sensors) and we denote with \(\varvec{y} \in \mathbb R^{y}\) the vector containing all the measurements. The dynamic variables and the values measured by the sensors can be coupled with \(\varvec{d}\) by the following set of equations:

\(\varvec{Y}(\varvec{q}, \dot{\varvec{q}})\,\varvec{d} + \varvec{b}_Y = \varvec{y}\).

The estimate of \(\varvec{d}\) given the measurements \(\varvec{y}\) is obtained with a *Maximum-A-Posteriori* (MAP) estimator^{5} such that \(\hat{\varvec{d}} = \arg \max _{\varvec{d}}\, p(\varvec{d} \,|\, \varvec{y})\).
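For a linear-Gaussian model, the MAP estimate admits a closed form as a weighted least-squares solution. A minimal numpy sketch follows; the names mirror the paper's \(\varvec{D}\), \(\varvec{b}_D\) and \(\varvec{y}\), while the measurement matrix name `Y` and the covariance arguments are assumptions of this sketch, not the authors' exact implementation:

```python
import numpy as np

def map_estimate(D, b_D, Sigma_D, Y, y, Sigma_y):
    """Gaussian MAP estimate of d for the linear model
         model constraint:  D d + b_D ~ N(0, Sigma_D)
         measurements:      y = Y d + noise,  noise ~ N(0, Sigma_y)."""
    iSD = np.linalg.inv(Sigma_D)
    iSy = np.linalg.inv(Sigma_y)
    info = D.T @ iSD @ D + Y.T @ iSy @ Y      # posterior information matrix
    rhs = -(D.T @ iSD @ b_D) + Y.T @ iSy @ y
    d_map = np.linalg.solve(info, rhs)        # posterior mean
    Sigma_d = np.linalg.inv(info)             # posterior covariance
    return d_map, Sigma_d
```

Note how the model and the sensors are weighted by their inverse covariances: lowering the confidence in the model (larger `Sigma_D`) shifts the estimate towards the measurements, which is exactly the mechanism exploited in the robustness test of Sect. 6.1.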

## 5 Experimental setup

This section describes the experimental setup for the algorithm analysis. The MAP framework requires: (1) the computation of the human kinematics (made possible through whole-body distributed sensor measurements), (2) the estimation of the human dynamics (i.e., the solution of (5b)).

To this purpose, the experimental setup encompassed the following sensor modules: (*i*) a wearable suit for motion tracking, (*ii*) two standard force platforms for tracking the ground reaction wrenches, (*iii*) the force/torque sensors of the arms of an iCub robot. Ten healthy adult subjects (7 female, 3 male; height \(166.6\pm 4.5\,\hbox {cm}\), mass \(61.14\pm 5.76\,\hbox {kg}\)) were recruited for the experimental session. Data were collected at Istituto Italiano di Tecnologia (IIT), Genoa, Italy. Each subject provided written informed consent before starting the experiment. Kinematics data were acquired by using a full-body wearable lycra suit provided by Xsens Technologies. The wearable suit is composed of 17 wired trackers (i.e., inertial measurement units (IMUs), each including an accelerometer, a gyroscope and a magnetometer). The suit has signal transmitters that send measurements to the acquisition unit through a wireless receiver, which collects data at a frequency of \(240\,\hbox {Hz}\). The human subject performed the required task standing with the feet on two standard AMTI OR6 force platforms mounted on the ground, while interacting with the robot. Each platform acquired a wrench sample at a frequency of \(1\,\hbox {kHz}\) by using AMTI acquisition units.
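Since the force plates (1 kHz) and the suit (240 Hz) run at different rates, the streams must be brought onto a common time base before fusion. The paper does not detail its synchronization scheme; a minimal sketch by linear interpolation, under that assumption:

```python
import numpy as np

def resample(t_src, x_src, t_dst):
    # Linearly interpolate each of the 6 wrench channels onto the
    # destination time base (hypothetical sync step; the actual scheme
    # used in the experiments is not described in the paper).
    return np.stack([np.interp(t_dst, t_src, ch) for ch in x_src.T], axis=1)

t_fp  = np.linspace(0.0, 1.0, 1000, endpoint=False)  # force plates, 1 kHz
t_imu = np.linspace(0.0, 1.0, 240, endpoint=False)   # Xsens suit, 240 Hz
wrench_1k = np.zeros((t_fp.size, 6))                 # placeholder samples
wrench_240 = resample(t_fp, wrench_1k, t_imu)
```

Downsampling the faster stream (rather than upsampling the slower one) avoids inventing data between suit samples.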

The task performed by each subject was a bowing task (*BT*).

## 6 Algorithm validation

For the sake of clarity, the validation results are shown only for a subset of^{6} links/joints of the model. Figure 6a shows the comparison between the linear acceleration measured by the IMUs (in red) and the related MAP estimation (in blue) for the right foot, upper leg and hand, respectively. These results show that the MAP algorithm is a reliable tool for estimating the human link accelerations. We also investigated the reliability for the external forces (Fig. 6b) by considering only the links (right foot, right hand and left hand) where external forces are applied. Also in this case we obtained a suitable estimation of the quantities measured by the force/torque sensors. The same analysis was carried out for the joint accelerations (Fig. 6c). Relying on the previous results, we exploited the MAP approach for estimating the joint torques in humans. Figure 6d shows the estimation of the joint torque during a *BT* for the right ankle, hip and shoulder, respectively.

### 6.1 Robustness test

To test the robustness of the estimation, each subject performed the *BT* in two different configurations, i.e., with and without an additional mass (*W*) of \(6\,\hbox {kg}\) roughly positioned in correspondence of the torso CoM. The MAP computation was performed by considering the following cases as algorithm inputs (see Table 1):

Table 1 Cases for the MAP validation

|  | CASE A | CASE B |
| --- | --- | --- |
| Model | Without *W* | Without *W* |
| Measurements | With *W* | Without *W* |
| \(\varvec{\varSigma }\) model | \(10^{-1}\) | \(10^{-4}\) |
| \(\varvec{\tau }\) estimation | \(\varvec{\tau }_{(model + 6\,\hbox {kg})}\) | \(\varvec{\tau }_{model}\) |

Both cases are performed with the model of the subject without *W*. In order to reflect the lower reliability of the model used in *CASE A*, the model is assigned a variance^{7} equal to \(10^{-1}\) (as opposed to the \(10^{-4}\) value assigned for *CASE B*).

The two estimations are then compared for the subject with *W* positioned on the torso.^{8} Given (6), it is possible to retrieve the error \(\varvec{\varepsilon }_{\varvec{\tau }}\) on the \(\varvec{\tau }\) estimation for the subject with *W*, due to the *CASE A* conditions, as \(\varvec{\varepsilon }_{\varvec{\tau }} = \varvec{\tau }_{(model + 6\,\hbox {kg})} - \varvec{\tau }_{model}\).
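The order of magnitude of this error can be checked against the simple 2-DoF system of footnote 8, where the position of *W* and the hip joint angle are known. A minimal sketch, under the assumption of a sagittal-plane model with the lever arm given by \(r_W \sin q_{hip}\) (the geometry is an illustrative assumption, not the paper's exact model):

```python
import numpy as np

def hip_torque_error(m_W, r_W, q_hip, g=9.81):
    # Gravity torque at the hip produced by the unmodeled mass W placed
    # at distance r_W from the joint; the sin(q_hip) lever arm assumes a
    # sagittal 2-DoF model with q_hip measured from the vertical.
    return m_W * g * r_W * np.sin(q_hip)

err_upright = hip_torque_error(6.0, 0.25, 0.0)        # standing upright
err_bowed   = hip_torque_error(6.0, 0.25, np.pi / 2)  # fully bowed
```

As expected, the unmodeled mass produces no hip-torque error in the upright posture and a maximal error when the torso is horizontal, which is consistent with an error growing during the bowing phase.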

### 6.2 Incremental sensor fusion analysis

Equation (8a) is built for the force plates and exploits the transformation matrix^{9} \(\varvec{X}\) between the human foot reference frame (\({{\small {hFOOT }}}\)) and the force plate frame (\({{\small {FP }}}\)). Equation (8b) is built for each IMU and exploits the transformation between each human link frame \({{\small {L }}}\) on which the IMU is attached and that particular \({{\small {IMU }}}\) reference frame. Equation (8c) is used for the iCub force/torque sensors, for which the transformation from each human hand frame (\({{\small {hHAND }}}\)) to the robot sensor frame (\({{\small {iCubFT }}}\)) is necessary. A detailed view of the frames is given in Fig. 4. In particular, we build (3) for three different cases:
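Each of these equations hinges on expressing a measured wrench in a different frame. A minimal sketch of such a change of frame, given a rotation and an origin offset (the `[force; moment]` ordering is a convention chosen for this sketch, see footnote 1):

```python
import numpy as np

def transform_wrench(R, p, w):
    # Express a wrench w = [force; moment], given in frame B, in frame A,
    # where R is the rotation A_R_B and p the origin of B expressed in A.
    f = R @ w[:3]
    m = R @ w[3:] + np.cross(p, f)  # moment picks up the lever-arm term
    return np.concatenate([f, m])
```

For instance, a pure force measured at a sensor frame produces an additional moment once expressed at a human link frame whose origin is offset from the sensor.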

The MAP algorithm is performed thrice by including (as set of sensors) Eqs. (9a), (9b) and (9c), respectively. This analysis aims to show that, by progressively adding sensor data, the variance associated with the estimated dynamic variables decreases at each MAP computation, making the estimation more reliable. Among the estimated variables in \(\varvec{d}\), we consider again the torque \(\varvec{\tau }\) (e.g., both the right and left ankle and hip) and the variance along the axis of major relevance (i.e., the *y* axis). By passing progressively from *CASE 1* to *CASE 3*, the variance associated with the mean torques decreases.^{10} Figure 8 shows the decreasing behaviour of the mean variance of the torque at the hips and the ankles computed among the ten subjects. The variance values at the ankles do not vary significantly among the three sensor configurations, since the ankle torque estimation depends mostly on the contribution of the force plates, which are included in all the cases of the computation. Conversely, a significantly decreasing behaviour is evident in the values associated with the hips. In this case the contribution of all three sensor modalities becomes important, since the torque estimation at the hips is affected by the complete set of sensors (i.e., the contribution of the force plates weighs less, while that of the IMUs and iCub sensors weighs more).
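The variance-reduction effect can be illustrated with a scalar toy example: for independent Gaussian observations of the same quantity, precisions (inverse variances) add, so each additional sensor can only shrink the posterior variance. The numbers below are illustrative, not the paper's actual sensor covariances:

```python
def fused_variance(sensor_variances, prior_var=1.0):
    # Posterior variance of a scalar quantity observed by several
    # independent Gaussian sensors: precisions add up.
    precision = 1.0 / prior_var + sum(1.0 / v for v in sensor_variances)
    return 1.0 / precision

case1 = fused_variance([0.1])              # force plates only
case2 = fused_variance([0.1, 0.2])         # + IMUs
case3 = fused_variance([0.1, 0.2, 0.3])    # + iCub F/T sensors
```

This also explains the ankle result: when one sensor (the force plates) is far more informative than the others for a given joint, adding the remaining sensors changes the posterior variance only marginally.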

## 7 Towards the real-time estimation

The real-time estimation relies on three modules:

- 1. The *human-state-provider* module reads the link information from a human model (built in a pre-acquisition phase) and returns the human state \((\varvec{q}_j, \dot{\varvec{q}}_j)\). Note that the results will depend on the complexity of the model (e.g., a few-DoF model may lead to worse approximations), but the formulation does not depend on the model complexity.
- 2. The *human-forces-provider* module reads forces coming from different YARP-based sources and transforms them into quantities expressed in human reference frames.
- 3. The *human-dynamics-estimator* module estimates the dynamics vector \(\varvec{d}\) every 10 ms, given the output data of the previous two modules.
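The chaining of the three modules can be sketched as a periodic loop; the module names follow the paper, while the loop body and the function signatures are assumptions of this sketch, not the actual YARP-based implementation:

```python
import time

def estimation_loop(state_provider, forces_provider, estimator, period_s=0.01):
    # Minimal sketch of the 10 ms estimation pipeline: read the human
    # state and forces, run the MAP estimator, then wait out the period.
    while True:
        t0 = time.monotonic()
        q_j, dq_j = state_provider()          # human-state-provider
        forces = forces_provider()            # human-forces-provider
        yield estimator(q_j, dq_j, forces)    # human-dynamics-estimator
        time.sleep(max(0.0, period_s - (time.monotonic() - t0)))
```

Writing the loop as a generator keeps the timing logic separate from the consumers of \(\varvec{d}\) (e.g., the visualizer), which simply iterate over the produced estimates.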

The *human-viz* module is composed of different sub-modules that are in charge of reading information from the Xsens system, the human state and the estimated vector \(\varvec{d}\), and of publishing it in the form of ROS-topic messages to be visualized in the RViz toolkit (Kam et al. 2015). The advantage of adopting this tool lies in its versatility and its agnosticism to the type of data structure or algorithm. Figure 10 shows two different interaction tasks with their real-time visualizations, for a bowing task (*top* sequence) and a squat task (*middle* sequence), respectively. In addition to providing visual feedback on the estimation, the visualizer conveys important information about the magnitude of the human joint ‘effort’. The joint torques (estimated via MAP) are represented with balls whose colour can be read as an indicator of effort: on a gray scale, a light ball means a high effort, a dark ball a lower effort.
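The effort colouring can be sketched as a simple normalization; the scaling bound `tau_max` is a hypothetical parameter introduced here for illustration, not a value from the paper:

```python
def effort_to_gray(tau, tau_max):
    # Map the magnitude of a joint torque to a gray level in [0, 1]:
    # 0 = dark (low effort), 1 = light (high effort), mirroring the
    # visualizer's convention described above.
    level = min(abs(tau) / tau_max, 1.0)
    return (level, level, level)  # RGB gray triplet for the ball colour
```

Clamping at 1.0 keeps the colour well defined even when an estimated torque transiently exceeds the chosen bound.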

## 8 Conclusions and future works

This work represents a first attempt *to get towards* the real-time estimation: starting from an offline validation of the algorithm (performed in Matlab, Sect. 6), we designed a (C++ based) tool for the online estimation of the human joint torques (Sect. 7). Within the real-time framework, any kind of pHRI can be visualized as long as it obeys the initial assumptions (i.e., fixed-base and rigid-contact conditions). Even if a mathematical formalism already exists for the floating-base representation (Sect. 2), the existing software tool does not support it yet. The upgrade to a floating-base representation is mandatory for the implementation of the algorithm in more complex experimental setups where both humans and robots move while interacting.

In this work, we applied the proposed approach to human models composed of 1-DoF revolute joints, by using the classical formalism widespread in robotics (Featherstone 2008). In particular, we combined joints of this type to obtain series of joints with a high number of DoFs, which however are only a rough approximation of the complexity exhibited by real-life biomechanical joints. Although we chose this joint model for an initial exploration, the proposed algorithm is not limited to this particular choice. In particular, the properties of any joint (with an arbitrary number of DoFs) can be encapsulated in an interface where the relative position, velocity and acceleration of the two bodies connected by it are described by arbitrary functions of the joint coordinate variables and their derivatives, which can then be directly inserted in (2) and (3). A possible implementation to generalize our method to arbitrarily complex musculoskeletal models is presented in Seth et al. (2010), where it is used for biomechanical simulations.
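The joint interface described above can be sketched as follows. The class names and method signatures are hypothetical, chosen only to illustrate the idea that a joint is fully described by the relative pose and velocity subspace of the two connected bodies as functions of its coordinates:

```python
from abc import ABC, abstractmethod
import numpy as np

class Joint(ABC):
    # Hypothetical interface: a joint exposes the relative pose of the
    # connected bodies and the subspace of relative velocities, both as
    # arbitrary functions of the joint coordinates q.
    @abstractmethod
    def transform(self, q):
        """4x4 homogeneous transform parent_H_child(q)."""
    @abstractmethod
    def motion_subspace(self, q):
        """6 x n_dofs matrix S(q); relative velocity is S(q) @ dq."""

class RevoluteZ(Joint):
    # 1-DoF revolute joint about the z axis, as used in this work.
    def transform(self, q):
        c, s = np.cos(q[0]), np.sin(q[0])
        H = np.eye(4)
        H[:2, :2] = np.array([[c, -s], [s, c]])
        return H
    def motion_subspace(self, q):
        S = np.zeros((6, 1))
        S[5, 0] = 1.0  # angular velocity about z (ordering [v; w])
        return S
```

A multi-DoF biomechanical joint would simply return a wider `S(q)` and a more elaborate `transform(q)`, with no change to the rest of the pipeline.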

Another important investigation will concern the estimation of the human inertial parameters (mass, CoM, inertias). The use of anthropometric tables is currently the most widespread tool for estimating such values. Even though they allow one to scale the model to the subject, this can be a rough approximation for non-standard population samples (children, the elderly, obese subjects, individuals with prostheses), and this is pushing the scientific community towards the development of alternative ways to estimate these parameters directly from data (Venture et al. 2009; Robert et al. 2017).

The long-term objective of the entire work is to extend the human dynamics estimation framework to a new framework that encompasses an *active* collaboration with a robot. In the experimental setup of Sect. 5, the robot is considered only as a *passive* force measurer, i.e., as a tool for estimating the external forces acting on the human hands. Since the human dynamics is of pivotal importance for a control design that considers the *human in the loop*, forthcoming developments will provide the robot with the human force feedback online. This information could be used as a tool for *reactive* human–robot collaboration (implying a reactive robot control) and, in a long-term perspective, for *predictive* collaboration. A first investigation in this direction has been done in Romano et al. (2018). The paper attempts to answer the question *“How can we predict human intentions so as to synthesize robot controllers that are aware of and can react to the human presence?”* by considering an interaction scenario between a human and a human-aware iCub robot. In the paper, the momentum-based balancing controller of the robot has been modified [see Sect. 2 of Romano et al. (2018)] to take into account and exploit the human forces. A stand-up task, in which the robot rises from a rigid support, has been performed with (bottom sequence of Fig. 10) and without the human help, showing that the robot needs to provide less torque when helped by the human, since it is capable of exploiting the human assistance.

## Footnotes

- 1. With an abuse of notation, we define as wrench a quantity that is not the dual of a twist, but a vector \(\in \mathbb R^{6}\) containing both the forces and the related moments.
- 2. For the sake of simplicity, we omit the forces the two systems exchange with the external environment (i.e., the ground) from the formulation of (i) and (ii). As a straightforward consequence, the linking equations between each system and the external environment are not considered.
- 3. Here we prefer to adopt the Newton–Euler formalism as an equivalent representation of the system dynamics. More details on this choice can be found in Sect. 3.3 of Latella et al. (2016).
- 4. Note on the notation adopted for \(\varvec{D} \in \mathbb R^{(18 N_B+n) \times d}\): \(\varvec{D}\) is a matrix with \((18 N_B+n)\) rows and *d* columns, where *d* is the number of rows of \(\varvec{d}\) in (1a), namely \((24N_B+2n)\). This form will recur throughout the paper.
- 5. The benefits of the MAP estimator choice are explained in Sect. 4 of Latella et al. (2016).
- 6. Our approach is able to estimate \(\varvec{d}\) for every link/joint in the model, but for the sake of simplicity we prefer to show only those results that are the most relevant in the specific-task analysis.
- 7. We refer here to the covariance associated with the model as a diagonal matrix whose diagonal elements are the variance values.
- 8. We consider a simple 2-DoF system [see Latella et al. (2016)] in which the position of *W* and the hip joint angle are known.
- 9. See Featherstone (2008) for the definition of the transformation matrix between two reference frames.
- 10. In order to substantiate the statistical significance of the results, a paired-samples *t* test was performed, first between *CASE 1* and *CASE 2* (2 sensors vs. 3 sensors) and then between *CASE 2* and *CASE 3* (3 sensors vs. all sensors). The torque variances are statistically significant (*p* value \(<0.05\)).

## Notes

### Acknowledgements

This paper was supported by the EU project An.Dy (An.Dy has received funding from the European Union's Horizon 2020 Research and Innovation Programme, No. 731540). This paper was also supported by the FP7 EU project CoDyCo (No. 600716, ICT-2011.2.1 Cognitive Systems and Robotics). The content of this publication is the sole responsibility of the authors. The European Commission or its services cannot be held responsible for any use that may be made of the information it contains.

## Supplementary material

Supplementary material 1 (mov 191079 KB)

## References

- Amor, H. B., Neumann, G., Kamthe, S., Kroemer, O., & Peters, J. (2014). Interaction primitives for human–robot cooperation tasks. In *2014 IEEE International Conference on Robotics and Automation (ICRA)* (pp. 2831–2837).
- Delp, S. L., Anderson, F. C., Arnold, A. S., Loan, P., Habib, A., John, C. T., et al. (2007). OpenSim: Open-source software to create and analyze dynamic simulations of movement. *IEEE Transactions on Bio-medical Engineering*, *54*(11), 1940–1950.
- Denavit, J., & Hartenberg, R. (1955). A kinematic notation for lower-pair mechanisms based on matrices. *Transactions of the ASME. Journal of Applied Mechanics*, *22*, 215–221.
- Drillis, R., Contini, R., & Bluestein, M. (1964). Body segment parameters; a survey of measurement techniques. *Artificial Limbs*, *8*, 44–66.
- Featherstone, R. (2008). *Rigid body dynamics algorithms*. New York: Springer.
- Flash, T., & Hogan, N. (1985). The coordination of arm movements: An experimentally confirmed mathematical model. *The Journal of Neuroscience*, *5*(7), 1688–1703.
- Guerra-Filho, G., & Biswas, A. (2011). The human motion database: A cognitive and parametric sampling of human motion. *Face and Gesture*, *2011*, 103–110.
- Hanavan, E. P. (1964). A mathematical model of the human body. *AMRL-TR. Aerospace Medical Research Laboratories (U.S.)* (pp. 1–149).
- Herman, I. (2007). *Physics of the human body*. Berlin: Springer.
- Kam, H. R., Lee, S.-H., Park, T., & Kim, C.-H. (2015). RViz: A toolkit for real domain data visualization. *Telecommunication Systems*, *60*(2), 337–345.
- Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., & Serre, T. (2011). HMDB: A large video database for human motion recognition. In *2011 International Conference on Computer Vision* (pp. 2556–2563).
- Kulic, D., Lee, D., Ott, C., & Nakamura, Y. (2008). Incremental learning of full body motion primitives for humanoid robots. In *Humanoids 2008—8th IEEE-RAS International Conference on Humanoid Robots* (pp. 326–332).
- Latella, C., Kuppuswamy, N., Romano, F., Traversaro, S., & Nori, F. (2016). Whole-body human inverse dynamics with distributed micro-accelerometers, gyros and force sensing. *Sensors*, *16*(5), 727.
- Maeda, Y., Hara, T., & Arai, T. (2001). Human–robot cooperative manipulation with motion estimation. In *Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems* (Vol. 4, pp. 2240–2245).
- Mandery, C., Terlemez, Ö., Do, M., Vahrenkamp, N., & Asfour, T. (2015). The KIT whole-body human motion database. In *2015 International Conference on Advanced Robotics (ICAR)* (pp. 329–336).
- Mandery, C., Terlemez, Ö., Do, M., Vahrenkamp, N., & Asfour, T. (2016). Unifying representations and large-scale whole-body motion databases for studying human motion. *IEEE Transactions on Robotics*, *32*(4), 796–809.
- Marsden, J. E., & Ratiu, T. (1999). *Introduction to mechanics and symmetry: A basic exposition of classical mechanical systems* (2nd ed.). Berlin: Springer.
- Metta, G., Fitzpatrick, P., & Natale, L. (2006). YARP: Yet another robot platform. *International Journal of Advanced Robotic Systems*, *3*(1), 8.
- Metta, G., Natale, L., Nori, F., Sandini, G., Vernon, D., Fadiga, L., et al. (2010). The iCub humanoid robot: An open-systems platform for research in cognitive development. *Neural Networks*, *23*(8), 1125–1134.
- Miossec, S., & Kheddar, A. (2009). Human motion in cooperative tasks: Moving object case study. In *2008 IEEE International Conference on Robotics and Biomimetics* (pp. 1509–1514).
- Nori, F., Traversaro, S., Eljaik, J., Romano, F., Del Prete, A., & Pucci, D. (2015). iCub whole-body control through force regulation on rigid non-coplanar contacts. *Frontiers in Robotics and AI*, *2*.
- Robert, T., Leborgne, P., Abid, M., Bonnet, V., Venture, G., & Dumas, R. (2017). Whole body segment inertia parameters estimation from movement and ground reaction forces: A feasibility study. *Computer Methods in Biomechanics and Biomedical Engineering*, *20*, 175–176.
- Roetenberg, D., Luinge, H., & Slycke, P. (2009). *Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors*. Enschede: Xsens Motion Technologies BV.
- Romano, F., Nava, G., Azad, M., Čamernik, J., Dafarra, S., Dermy, O., et al. (2018). The CoDyCo project achievements and beyond: Toward human aware whole-body controllers for physical human robot interaction. *IEEE Robotics and Automation Letters*, *3*(1), 516–523.
- Savitzky, A., & Golay, M. (1964). Smoothing and differentiation of data by simplified least squares procedures. *Analytical Chemistry*, *36*(8), 1627–1639.
- Schaal, S., Ijspeert, A., & Billard, A. (2003). Computational approaches to motor learning by imitation. *Philosophical Transactions of the Royal Society B: Biological Sciences*, *358*(1431), 537–547.
- Seth, A., Sherman, M., Eastman, P., & Delp, S. (2010). Minimal formulation of joint motion for biomechanisms. *Nonlinear Dynamics*, *62*(1), 291–303.
- Terlemez, Ö., Ulbrich, S., Mandery, C., Do, M., Vahrenkamp, N., & Asfour, T. (2014). Master Motor Map (MMM)—Framework and toolkit for capturing, representing, and reproducing human motion on humanoid robots. In *2014 IEEE-RAS International Conference on Humanoid Robots* (pp. 894–901).
- Venture, G., Ayusawa, K., & Nakamura, Y. (2009). A numerical method for choosing motions with optimal excitation properties for identification of biped dynamics—An application to human. In *2009 IEEE International Conference on Robotics and Automation* (pp. 1226–1231).
- Winter, D. (2009). *Biomechanics and motor control of human movement* (4th ed.). Hoboken: Wiley.
- Wojtusch, J., & Stryk, O. V. (2015). HuMoD—A versatile and open database for the investigation, modeling and simulation of human motion dynamics on actuation level. In *2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)* (pp. 74–79).
- Yeadon, M. R. (1990). The simulation of aerial movement—II. A mathematical inertia model of the human body. *Journal of Biomechanics*, *23*(1), 67–74.

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.