
Motion encoding with asynchronous trajectories of repetitive teleoperation tasks and its extension to human-agent shared teleoperation


Teleoperating a robot for complex and intricate tasks demands a high mental workload from the human operator. Deploying multiple operators can mitigate this problem, but it is also a costly solution. Learning from Demonstrations can reduce the operator's burden by learning repetitive teleoperation tasks. Yet, demonstrations given via teleoperation tend to be less consistent than those from other modalities of human demonstration. To handle such inconsistent and asynchronous demonstrations effectively, this paper proposes a learning scheme based on Dynamic Movement Primitives. In particular, a new Expectation-Maximization algorithm is proposed that synchronizes and encodes demonstrations with high temporal and spatial variances. Furthermore, we discuss two shared teleoperation architectures in which, instead of multiple human operators, a learned artificial agent and a human operator share authority over a task while teleoperating cooperatively. The agent controls the more mundane and repetitive motion in the task, whereas the human takes charge of the more critical and uncertain motion. The proposed algorithm, together with the two shared teleoperation architectures (human-synchronized and agent-synchronized shared teleoperation), has been tested and validated through simulation and experiments on 3-Degrees-of-Freedom Phantom-to-Phantom teleoperation. Both proposed shared teleoperation architectures showed superior performance compared with human-only teleoperation for a peg-in-hole task.
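To make the encoding concrete, the sketch below rolls out a one-degree-of-freedom discrete Dynamic Movement Primitive of the standard Ijspeert/Schaal form: a critically damped transformation system driven by a phase-dependent forcing term built from Gaussian basis functions. This is a generic illustration of the DMP formulation the paper builds on, not the authors' EM-based encoding; all function names, gain values, and basis parameters here are illustrative choices.

```python
import numpy as np

def dmp_rollout(y0, g, weights, centers, widths,
                tau=1.0, dt=0.001, alpha_z=25.0, beta_z=25.0 / 4, alpha_x=8.0 / 3):
    """Euler-integrate a one-DoF discrete DMP from start y0 to goal g."""
    y, z, x = y0, 0.0, 1.0          # position, scaled velocity, canonical phase
    traj = [y]
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)               # basis activations
        f = x * (g - y0) * psi.dot(weights) / (psi.sum() + 1e-10)  # forcing term
        dz = (alpha_z * (beta_z * (g - y) - z) + f) / tau        # transformation system
        dy = z / tau
        dx = -alpha_x * x / tau                                  # canonical system
        z += dz * dt
        y += dy * dt
        x += dx * dt
        traj.append(y)
    return np.array(traj)

# With zero weights the forcing term vanishes and the DMP converges to the goal.
traj = dmp_rollout(y0=0.0, g=1.0,
                   weights=np.zeros(10),
                   centers=np.linspace(1.0, 0.01, 10),
                   widths=np.full(10, 25.0))
```

In a learning-from-demonstration pipeline, the weights are fitted so the forcing term reproduces the demonstrated trajectory; the paper's contribution is estimating those weights from multiple temporally misaligned demonstrations via EM rather than from a single pre-aligned one.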





Author information



Corresponding author

Correspondence to Affan Pervez.

Additional information


This work is partially supported by Helmholtz Association and the Industrial Strategic Technology Development Program (10069072) funded by the MOTIE.

Electronic supplementary material


Supplementary material 1 (MP4, 104,249 KB)


About this article


Cite this article

Pervez, A., Latifee, H., Ryu, JH. et al. Motion encoding with asynchronous trajectories of repetitive teleoperation tasks and its extension to human-agent shared teleoperation. Auton Robot 43, 2055–2069 (2019).


Keywords

  • Dynamic movement primitives
  • Learning from demonstrations
  • Teleoperation
  • Human-agent shared teleoperation
  • Cooperative teleoperation
  • Human-synchronized
  • Agent-synchronized
  • Haptic feedback