
Human-Agent Shared Teleoperation: A Case Study Utilizing Haptic Feedback

  • Affan Pervez
  • Hiba Latifee
  • Jee-Hwan Ryu
  • Dongheui Lee
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 535)

Abstract

Even though teleoperation has been widely used in many application areas, including nuclear waste handling, underwater manipulation, and outer-space applications, the mental workload required from the human operator remains high. Some delicate and complex tasks even require multiple operators. Learning from Demonstration (LfD) through teleoperation can provide a solution for repetitive tasks, but in many cases a single task combines repetitive and varying motions. This paper introduces a shared teleoperation method between a human and an agent trained by LfD through teleoperation. In the proposed method, the human takes charge of uncertain or critical motions, whereas mundane and repetitive motions are carried out with the assistance of the agent. The proposed method exhibited superior performance compared to human-only teleoperation in a peg-in-hole task.
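The abstract does not specify how authority is shared, but the general idea of blending an operator's command with a command reproduced from demonstrations can be illustrated with a minimal sketch. The snippet below assumes a scalar arbitration weight alpha that shifts authority between the human and an agent attractor standing in for an LfD model (e.g., a DMP), plus a simple spring-like haptic cue rendered from the disagreement between the two commands. The names agent_velocity, blend_commands, and haptic_force are hypothetical and are not the authors' API.

```python
# Minimal sketch of human-agent shared teleoperation (assumptions: a scalar
# arbitration weight blends the human's commanded velocity with the velocity
# of an agent trajectory learned from demonstrations; the haptic cue is a
# guidance force proportional to the human-agent disagreement).
import numpy as np


def agent_velocity(x, x_ref, k=4.0):
    """Velocity of a simple attractor toward a point x_ref on the demonstrated
    trajectory (stands in for the output of an LfD model such as a DMP)."""
    return k * (x_ref - x)


def blend_commands(v_human, v_agent, alpha):
    """Shared command: alpha = 1 gives full human authority (uncertain or
    critical motion), alpha = 0 gives full agent authority (repetitive motion)."""
    return alpha * v_human + (1.0 - alpha) * v_agent


def haptic_force(v_human, v_agent, gain=0.5):
    """Guidance force fed back to the master device, proportional to the
    disagreement between human and agent commands."""
    return gain * (v_agent - v_human)


# Toy rollout of one shared-control step.
x = np.array([0.10, 0.00, 0.05])       # current slave position
x_ref = np.array([0.30, 0.00, 0.00])   # point on the demonstrated trajectory
v_h = np.array([0.05, 0.02, -0.01])    # operator's commanded velocity
alpha = 0.3                             # mostly agent-led (repetitive phase)

v_a = agent_velocity(x, x_ref)
v_cmd = blend_commands(v_h, v_a, alpha)
f_haptic = haptic_force(v_h, v_a)
print("blended command:", v_cmd, "haptic cue:", f_haptic)
```

In such a scheme, raising alpha during uncertain or critical phases hands control back to the operator, while lowering it during repetitive phases lets the learned agent carry the motion; how the weight is scheduled is a design choice of the particular method.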

Keywords

Teleoperation · Human-agent shared teleoperation · Cooperative teleoperation · Dynamic Movement Primitive · Learning from Demonstrations · Haptic feedback


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Affan Pervez (1)
  • Hiba Latifee (2), corresponding author
  • Jee-Hwan Ryu (2)
  • Dongheui Lee (1, 3)

  1. Department of Electrical and Computer Engineering, Technical University of Munich (TUM), Munich, Germany
  2. Department of Mechanical Engineering, Korea University of Technology and Education, Cheonan, South Korea
  3. Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Weßling, Germany
