
A noncontact robot demonstration method with human supervision


Abstract

The widely used kinesthetic demonstration method, in which a human drags the robotic manipulator by hand, cannot provide reliable information for autonomous robot manipulation: in end-constrained manipulation tasks, the force sensor measures the demonstrator's dragging force in addition to the pure contact force. Therefore, a noncontact robot demonstration method with human supervision is proposed to avoid this external influence. The human demonstrator sends motion commands with a mouse and watches the force data displayed on a monitor to protect the robotic manipulator, while simultaneously supervising the position-orientation relationship between the end-effector and the manipulated object. A wrench insertion task is used to illustrate the advantage of the proposed method: a contact model is established from the demonstration data acquired with the proposed method, and an orientation adjustment strategy based on it is verified. The verification experiment demonstrates the effectiveness of the simplified contact model and the corresponding strategy, as well as the advantage of the proposed demonstration method.
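To make the demonstration workflow concrete, the sketch below shows one plausible form of the supervised noncontact loop described above: the demonstrator jogs the end-effector with mouse commands, the wrist force/torque readings are displayed to the demonstrator and logged together with the end-effector pose, and motion is halted when the force approaches a safety limit. All names here (the robot and mouse interfaces, the step size, and the force threshold) are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the supervised noncontact demonstration loop
    # (mouse jogging, force monitoring, pose/wrench logging). Interface names
    # such as robot.move_relative or mouse.read_increment are assumptions.
    import time

    FORCE_LIMIT_N = 30.0   # assumed safety threshold shown to the demonstrator
    STEP_M = 0.001         # assumed Cartesian step per mouse increment (1 mm)

    def demonstrate(robot, mouse, log_path="demo_log.csv"):
        """Log (time, pose, wrench) samples while the human jogs the robot."""
        with open(log_path, "w") as log:
            log.write("t,x,y,z,rx,ry,rz,fx,fy,fz,tx,ty,tz\n")
            while not mouse.stop_requested():
                dx, dy, dz = mouse.read_increment()          # commanded motion
                robot.move_relative(dx * STEP_M, dy * STEP_M, dz * STEP_M)

                pose = robot.read_pose()                     # end-effector pose, 6 values
                wrench = robot.read_wrench()                 # wrist force/torque, 6 values
                sample = (time.time(), *pose, *wrench)
                log.write(",".join(f"{v:.4f}" for v in sample) + "\n")

                # The same force data is displayed to the demonstrator; stop
                # the robot before contact forces can damage the manipulator.
                if max(abs(f) for f in wrench[:3]) > FORCE_LIMIT_N:
                    robot.stop()
                    print("Force limit reached; demonstration paused.")
                    break
                time.sleep(0.01)                             # ~100 Hz supervision loop

Under these assumptions, the logged pose-wrench pairs would serve as the demonstration data from which the contact model and orientation adjustment strategy are derived.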




Additional information

This work was supported by the National Natural Science Foundation of China (Grant No. 91848202).


About this article


Cite this article

Zhang, Q., Xie, Z. & Liu, Y. A noncontact robot demonstration method with human supervision. Sci. China Technol. Sci. 64, 2360–2372 (2021). https://doi.org/10.1007/s11431-021-1886-1
