
A Robot Learning from Demonstration Method Based on Neural Network and Teleoperation

  • Research Article - Mechanical Engineering
  • Published in Arabian Journal for Science and Engineering

Abstract

Industrial robots are widely employed in electronics, aerospace, machining, and other fields due to their flexibility, efficiency, and accuracy. However, traditional robots require skilled professionals to carry out intricate trajectory-planning programming through a teach pendant or offline programming, which places high demands on the user's programming skills and significantly limits the robot's work efficiency. This paper develops a learning-from-demonstration method based on a neural network and teleoperation to address this problem. The method establishes a neural network model that uses the input data from the master side of the teleoperation system and the error of the slave robot to predict and compensate for the mapping error in teleoperation, and it optimizes the robot's reproduced trajectory with an extreme learning machine. In addition, the teaching process can be performed by non-professionals, and the robot can reproduce the operation trajectory from the collected trajectory data, which removes the long programming time and high operator proficiency demanded by traditional robot programming. A teaching system is built and experimentally verified using an Omega-7 haptic device and a UR robot. The results show that the teleoperation system can reproduce the task trajectory from a single demonstration, and that the taught trajectory is reproduced more smoothly after training with the extreme learning machine. In conclusion, this paper provides a trajectory-optimized method for teaching robots without traditional programming.
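The abstract describes two learned components: a neural network that compensates the master-slave mapping error, and an extreme learning machine (ELM) that smooths the reproduced trajectory. As a rough illustration of the latter, the sketch below implements the standard ELM recipe (random, fixed hidden-layer weights; output weights solved in closed form via the pseudoinverse) and fits it to a noisy stand-in for a demonstrated trajectory. The class name, network size, activation, and toy data are illustrative assumptions, not the authors' implementation.

    import numpy as np

    class ELMRegressor:
        """Minimal extreme learning machine regressor (illustrative sketch)."""

        def __init__(self, n_hidden=40, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, Y):
            # Hidden-layer weights and biases are random and stay fixed;
            # only the output weights beta are learned, in closed form.
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)    # hidden-layer output matrix
            self.beta = np.linalg.pinv(H) @ Y   # least-squares output weights
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # Usage sketch: smooth a noisy stand-in for a demonstrated x-y trajectory,
    # indexed by normalized time.
    t = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    demo = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    demo += 0.01 * np.random.default_rng(1).normal(size=demo.shape)
    smooth = ELMRegressor(n_hidden=40).fit(t, demo).predict(t)

Because only the output layer is trained, fitting reduces to a single pseudoinverse solve, which is why ELM training is fast enough to rerun after each new demonstration.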



Data Availability

For any supplementary information on the paper’s data, contact the corresponding author.

Code Availability

For any request, contact the corresponding author.


Acknowledgements

Not applicable.

Funding

This work was supported in part by the Guangxi Science and Technology Base and Talent Special Project (Grant No. 2021AC19324), the Guangxi Key Laboratory of Manufacturing System & Advanced Manufacturing Technology (Grant No. 20-065-40S006), and the National Natural Science Foundation of China (Grant No. 61963005).

Author information


Contributions

All authors contributed to the study conception and design. Methodology, writing of the original draft, visualization, and software were handled by Ke Liang, Yupeng Wang, and Yizhong Lin. Validation and investigation were performed by Yupeng Wang. Review and editing were performed by Lei Pan. Conceptualization and formal analysis were performed by Yu Tang and Jing Li. Project administration and funding acquisition were handled by Mingzhang Pan. All authors commented on previous versions of the manuscript, and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Mingzhang Pan.

Ethics declarations

Conflict of interest

The authors declare that they have no known conflicts of interest.

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liang, K., Wang, Y., Pan, L. et al. A Robot Learning from Demonstration Method Based on Neural Network and Teleoperation. Arab J Sci Eng 49, 1659–1672 (2024). https://doi.org/10.1007/s13369-023-07851-4

