A Robot Self-learning Grasping Control Method Based on Gaussian Process and Bayesian Algorithm

  • Yong Tao
  • Hui Liu
  • Xianling Deng
  • Youdong Chen
  • Hegen Xiong
  • Zengliang Fang
  • Xianwu Xie
  • Xi Xu
Conference paper
Part of the Transactions on Intelligent Welding Manufacturing book series (TRINWM)

Abstract

A robot self-learning grasping control method combining a Gaussian process and a Bayesian algorithm is proposed. The grasping gesture and parameters of the robot end-effector are adjusted according to changes in the position and pose of the target so that the target is grasped accurately. First, a robot self-adaptive grasping method based on a Gaussian process is proposed for grasping training, in order to model and match the position and pose of the target object against the robot joint variables. The trained Gaussian process model is then combined with a Bayesian algorithm: the model is taken as prior knowledge, and semi-supervised self-learning is performed in a new grasping region to generate a posterior Gaussian process model. This method dispenses with the complex visual calibration process and the inverse kinematics solution, and requires only a small set of samples. Moreover, when the grasping environment changes, previous learning experience can be reused for self-learning, adapting the robot to the grasping task in the new environment and reducing the workload of operators. The effectiveness of the robot self-learning grasping control method based on a Gaussian process and a Bayesian algorithm is verified through simulation and grasping experiments on a UR3 robot.
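The core idea of the abstract, a Gaussian process trained on a few pose-to-joint samples and then updated with new samples from a new grasping region as a Bayesian prior-to-posterior step, can be sketched in plain numpy. This is a minimal illustration only: the kernel, the 2-D "target position" inputs, and the single "joint angle" output are hypothetical stand-ins, not the paper's actual model or data.

```python
import numpy as np

def rbf_kernel(A, B, length=0.5, var=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X_train, y_train, X_query, noise=1e-4):
    """Posterior mean and variance of a GP regressor at X_query."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train)
    Kss = rbf_kernel(X_query, X_query)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y_train
    cov = Kss - Ks @ Kinv @ Ks.T
    return mean, np.clip(np.diag(cov), 0.0, None)

# Illustrative training grasps: 2-D target position -> one joint angle,
# a stand-in for the paper's pose-to-joint-variable mapping.
X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2], [0.2, 0.2]])
y = np.array([0.0, 0.3, 0.4, 0.7])   # hypothetical joint angles (rad)

x_new = np.array([[0.5, 0.5]])       # a pose in a new grasping region
m0, v0 = gp_posterior(X, y, x_new)   # prior model: uncertain here

# "Self-learning": fold one confirmed grasp from the new region into
# the training set and recompute -- the posterior variance shrinks.
X2 = np.vstack([X, [[0.5, 0.5]]])
y2 = np.append(y, 1.7)
m1, v1 = gp_posterior(X2, y2, x_new)
assert v1[0] < v0[0]                 # posterior is more confident
```

The posterior variance returned alongside each prediction is what makes the semi-supervised loop possible: confident predictions can be accepted as new training samples, while uncertain ones flag poses that still need supervision.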

Keywords

Gaussian process · Bayesian algorithm · Robot grasping · Semi-supervised self-learning

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Yong Tao (1)
  • Hui Liu (1)
  • Xianling Deng (2)
  • Youdong Chen (1)
  • Hegen Xiong (3)
  • Zengliang Fang (1)
  • Xianwu Xie (3)
  • Xi Xu (3)
  1. Beihang University, Beijing, China
  2. Chongqing University of Science and Technology, Chongqing, China
  3. Wuhan University of Science and Technology, Wuhan, China