
Automatic grasping control of mobile robot based on monocular vision

  • ORIGINAL ARTICLE
The International Journal of Advanced Manufacturing Technology

Abstract

Vision-based grasping control offers great flexibility and accuracy, particularly in dynamic environments. This paper presents an automatic grasping control system composed of an eye-to-hand monocular camera and a mobile robot. The grasping manipulation consists of a coarse positioning stage and a fine grasping stage. In the coarse positioning stage, the mobile robot is guided to the location where the target is placed, based on integrated vision navigation. In the fine grasping stage, a target detection network and a target contour feature extraction strategy are elaborately designed to ensure the accuracy and efficiency of contour detection. In particular, a detection network based on an improved Single Shot MultiBox Detector (SSD) is established to accurately detect the target in real time. The bounding box containing the target, obtained from the detection network, is set as the region of interest (ROI). Color segmentation and morphological operations are then combined to extract the contour features of the target, which improves the accuracy of target contour extraction. The pose of the target is estimated with the Perspective-n-Point (PnP) algorithm. In addition, a target tracking controller based on visual servoing is designed to account for the movement of the mobile robot during the grasping process. Numerous practical experiments verify the effectiveness of the proposed dynamic grasping control method.
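As a rough illustration of the contour-extraction step described in the abstract (a sketch, not the authors' implementation), the following combines HSV color segmentation with a morphological opening to isolate the target region inside a detection ROI. All names, thresholds, and the synthetic image are illustrative assumptions; the morphology is hand-rolled in NumPy to keep the example self-contained.

```python
import numpy as np

def color_segment(hsv, lo, hi):
    """Binary mask of pixels whose HSV channels all lie within [lo, hi]."""
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1)

def _shifted(mask, reduce_fn, init):
    """Combine each pixel's 3x3 neighbourhood with reduce_fn."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = init
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = reduce_fn(out, p[dy:dy + h, dx:dx + w])
    return out

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    return _shifted(mask, np.logical_and, np.ones_like(mask))

def dilate(mask):
    """3x3 dilation: a pixel is set if any neighbour is set."""
    return _shifted(mask, np.logical_or, np.zeros_like(mask))

def opening(mask):
    """Erosion followed by dilation: removes isolated speckle noise."""
    return dilate(erode(mask))

def bbox(mask):
    """Tight (x_min, y_min, x_max, y_max) box around the set pixels."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()

# Synthetic 40x40 HSV image: one 10x10 target patch plus two speckle pixels.
hsv = np.zeros((40, 40, 3), dtype=np.uint8)
hsv[10:20, 15:25] = (30, 200, 200)       # target colour
hsv[2, 2] = hsv[35, 5] = (30, 200, 200)  # noise speckles
lo, hi = np.array([25, 150, 150]), np.array([35, 255, 255])

mask = opening(color_segment(hsv, lo, hi))
```

In practice a library such as OpenCV (`cv2.inRange`, `cv2.morphologyEx`, `cv2.findContours`) would replace the hand-rolled morphology, and the extracted contour points would feed the PnP pose estimate.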




Availability of data and materials

The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.

Code availability

Not applicable.


Funding

The authors acknowledge that this work was supported in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 21KJB510040, in part by the National Natural Science Foundation of China under Grant 62076123 and Grant 61803198, and in part by the Introduction of Talents Research Start-Up Fund Project under Grant YK21-05-01.

Author information

Authors and Affiliations

Authors

Contributions

Yanqin Ma and Wenjun Zhu contributed equally to this work. Yanqin Ma: data curation and writing (original draft). Wenjun Zhu: idea, writing (review and editing). Yuanwei Zhou: writing (review and editing).

Corresponding author

Correspondence to Yanqin Ma.

Ethics declarations

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

The authors consent to the publication of this article.

Conflict of interest/Competing interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Rights and permissions

Reprints and permissions

About this article


Cite this article

Ma, Y., Zhu, W. & Zhou, Y. Automatic grasping control of mobile robot based on monocular vision. Int J Adv Manuf Technol 121, 1785–1798 (2022). https://doi.org/10.1007/s00170-022-09438-z


Keywords

Navigation