Robotic grasping and assembly of screws based on visual servoing using point features

  • ORIGINAL ARTICLE
  • Published in The International Journal of Advanced Manufacturing Technology

Abstract

The robotic assembly of screws is a basic task in the automated assembly of complex equipment. However, a complete robotic assembly framework is difficult to design because multiple technologies must be integrated to achieve efficient and stable operation. In this paper, a robotic assembly workflow is proposed that consists mainly of a feature extraction stage, a grasping stage, and an installation stage. In the feature extraction stage, a feature extraction algorithm composed of a semantic segmentation network and an object classification module is designed. The semantic segmentation network segments the regions of objects belonging to multiple categories, and the object classification module selects an appropriate target object. The grasping stage and the installation stage both involve position alignment of the objects. A position alignment method is developed based on image-based visual servoing using point features extracted from the segmented regions. Experiments are conducted on a real robot. The alignment errors in the grasping stage are less than 0.53 mm, and all ten assembly experiments with an M6 screw are successful. The experimental results verify the effectiveness of the proposed method.
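The position alignment described above follows the image-based visual servoing (IBVS) scheme, in which a camera velocity is computed from the error between current and desired image point features. The paper's exact controller and parameters are not given on this page, so the sketch below is only a minimal illustration of the classical point-feature IBVS law; the feature coordinates, depth estimates, and gain are assumed values, not those used by the authors.

```python
import numpy as np

def interaction_matrix(points, depths):
    """Stack the 2x6 interaction matrices of normalized image
    points (x, y), each with an estimated depth Z (metres)."""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(current, desired, depths, gain=0.5):
    """Classical IBVS law v = -gain * L^+ (s - s*): returns the
    camera twist [vx, vy, vz, wx, wy, wz] that reduces the image error."""
    error = (np.asarray(current) - np.asarray(desired)).reshape(-1)
    L = interaction_matrix(current, depths)
    return -gain * np.linalg.pinv(L) @ error

# Illustrative point features only (not values from the paper):
current_pts = [(0.10, 0.06), (-0.08, 0.05), (0.09, -0.07), (-0.07, -0.05)]
desired_pts = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
depths = [0.30] * 4
print(ibvs_velocity(current_pts, desired_pts, depths))
```

In practice, the computed camera twist would be mapped to robot motion through the hand–eye calibration, and the loop would iterate until the image error falls below a threshold consistent with the reported sub-millimetre alignment accuracy.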



Funding

This work was partly supported by the National Natural Science Foundation of China (62273341).

Author information

Contributions

The method in this paper was first proposed by TH and further improved through discussion with Professor DX. The manuscript was written by TH and checked and revised by DX. All authors read and approved the final manuscript.

Corresponding author

Correspondence to De Xu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Hao, T., Xu, D. Robotic grasping and assembly of screws based on visual servoing using point features. Int J Adv Manuf Technol 129, 3979–3991 (2023). https://doi.org/10.1007/s00170-023-12562-z

  • DOI: https://doi.org/10.1007/s00170-023-12562-z
