
Robotic Grasping of Pillow Spring Based on M-G-YOLOv5s Object Detection Algorithm and Image-Based Visual Servoing

  • Regular paper
  • Published in: Journal of Intelligent & Robotic Systems

Abstract

The pillow spring grasping robot is the core module of the intelligent assembly system for railway wagon pillow springs in the overhaul workshop. It must quickly detect and accurately locate the pillow spring end face and the center of the outer spring notch. To address the limited hardware computing power of pillow spring assembly systems, this paper proposes M-G-YOLOv5s, a lightweight YOLO object detection model built on the MobileNetv3 and GhostNet network architectures. The COCO2017 dataset and a custom dataset are used for model training and validation. Compared with the YOLOv5s model, M-G-YOLOv5s reduces the model size by 81%, reduces the parameter count by 83%, and increases detection speed by a factor of 1.7. Building on M-G-YOLOv5s, a novel image-based visual servoing (IBVS) control method is proposed to solve the automatic positioning problem of pillow springs and improve the efficiency of grasping operations. The method uses mixed corner-point features composed of the corners of the pillow spring end-face detection box and the center of the spring gap. Comparative visual servo positioning and grasping experiments were carried out on the pillow spring grasping robot platform. The results show that the proposed M-G-YOLOv5s detection model meets the grasping requirements of the IBVS-based pillow spring assembly system. The research findings have been successfully applied in the development of the pillow spring assembly manipulator for railway wagon bogies.
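As a rough illustration of where the size and parameter savings come from, the sketch below shows a GhostNet-style convolution block of the kind M-G-YOLOv5s builds on: a standard convolution produces a few "intrinsic" feature maps, and cheap depthwise operations derive the rest. This is a minimal PyTorch sketch under assumed hyperparameters; the name GhostConv, the ratio parameter, and all sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost-style block (illustrative): a real conv generates part of the
    output channels; a cheap depthwise conv derives the rest."""
    def __init__(self, c_in, c_out, kernel=1, ratio=2, dw_kernel=3):
        super().__init__()
        c_primary = c_out // ratio      # intrinsic maps from a standard conv
        c_cheap = c_out - c_primary     # maps generated by cheap operations
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(c_primary),
            nn.ReLU(inplace=True),
        )
        # Depthwise conv: one cheap linear transform per intrinsic channel.
        self.cheap = nn.Sequential(
            nn.Conv2d(c_primary, c_cheap, dw_kernel, padding=dw_kernel // 2,
                      groups=c_primary, bias=False),
            nn.BatchNorm2d(c_cheap),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Usage: keeps the output shape of a plain 32->64 convolution while cutting
# roughly half of its multiply-accumulate operations.
features = GhostConv(32, 64)(torch.randn(1, 32, 80, 80))  # -> (1, 64, 80, 80)
```

Replacing ordinary convolutions with blocks like this throughout a backbone is the general mechanism by which Ghost-style models cut model size and parameter count while preserving most of the detection accuracy.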
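On the control side, the mixed corner-point features fit the classical IBVS formulation, in which a camera twist is computed from the error between current and desired image features. The following is a minimal NumPy sketch of the textbook law v = -λ L⁺ (s − s*) for point features; the feature coordinates, depths, and gain are invented for illustration, and this is the standard formulation rather than the authors' exact controller.

```python
import numpy as np

def interaction_matrix(points, depths):
    """Stack the 2x6 interaction matrix of each normalized image point (x, y)."""
    rows = []
    for (x, y), z in zip(points, depths):
        rows.append([-1 / z, 0.0, x / z, x * y, -(1 + x * x), y])
        rows.append([0.0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_step(s, s_star, depths, gain=0.5):
    """One control step: returns the camera twist (vx, vy, vz, wx, wy, wz)."""
    error = (np.asarray(s) - np.asarray(s_star)).reshape(-1)
    L = interaction_matrix(s, depths)
    return -gain * np.linalg.pinv(L) @ error

# Five mixed point features, e.g. four detection-box corners plus the spring
# gap center (normalized image coordinates and depths are made up here).
s      = [(0.10, 0.05), (0.30, 0.05), (0.10, 0.25), (0.30, 0.25), (0.20, 0.15)]
s_star = [(0.00, 0.00), (0.20, 0.00), (0.00, 0.20), (0.20, 0.20), (0.10, 0.10)]
twist = ibvs_step(s, s_star, depths=[0.6] * 5)
```

Iterating this step while re-detecting the features in every frame is what gives IBVS its robustness to calibration error, at the cost of requiring a fast detector in the loop, which is why a lightweight model such as M-G-YOLOv5s matters for grasping cycle time.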


Data Availability

Not applicable.

Code Availability

Not applicable.


Funding

This work was supported by the National Natural Science Foundation of China under Grant 52275030.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study. Hao Tian: Software, Experimental validation, Data curation, Writing, Reviewing and Editing. Wenhai Wu: Conceptualization and Funding acquisition. Huanlong Liu: Conceptualization, Methodology and Funding acquisition. Yadong Liu: Data curation, Visualization. Jincheng Zou: Investigation. Yifei Fei: Investigation and Editing.

Corresponding author

Correspondence to Wenhai Wu.

Ethics declarations

Conflicts of Interest

No conflict of interest exists in the submission of this article, and the article has been approved by all authors for publication.

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Tian, H., Wu, W., Liu, H. et al. Robotic Grasping of Pillow Spring Based on M-G-YOLOv5s Object Detection Algorithm and Image-Based Visual Servoing. J Intell Robot Syst 109, 67 (2023). https://doi.org/10.1007/s10846-023-01989-x


