
An automatic feature point extraction method based on laser vision for robotic multi-layer multi-pass weld seam tracking

  • ORIGINAL ARTICLE
  • Published in The International Journal of Advanced Manufacturing Technology

Abstract

Laser vision-based seam tracking has become an important research focus in modern welding manufacturing. However, severe noise interference during welding and the complex contour curves of filling welds hinder high-precision seam tracking in multi-layer multi-pass (MLMP) welding. To address this problem, a point distribution model (PDM) is implemented to represent the laser stripe pattern of MLMP welds, and an end-to-end feature point extraction algorithm is proposed. A "coarse-to-fine" positioning strategy combines global correlation with local constraints, while low-resolution heatmap regression paired with coordinate offset regression balances efficiency and precision; the backbone is further improved with attention mechanisms. In addition, a soft coordinate loss is combined with a Gaussian mixture model to improve generalization performance. Building on this model, automatic ROI extraction and output point filtering complete the whole tracking process. In experiments, the proposed method achieved good tracking performance even under strong noise, with the mean absolute error (MAE) kept within 0.3 mm. The feature point extraction method shows advantages in both precision and stability, laying a foundation for advanced robotic MLMP welding production.
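The abstract names two components that can be sketched generically: decoding a feature point from a low-resolution heatmap refined by a regressed coordinate offset, and the MAE metric used to evaluate tracking. The following is a minimal NumPy illustration of that general scheme only; the array shapes, the stride value, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def decode_keypoints(heatmaps, offsets, stride=4):
    """Decode (x, y) image coordinates from low-resolution heatmaps
    plus per-pixel offset maps (generic heatmap + offset scheme).

    heatmaps: (K, H, W) array, one channel per feature point.
    offsets:  (K, 2, H, W) array of sub-pixel (dx, dy) corrections.
    stride:   downsampling factor between heatmap and input image.
    """
    K, H, W = heatmaps.shape
    points = np.zeros((K, 2))
    for k in range(K):
        idx = np.argmax(heatmaps[k])      # coarse peak in the flattened map
        y, x = divmod(idx, W)             # back to 2-D grid coordinates
        dx = offsets[k, 0, y, x]
        dy = offsets[k, 1, y, x]
        # coarse grid position refined by the regressed sub-pixel offset,
        # then scaled back to input-image resolution
        points[k] = ((x + dx) * stride, (y + dy) * stride)
    return points

def tracking_mae(pred, gt):
    """Mean absolute error between predicted and ground-truth points."""
    return np.abs(pred - gt).mean()
```

The coarse argmax alone is only accurate to one heatmap cell (here, 4 input pixels); the offset term is what allows sub-pixel localization at low heatmap resolution, which is the efficiency/precision trade-off the abstract describes.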


Data availability

The data sets are not publicly available.


Funding

This work is supported by the Shanghai Excellent Technical Leader Talent Plan (No. 20xd1432700).

Author information


Contributions

Conceptualization, methodology, validation, and writing—original draft, Fengjing Xu. Data curation, investigation, and resources, Lei He. Methodology, supervision, software, and resources, Zhen Hou and Runquan Xiao. Software, validation, and formal analysis, Tianyi Zuo and Jiacheng Li. Supervision, writing—review and editing, and resources, Yanling Xu. Conceptualization, writing—review and editing, and funding acquisition, Huajun Zhang. All authors have read and approved the final manuscript.

Corresponding authors

Correspondence to Yanling Xu or Huajun Zhang.

Ethics declarations

Ethical approval

The authors state that the present work complies with ethical standards.

Consent for participation

The authors declare that they all consent to participate in this research.

Consent for publication

The authors declare that they all consent to publish the manuscript.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xu, F., He, L., Hou, Z. et al. An automatic feature point extraction method based on laser vision for robotic multi-layer multi-pass weld seam tracking. Int J Adv Manuf Technol 131, 5941–5960 (2024). https://doi.org/10.1007/s00170-024-13245-z

