Abstract
Laser vision-based seam tracking is a research hotspot in modern welding manufacturing. However, severe noise interference during welding and the complex contour curves of filling welds hinder high-precision seam tracking in multi-layer multi-pass (MLMP) welding. To address this problem, a point distribution model (PDM) is implemented to express the laser stripe pattern of MLMP welds, and an end-to-end feature point extraction algorithm is proposed. A "coarse-to-fine" positioning strategy combines global correlation with local constraints, while low-resolution heatmap regression paired with coordinate offset regression balances efficiency and precision; the backbone is further improved with attention mechanisms. In addition, a soft coordinate loss is combined with a Gaussian mixture model to improve generalization performance. On top of the model, an automatic ROI extraction method and output point filtering complete the whole tracking process. In experiments, the proposed method achieved good tracking performance even under strong noise, with the mean absolute error (MAE) kept within 0.3 mm. The feature point extraction method shows advantages in both precision and stability, laying a foundation for advanced robotic MLMP welding production.
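The "coarse-to-fine" scheme described above — a coarse peak on a low-resolution heatmap refined by a regressed sub-pixel offset — can be illustrated with a minimal decoding sketch. This is a generic heatmap-plus-offset decoder, not the authors' implementation; the array shapes, the `stride` parameter, and the function name are illustrative assumptions.

```python
import numpy as np

def decode_keypoints(heatmaps, offsets, stride=4):
    """Decode feature-point coordinates from low-resolution heatmaps
    plus per-pixel offset maps (generic coarse-to-fine decoding sketch).

    heatmaps: (K, H, W) array, one channel per feature point.
    offsets:  (K, 2, H, W) array of sub-pixel (dx, dy) corrections.
    stride:   downsampling factor between input image and heatmap.
    """
    K, H, W = heatmaps.shape
    points = np.zeros((K, 2))
    for k in range(K):
        # Coarse stage: argmax over the k-th heatmap selects a grid cell.
        idx = np.argmax(heatmaps[k])
        y, x = divmod(idx, W)
        # Fine stage: add the regressed sub-pixel offset at that cell,
        # then map back to full image resolution.
        dx, dy = offsets[k, :, y, x]
        points[k] = ((x + dx) * stride, (y + dy) * stride)
    return points

# Toy example: one feature point peaking at heatmap cell (row 3, col 5).
hm = np.zeros((1, 16, 16)); hm[0, 3, 5] = 1.0
off = np.zeros((1, 2, 16, 16)); off[0, :, 3, 5] = (0.25, -0.5)
print(decode_keypoints(hm, off))  # [[21. 10.]]
```

The offset head is what lets a coarse heatmap stay cheap without sacrificing precision: localization error from the low-resolution grid is corrected at sub-pixel level rather than by enlarging the heatmap.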
Data availability
The data sets are not publicly available.
Funding
This work is supported by the Shanghai Excellent Technical Leader Talent Plan (No. 20xd1432700).
Author information
Authors and Affiliations
Contributions
Conceptualization, methodology, validation, and writing—original draft, Fengjing Xu. Data curation, investigation, and resources, Lei He. Methodology, supervision, software, and resources, Zhen Hou and Runquan Xiao. Software, validation, and formal analysis, Tianyi Zuo and Jiacheng Li. Supervision, writing—review and editing, and resources, Yanling Xu. Conceptualization, writing—review and editing, and funding acquisition, Huajun Zhang. All authors have read and approved the final manuscript.
Corresponding authors
Ethics declarations
Ethical approval
The authors state that this work complies with ethical standards.
Consent for participation
The authors declare that they all consent to participate in this research.
Consent for publication
The authors declare that they all consent to publish the manuscript.
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Xu, F., He, L., Hou, Z. et al. An automatic feature point extraction method based on laser vision for robotic multi-layer multi-pass weld seam tracking. Int J Adv Manuf Technol 131, 5941–5960 (2024). https://doi.org/10.1007/s00170-024-13245-z