
Deep learning-based melt pool and porosity detection in components fabricated by laser powder bed fusion

  • Full Research Article
  • Published in: Progress in Additive Manufacturing

Abstract

Microstructure analysis is a crucial aspect of additive manufacturing (AM), as it offers valuable insights into material properties, defects, and the quality of printed parts. Quantifying microstructural features such as melt pool dimensions and porosity can help optimize process parameters by enabling useful correlations between processing conditions and the resulting print quality and properties. However, detecting melt pool boundaries and porosity-related defects in cross-sectional microstructure samples is challenging because of the complex nature of these features and the inherent difficulty of acquiring images of the quality and quantity required for accurate and efficient detection. To address this, we propose a deep learning-based approach that combines state-of-the-art backbone models (EfficientNet b7 and DenseNet 201) with several convolutional neural network architectures (U-Net, LinkNet, and FPN) through transfer learning to automatically segment and detect melt pools and porosity in AM microstructure images. The results demonstrate that melt pools and porosity can be identified and segmented accurately even with a limited set of training data. A comparative study of the different network architectures showed that their quantification of microstructural features is statistically comparable, with the combination of U-Net and the EfficientNet b7 backbone achieving the highest accuracy.
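
For readers who want a concrete picture of the workflow summarized above, the following minimal sketch shows one way a U-Net decoder can be paired with an ImageNet-pretrained EfficientNet b7 encoder for binary segmentation using the open-source segmentation_models Keras package. It is an illustrative assumption of one possible setup, not the authors' published implementation; the data variables (x_train, y_train), preprocessing, and training settings are hypothetical.

```python
# Illustrative sketch only: assembling a U-Net with an ImageNet-pretrained
# EfficientNet b7 encoder for binary (feature vs. background) segmentation,
# in the spirit of the approach summarized above. Names and settings are hypothetical.
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"  # use the tf.keras backend of segmentation_models
import segmentation_models as sm

BACKBONE = "efficientnetb7"              # "densenet201" is the other backbone compared
preprocess_input = sm.get_preprocessing(BACKBONE)

# U-Net decoder over a pretrained encoder; sm.Linknet and sm.FPN are drop-in swaps
# for comparing decoder architectures under the same training protocol.
model = sm.Unet(
    BACKBONE,
    encoder_weights="imagenet",          # transfer learning from ImageNet features
    classes=1,                           # single mask channel (melt pool or pore)
    activation="sigmoid",
)

# Dice + focal loss is a common choice when small features dominate the class imbalance.
model.compile(
    optimizer="adam",
    loss=sm.losses.DiceLoss() + sm.losses.BinaryFocalLoss(),
    metrics=[sm.metrics.IOUScore(threshold=0.5)],
)

# x_train: micrograph patches, y_train: binary masks (hypothetical arrays)
# model.fit(preprocess_input(x_train), y_train, batch_size=4, epochs=100)
```

Swapping the decoder (sm.Unet, sm.Linknet, sm.FPN) or the backbone string while keeping the same loss and training protocol would allow an architecture comparison of the kind reported in the abstract.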



Data availability

The computational and experimental data required to reproduce these findings can be made available upon request.


Acknowledgements

The authors acknowledge the infrastructure and support of the Center for Agile and Adaptive Additive Manufacturing (CAAAM), funded through State of Texas Appropriation 190405-105-805008-220.

Author information


Corresponding author

Correspondence to Narendra B. Dahotre.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gu, Z., Mani Krishna, K.V., Parsazadeh, M. et al. Deep learning-based melt pool and porosity detection in components fabricated by laser powder bed fusion. Prog Addit Manuf (2024). https://doi.org/10.1007/s40964-024-00603-2


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s40964-024-00603-2
