Automatic pixel-level detection and measurement of corrosion-related damages in dim steel box girders using Fusion-Attention-U-net

  • Original Paper
  • Journal of Civil Structural Health Monitoring

Abstract

To detect corrosion-related damage inside dim steel box girders, an improved U-net, the Fusion-Attention-U-net (FAU-net), is proposed in this paper. A fusion module and a bottleneck-attention module are embedded in FAU-net to aggregate multi-level features and to learn representative information, respectively. To train the network, a database of 300 damage images is built after data augmentation, and the proposed FAU-net is then modified, trained, and validated. With the best-performing training run, the network achieves 98.61% pixel accuracy (PA), 92.73% mean pixel accuracy (MPA), 77.57% mean intersection over union (MIoU), and 97.52% frequency-weighted intersection over union (FWIoU) on the validation set. The robustness and adaptability of the trained FAU-net are then tested and compared against state-of-the-art networks, and an ablation study is conducted to quantify the contribution of the main components of FAU-net. To relate the detected damage pixel area to its actual physical area, photography experiments and theoretical analyses are conducted to study three critical shooting variables: shooting distance, focal length, and shooting angle. Finally, a theoretical equation linking the pixel and physical areas is derived and validated using field-taken damage images under different shooting conditions. The results show that the proposed method delivers excellent pixel-level damage detection and accurate damage-area measurement for the current samples.
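The four reported segmentation metrics have standard definitions in terms of the per-class confusion matrix. As a point of reference (not code from the paper), the sketch below computes PA, MPA, MIoU, and FWIoU as conventionally defined; the function name, class layout, and example confusion matrix are illustrative assumptions.

```python
import numpy as np

def segmentation_metrics(cm: np.ndarray):
    """Compute PA, MPA, MIoU, and FWIoU from a confusion matrix.

    cm[i, j] = number of pixels whose ground-truth class is i
    and whose predicted class is j.
    """
    diag = np.diag(cm).astype(float)
    gt_per_class = cm.sum(axis=1).astype(float)     # ground-truth pixels per class
    pred_per_class = cm.sum(axis=0).astype(float)   # predicted pixels per class
    total = cm.sum()

    pa = diag.sum() / total                                # pixel accuracy
    mpa = np.mean(diag / np.maximum(gt_per_class, 1))      # mean pixel accuracy
    iou = diag / np.maximum(gt_per_class + pred_per_class - diag, 1)
    miou = iou.mean()                                      # mean IoU
    fwiou = np.sum((gt_per_class / total) * iou)           # frequency-weighted IoU
    return pa, mpa, miou, fwiou

# Illustrative 2-class case (background, corrosion); the counts are made up.
cm = np.array([[95_000, 1_200],
               [ 1_800, 2_000]])
print(segmentation_metrics(cm))
```

The paper's own equation linking pixel area to physical area is derived from the photography experiments and is not reproduced in this excerpt. Under common pinhole-camera assumptions (a planar damage region, known sensor pixel pitch, shooting distance, focal length, and surface tilt relative to the image plane), a minimal sketch of such a relation looks as follows; it is an assumption-based stand-in, not the paper's derived formula.

```python
import math

def pixel_to_physical_area(pixel_count: int, pixel_pitch_mm: float,
                           distance_mm: float, focal_length_mm: float,
                           tilt_deg: float = 0.0) -> float:
    """Rough pinhole-model conversion (an assumption, not the paper's equation).

    One pixel covers roughly (pixel_pitch * distance / focal_length)^2 on a plane
    facing the camera; tilting the surface by tilt_deg foreshortens its image,
    so the true area is recovered by dividing by cos(tilt_deg).
    """
    mm_per_pixel = pixel_pitch_mm * distance_mm / focal_length_mm
    return pixel_count * mm_per_pixel ** 2 / math.cos(math.radians(tilt_deg))

# e.g. 5,000 damage pixels, 4.3 µm pixel pitch, 1.5 m distance, 35 mm lens, 20° tilt
area_mm2 = pixel_to_physical_area(5_000, 0.0043, 1_500, 35, tilt_deg=20)
```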

Acknowledgements

The authors appreciate the support of the Distinguished Young Scientists of Jiangsu Province [Grant Number BK20190013], the National Natural Science Foundation of China [Grant Number 51978154], and the Jiangsu Natural Science Foundation [Grant Number BK20211003].

Author information

Corresponding author

Correspondence to Youliang Ding.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Jiang, F., Ding, Y., Song, Y. et al. Automatic pixel-level detection and measurement of corrosion-related damages in dim steel box girders using Fusion-Attention-U-net. J Civil Struct Health Monit 13, 199–217 (2023). https://doi.org/10.1007/s13349-022-00631-y
