MSSIF-Net: an efficient CNN automatic detection method for freight train images

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Freight trains are one of the most important modes of transportation, and fault detection of freight train parts is crucial to ensuring safe train operation. Given the low detection efficiency and accuracy of traditional train fault detection methods, a novel one-stage object detection method called the multi-scale spatial information fusion CNN network (MSSIF-Net), based on YOLOv4, is proposed in this study. An adaptive spatial feature fusion method and a multi-scale channel attention mechanism are used to construct a multi-scale feature sharing network that shares feature information across levels and improves detection accuracy. The mean average precision values of MSSIF-Net on the train image test set, PASCAL VOC 2007 test set, and surface defect detection dataset are 94.73%, 87.76%, and 75.54%, respectively, outperforming YOLOv4, Faster R-CNN, CenterNet, RetinaNet, and YOLOX-l. The detection speed of MSSIF-Net is 33.10 FPS, achieving a good balance between detection accuracy and speed. In addition, MSSIF-Net's performance is evaluated after adding noise to the train images or rotating them by a slight angle to simulate real scenes. Experimental results indicate that MSSIF-Net has favorable anti-interference ability.
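For readers skimming the abstract, the adaptive spatial feature fusion step it refers to follows the approach of Liu et al. [36]: feature maps from the three pyramid levels are rescaled to a common resolution and blended with per-pixel weights that a softmax constrains to sum to one, so each spatial location learns how much to trust each scale. The PyTorch sketch below illustrates only that weighting step; the ASFFHead name, the channel count, and the assumption that inputs are pre-resized are illustrative conveniences, not MSSIF-Net's actual implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ASFFHead(nn.Module):
        # Per-pixel adaptive fusion of three same-shape feature maps, in the
        # spirit of ASFF [36]. Channel count and names are illustrative.
        def __init__(self, channels):
            super().__init__()
            # One 1x1 conv per pyramid level produces a single-channel
            # weight logit map.
            self.weight_convs = nn.ModuleList(
                nn.Conv2d(channels, 1, kernel_size=1) for _ in range(3)
            )

        def forward(self, x0, x1, x2):
            # Inputs are assumed already resized and projected to a common
            # (N, C, H, W) shape, e.g. via F.interpolate and 1x1 convs.
            logits = torch.cat(
                [conv(x) for conv, x in zip(self.weight_convs, (x0, x1, x2))],
                dim=1,
            )  # (N, 3, H, W)
            # Softmax across the level dimension: the three fusion weights
            # sum to 1 at every spatial location.
            w = F.softmax(logits, dim=1)
            return w[:, 0:1] * x0 + w[:, 1:2] * x1 + w[:, 2:3] * x2

    # Toy usage: fuse three 256-channel maps at one detection scale.
    fuse = ASFFHead(channels=256)
    feats = [torch.randn(1, 256, 52, 52) for _ in range(3)]
    fused = fuse(*feats)  # shape (1, 256, 52, 52)

Computing the weights from the features themselves, rather than summing levels with fixed weights as in a plain FPN, is what makes the fusion "adaptive".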


Data availability

The PASCAL VOC 2007 and 2012 datasets that support the findings of this study are available at https://host.robots.ox.ac.uk/pascal/VOC. The NEU-DET dataset used in this study is available under the identifier https://doi.org/10.1109/TIM.2019.2915404 [39].

References

  1. Sun J, Xiao Z, Xie Y (2017) Automatic multi-fault recognition in TFDS based on convolutional neural network. Neurocomputing 222:127–136

  2. Fu X, Li K, Liu J, Li K, Zeng Z, Chen C (2020) A two-stage attention aware method for train bearing shed oil inspection based on convolutional neural networks. Neurocomputing 380:212–224

  3. De Bruin T, Verbert K, Babuška R (2016) Railway track circuit fault diagnosis using recurrent neural networks. IEEE Trans Neural Netw Learn Syst 28(3):523–533

  4. Zhang Y, Liu M, Yang Y, Guo Y, Zhang H (2021) A unified light framework for real-time fault detection of freight train images. IEEE Trans Industr Inf 17(11):7423–7432

  5. Leng J, Liu Y (2021) Single-shot augmentation detector for object detection. Neural Comput Appl 33(8):3583–3596

  6. Gao F, Ji S, Guo J, Li Q, Ji Y, Liu Y, Feng S, Wei H, Wang N, Yang B (2021) ID-Net: an improved Mask R-CNN model for intrusion detection under power grid surveillance. Neural Comput Appl 1–17

  7. Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587

  8. Girshick R (2015) Fast R-CNN. In: Proceedings of The IEEE International Conference on Computer Vision, pp. 1440–1448

  9. Ren S, He K, Girshick R, Sun J (2016) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 39(6):1137–1149

  10. He K, Gkioxari G, Dollár P, Girshick R (2017) Mask R-CNN. In: Proceedings of The IEEE International Conference on Computer Vision, pp. 2961–2969

  11. Uijlings JR, Van De Sande KE, Gevers T, Smeulders AW (2013) Selective search for object recognition. Int J Comput Vision 104(2):154–171

  12. Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788

  13. Redmon J, Farhadi A (2017) YOLO9000: better, faster, stronger. In: Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 7263–7271

  14. Redmon J, Farhadi A (2018) YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767

  15. Bochkovskiy A, Wang C-Y, Liao H-YM (2020) YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934

  16. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C-Y, Berg AC (2016) SSD: single shot multibox detector. In: European Conference on Computer Vision, pp. 21–37. Springer

  17. Lin T-Y, Goyal P, Girshick R, He K, Dollár P (2020) Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell 42(2):318–327

  18. Lin T-Y, Dollár P, Girshick R, He K, Hariharan B, Belongie S (2017) Feature pyramid networks for object detection. In: Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125

  19. Liu S, Qi L, Qin H, Shi J, Jia J (2018) Path aggregation network for instance segmentation. In: Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 8759–8768

  20. He K, Zhang X, Ren S, Sun J (2015) Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans Pattern Anal Mach Intell 37(9):1904–1916

  21. Law H, Deng J (2018) CornerNet: detecting objects as paired keypoints. In: Proceedings of The European Conference on Computer Vision (ECCV), pp. 734–750

  22. Zhou X, Wang D, Krähenbühl P (2019) Objects as points. arXiv preprint arXiv:1904.07850

  23. Ge Z, Liu S, Wang F, Li Z, Sun J (2021) YOLOX: exceeding YOLO series in 2021. arXiv preprint arXiv:2107.08430

  24. Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S (2020) End-to-end object detection with transformers. In: European Conference on Computer Vision, pp. 213–229. Springer

  25. Zhu X, Su W, Lu L, Li B, Wang X, Dai J (2020) Deformable DETR: deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159

  26. Fang Y, Liao B, Wang X, Fang J, Qi J, Wu R, Niu J, Liu W (2021) You only look at one sequence: rethinking transformer in vision through object detection. Adv Neural Inf Process Syst 34

  27. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, et al (2020) An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929

  28. Jaderberg M, Simonyan K, Zisserman A (2015) Spatial transformer networks. Adv Neural Inf Process Syst 28:2017–2025

  29. Zhu X, Cheng D, Zhang Z, Lin S, Dai J (2019) An empirical study of spatial attention mechanisms in deep networks. In: Proceedings of The IEEE/CVF International Conference on Computer Vision, pp. 6688–6697

  30. Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141

  31. Li X, Wang W, Hu X, Yang J (2019) Selective kernel networks. In: Proceedings of The IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 510–519

  32. Wang Q, Wu B, Zhu P, Li P, Zuo W, Hu Q (2020) ECA-Net: efficient channel attention for deep convolutional neural networks. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11531–11539

  33. Qin Z, Zhang P, Wu F, Li X (2021) FcaNet: frequency channel attention networks. In: Proceedings of The IEEE/CVF International Conference on Computer Vision, pp. 783–792

  34. Woo S, Park J, Lee J-Y, Kweon IS (2018) CBAM: convolutional block attention module. In: Proceedings of The European Conference on Computer Vision (ECCV), pp. 3–19

  35. Zhang H, Zu K, Lu J, Zou Y, Meng D (2021) EPSANet: an efficient pyramid split attention block on convolutional neural network. arXiv preprint arXiv:2105.14447

  36. Liu S, Huang D, Wang Y (2019) Learning spatial fusion for single-shot object detection. arXiv preprint arXiv:1911.09516

  37. Neubeck A, Van Gool L (2006) Efficient non-maximum suppression. In: 18th International Conference on Pattern Recognition (ICPR’06), vol. 3, pp. 850–855

  38. Song K, Yan Y (2013) A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects. Appl Surf Sci 285:858–864

  39. He Y, Song K, Meng Q, Yan Y (2019) An end-to-end steel surface defect detection approach via fusing multiple hierarchical features. IEEE Trans Instrum Meas 69(4):1493–1504

  40. Bao Y, Song K, Liu J, Wang Y, Yan Y, Yu H, Li X (2021) Triplet-graph reasoning network for few-shot metal generic surface defect segmentation. IEEE Trans Instrum Meas 70:1–11

  41. Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A (2016) Learning deep features for discriminative localization. In: Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929

Acknowledgements

The authors would like to thank the anonymous reviewers for their comments and suggestions to improve the manuscript. This work was partially funded by the National Key R&D Program of China (Grant No. 2018YFB1003401), the National Natural Science Foundation of China (Grant Nos. 61702178, 62072172, 62002115), the Natural Science Foundation of Hunan Province (Grant Nos. 2019JJ50123, 2019JJ60054), the Research Foundation of Education Bureau of Hunan Province (Grant Nos. 20C0625, 18C0528, 19B321), the Key R&D Program of Hunan Province (Grant No. 2021NK2020), and in part by the China Scholarship Council (Grant No. 201808430297).

Author information

Corresponding authors

Correspondence to Longxin Zhang or Chuang Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, L., Hu, Y., Chen, J. et al. MSSIF-Net: an efficient CNN automatic detection method for freight train images. Neural Comput & Applic 35, 6767–6785 (2023). https://doi.org/10.1007/s00521-022-08035-1
