
Improved YOLOv5 for real-time traffic signs recognition in bad weather conditions

The Journal of Supercomputing

Abstract

Traffic sign recognition is one of the significant tasks in autonomous vehicle technology: it helps vehicles avoid traffic violations on the road. However, recognizing traffic signs becomes more complicated in bad weather such as low light, rain, and fog, which reduces detection and recognition accuracy. In this paper, we build a deep learning model to recognize and classify traffic signs under different bad weather conditions. Weather data are collected from a variety of sources and also generated with different augmentation techniques, and the resulting dataset is trained on the YOLOv5s and YOLOv7 models. To increase accuracy, we further improve YOLOv5s in two variants: one adding a Squeeze-and-Excitation (SE) attention module and one replacing the C3 module with a Global Context (GC) block (C3GC). On the test set, the accuracy of YOLOv5s is 76.8%, YOLOv7 is 78%, YOLOv5s+SE is 78.4%, and YOLOv5s+C3GC is 79.2%. The results show that the YOLOv5s+C3GC model significantly improves recognition of blurred, distant objects.
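To make the two modifications concrete, the sketch below gives minimal PyTorch implementations of a Squeeze-and-Excitation block and a Global Context block of the kind inserted into YOLOv5s. This is an illustrative reconstruction, not the authors' released code: the reduction ratio of 16 and the exact layer shapes are assumptions following the original SE and GCNet papers.

```python
# Minimal sketch of the two attention modules described in the abstract.
# Illustrative only (not the authors' code); the reduction ratio r=16 is
# an assumption taken from the SE and GCNet papers.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: learn per-channel weights from a global
    average-pooled descriptor and rescale the feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: (B, C) global descriptor
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel weights
        return x * w                      # recalibrate channels


class GCBlock(nn.Module):
    """Global Context block: softmax attention pooling over all spatial
    positions, then a bottleneck transform fused back by addition."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mask = nn.Conv2d(channels, 1, kernel_size=1)  # context mask
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        attn = self.mask(x).view(b, 1, h * w).softmax(dim=-1)  # (B, 1, HW)
        context = torch.bmm(x.view(b, c, h * w), attn.transpose(1, 2))
        context = context.view(b, c, 1, 1)                     # (B, C, 1, 1)
        return x + self.transform(context)                     # broadcast add
```

In a YOLOv5 C3 module, such a block would typically be appended after the bottleneck stack (hence the name C3GC), so the rest of the backbone and the detection head stay unchanged and the extra cost per layer is small.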



Data availability

The experiment data used to support the findings of this study are available from the corresponding author upon request.


Acknowledgements

All authors would like to express their gratitude to the Faculty of Information Technology, Industrial University of Ho Chi Minh City, for its assistance.

Funding

The authors have not received any financial support from any person, institution, or organization for this research work.

Author information


Contributions

TTN and HTV contributed to the design of the algorithms and the data acquisition. MKTT wrote some sections and performed the final corrections. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Thi Phuc Dang.

Ethics declarations

Conflict of interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Ethics approval

This research work does not involve human and/or animal subjects. Traffic sign images were collected from street cameras. The model was built on the training data and evaluated on a separate test set.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Dang, T.P., Tran, N.T., To, V.H. et al. Improved YOLOv5 for real-time traffic signs recognition in bad weather conditions. J Supercomput 79, 10706–10724 (2023). https://doi.org/10.1007/s11227-023-05097-3

