LLTH-YOLOv5: A Real-Time Traffic Sign Detection Algorithm for Low-Light Scenes

Published in: Automotive Innovation

Abstract

Traffic sign detection is a crucial task for autonomous driving systems. However, the performance of deep learning-based traffic sign detection algorithms depends heavily on the illumination of the scene: while existing algorithms achieve high accuracy in well-lit environments, their accuracy drops sharply under low light. This paper proposes an end-to-end framework, LLTH-YOLOv5, tailored to traffic sign detection in low-light scenes, which enhances the input images to improve detection performance. The framework comprises two stages: a low-light enhancement stage and an object detection stage. In the enhancement stage, a lightweight low-light enhancement network is designed that learns its parameters with multiple non-reference loss functions and enhances the image through pixel-level adjustment of the input with high-order curves. In the detection stage, BiFPN replaces the PANet neck of YOLOv5, and a transformer-based detection head is designed to improve the accuracy of small-target detection. Moreover, the YOLOv5 backbone is replaced with GhostDarkNet53, built on the Ghost module, to improve the real-time performance of the model. Experimental results show that the proposed method significantly improves the accuracy of traffic sign detection in low-light scenes while satisfying the real-time requirements of autonomous driving.
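To make the enhancement stage concrete, the sketch below illustrates the kind of pixel-level high-order curve adjustment described above, in the style of Zero-DCE (Guo et al., CVPR 2020), which likewise trains with non-reference losses: a small CNN predicts per-pixel curve parameters, and iterating a quadratic curve yields the high-order mapping. This is a minimal PyTorch sketch; the layer sizes, iteration count, and the `CurveEstimator` name are illustrative assumptions, not the exact network used in LLTH-YOLOv5, and the non-reference losses used for training are omitted.

```python
import torch
import torch.nn as nn

class CurveEstimator(nn.Module):
    """Tiny CNN that predicts per-pixel curve parameters A (one 3-channel map per iteration)."""
    def __init__(self, n_iter: int = 8):
        super().__init__()
        self.n_iter = n_iter
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * n_iter, 3, padding=1), nn.Tanh(),  # A bounded in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: low-light image in [0, 1], shape (B, 3, H, W)
        a_maps = torch.chunk(self.net(x), self.n_iter, dim=1)
        out = x
        # Iteratively apply the quadratic curve LE(x) = x + A * x * (1 - x);
        # stacking n_iter applications produces the high-order adjustment curve.
        for a in a_maps:
            out = out + a * out * (1.0 - out)
        return out.clamp(0.0, 1.0)

enhancer = CurveEstimator()
bright = enhancer(torch.rand(1, 3, 256, 256))  # enhanced image, same shape as input
```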
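For the detection stage, BiFPN differs from PANet chiefly in its bidirectional connections and its learnable, fast-normalized fusion weights. The sketch below shows only the weighted-fusion step as defined in EfficientDet (Tan et al., CVPR 2020); how LLTH-YOLOv5 wires it into the YOLOv5 neck is not reproduced here, and the input features are assumed to be pre-resized to a common shape.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fuses same-shape feature maps with learnable non-negative weights."""
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))  # one scalar weight per input
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.w)         # keep weights non-negative
        w = w / (w.sum() + self.eps)   # "fast normalized fusion" (cheaper than softmax)
        return sum(wi * f for wi, f in zip(w, feats))

fuse = WeightedFusion(2)
p4 = fuse([torch.rand(1, 64, 40, 40), torch.rand(1, 64, 40, 40)])  # fused feature map
```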
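Likewise, the Ghost module that GhostDarkNet53 builds on (Han et al., GhostNet, CVPR 2020) reduces computation by producing a slim set of intrinsic feature maps with an ordinary convolution and generating the remaining "ghost" maps with cheap depthwise convolutions. A minimal sketch, with assumed channel counts and a ratio of 2:

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        init_ch = out_ch // ratio  # intrinsic feature maps from the primary conv
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(  # depthwise "cheap operation" producing ghost maps
            nn.Conv2d(init_ch, out_ch - init_ch, 3, padding=1,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(out_ch - init_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)  # intrinsic + ghost features

gm = GhostModule(32, 64)
out = gm(torch.rand(1, 32, 80, 80))  # shape (1, 64, 80, 80)
```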

Abbreviations

BiFPN: Bidirectional feature pyramid network
CNN: Convolutional neural network
GAN: Generative adversarial network
LLIE: Low-light image enhancement

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. U20A20331), awarded to Long Chen.

Author information

Corresponding author

Correspondence to Xiaoqiang Sun.

Ethics declarations

Conflict of interest

On behalf of all the authors, the corresponding author states that there is no conflict of interest.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sun, X., Liu, K., Chen, L. et al. LLTH-YOLOv5: A Real-Time Traffic Sign Detection Algorithm for Low-Light Scenes. Automot. Innov. 7, 121–137 (2024). https://doi.org/10.1007/s42154-023-00249-w
