A real-time and high-precision method for small traffic-signs recognition

  • Original Article
Neural Computing and Applications

Abstract

As a fundamental element of the traffic system, traffic signs reduce the risk of accidents by providing essential information about road conditions to drivers, pedestrians, and other road users. With the rapid progress of computer vision and artificial intelligence, traffic-signs recognition systems have been applied in advanced driver assistance systems and autonomous driving systems to help drivers and self-driving vehicles capture important road information precisely. In real applications, however, recognizing small traffic signs remains challenging. In this article, we propose an efficient method for small-size traffic-signs recognition, named traffic-signs recognition small-aware, inspired by the state-of-the-art object detection frameworks YOLOv4 and YOLOv5. Our work makes four contributions: (1) in the Backbone of the model, we introduce high-level features to construct a better detector head; (2) in the Neck of the model, a receptive field block-cross module is used to capture the contextual information of the feature map; (3) in the Head of the model, we refine the detector head grid to detect small traffic signs more accurately; (4) for the input, we propose a data augmentation method named Random Erasing-Attention, which increases the number of difficult samples and enhances the robustness of the model. Experiments on the challenging TT100K dataset demonstrate that our method achieves a significant performance improvement over the state of the art. Moreover, the method runs in real time and shows great potential for applications in advanced driver assistance systems and autonomous driving systems.
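To make the fourth contribution more concrete, the minimal sketch below illustrates the baseline Random Erasing augmentation (Zhong et al., AAAI 2020) on which Random Erasing-Attention builds: a rectangle of random area and aspect ratio is overwritten at a random image location, so the detector must learn to cope with partially occluded signs. The attention-guided placement proposed in this article is not reproduced here, and the parameter ranges are common defaults rather than the authors' settings.

```python
import random
import numpy as np

def random_erasing(image, probability=0.5, area_range=(0.02, 0.4),
                   aspect_range=(0.3, 3.3), fill_value=0):
    """Erase a random rectangle from an HxWxC (or HxW) numpy image in place.

    Baseline Random Erasing: sample a target area fraction and aspect ratio,
    place the patch at a random location, and overwrite it with a constant.
    The parameter defaults here are commonly used values, not the settings
    of the Random Erasing-Attention variant described in the paper.
    """
    if random.random() > probability:
        return image  # skip augmentation for this sample

    h, w = image.shape[:2]
    area = h * w
    for _ in range(100):  # retry until a sampled patch fits inside the image
        target_area = random.uniform(*area_range) * area
        aspect_ratio = random.uniform(*aspect_range)
        eh = int(round(np.sqrt(target_area * aspect_ratio)))
        ew = int(round(np.sqrt(target_area / aspect_ratio)))
        if eh < h and ew < w:
            y = random.randint(0, h - eh)
            x = random.randint(0, w - ew)
            image[y:y + eh, x:x + ew, ...] = fill_value
            return image
    return image
```

Applied to training images before they are fed to the network, this kind of erasing increases the share of hard, partially occluded samples, which matches the robustness motivation stated in the abstract.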

Data Availability Statement

The public dataset is available at http://cg.cs.tsinghua.edu.cn/traffic-sign/. The source code is available upon request by emailing zhangrh25@mail.sysu.edu.cn.

Funding

This work was partially supported by the National Key R&D Program of China (No. 2020YFB1600400), the Guangzhou Science and Technology Plan Project (No. 202007050004), the Shenzhen Fundamental Research Program (No. JCYJ20200109142217397), and the National Natural Science Foundation of China (Grant Nos. 52172350 and U1811463).

Author information

Corresponding author

Correspondence to Ronghui Zhang.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chen, J., Jia, K., Chen, W. et al. A real-time and high-precision method for small traffic-signs recognition. Neural Comput & Applic 34, 2233–2245 (2022). https://doi.org/10.1007/s00521-021-06526-1

  • DOI: https://doi.org/10.1007/s00521-021-06526-1
