
A Traffic Sign Detection Network Based on PosNeg-Balanced Anchors and Domain Adaptation

  • Research Article: Computer Engineering and Computer Science
  • Published in: Arabian Journal for Science and Engineering

Abstract

To achieve effective detection of traffic signs and to improve transferability across different road scenarios, we propose STDN, a traffic sign detection network based on PosNeg-balanced anchors and domain adaptation. The network consists of two main components: an improved single-stage prediction network (ISPN) and a two-stage domain adaptive network (TDAN). The ISPN is a single-stage detector that introduces an anchor-box calibration module and a feature matching module to alleviate the imbalance between positive and negative anchor samples, strengthen feature alignment, and enable efficient detection. The TDAN uses global and local hierarchical domain-adaptive modules to reduce inter-domain deviation and to improve the network's stability and cross-domain transfer performance in complex, dynamic, and irregular road scenes. Experimental results confirm that STDN achieves high detection accuracy, fast response speed, and strong domain transfer performance, giving it considerable potential for engineering applications.
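The full model is not reproduced on this page, but the abstract's two key ideas, a refinement-style anchor head that rebalances positive and negative samples and an adversarial domain classifier for cross-domain transfer, can be illustrated with a minimal PyTorch sketch. Everything below (the class names ToySTDN, AnchorRefinementHead, GlobalDomainClassifier, the layer sizes, the class count, and the gradient-reversal mechanism) is an assumption made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Gradient reversal layer, a standard trick in adversarial domain
    adaptation (used here as a stand-in, not necessarily STDN's mechanism)."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the backbone.
        return -ctx.lam * grad_output, None


class GlobalDomainClassifier(nn.Module):
    """Image-level (global) domain discriminator on a pooled feature map."""

    def __init__(self, channels, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),  # source-vs-target logit
        )

    def forward(self, feat):
        return self.head(GradReverse.apply(feat, self.lam))


class AnchorRefinementHead(nn.Module):
    """Two-step anchor head: a coarse pass scores objectness (so easy negatives
    can be filtered) and refines anchor offsets; a second pass predicts classes
    and boxes from the refined anchors. This mirrors the PosNeg-balancing idea
    only schematically."""

    def __init__(self, channels, num_anchors, num_classes):
        super().__init__()
        self.coarse_obj = nn.Conv2d(channels, num_anchors, 3, padding=1)
        self.coarse_reg = nn.Conv2d(channels, num_anchors * 4, 3, padding=1)
        self.refined_cls = nn.Conv2d(channels, num_anchors * num_classes, 3, padding=1)
        self.refined_reg = nn.Conv2d(channels, num_anchors * 4, 3, padding=1)

    def forward(self, feat):
        return {
            "coarse_obj": self.coarse_obj(feat),
            "coarse_reg": self.coarse_reg(feat),
            "cls": self.refined_cls(feat),
            "reg": self.refined_reg(feat),
        }


class ToySTDN(nn.Module):
    """Skeleton combining a single-stage detection head with a global domain
    classifier; a local (pixel-level) branch would attach to earlier layers."""

    def __init__(self, num_classes=45, num_anchors=6):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for the real backbone
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.det_head = AnchorRefinementHead(128, num_anchors, num_classes)
        self.global_da = GlobalDomainClassifier(128)

    def forward(self, images):
        feat = self.backbone(images)
        return self.det_head(feat), self.global_da(feat)


if __name__ == "__main__":
    model = ToySTDN()
    preds, domain_logit = model(torch.randn(2, 3, 256, 256))
    print({k: tuple(v.shape) for k, v in preds.items()}, tuple(domain_logit.shape))
```

In a sketch like this, the detection losses would be computed on labeled source images only, while the domain logit would be supervised on both source and target batches; the gradient reversal then pushes the backbone toward domain-invariant features.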





Acknowledgements

We thank the anonymous reviewers for their helpful comments. We are grateful to the Department of Mechanical Engineering, College of Field Engineering, Army Engineering University of PLA.

Author information


Corresponding author

Correspondence to Xiaohui He.

About this article

Cite this article

Lu, G., He, X., Wang, Q. et al. A Traffic Sign Detection Network Based on PosNeg-Balanced Anchors and Domain Adaptation. Arab J Sci Eng 48, 1333–1347 (2023). https://doi.org/10.1007/s13369-022-06818-1

