
Research on highway vehicle detection based on faster R-CNN and domain adaptation

Published in: Applied Intelligence

Abstract

In real highway scenes, factors such as bad weather, illumination changes, and occlusion cause traditional object detection models to suffer from a high miss rate for small vehicles, weak detection ability, and limited applicability beyond a single scene. This paper proposes an improved domain-adaptive Faster R-CNN algorithm. Image-level and instance-level domain classifiers, together with a consistency loss, are added to address the domain shift caused by the mismatch between the distribution of the training samples and that of real-world samples. In addition, the RPN is improved through multi-scale training and by mining hard examples for a second round of training, which further improves model performance. The improved model yields a gain of 4.8%. Experimental results show that the domain-adaptive components are effective for transfer between different sample domains and that detection performance on small-scale targets is significantly improved. The improved method effectively increases the accuracy and robustness of the model and shows a degree of generalization ability.
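To make the adversarial domain-adaptation components described above more concrete, the sketch below shows, in PyTorch, a gradient reversal layer, an image-level domain classifier applied to the backbone feature map, an instance-level domain classifier applied to ROI-pooled proposal features, and a consistency regularizer tying the two predictions together. This is a minimal illustrative sketch, not the authors' implementation; all class names, layer sizes, and the choice of an L1 consistency loss are assumptions.

```python
# Minimal sketch of domain-adaptive Faster R-CNN components (assumed design,
# not the paper's code): gradient reversal + image/instance-level domain
# classifiers + consistency loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class ImageLevelDomainClassifier(nn.Module):
    """Predicts per-location domain probabilities (source=0, target=1) from the backbone feature map."""

    def __init__(self, in_channels=512):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 256, kernel_size=1)
        self.conv2 = nn.Conv2d(256, 1, kernel_size=1)

    def forward(self, feat, lambd=1.0):
        x = grad_reverse(feat, lambd)          # adversarial gradient reversal
        x = F.relu(self.conv1(x))
        return torch.sigmoid(self.conv2(x))    # (N, 1, H, W) domain probabilities


class InstanceLevelDomainClassifier(nn.Module):
    """Predicts a domain probability for each ROI-pooled proposal feature vector."""

    def __init__(self, in_features=4096):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_features, 1024), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(1024, 1),
        )

    def forward(self, roi_feat, lambd=1.0):
        x = grad_reverse(roi_feat, lambd)
        return torch.sigmoid(self.fc(x))       # (R, 1) domain probabilities


def domain_adaptation_losses(img_prob, ins_prob, domain_label):
    """Image-level, instance-level, and consistency losses for one image.

    domain_label is 0.0 for source-domain images and 1.0 for target-domain images.
    """
    img_target = torch.full_like(img_prob, domain_label)
    ins_target = torch.full_like(ins_prob, domain_label)
    loss_img = F.binary_cross_entropy(img_prob, img_target)
    loss_ins = F.binary_cross_entropy(ins_prob, ins_target)
    # Consistency: each proposal's domain prediction should agree with the
    # image-level prediction averaged over all spatial locations.
    loss_cst = F.l1_loss(ins_prob, img_prob.mean().expand_as(ins_prob))
    return loss_img, loss_ins, loss_cst
```

In such a setup these three terms would typically be added, with weighting factors, to the standard Faster R-CNN classification and regression losses, so that the gradient reversal pushes the backbone and ROI features toward domain-invariant representations while the detector is trained on labeled source-domain images only.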





Author information

Corresponding author

Correspondence to Yuejin Zhang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Meng Wang contributed equally to this work.

About this article

Cite this article

Yin, G., Yu, M., Wang, M. et al. Research on highway vehicle detection based on faster R-CNN and domain adaptation. Appl Intell 52, 3483–3498 (2022). https://doi.org/10.1007/s10489-021-02552-7
