Occlusion-Aware Detection for Internet of Vehicles in Urban Traffic Sensing Systems

Abstract

Vehicle detection is a fundamental challenge in urban traffic surveillance video. Owing to the powerful representational ability of convolutional neural networks (CNNs), CNN-based approaches have achieved remarkable success on generic object detection. However, they do not cope well with vehicle occlusion in complex urban traffic scenes. In this paper, we present a new occlusion-aware vehicle detection CNN framework that is both effective and efficient. First, we concatenate low-level and high-level feature maps to obtain a more robust feature representation; we then fuse local and global feature maps to handle vehicle occlusion, and context information is also exploited in the framework. Extensive experiments demonstrate the competitive performance of the proposed framework: our method achieves higher accuracy than the original Faster R-CNN on a new urban traffic surveillance dataset (UTSD), which contains a large number of occluded vehicles and complex scenes.
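The two feature-combination steps the abstract mentions can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names, shapes, and nearest-neighbor upsampling are illustrative assumptions, showing only the general idea of merging a fine low-level map with an upsampled coarse high-level map, and of fusing a per-proposal (local) descriptor with a whole-image (global/context) descriptor.

```python
import numpy as np

def upsample2x(fmap):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def concat_multilevel(low, high):
    # Bring the coarser high-level map up to the low-level resolution,
    # then concatenate along the channel axis (hyper-feature style).
    high_up = upsample2x(high)
    return np.concatenate([low, high_up], axis=0)

def fuse_local_global(local_feat, global_feat):
    # Fuse a per-proposal (local) descriptor with a whole-image
    # (global/context) descriptor by concatenation.
    return np.concatenate([local_feat, global_feat])

# Toy shapes: a 64-channel low-level map at 32x32 and a
# 128-channel high-level map at 16x16.
low = np.zeros((64, 32, 32))
high = np.zeros((128, 16, 16))
merged = concat_multilevel(low, high)   # shape (192, 32, 32)
```

In a real detector the concatenated map would feed the region proposal and classification heads, and the fused local/global vector would feed the final classifier; here only the tensor bookkeeping is shown.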



Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant No. 61806088), the Natural Science Fund of Changzhou (CE20175026), the Qing Lan Project of Jiangsu Province, the Science and Technology Support Plan of Changzhou (Social Development, CE20185044), and the Science and Technology Achievements Transformation Project of Nanjing Association for Science and Technology (201701209).

Author information


Corresponding author

Correspondence to Linkai Chen.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chen, L., Ruan, Y., Fan, H. et al. Occlusion-Aware Detection for Internet of Vehicles in Urban Traffic Sensing Systems. Mobile Netw Appl 26, 981–987 (2021). https://doi.org/10.1007/s11036-020-01668-3


Keywords

  • Vehicle detection
  • Vehicle occlusion
  • CNN
  • Context information