Two-Wheeled Vehicle Detection Using Two-Step and Single-Step Deep Learning Models

  • Research Article - Computer Engineering and Computer Science
  • Published: 2020
Arabian Journal for Science and Engineering

Abstract

Road accidents are a major cause of death and have increased by 46% since 1990. In recent years, significant effort has been invested in four-wheeled vehicle detection, which has improved intelligent transportation systems and reduced accident rates. However, the automatic detection of two-wheeled vehicles remains challenging due to occlusion, illumination variation, environmental conditions, and viewpoint variations. In this paper, we present a comprehensive methodology for two-wheeled vehicle detection using two categories of deep learning-based object detection models: two-step and single-step techniques. For two-step object detection, experiments are carried out with the region-based convolutional neural network (RCNN), Fast-RCNN, Faster-RCNN, and the region-based fully convolutional network (R-FCN), while for single-step object detection, detection is performed with the single-shot multibox detector (SSD), SSDLite, and you only look once (YOLOv3) models. The performance of the proposed methodology is evaluated on two benchmark datasets, i.e., MB7500 and the Tsinghua-Daimler Cyclist dataset. The experimental results demonstrate that Faster-RCNN with the Inception-ResNetv2 backbone outperforms the other two-step object detection techniques, while among the single-step techniques, SSD with the Inceptionv2 backbone shows superior performance. Furthermore, a performance comparison of the proposed methodology with existing state-of-the-art methods confirms its effectiveness for two-wheeled vehicle detection.
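
To make the two detection paradigms in the abstract concrete, the sketch below runs an off-the-shelf two-step detector (Faster R-CNN) and a single-step detector (SSDLite) on a single image and keeps only two-wheeler detections. This is a minimal illustration, not the authors' pipeline: it assumes a recent torchvision installation, uses COCO-pretrained backbones (ResNet-50-FPN and MobileNetV3) rather than the Inception-ResNetv2/Inceptionv2 backbones reported in the paper, stands in COCO classes 2 (bicycle) and 4 (motorcycle) for the two-wheeled vehicle class of MB7500 and Tsinghua-Daimler, and the image path is hypothetical.

```python
# Minimal sketch (NOT the authors' implementation): compare a two-step and a
# single-step detector on one image, keeping only two-wheeler classes.
# Assumes torchvision >= 0.13; backbones differ from the paper's
# Inception-ResNetv2 / Inceptionv2, and "street_scene.jpg" is a hypothetical file.
import torch
from torchvision.io import read_image
from torchvision.models import detection
from torchvision.transforms.functional import convert_image_dtype

TWO_WHEELER_IDS = {2, 4}  # COCO category ids: 2 = bicycle, 4 = motorcycle


def detect_two_wheelers(image_path: str, two_step: bool = True, score_thr: float = 0.5):
    """Return [(box_xyxy, score), ...] for bicycle/motorcycle detections."""
    # Two-step (region-proposal) vs. single-step (one-shot) detector families,
    # mirroring the paper's split; weights are COCO-pretrained.
    model = (detection.fasterrcnn_resnet50_fpn(weights="DEFAULT") if two_step
             else detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT"))
    model.eval()

    img = convert_image_dtype(read_image(image_path), torch.float)  # CxHxW, [0, 1]
    with torch.no_grad():
        out = model([img])[0]  # dict with 'boxes', 'labels', 'scores'

    return [(box.tolist(), score.item())
            for box, label, score in zip(out["boxes"], out["labels"], out["scores"])
            if label.item() in TWO_WHEELER_IDS and score.item() >= score_thr]


if __name__ == "__main__":
    print("Faster R-CNN:", detect_two_wheelers("street_scene.jpg", two_step=True))
    print("SSDLite:", detect_two_wheelers("street_scene.jpg", two_step=False))
```

The single boolean switch mirrors the paper's two-step versus single-step split; in practice the two families trade detection accuracy (two-step) against inference speed (single-step), which is why both are evaluated.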

Author information

Corresponding author

Correspondence to Muhammad Haroon Yousaf.

About this article

Cite this article

Kausar, A., Jamil, A., Nida, N. et al. Two-Wheeled Vehicle Detection Using Two-Step and Single-Step Deep Learning Models. Arab J Sci Eng 45, 10755–10773 (2020). https://doi.org/10.1007/s13369-020-04837-4

