Deep Learning Techniques for Vehicle Trajectory Extraction in Mixed Traffic

  • Original Paper
  • Published in: Journal of Big Data Analytics in Transportation

Abstract

Vehicle trajectories provide useful empirical data for studying traffic phenomena such as vehicle-following behavior, lane-changing behavior, traffic oscillations, capacity drop, and safety. However, few studies have extracted trajectory data from mixed traffic or under congested conditions. This paper presents a deep learning-based framework for extracting vehicle trajectories in mixed traffic under both free-flow and congested conditions. The popular YOLOv3 deep learning architecture is trained on a hybrid dataset generated from two sets of frames with different scales and orientations, and the anchor boxes for vehicle detection and classification are customized to improve accuracy and efficiency. The SORT algorithm is used to track the detected vehicles, and the extracted trajectories are benchmarked against a popular trajectory extraction portal, which shows that the proposed model performs well. The paper also presents a methodology based on numerical integration techniques to impute missing trajectory data. Finally, the trajectories obtained from adjacent road sections are aligned and scaled to real-world coordinates using coordinate transformation and error correction methods to make them useful for research purposes. The extracted trajectories achieve a precision of approximately 0.25–0.35 m. These trajectories are expected to capture traffic and driving behavior phenomena for a better understanding of mixed traffic conditions.
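To make the imputation step mentioned above concrete, the following minimal sketch (not the authors' implementation) fills gaps in a vehicle's position trace by numerically integrating its speed with the trapezoidal rule. The function name `impute_positions`, the array layout, and the linear interpolation of speed across missing samples are assumptions made only for illustration.

```python
import numpy as np

def impute_positions(t, x, v):
    """Fill missing longitudinal positions (NaNs in x) by integrating speed.

    t : timestamps in seconds, strictly increasing
    x : positions in metres, NaN wherever tracking lost the vehicle
    v : speeds in m/s; missing speeds are linearly interpolated first
    """
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float).copy()
    v = np.asarray(v, dtype=float)

    # Linearly interpolate speed over any missing samples (assumed behaviour).
    known = ~np.isnan(v)
    v = np.interp(t, t[known], v[known])

    # Trapezoidal rule: x_k = x_{k-1} + 0.5 * (v_{k-1} + v_k) * (t_k - t_{k-1}),
    # anchored at the last observed (or previously imputed) position.
    for k in range(1, len(x)):
        if np.isnan(x[k]):
            x[k] = x[k - 1] + 0.5 * (v[k - 1] + v[k]) * (t[k] - t[k - 1])
    return x

# Toy example: 0.5 s resolution, constant 10 m/s, three dropped position samples.
t = np.arange(0.0, 5.0, 0.5)
v = np.full_like(t, 10.0)
x = 10.0 * t
x[4:7] = np.nan                      # simulate a tracking dropout
print(impute_positions(t, x, v))     # the gap is recovered as 20, 25, 30 m
```

In practice the anchoring position and the speeds surrounding a dropout would come from the detection and tracking output of the preceding and following frames.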



Acknowledgements

The authors would like to thank Kiran Roy, a former Dual Degree student in the Department of Civil Engineering, IIT Madras, for recording the videos; IIT Madras for providing the facilities to conduct the research; and MHRD, Government of India, for providing the scholarship to the first author.

Funding

This research was supported through projects funded by the SPARC program, MHRD, Government of India, and by IC&SR, IITM.

Author information

Contributions

Conceptualization: Bhargava Rama Chilukuri; Methodology: Rohan Dhatbale, Bhargava Rama Chilukuri; Formal analysis and investigation: Rohan Dhatbale, Bhargava Rama Chilukuri; Writing—original draft preparation: Rohan Dhatbale; Writing—review and editing: Rohan Dhatbale, Bhargava Rama Chilukuri; Funding acquisition: Bhargava Rama Chilukuri; Resources: IIT Madras, MHRD (Govt. of India); Supervision: Bhargava Rama Chilukuri.

Corresponding author

Correspondence to Bhargava Rama Chilukuri.

Ethics declarations

Conflicts of interest

None.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Dhatbale, R., Chilukuri, B.R. Deep Learning Techniques for Vehicle Trajectory Extraction in Mixed Traffic. J. Big Data Anal. Transp. 3, 141–157 (2021). https://doi.org/10.1007/s42421-021-00042-3
