Multimedia Tools and Applications, Volume 76, Issue 21, pp 22979–22998

Fast lane detection based on bird’s eye view and improved random sample consensus algorithm

  • Yong Ding
  • Zheng Xu
  • Yubin Zhang
  • Ke Sun


To ensure driver safety, Advanced Driver Assistance Systems (ADAS) have drawn increasing attention. The Lane Departure Warning system is one of the most important components of ADAS, and fast, stable lane-marking detection against complex backgrounds is its precondition. In this paper, we propose a new lane detection method based on the bird's eye view and an improved RANSAC (Random Sample Consensus) algorithm, inspired by the extraction of road features from remote-sensing images. From the bird's eye view of the road image, line markings are recognized with the Progressive Probabilistic Hough transform instead of being detected in the original perspective. The detected lines are then grouped by a new distance-based weighting scheme, yielding fields of candidate lanes. Within each field, lanes are refined by the improved RANSAC algorithm and fitted with a double model, so the road orientation can be predicted from the curve's curvature and the straight line's slope. Our experimental results indicate that the proposed lane detection algorithm is robust and runs in real time under various road environments.
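The lane-refinement step described above relies on RANSAC to separate true lane-marking points from outliers (shadows, other markings) before model fitting. The following is a minimal, generic sketch of that idea in numpy; it is not the authors' improved variant, and the iteration count, inlier threshold, and test data are illustrative assumptions.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_thresh=2.0, seed=None):
    """Fit a 2-D line y = a*x + b to noisy points with standard RANSAC.

    Generic sketch of the RANSAC refinement step, not the paper's
    improved variant; parameters here are illustrative assumptions.
    Returns (a, b, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best_mask, best_count = None, -1
    for _ in range(n_iters):
        # Minimal sample: two distinct points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # skip vertical candidates in this slope form
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Vertical distance of every point to the candidate line.
        dist = np.abs(points[:, 1] - (a * points[:, 0] + b))
        mask = dist < inlier_thresh
        if mask.sum() > best_count:
            best_count, best_mask = mask.sum(), mask
    # Refit by least squares on the largest consensus set.
    a, b = np.polyfit(points[best_mask, 0], points[best_mask, 1], 1)
    return a, b, best_mask

# Example: a lane boundary y ≈ 0.5 x + 10 with gross outliers mixed in.
rng = np.random.default_rng(0)
x = np.linspace(0, 100, 50)
pts = np.column_stack([x, 0.5 * x + 10 + rng.normal(0, 0.5, 50)])
pts[::10, 1] += 40  # outliers, e.g. shadows or adjacent markings
a, b, mask = ransac_line(pts, seed=1)
```

In the paper's pipeline the same consensus idea is applied per candidate-lane field, with a straight-line model for far segments and a curve model for near ones (the "double model" fit).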


Keywords: Bird’s eye view, Lane detection, Random sample consensus



This research has been funded by Guangxi Natural Science Foundation Project No. 2014GXNSFCA118014 and Innovation of Guangxi Graduate Education No. XJYC2012020.



Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. School of Computer Science and Information Security, Guangxi Key Laboratory of Cryptography and Information Security, Guilin University of Electronic Technology, Guilin, China
  2. The Third Research Institute of the Ministry of Public Security, Shanghai, China
  3. School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin, China
