Drivable Road Recognition by Multilayered LiDAR and Vision

Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 193)


This paper presents a method for drivable road recognition using a multilayered LiDAR and vision. The multilayered LiDAR data are used to detect the planar drivable region and its boundaries, while vision processing extracts colored lane information for safe driving control. While navigating the road, the estimates from these two information sources are fused by an Extended Kalman Filter (EKF) for robust and reliable navigation. This sensor fusion technique makes the autonomous navigation system robust and useful in real environments, not only on regular roads and intersections but also on unpaved ground.
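The fusion step described above can be sketched as a Kalman filter that combines a LiDAR road-boundary measurement with a vision lane measurement. This is a minimal illustrative sketch, not the paper's implementation: the state (lateral offset and relative heading), the linear measurement model, and all noise values are assumptions chosen for clarity.

```python
import numpy as np

class RoadFusionKF:
    """Sketch of fusing LiDAR boundary and vision lane estimates.

    Assumed state: [lateral offset (m), relative heading (rad)] of the
    lane with respect to the vehicle. Linear models are used here; the
    paper's EKF would linearize its actual motion/measurement models.
    """

    def __init__(self):
        self.x = np.zeros(2)            # state estimate
        self.P = np.eye(2)              # state covariance
        self.F = np.eye(2)              # prediction model (assumption)
        self.Q = np.diag([0.05, 0.01])  # process noise (assumption)
        self.H = np.eye(2)              # both sensors observe the state
        self.R_lidar = np.diag([0.10, 0.05])   # LiDAR noise (assumption)
        self.R_vision = np.diag([0.30, 0.02])  # vision noise (assumption)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        # Standard Kalman update: gain weights each sensor by its noise.
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = RoadFusionKF()
kf.predict()
kf.update(np.array([0.4, 0.02]), kf.R_lidar)   # planar boundary from LiDAR
kf.update(np.array([0.6, 0.01]), kf.R_vision)  # colored lane from vision
print(kf.x)  # fused estimate lies between the two measurements
```

Because the LiDAR measurement is modeled as less noisy in lateral offset, the fused offset is pulled closer to the LiDAR value; the vision sensor, modeled as more precise in heading, dominates the heading estimate.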


Keywords: drivable road recognition, multilayer LiDAR, computer vision, Extended Kalman Filter, fusion





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Suhyeon Gim (1)
  • Ilyas Meo (1)
  • Yongjin Park (1)
  • Sukhan Lee (1, 2)

  1. Intelligent Systems Research Institute, SungKyunKwan University, Suwon, Rep. of Korea
  2. Dept. of Interaction Science, School of Information and Communication Eng., SungKyunKwan University, Suwon, Rep. of Korea
