A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars

  • Jian-ru Xue
  • Di Wang
  • Shao-yi Du
  • Di-xiao Cui
  • Yong Huang
  • Nan-ning Zheng

Abstract

Most state-of-the-art perception systems of robotic cars differ substantially from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while machine perception of traffic environments must fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for autonomous driving, whereas an experienced human driver copes well with dynamic traffic environments in which machine perception can easily produce noisy results. In this paper, we propose a vision-centered multi-sensor fusion framework for traffic environment perception in autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework, addressing multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively with our self-developed robotic cars in actual urban scenes over eight years. The empirical results validate its robustness and efficiency.
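
As a concrete illustration of the geometrical constraint linking camera and LIDAR, the sketch below projects LIDAR returns into the image plane using a calibrated pinhole model. This is a minimal sketch under stated assumptions, not the paper's actual implementation; the function name and the calibration matrices K, R, t are illustrative.

    import numpy as np

    def project_lidar_to_image(points_lidar, K, R, t):
        """Project 3D LIDAR points into the camera image plane.

        points_lidar : (N, 3) points in the LIDAR frame.
        K            : (3, 3) camera intrinsic matrix.
        R, t         : rotation (3, 3) and translation (3,) mapping LIDAR
                       coordinates into the camera frame (extrinsic calibration).
        Returns (M, 2) pixel coordinates and the indices of the projected points.
        """
        # Rigid transform: LIDAR frame -> camera frame.
        points_cam = points_lidar @ R.T + t

        # Keep only points in front of the camera (positive depth).
        in_front = points_cam[:, 2] > 0.1
        points_cam = points_cam[in_front]

        # Pinhole projection: [u, v, 1]^T ~ K [X, Y, Z]^T.
        uvw = points_cam @ K.T
        pixels = uvw[:, :2] / uvw[:, 2:3]
        return pixels, np.flatnonzero(in_front)

Once projected, each LIDAR return can be associated with image regions such as detected lane markings or obstacles, which is one common way to enforce geometric consistency between the two sensors.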

Key words

Visual perception; Self-localization; Mapping; Motion planning; Robotic car

CLC number

TP181 

Copyright information

© Zhejiang University and Springer-Verlag GmbH Germany, part of Springer Nature 2017

Authors and Affiliations

  • Jian-ru Xue (1)
  • Di Wang (1)
  • Shao-yi Du (1)
  • Di-xiao Cui (1)
  • Yong Huang (1)
  • Nan-ning Zheng (1)

  1. Lab of Visual Cognitive Computing and Intelligent Vehicle, Xi'an Jiaotong University, Xi'an, China
