Journal of Intelligent & Robotic Systems, Volume 92, Issue 1, pp 19–32

Topological Semantic Mapping and Localization in Urban Road Scenarios

  • Fernando Bernuy
  • Javier Ruiz-del-Solar

Abstract

Autonomous vehicle self-localization must be robust to environment changes, such as dynamic objects, variable illumination, and atmospheric conditions. Topological maps provide a concise representation of the world by keeping only information about relevant places, which makes them robust to environment changes. Semantic maps, in turn, are a high-level representation of the environment that includes labels associated with relevant objects and places. Hence, a topological map built on semantic information is a robust and efficient solution for large-scale outdoor scenes in autonomous vehicles and Advanced Driver Assistance Systems (ADAS). In this work, a novel topological semantic mapping and localization methodology for large-scale outdoor scenarios in autonomous driving and ADAS applications is presented. The methodology uses: (i) a deep neural network for obtaining semantic observations of the environment, (ii) a Topological Semantic Map (TSM) for storing selected semantic observations, and (iii) a topological localization algorithm that uses a Particle Filter for estimating the vehicle's pose in the TSM. The proposed methodology was tested on a real driving scenario, where a True Estimate Rate of the vehicle's pose of 96.9% and a Mean Position Accuracy of 7.7 m were obtained. These results are considerably better than those obtained by two other methods used for comparison. Experiments also show that the method is able to obtain the pose of the vehicle when its initial pose is unknown.
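To make the three components above concrete, the sketch below illustrates, under stated assumptions, how a particle filter can estimate a place on a topological semantic map whose nodes store semantic-label histograms. The class and function names, the histogram-intersection likelihood, the toy two-node map, and all numeric values are illustrative assumptions, not the paper's actual observation model, TSM structure, or implementation.

```python
# Minimal sketch (not the authors' implementation) of particle-filter
# localization on a topological semantic map. All names and parameters
# are illustrative assumptions.
import random


class TopologicalSemanticMap:
    """Graph of places; each node stores a normalized semantic-label histogram."""

    def __init__(self):
        self.nodes = {}  # node_id -> {label: fraction}
        self.edges = {}  # node_id -> list of neighboring node_ids

    def add_node(self, node_id, histogram, neighbors=()):
        total = sum(histogram.values()) or 1.0
        self.nodes[node_id] = {k: v / total for k, v in histogram.items()}
        self.edges.setdefault(node_id, []).extend(neighbors)


def likelihood(observed_hist, node_hist):
    """Histogram-intersection similarity used here as observation likelihood."""
    labels = set(observed_hist) | set(node_hist)
    score = sum(min(observed_hist.get(l, 0.0), node_hist.get(l, 0.0)) for l in labels)
    return score + 1e-6  # small floor so no particle weight is exactly zero


def particle_filter_step(tsm, particles, observed_hist):
    """One predict / update / resample cycle over the topological graph."""
    # Predict: each particle either stays at its node or moves to a random neighbor.
    moved = [random.choice([p] + tsm.edges.get(p, [])) for p in particles]
    # Update: weight each particle by the semantic observation likelihood at its node.
    weights = [likelihood(observed_hist, tsm.nodes.get(p, {})) for p in moved]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(moved, weights=weights, k=len(particles))


if __name__ == "__main__":
    # Toy map and observation, purely for illustration.
    tsm = TopologicalSemanticMap()
    tsm.add_node("A", {"road": 6, "building": 3, "vegetation": 1}, neighbors=["B"])
    tsm.add_node("B", {"road": 4, "vegetation": 5, "sky": 1}, neighbors=["A"])
    particles = ["A", "B"] * 50  # uniform prior: initial pose unknown
    obs = {"road": 0.4, "vegetation": 0.5, "sky": 0.1}  # label fractions from segmentation
    for _ in range(5):
        particles = particle_filter_step(tsm, particles, obs)
    print(max(set(particles), key=particles.count))  # most frequent particle, typically "B"
```

In this toy setting the observation repeatedly favors node "B", so resampling concentrates the particle set there even though the prior over places is uniform, mirroring the global-localization behavior reported in the abstract.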

Keywords

Autonomous driving · Semantic map · Topological map · Semantic segmentation · Semantic localization · Topological semantic map

Notes

Acknowledgements

This work was partially funded by FONDECYT Project 1161500.

Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2017

Authors and Affiliations

  1. Department of Electrical Engineering, Universidad de Chile, Santiago, Chile
  2. Advanced Mining Technology Center, Universidad de Chile, Santiago, Chile