A Versatile Visual Navigation System for Autonomous Vehicles

  • Filip Majer
  • Lucie Halodová
  • Tomáš Vintr
  • Martin Dlouhý
  • Lukáš Merenda
  • Jaime Pulido Fentanes
  • David Portugal
  • Micael Couceiro
  • Tomáš Krajník
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11472)


We present a universal visual navigation method which allows a vehicle to autonomously repeat paths previously taught by a human operator. The method is computationally efficient and does not require camera calibration. It can learn and autonomously traverse arbitrarily shaped paths and is robust to appearance changes induced by varying outdoor illumination and naturally-occurring environment changes. The method does not perform explicit position estimation in 2D/3D space; instead, it relies on a novel mathematical theorem which allows fusing exteroceptive and interoceptive sensory data in a way that ensures navigation accuracy and reliability. The experiments performed indicate that the proposed navigation method can accurately guide different autonomous vehicles along the desired path. The presented system, which has already been deployed in patrolling scenarios, is provided as open source at
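The core idea of such teach-and-repeat systems is that explicit localisation can be replaced by a simple image-space correction: features from the taught map image are matched against the current camera view, and the robot steers so as to cancel the dominant horizontal shift between them. The sketch below illustrates this principle with hypothetical names and values; it is not the paper's actual implementation, which is available in its open-source release.

```python
# Minimal sketch of a bearing-only steering rule for visual teach-and-repeat
# (illustrative only; function names, bin width, and gain are assumptions).
from collections import Counter

def steering_correction(matches, bin_width=10, gain=0.01):
    """Estimate a heading correction from horizontal pixel shifts of
    feature matches between the taught (map) image and the current view.

    matches: list of (x_map, x_current) horizontal pixel coordinates
             of corresponding image features.
    Returns a steering command proportional to the modal shift, turning
    the robot so the current view realigns with the taught one.
    """
    shifts = [x_cur - x_map for x_map, x_cur in matches]
    # Histogram voting: the most common (modal) shift is robust to
    # outlier matches caused by environment change.
    bins = Counter(round(s / bin_width) for s in shifts)
    modal_shift = bins.most_common(1)[0][0] * bin_width
    return -gain * modal_shift  # turn to reduce the image-space shift

# Most features agree on a ~+20 px shift; the outlier match is outvoted.
matches = [(100, 120), (200, 221), (300, 318), (50, 400)]
print(steering_correction(matches))
```

Because the correction is computed purely in image space, no camera calibration or metric pose estimate is needed; longitudinal progress along the path is tracked by odometry, which is the fusion of exteroceptive and interoceptive data the abstract refers to.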



We thank the for sharing their data and the TAROS vehicle. We would also like to thank Milan Kroulík and Jakub Lev from the Czech University of Life Sciences Prague for their positive attitude and their help in performing the experiments with the John Deere tractor.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Artificial Intelligence Center, Czech Technical University, Prague, Czech Republic
  2. Czech University of Life Sciences Prague, Prague, Czech Republic
  3. VOP CZ, Šenov u Nového Jičína, Czech Republic
  4. Lincoln Center for Autonomous Systems, University of Lincoln, Lincoln, UK
  5. Ingeniarius, Ltd., Coimbra, Portugal
