I Can See for Miles and Miles: An Extended Field Test of Visual Teach and Repeat 2.0

  • Michael Paton
  • Kirk MacTavish
  • Laszlo-Peter Berczi
  • Sebastian Kai van Es
  • Timothy D. Barfoot
Conference paper
Part of the Springer Proceedings in Advanced Robotics book series (SPAR, volume 5)

Abstract

Autonomous path-following systems based on the Teach and Repeat paradigm allow robots to traverse extensive networks of manually driven paths using on-board sensors. These methods are well suited for applications that involve repeated traversals of constrained paths, such as factory floors, orchards, and mines. For path-following systems to be viable in these applications, they must be able to navigate large distances over long time periods, a challenging task for vision-based systems, which are susceptible to appearance change. This paper details Visual Teach and Repeat 2.0, a vision-based path-following system capable of safe, long-term navigation over large-scale networks of connected paths in unstructured, outdoor environments. This is achieved through a suite of novel, multi-experience, vision-based navigation algorithms. We validated our system experimentally through an eleven-day field test in an untended gravel pit in Sudbury, Canada, where we incrementally built and autonomously traversed a 5 km network of paths. Over the span of the field test, the robot logged over 140 km of autonomous driving with an autonomy rate of 99.6%, despite experiencing significant appearance change due to lighting and weather, including driving at night using headlights.
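To make the teach-and-repeat paradigm and the multi-experience localization idea concrete, the following Python sketch illustrates the high-level loop: a manually driven teach pass stores keyframes along a path, and each repeat pass localizes against every stored traversal, saving itself as a new experience so the map adapts to appearance change. All names here (Experience, teach, repeat, localize, match_score) are hypothetical illustrations, not the authors' VT&R 2.0 implementation; the real system performs metric, vision-based localization, which this toy reduces to feature-set overlap.

```python
# Hypothetical sketch of multi-experience teach and repeat.
# Not the authors' API: metric stereo localization is abstracted
# away as overlap between sets of feature identifiers.
from dataclasses import dataclass, field
from typing import Callable, Dict, FrozenSet, List

Features = FrozenSet[str]  # stand-in for a keyframe's visual landmarks


@dataclass
class Experience:
    """One traversal of the path: features observed at each path vertex."""
    landmarks: Dict[int, Features] = field(default_factory=dict)


def teach(camera: Callable[[int], Features], n_vertices: int) -> List[Experience]:
    """Teach pass: drive manually, storing one keyframe per vertex as the
    privileged (first) experience."""
    return [Experience({v: camera(v) for v in range(n_vertices)})]


def match_score(live: Features, stored: Features) -> float:
    """Toy appearance match: fraction of stored features re-observed
    (a stand-in for feature matching and pose estimation)."""
    return len(live & stored) / len(stored) if stored else 0.0


def localize(live: Features, vertex: int, experiences: List[Experience],
             min_score: float = 0.3) -> bool:
    """Multi-experience localization: match against every stored traversal
    of this vertex and accept the best, so intermediate experiences bridge
    the appearance gap between the teach pass and current conditions."""
    best = max((match_score(live, e.landmarks.get(vertex, frozenset()))
                for e in experiences), default=0.0)
    return best >= min_score


def repeat(camera: Callable[[int], Features],
           experiences: List[Experience], n_vertices: int) -> bool:
    """Repeat pass: follow the taught path, localizing at each vertex.
    A successful traversal is appended as a new experience, so the map
    adapts to lighting and weather over repeated runs."""
    run = Experience()
    for vertex in range(n_vertices):
        live = camera(vertex)
        if not localize(live, vertex, experiences):
            return False  # a real system would fall back on dead reckoning
        run.landmarks[vertex] = live
    experiences.append(run)
    return True


if __name__ == "__main__":
    # Simulated camera: each observation drops a random 30% of a vertex's
    # landmarks, mimicking gradual appearance change between runs.
    import random
    random.seed(0)
    world = {v: {f"lm{v}_{i}" for i in range(20)} for v in range(5)}

    def camera(v: int) -> Features:
        return frozenset(f for f in world[v] if random.random() > 0.3)

    experiences = teach(camera, 5)
    for day in range(3):
        print("repeat ok:", repeat(camera, experiences, 5))
```

The key design point this sketch captures is that each successful repeat enriches the map rather than merely consuming it: localization is attempted against all prior experiences, which is what allows the system to keep working as conditions drift far from those of the original teach pass.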

Acknowledgements

This work was supported financially and in-kind by Clearpath Robotics and the Natural Sciences and Engineering Research Council (NSERC) through the NSERC Canadian Field Robotics Network (NCFRN). The authors would also like to extend their deepest thanks to Ethier Sand and Gravel for allowing us to conduct our field test at their site.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Michael Paton (1, corresponding author)
  • Kirk MacTavish (1)
  • Laszlo-Peter Berczi (1)
  • Sebastian Kai van Es (1)
  • Timothy D. Barfoot (1)

  1. University of Toronto Institute for Aerospace Studies, Toronto, Canada