Towards Visual Teach and Repeat for GPS-Denied Flight of a Fixed-Wing UAV

  • M. Warren
  • M. Paton
  • K. MacTavish
  • A. P. Schoellig
  • T. D. Barfoot
Conference paper. Part of the Springer Proceedings in Advanced Robotics book series (SPAR, volume 5).

Abstract

Most consumer and industrial Unmanned Aerial Vehicles (UAVs) rely on combining Global Navigation Satellite Systems (GNSS) with barometric and inertial sensors for outdoor operation. As a consequence, these vehicles are prone to a variety of navigation failures, such as jamming and environmental interference. This usually limits their legal operation to areas of low population density within line-of-sight of a human pilot, to reduce the risk of injury and damage. Autonomous route-following methods such as Visual Teach and Repeat (VT&R) have enabled long-range navigational autonomy for ground robots without relying on external infrastructure or an accurate global position estimate. In this paper, we demonstrate the localisation component of VT&R outdoors on a fixed-wing UAV as a method of backup navigation in case of primary sensor failure. We modify the localisation engine of VT&R to work with a single downward-facing camera on a UAV, enabling safe navigation under the guidance of vision alone. We evaluate the method using visual data from the UAV flying a 1200 m trajectory (at an altitude of 80 m) several times over a multi-day period, covering a total distance of 10.8 km with the algorithm. We examine localisation performance for both small (single-flight) and large (inter-day) temporal differences between teach and repeat. Through these experiments, we demonstrate the ability to successfully localise the aircraft on a self-taught route using vision alone, without additional sensing or infrastructure.
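The localisation step described above — matching features in the live camera image against a stored teach-pass keyframe and solving for the relative pose — can be illustrated with a short sketch. The following is not the authors' implementation: it is a minimal, hypothetical Python/OpenCV version that assumes each teach keyframe stores feature descriptors and triangulated 3D landmarks, substitutes ORB features for the SURF features used in the paper, and uses an illustrative function name and thresholds.

```python
import cv2
import numpy as np

def localise_against_teach(live_img, teach_des, teach_pts_3d, K):
    """Hypothetical sketch: localise a live downward-facing camera image
    against a single keyframe of the taught route.

    live_img     : greyscale image from the live (repeat) pass
    teach_des    : (N, 32) uint8 ORB descriptors stored at teach time
    teach_pts_3d : (N, 3) landmark positions for those descriptors,
                   expressed in the teach-keyframe frame
    K            : 3x3 camera intrinsic matrix

    Returns (rvec, tvec) transforming teach-keyframe coordinates into
    the live camera frame, or None if localisation fails.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp_live, des_live = orb.detectAndCompute(live_img, None)
    if des_live is None:
        return None

    # Brute-force Hamming matching with a ratio test to reject
    # ambiguous live-to-teach correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_live, teach_des, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.8 * p[1].distance]
    if len(good) < 12:
        return None  # too few matches; fall back to odometry

    pts_2d = np.float32([kp_live[m.queryIdx].pt for m in good])
    pts_3d = np.float32([teach_pts_3d[m.trainIdx] for m in good])

    # Robust perspective-n-point pose estimation, with RANSAC
    # rejecting any remaining outlier matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, None, reprojectionError=3.0)
    if not ok or inliers is None or len(inliers) < 12:
        return None
    return rvec, tvec
```

In a full VT&R-style pipeline, this routine would run against the teach keyframe nearest the vehicle's current along-route position estimate, with visual odometry bridging frames where localisation fails.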

Acknowledgements

Thanks to PrecisionHawk, MITACS, and the NCFRN for project funding; to Ethier Sand and Gravel for property access to gather data; and to Haowei Zhang for logistical support and data processing.


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • M. Warren (1)
  • M. Paton (1)
  • K. MacTavish (1)
  • A. P. Schoellig (1)
  • T. D. Barfoot (1)

  1. University of Toronto Institute for Aerospace Studies (UTIAS), Toronto, Canada
