
Continual SLAM: Beyond Lifelong Simultaneous Localization and Mapping Through Continual Learning

Conference paper in Robotics Research (ISRR 2022)

Part of the book series: Springer Proceedings in Advanced Robotics (SPAR, volume 27)

Abstract

Robots operating in the open world encounter a variety of environments that can differ substantially from one another. This domain gap also poses a challenge for Simultaneous Localization and Mapping (SLAM), one of the fundamental tasks for navigation. In particular, learning-based SLAM methods are known to generalize poorly to unseen environments, which hinders their general adoption. In this work, we introduce the novel task of continual SLAM, extending the concept of lifelong SLAM from a single dynamically changing environment to sequential deployments in several drastically differing environments. To address this task, we propose CL-SLAM, which leverages a dual-network architecture to both adapt to new environments and retain knowledge with respect to previously visited environments. We compare CL-SLAM to learning-based as well as classical SLAM methods and show the advantages of leveraging online data. We extensively evaluate CL-SLAM on three different datasets and demonstrate that it outperforms several baselines inspired by existing continual learning-based visual odometry methods. We make the code of our work publicly available at http://continual-slam.cs.uni-freiburg.de.
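The abstract mentions a dual-network architecture that adapts to new environments while retaining knowledge of previously visited ones. As a rough illustration only, the sketch below shows how such a scheme could be wired up in a PyTorch-style loop, with an "expert" network updated on each incoming sample and a "generalizer" network regularized by replaying stored samples. All class names, losses, and hyperparameters here are hypothetical placeholders and are not taken from the paper or its released code.

```python
# Illustrative sketch only: a dual-network continual adaptation loop in the
# spirit of an expert/generalizer split. Names and losses are hypothetical.
import random
import torch
import torch.nn as nn


class PoseNet(nn.Module):
    """Tiny stand-in for a learning-based visual odometry network."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=3, stride=2, padding=1),  # stacked frame pair
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 6)  # 6-DoF relative pose (translation + rotation)

    def forward(self, frame_pair):
        return self.head(self.encoder(frame_pair).flatten(1))


def self_supervised_loss(pred_pose, frame_pair):
    # Placeholder for a photometric/reprojection loss between warped frames;
    # here just a dummy differentiable term so the sketch runs end to end.
    return pred_pose.pow(2).mean() + 0.0 * frame_pair.mean()


expert = PoseNet()       # adapts quickly to the current environment (plasticity)
generalizer = PoseNet()  # retains knowledge across environments (stability)
opt_expert = torch.optim.Adam(expert.parameters(), lr=1e-4)
opt_general = torch.optim.Adam(generalizer.parameters(), lr=1e-5)
replay_buffer = []       # samples kept from previously visited environments


def adapt_step(frame_pair):
    # 1) Expert: update only on the newest online sample.
    loss_e = self_supervised_loss(expert(frame_pair), frame_pair)
    opt_expert.zero_grad()
    loss_e.backward()
    opt_expert.step()

    # 2) Generalizer: mix the new sample with replayed ones to mitigate
    #    catastrophic forgetting of earlier environments.
    batch = [frame_pair] + random.sample(replay_buffer, k=min(3, len(replay_buffer)))
    loss_g = sum(self_supervised_loss(generalizer(x), x) for x in batch) / len(batch)
    opt_general.zero_grad()
    loss_g.backward()
    opt_general.step()

    replay_buffer.append(frame_pair.detach())
    with torch.no_grad():
        return expert(frame_pair)  # pose estimate used for tracking


# Example usage with a dummy pair of 64x64 RGB frames stacked along channels.
pose = adapt_step(torch.randn(1, 6, 64, 64))
print(pose.shape)  # torch.Size([1, 6])
```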

This work was funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 871449-OpenDR.



Author information

Corresponding author

Correspondence to Niclas Vödisch.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (MP4, 9692 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG


Cite this paper

Vödisch, N., Cattaneo, D., Burgard, W., Valada, A. (2023). Continual SLAM: Beyond Lifelong Simultaneous Localization and Mapping Through Continual Learning. In: Billard, A., Asfour, T., Khatib, O. (eds) Robotics Research. ISRR 2022. Springer Proceedings in Advanced Robotics, vol 27. Springer, Cham. https://doi.org/10.1007/978-3-031-25555-7_3
