Abstract
Robots operating in the open world encounter a wide variety of environments that can differ substantially from one another. This domain gap poses a challenge for Simultaneous Localization and Mapping (SLAM), one of the fundamental tasks for navigation. In particular, learning-based SLAM methods are known to generalize poorly to unseen environments, hindering their general adoption. In this work, we introduce the novel task of continual SLAM, which extends the concept of lifelong SLAM from a single dynamically changing environment to sequential deployments in several drastically different environments. To address this task, we propose CL-SLAM, which leverages a dual-network architecture to both adapt to new environments and retain knowledge of previously visited environments. We compare CL-SLAM to learning-based as well as classical SLAM methods and show the advantages of leveraging online data. We extensively evaluate CL-SLAM on three different datasets and demonstrate that it outperforms several baselines inspired by existing continual learning-based visual odometry methods. We make the code publicly available at http://continual-slam.cs.uni-freiburg.de.
This work was funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 871449-OpenDR.
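The dual-network idea in the abstract, adapting quickly to a new environment while retaining knowledge of past ones, can be illustrated with a toy sketch. The following is not the paper's implementation: the expert/generalizer split, the replay buffer, the learning rates, and the 1-parameter "model" are all illustrative assumptions, chosen only to show how a plastic network and a stable, replay-regularized network behave differently across sequential deployments.

```python
import random

class ToyModel:
    """A 1-parameter model trained by gradient descent on squared error."""
    def __init__(self, w=0.0, lr=0.1):
        self.w = w
        self.lr = lr

    def step(self, target):
        # d/dw (w - target)^2 = 2 * (w - target)
        self.w -= self.lr * 2.0 * (self.w - target)

def deploy(expert, generalizer, replay_buffer, stream, mix=0.5):
    """Online adaptation during one deployment: the expert tracks the current
    environment, while the generalizer interleaves replayed past samples."""
    for target in stream:
        replay_buffer.append(target)
        expert.step(target)  # fast adaptation to the current environment
        # Slow consolidation: mix current data with replayed past data.
        if random.random() < mix and len(replay_buffer) > 1:
            generalizer.step(random.choice(replay_buffer))
        else:
            generalizer.step(target)

random.seed(0)
expert = ToyModel(lr=0.5)        # plastic: large learning rate
generalizer = ToyModel(lr=0.05)  # stable: small learning rate, uses replay
buffer = []

deploy(expert, generalizer, buffer, [1.0] * 50)   # environment A
deploy(expert, generalizer, buffer, [-1.0] * 50)  # environment B

print(expert.w)       # fully adapted to environment B
print(generalizer.w)  # retains a compromise between A and B
```

After the second deployment, the expert has overwritten its knowledge of environment A entirely, while the replay-fed generalizer ends up between the two environments, a minimal analogue of the adapt-versus-retain trade-off the dual-network architecture is designed to manage.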
Electronic supplementary material
Below is the link to the electronic supplementary material.
Supplementary material 1 (mp4 9692 KB)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Vödisch, N., Cattaneo, D., Burgard, W., Valada, A. (2023). Continual SLAM: Beyond Lifelong Simultaneous Localization and Mapping Through Continual Learning. In: Billard, A., Asfour, T., Khatib, O. (eds) Robotics Research. ISRR 2022. Springer Proceedings in Advanced Robotics, vol 27. Springer, Cham. https://doi.org/10.1007/978-3-031-25555-7_3
DOI: https://doi.org/10.1007/978-3-031-25555-7_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25554-0
Online ISBN: 978-3-031-25555-7
eBook Packages: Intelligent Technologies and Robotics (R0)