
Locality-constrained continuous place recognition for SLAM in extreme conditions


Abstract

Simultaneous Localization and Mapping (SLAM) in extreme lighting and weather conditions is a challenging problem: the scarcity of reliable features actively causes the vehicle to drift. Localization against maps produced in the ideal domain is not possible, as features in different domains rarely match. Place recognition techniques have achieved improved results by using known poses from ideal domains; however, they are generally effective only in regions of high distinctiveness. In this paper we first solve the place recognition problem by comparing a pair of image sequences taken by the robot in the ideal and extreme domains, under the constraint that every predicted place can only come from the neighborhood of the previous prediction. A neural network supplies cross-domain similarity measures between images. A binning technique discretizes the continuous poses into discrete places, with bins large enough to maintain distinctiveness yet small enough to keep the discretization loss acceptable. Furthermore, we employ a global landmark search whenever the model's confidence score falls below a threshold, i.e. when the robot loses its way. The outputs are passed through a particle filter to yield a continuous trajectory, and the final trajectories are observed to be more stable. The results show that the proposed technique outperforms state-of-the-art Visual Odometry (VO) and place recognition libraries.
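The matching pipeline described above can be sketched compactly. The following minimal Python illustration (an assumption-laden sketch, not the paper's implementation) shows three ingredients named in the abstract: binning continuous poses into discrete places, a locality constraint that restricts each prediction to the neighborhood of the previous one, and a fallback global search when confidence drops. The names and parameters (`bin_size`, `neighborhood`, `confidence_threshold`) are hypothetical, as is the 1-D arc-length representation of poses; in the paper, the similarity scores come from a trained cross-domain neural network.

```python
import numpy as np

def discretize_route(arc_lengths, bin_size=10.0):
    """Bin continuous poses (here assumed to be 1-D arc length along the
    ideal-domain route) into discrete place indices. bin_size trades
    distinctiveness against discretization loss."""
    return np.floor(np.asarray(arc_lengths) / bin_size).astype(int)

def locality_constrained_match(similarity, neighborhood=2,
                               confidence_threshold=0.5):
    """Predict one place per query frame, restricted to the neighborhood
    of the previous prediction; fall back to a global search when the
    best local score drops below the confidence threshold.

    similarity: (n_queries, n_places) array of cross-domain similarity
    scores, e.g. from a network comparing extreme-domain frames against
    ideal-domain places."""
    n_places = similarity.shape[1]
    predictions, prev = [], None
    for scores in similarity:
        if prev is None:
            candidates = np.arange(n_places)      # first frame: no prior
        else:
            lo = max(prev - neighborhood, 0)
            hi = min(prev + neighborhood + 1, n_places)
            candidates = np.arange(lo, hi)        # locality constraint
        best = candidates[np.argmax(scores[candidates])]
        if scores[best] < confidence_threshold:
            best = int(np.argmax(scores))         # lost: global landmark search
        predictions.append(int(best))
        prev = best
    return predictions
```

One design consideration under these assumptions: the neighborhood radius and the bin size interact, since coarser bins let a smaller radius absorb speed differences between the two traversals.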
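The abstract's final step smooths the discrete predictions with a particle filter. Below is a generic one-dimensional particle-filter sketch over the place index, assuming a random-walk motion model and a Gaussian observation model; `n_particles`, `motion_std`, and `obs_std` are invented parameters, and the authors' filter may differ.

```python
import numpy as np

def particle_filter_smooth(place_predictions, n_particles=500,
                           motion_std=1.0, obs_std=2.0, seed=0):
    """Smooth discrete place predictions into a continuous trajectory.

    Each particle carries a continuous position along the route; it is
    propagated by a random-walk motion model and weighted by a Gaussian
    likelihood centred on the predicted place index."""
    rng = np.random.default_rng(seed)
    particles = np.full(n_particles, float(place_predictions[0]))
    trajectory = []
    for z in place_predictions:
        # Predict: diffuse particles under the assumed motion model.
        particles = particles + rng.normal(0.0, motion_std, n_particles)
        # Update: weight particles by closeness to the observed place
        # index (small floor avoids division by zero after a large jump).
        weights = np.exp(-0.5 * ((particles - z) / obs_std) ** 2) + 1e-12
        weights /= weights.sum()
        # Resample (multinomial) so high-weight particles survive.
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
        # The trajectory estimate is the posterior mean.
        trajectory.append(float(particles.mean()))
    return trajectory
```

Under these assumptions, feeding the output of `locality_constrained_match` into `particle_filter_smooth` would yield the continuous, more stable trajectory the abstract describes.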


Abbreviations

FABMAP: Fast Appearance Based Mapping

OrbSLAM: Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping

RMSE: Root Mean Square Error

SeqSLAM: Sequence Simultaneous Localization and Mapping

SLAM: Simultaneous Localization and Mapping

SPTAM: Stereo Parallel Tracking And Mapping

VO: Visual Odometry


Acknowledgements

This research is supported by NavAjna Technologies Pvt. Ltd., the Science and Engineering Research Board (SERB), and FICCI under the Prime Minister's Fellowship for Doctoral Research. The authors take full responsibility for the content of the paper and any errors or omissions.

Author information


Corresponding author

Correspondence to Rohit Yadav.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yadav, R., Pani, V., Mishra, A. et al. Locality-constrained continuous place recognition for SLAM in extreme conditions. Appl Intell 53, 17593–17609 (2023). https://doi.org/10.1007/s10489-022-04415-1


Keywords

Navigation