LiDAR Place Recognition Evaluation with the Oxford Radar RobotCar Dataset Revised

  • Conference paper
Image Analysis (SCIA 2023)

Abstract

The Oxford Radar RobotCar dataset has recently become popular for evaluating LiDAR-based place recognition methods. It is preferred over the original Oxford RobotCar dataset because it has better LiDAR sensors and location ground truth is available for all sequences. However, it turns out that the Radar dataset has serious issues with its ground truth, and experimental findings obtained with it can therefore be misleading. We demonstrate how easily this can happen by varying only the gallery sequence while keeping the training and test sequences fixed. The results of this experiment strongly suggest that gallery selection is an important consideration in place recognition. That conclusion, however, is spurious: the differences between galleries are explained by systematic errors in the ground truth. In this work, we propose a revised benchmark for LiDAR-based place recognition with the Oxford Radar RobotCar dataset. The benchmark includes fixed gallery, training and test sequences, corrected ground truth, and a strong baseline method. All data and code will be made publicly available to facilitate fair method comparison and development.
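The abstract describes the standard gallery-vs-query evaluation protocol for place recognition: each query scan retrieves its nearest gallery descriptor, and the retrieval counts as correct when the retrieved location lies within a ground-truth distance threshold of the query. A minimal sketch of that protocol follows; the `recall_at_1` function name, the 25 m threshold, and the arrays are illustrative assumptions for exposition, not the paper's actual code.

```python
import numpy as np

def recall_at_1(query_desc, gallery_desc, query_pos, gallery_pos,
                threshold_m=25.0):
    """Recall@1: fraction of queries whose nearest gallery descriptor
    lies within `threshold_m` metres of the query's true position."""
    # Pairwise Euclidean distances in descriptor space (queries x gallery).
    d = np.linalg.norm(query_desc[:, None, :] - gallery_desc[None, :, :],
                       axis=-1)
    nn = d.argmin(axis=1)  # index of the nearest gallery item per query
    # Geographic distance between each query and its retrieved gallery item.
    geo = np.linalg.norm(query_pos - gallery_pos[nn], axis=-1)
    return float((geo < threshold_m).mean())

# Tiny synthetic example: one query whose nearest descriptor belongs to
# a gallery item 1 m away, well inside the threshold.
q_desc = np.array([[0.0, 0.0]])
g_desc = np.array([[0.1, 0.0], [5.0, 5.0]])
q_pos = np.array([[1.0, 0.0]])
g_pos = np.array([[0.0, 0.0], [100.0, 0.0]])
print(recall_at_1(q_desc, g_desc, q_pos, g_pos))  # 1.0
```

Note that under this protocol a systematic error in `gallery_pos` or `query_pos` shifts the geographic check itself, which is exactly how corrupted ground truth can masquerade as a gallery-dependent performance difference.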



Author information


Corresponding author

Correspondence to Jukka Peltomäki.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Peltomäki, J., Alijani, F., Puura, J., Huttunen, H., Rahtu, E., Kämäräinen, JK. (2023). LiDAR Place Recognition Evaluation with the Oxford Radar RobotCar Dataset Revised. In: Gade, R., Felsberg, M., Kämäräinen, JK. (eds) Image Analysis. SCIA 2023. Lecture Notes in Computer Science, vol 13885. Springer, Cham. https://doi.org/10.1007/978-3-031-31435-3_1

  • DOI: https://doi.org/10.1007/978-3-031-31435-3_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-31434-6

  • Online ISBN: 978-3-031-31435-3

  • eBook Packages: Computer Science, Computer Science (R0)
