
Place Recognition in Gardens by Learning Visual Representations: Data Set and Benchmark Analysis

  • María Leyva-Vallina
  • Nicola Strisciuglio
  • Nicolai Petkov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11678)

Abstract

Visual place recognition is an important component of systems for camera localization and loop closure detection. It concerns the recognition of a previously visited place based on visual cues only. Although it is a widely studied problem for indoor and urban environments, the recent use of robots to automate agricultural and gardening tasks has introduced new difficulties, owing to the challenging appearance of garden-like environments: such scenes are dominated by green colors and contain many repetitive patterns and textures. The scarcity of data recorded in gardens and natural environments makes it difficult to improve visual localization algorithms.

In this paper we propose an extended version of the TB-Places data set, designed for testing visual place recognition algorithms. It contains images with ground-truth camera poses, recorded in real gardens across different seasons and under varying lighting conditions. We constructed and released a ground truth for all possible pairs of images, indicating whether or not they depict the same place.
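A pair-wise ground truth of this kind can be derived from the camera poses. The sketch below (Python) illustrates one plausible way to do so, labelling a pair as the same place when the camera positions and viewing directions are close enough; the pose format, thresholds and function names are illustrative assumptions, not the exact criteria used to build TB-Places.

    # Hypothetical sketch: deriving a "same place" label for an image pair from
    # ground-truth camera poses. The pose format (x, y, z position plus yaw in
    # radians) and the thresholds are assumptions, not values from the paper.
    import itertools
    import math

    def same_place(pose_a, pose_b, max_dist=1.0, max_angle=math.radians(30)):
        """Return True if two camera poses plausibly observe the same place."""
        (xa, ya, za, yaw_a), (xb, yb, zb, yaw_b) = pose_a, pose_b
        dist = math.sqrt((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2)
        # wrap the yaw difference into [-pi, pi] before comparing
        angle = abs((yaw_a - yaw_b + math.pi) % (2 * math.pi) - math.pi)
        return dist <= max_dist and angle <= max_angle

    def pair_labels(poses):
        """Yield (i, j, label) for all image pairs; label is 1 for the same place."""
        for i, j in itertools.combinations(range(len(poses)), 2):
            yield i, j, int(same_place(poses[i], poses[j]))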

We present the results of a benchmark analysis of methods based on convolutional neural networks for holistic image description and place recognition. We train existing networks (i.e. ResNet, DenseNet and VGG NetVLAD) as the backbone of a two-way architecture with a contrastive loss function. The results demonstrate that learning garden-tailored representations contributes to an improvement in performance, although the generalization capabilities of the learned models are limited.
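For concreteness, the sketch below (Python, assuming PyTorch and torchvision) shows how a two-way architecture with shared weights can be trained with a contrastive loss on labelled image pairs: matching pairs are pulled together in descriptor space and non-matching pairs are pushed beyond a margin. The ResNet-18 backbone, descriptor dimension and margin are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of a two-way (shared-weight) architecture trained with a
    # contrastive loss. Assumes PyTorch and torchvision; the backbone choice,
    # descriptor dimension and margin are illustrative, not the paper's settings.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class EmbeddingNet(nn.Module):
        def __init__(self, dim=256):
            super().__init__()
            backbone = models.resnet18()                       # randomly initialized; any CNN backbone fits here
            self.features = nn.Sequential(*list(backbone.children())[:-1])
            self.fc = nn.Linear(backbone.fc.in_features, dim)

        def forward(self, x):
            f = self.features(x).flatten(1)
            return nn.functional.normalize(self.fc(f), dim=1)  # L2-normalized holistic descriptor

    def contrastive_loss(desc_a, desc_b, label, margin=0.5):
        """label = 1 for pairs depicting the same place, 0 otherwise."""
        d = torch.norm(desc_a - desc_b, dim=1)                      # Euclidean distance between descriptors
        pos = label * d.pow(2)                                      # pull matching pairs together
        neg = (1 - label) * torch.clamp(margin - d, min=0).pow(2)   # push non-matching pairs beyond the margin
        return (pos + neg).mean()

    # Both branches share the same weights: the network is simply run on each image of a pair.
    net = EmbeddingNet()
    img_a, img_b = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
    labels = torch.tensor([1.0, 0.0])
    loss = contrastive_loss(net(img_a), net(img_b), labels)

Replacing the backbone with DenseNet or a VGG-based NetVLAD model would change only the EmbeddingNet definition; the pairing of branches and the loss remain the same.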

Keywords

Benchmarking · Data set · Deep learning · Place recognition

Acknowledgements

This work was funded by the European Horizon 2020 program, under the project TrimBot2020 (grant No. 688007).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • María Leyva-Vallina
  • Nicola Strisciuglio
  • Nicolai Petkov

  1. Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands
