
Place Recognition in Gardens by Learning Visual Representations: Data Set and Benchmark Analysis

  • Conference paper

Computer Analysis of Images and Patterns (CAIP 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11678)

Included in the conference series: International Conference on Computer Analysis of Images and Patterns (CAIP)

Abstract

Visual place recognition is an important component of systems for camera localization and loop closure detection. It concerns the recognition of a previously visited place based on visual cues only. Although it is a widely studied problem for indoor and urban environments, the recent use of robots for the automation of agricultural and gardening tasks poses new challenges, due to the difficult appearance of garden-like environments: garden scenes are dominated by green colors and contain many repetitive patterns and textures. Furthermore, the scarcity of data recorded in gardens and natural environments hampers the improvement of visual localization algorithms.
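To make the retrieval formulation concrete, the following is a minimal sketch of how holistic-descriptor place recognition is typically set up: each image is summarized by a single vector, and a query is matched against the descriptors of previously visited places. The function name, the cosine-similarity measure and the threshold are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def recognize_place(query_desc, db_descs, threshold=0.8):
    """Match a query descriptor against descriptors of previously
    visited places. Name and threshold are illustrative only."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q  # cosine similarity to each stored place
    best = int(np.argmax(sims))
    # Report a match only if the best similarity clears the threshold.
    if sims[best] >= threshold:
        return best, float(sims[best])
    return None, float(sims[best])
```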

In this paper, we propose an extended version of the TB-Places data set, designed for testing visual place recognition algorithms. It contains images with ground-truth camera poses, recorded in real gardens in different seasons and under varying light conditions. We constructed and released ground-truth labels for all possible pairs of images, indicating whether or not they depict the same place.
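As a rough illustration of how such pairwise ground truth can be derived from camera poses, the sketch below labels a pair as "same place" when the two cameras are close in both position and heading. The thresholds and the yaw-only orientation test are assumptions for illustration; the exact criteria used for TB-Places are described in [10].

```python
import numpy as np
from itertools import combinations

def label_pairs(positions, yaws, max_dist=2.0, max_yaw=np.deg2rad(30)):
    """Label every image pair from ground-truth camera poses:
    1 = same place, 0 = different place. Thresholds are illustrative
    assumptions, not the values used for TB-Places.
    positions: (N, 3) array of camera positions in meters.
    yaws: (N,) array of camera headings in radians."""
    labels = {}
    for i, j in combinations(range(len(positions)), 2):
        dist = np.linalg.norm(positions[i] - positions[j])
        # Smallest angular difference between the two headings.
        dyaw = abs((yaws[i] - yaws[j] + np.pi) % (2 * np.pi) - np.pi)
        labels[(i, j)] = int(dist <= max_dist and dyaw <= max_yaw)
    return labels
```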

We present the results of a benchmark analysis of methods based on convolutional neural networks for holistic image description and place recognition. We train existing networks (i.e. ResNet, DenseNet and VGG-NetVLAD) as the backbone of a two-branch architecture with a contrastive loss function. Our results demonstrate that learning garden-tailored representations contributes to improved performance, although the generalization capabilities of such representations are limited.
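The training setup can be sketched as follows: two images pass through the same shared-weight backbone, and a contrastive loss pulls descriptors of same-place pairs together while pushing different-place pairs apart by at least a margin. The ResNet-18 backbone, the embedding size and the margin below are illustrative stand-ins; the paper trains ResNet, DenseNet and VGG-NetVLAD backbones.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoBranchNet(nn.Module):
    """Shared-weight two-branch network: one CNN backbone maps each
    image of a pair to a holistic descriptor. ResNet-18 and the 256-d
    embedding are illustrative choices, not the paper's exact setup."""
    def __init__(self, dim=256):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, dim)

    def forward(self, x1, x2):
        return self.backbone(x1), self.backbone(x2)

def contrastive_loss(d1, d2, same_place, margin=1.0):
    """Contrastive loss: pull same-place descriptors together, push
    different-place descriptors at least `margin` apart.
    same_place: float tensor of 1s (same) and 0s (different)."""
    dist = torch.norm(d1 - d2, dim=1)
    pos = same_place * dist.pow(2)
    neg = (1.0 - same_place) * torch.clamp(margin - dist, min=0).pow(2)
    return 0.5 * (pos + neg).mean()
```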


References

  1. Arandjelovic, R., Gronat, P., Torii, A., Pajdla, T., Sivic, J.: NetVLAD: CNN architecture for weakly supervised place recognition. In: IEEE CVPR, pp. 5297–5307 (2016)

  2. Bac, C.W., van Henten, E.J., Hemming, J., Edan, Y.: Harvesting robots for high-value crops: state-of-the-art review and challenges ahead. J. Field Robot. 31(6), 888–911 (2014)

  3. Badino, H., Huber, D., Kanade, T.: Visual topometric localization. In: IEEE Intelligent Vehicles Symposium (IV), pp. 794–799 (2011)

  4. Cummins, M., Newman, P.: FAB-MAP: probabilistic localization and mapping in the space of appearance. Int. J. Robot. Res. 27(6), 647–665 (2008)

  5. Cummins, M., Newman, P.: Highly scalable appearance-only SLAM: FAB-MAP 2.0. Robot. Sci. Syst. 5, 17 (2009)

  6. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32(11), 1231–1237 (2013)

  7. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE CVPR, pp. 770–778 (2016)

  8. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: IEEE CVPR, pp. 4700–4708 (2017)

  9. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: NIPS, pp. 1097–1105 (2012)

  10. Leyva-Vallina, M., Strisciuglio, N., López-Antequera, M., Tylecek, R., Blaich, M., Petkov, N.: TB-Places: a data set for visual place recognition in garden environments. IEEE Access 7, 52277–52287 (2019)

  11. Lopez-Antequera, M., Gomez-Ojeda, R., Petkov, N., Gonzalez-Jimenez, J.: Appearance-invariant place recognition by discriminatively training a convolutional neural network. Pattern Recognit. Lett. 92, 89–95 (2017)

  12. Lowry, S., et al.: Visual place recognition: a survey. IEEE Trans. Robot. 32(1), 1–19 (2016)

  13. McManus, C., Churchill, W., Maddern, W., Stewart, A.D., Newman, P.: Shady dealings: robust, long-term visual localisation using illumination invariance. In: IEEE ICRA, pp. 901–906 (2014)

  14. Milford, M.J., Wyeth, G.F.: SeqSLAM: visual route-based navigation for sunny summer days and stormy winter nights. In: IEEE ICRA, pp. 1643–1649 (2012)

  15. Ohi, N., et al.: Design of an autonomous precision pollination robot. In: IEEE IROS (2018)

  16. Sattler, T., et al.: Benchmarking 6DOF outdoor visual localization in changing conditions. In: IEEE CVPR, vol. 1 (2018)

  17. Sattler, T., Weyand, T., Leibe, B., Kobbelt, L.: Image retrieval for image-based localization revisited. In: BMVC, vol. 1, p. 4 (2012)

  18. Shotton, J., Glocker, B., Zach, C., Izadi, S., Criminisi, A., Fitzgibbon, A.: Scene coordinate regression forests for camera relocalization in RGB-D images. In: IEEE CVPR, pp. 2930–2937 (2013)

  19. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  20. Strisciuglio, N., Lopez-Antequera, M., Petkov, N.: A push-pull layer improves robustness of convolutional neural networks. arXiv preprint arXiv:1901.10208 (2019)

  21. Strisciuglio, N., et al.: TrimBot2020: an outdoor robot for automatic gardening. In: ISR (2018)

  22. Sünderhauf, N., Neubert, P., Protzel, P.: Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons. In: IEEE ICRA (2013)

  23. Sünderhauf, N., Protzel, P.: BRIEF-Gist: closing the loop by simple means. In: IEEE IROS, pp. 1234–1241 (2011)

  24. Torii, A., Sivic, J., Okutomi, M., Pajdla, T.: Visual place recognition with repetitive structures. IEEE Trans. Pattern Anal. Mach. Intell. 37, 2346–2359 (2015)

  25. Walter, A., et al.: Flourish: a robotic approach for automation in crop management. In: ICPA (2018)

  26. Zheng, S., Song, Y., Leung, T., Goodfellow, I.: Improving the robustness of deep neural networks via stability training. In: IEEE CVPR, pp. 4480–4488 (2016)


Acknowledgements

This work was funded by the European Union's Horizon 2020 programme, under the project TrimBot2020 (grant No. 688007).

Author information

Corresponding author

Correspondence to María Leyva-Vallina.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Leyva-Vallina, M., Strisciuglio, N., Petkov, N. (2019). Place Recognition in Gardens by Learning Visual Representations: Data Set and Benchmark Analysis. In: Vento, M., Percannella, G. (eds) Computer Analysis of Images and Patterns. CAIP 2019. Lecture Notes in Computer Science, vol 11678. Springer, Cham. https://doi.org/10.1007/978-3-030-29888-3_26


  • DOI: https://doi.org/10.1007/978-3-030-29888-3_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-29887-6

  • Online ISBN: 978-3-030-29888-3

  • eBook Packages: Computer Science, Computer Science (R0)
