Pairwise Probabilistic Voting: Fast Place Recognition without RANSAC

  • Edward David Johns
  • Guang-Zhong Yang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8690)

Abstract

Place recognition currently suffers from a lack of scalability due to the need for strong geometric constraints, which to date have typically been limited to RANSAC implementations. In this paper, we present a method that achieves state-of-the-art performance, in both recognition accuracy and speed, without the need for RANSAC. We propose to discretise each feature pair in an image, in both appearance and 2D geometry, to create a triplet of words: one each for the appearance of the two features, and one for the pairwise geometry. This triplet is then passed through an inverted index to find examples of such pairwise configurations in the database. Finally, a global geometry constraint is enforced by considering the maximum clique in an adjacency graph of pairwise correspondences. The discrete nature of the problem allows for tractable probabilistic scores to be assigned to each correspondence, and the least informative feature pairs can be eliminated from the database for memory and time efficiency. We demonstrate the performance of our method on several large-scale datasets, and show improvements over several baselines.
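The pipeline the abstract describes can be sketched in a few dozen lines: quantise each feature pair into a (word, word, geometry-word) triplet, look the triplet up in an inverted index to vote for database images, and verify a candidate with a maximum clique over mutually consistent correspondences. The sketch below is illustrative only: the feature representation (a visual word plus 2D position), the log-free angle/distance binning, and the small branch-and-bound clique search are all assumptions, not the paper's exact quantisation, scoring, or solver (the paper uses probabilistic scores and Östergård's clique algorithm).

```python
from collections import defaultdict
from itertools import combinations
import math

# A feature is (visual_word, x, y); visual words stand in for quantised
# descriptors, and the descriptor quantiser itself is outside this sketch.

def geometry_word(f1, f2, n_angle_bins=8, n_dist_bins=4, max_dist=500.0):
    """Discretise the 2D geometry of a feature pair into one integer.
    The angle/distance binning here is an illustrative assumption."""
    dx, dy = f2[1] - f1[1], f2[2] - f1[2]
    dist = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) % (2 * math.pi)
    a_bin = min(int(angle / (2 * math.pi) * n_angle_bins), n_angle_bins - 1)
    d_bin = min(int(dist / max_dist * n_dist_bins), n_dist_bins - 1)
    return a_bin * n_dist_bins + d_bin

def build_index(database):
    """Inverted index mapping a triplet (word1, word2, geom) to the
    database pairs exhibiting that configuration."""
    index = defaultdict(list)
    for img_id, feats in database.items():
        for pair_id, (f1, f2) in enumerate(combinations(feats, 2)):
            key = (f1[0], f2[0], geometry_word(f1, f2))
            index[key].append((img_id, pair_id))
    return index

def query(index, feats):
    """Vote for database images that share pairwise configurations
    with the query, ranked by vote count."""
    votes = defaultdict(int)
    for f1, f2 in combinations(feats, 2):
        key = (f1[0], f2[0], geometry_word(f1, f2))
        for img_id, _ in index.get(key, []):
            votes[img_id] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

def max_clique(adj):
    """Exact maximum clique by naive branch and bound, fine for the
    small adjacency graphs of surviving correspondences; a dedicated
    solver (e.g. Östergård's algorithm) scales better."""
    best = []
    nodes = sorted(adj, key=lambda v: -len(adj[v]))
    def expand(clique, cands):
        nonlocal best
        if len(clique) > len(best):
            best = clique[:]
        for i, v in enumerate(cands):
            if len(clique) + len(cands) - i <= len(best):
                break  # cannot beat the best clique found so far
            expand(clique + [v], [u for u in cands[i + 1:] if u in adj[v]])
    expand([], nodes)
    return best
```

For verification, each node of `adj` would be a candidate feature correspondence and an edge would link two correspondences whose pairwise geometries agree; the size of the maximum clique then scores the globally consistent subset.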

Keywords

Place recognition · Location recognition · Instance recognition · Image retrieval · Bag of words · Inverted index

References

  1. Arandjelovic, R., Zisserman, A.: Three things everyone should know to improve object retrieval. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2911–2918 (2012)
  2. Cummins, M., Newman, P.: FAB-MAP: Probabilistic localization and mapping in the space of appearance. International Journal of Robotics Research 27, 647–661 (2008)
  3. Heath, K., Gelfand, N., Ovsjanikov, M., Aanjaneya, M., Guibas, L.J.: Image webs: Computing and exploiting connectivity in image collections. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3432–3439 (2010)
  4. Jegou, H., Douze, M., Schmid, C.: Hamming embedding and weak geometric consistency for large scale image search. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part I. LNCS, vol. 5302, pp. 304–317. Springer, Heidelberg (2008)
  5. Jégou, H., Chum, O.: Negative evidences and co-occurrences in image retrieval: The benefit of PCA and whitening. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part II. LNCS, vol. 7573, pp. 774–787. Springer, Heidelberg (2012)
  6. Johns, E., Yang, G.Z.: From images to scenes: Compressing an image cluster into a single scene model for place recognition. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 874–881 (2011)
  7. Johns, E., Yang, G.Z.: Generative methods for long-term place recognition in dynamic scenes. International Journal of Computer Vision 106(3), 297–314 (2014)
  8. Kalantidis, Y., Tolias, G., Avrithis, Y., Phinikettos, M., Spyrou, E., Mylonas, P., Kollias, S.: VIRaL: Visual image retrieval and localization. Multimedia Tools and Applications 51, 555–591 (2011)
  9. Li, Y., Crandall, D.J., Huttenlocher, D.P.: Landmark classification in large-scale image collections. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1957–1964 (2009)
  10. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60, 91–111 (2004)
  11. Mikulík, A., Perdoch, M., Chum, O., Matas, J.: Learning a fine vocabulary. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part III. LNCS, vol. 6313, pp. 1–14. Springer, Heidelberg (2010)
  12. Östergård, P.R.: A fast algorithm for the maximum clique problem. Discrete Applied Mathematics 120, 197–201 (2002)
  13. Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A.: Lost in quantization: Improving particular object retrieval in large scale image databases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
  14. Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A.: Object retrieval with large vocabularies and fast spatial matching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007)
  15. Raguram, R., Chum, O., Pollefeys, M., Matas, J., Frahm, J.M.: USAC: A universal framework for random sample consensus. Pattern Analysis and Machine Intelligence 35, 2022–2038 (2013)
  16. Raguram, R., Wu, C., Frahm, J.M., Lazebnik, S.: Modeling and recognition of landmark image collections using iconic scene graphs. International Journal of Computer Vision 95, 213–231 (2011)
  17. Sattler, T., Leibe, B., Kobbelt, L.: Fast image-based localization using direct 2D-to-3D matching. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 667–674 (2011)
  18. Shen, X., Lin, Z., Brandt, J., Wu, Y.: Spatially-constrained similarity measure for large-scale object retrieval. Pattern Analysis and Machine Intelligence 36, 1229–1241 (2014)
  19. Tolias, G., Kalantidis, Y., Avrithis, Y., Kollias, S.: Towards large-scale geometry indexing by feature selection. Computer Vision and Image Understanding 120(3), 31–45 (2014)
  20. Tolias, G., Avrithis, Y.: Speeded-up, relaxed spatial matching. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1653–1660 (2011)
  21. Wang, R., Tang, Z., Cao, Q.: An efficient approximation algorithm for finding a maximum clique using Hopfield network learning. Neural Computation 15(7), 1605–1619 (2003)
  22. Wang, X., Yang, M., Cour, T., Zhu, S., Yu, K., Han, T.X.: Contextual weighting for vocabulary tree based image retrieval. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 209–216 (2011)
  23. Wu, Z., Ke, Q., Isard, M., Sun, J.: Bundling features for large scale partial-duplicate web image search. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 25–32 (2010)
  24. Yuan, J., Wu, Y., Yang, M.: Discovery of collocation patterns: from visual words to visual phrases. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007)
  25. Zhang, Y., Jia, Z., Chen, T.: Image retrieval with geometry-preserving visual phrases. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 809–816 (2011)
  26. Zheng, Y., Zhao, M., Song, Y., Adam, H., Buddemeier, U., Bissacco, A., Brucher, F., Chua, T.S., Neven, H.: Tour the world: Building a web-scale landmark recognition engine. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1085–1092 (2009)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Edward David Johns, Imperial College London, UK
  • Guang-Zhong Yang, Imperial College London, UK