
Multidimensional Spectral Hashing

  • Yair Weiss
  • Rob Fergus
  • Antonio Torralba
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7576)

Abstract

With the growing availability of very large image databases, there has been a surge of interest in methods based on “semantic hashing”, i.e. compact binary codes of datapoints such that the Hamming distance between codewords correlates with similarity. In reviewing and comparing existing methods, we show that their relative performance can change drastically depending on the definition of ground-truth neighbors. Motivated by this finding, we propose a new formulation for learning binary codes which seeks to reconstruct the affinity between datapoints, rather than their distances. We show that this criterion is intractable to solve exactly, but a spectral relaxation gives an algorithm where the bits correspond to thresholded eigenvectors of the affinity matrix, and as the number of datapoints goes to infinity these eigenvectors converge to eigenfunctions of Laplace-Beltrami operators, similar to the recently proposed Spectral Hashing (SH) method. Unlike SH, whose performance may degrade as the number of bits increases, the optimal code under our formulation is guaranteed to faithfully reproduce the affinities as the number of bits increases. We show that the number of eigenfunctions needed may grow exponentially with dimension, but introduce a “kernel trick” that lets us compute with an exponentially large number of bits using only memory and computation that grow linearly with dimension. Experiments show that multidimensional spectral hashing (MDSH) outperforms the state of the art, especially in the challenging regime of small distance thresholds.
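
The core construction described above, bits obtained by thresholding eigenvectors of an affinity matrix so that Hamming distance between codes tracks affinity, can be made concrete with a small sketch. The Python below is an illustrative toy under assumed choices (a Gaussian affinity, a bandwidth sigma, eight bits), not the paper's MDSH algorithm, which works with eigenfunctions and a kernel trick rather than a dense eigendecomposition of the full affinity matrix.

```python
# Toy sketch of the thresholded-eigenvector idea from the abstract.
# The Gaussian kernel, sigma, and n_bits are illustrative assumptions,
# not the parameters of the paper's MDSH method.
import numpy as np

def toy_spectral_codes(X, n_bits=8, sigma=1.0):
    """Binary codes from thresholded eigenvectors of the affinity matrix.

    X: (n, d) data matrix. Returns an (n, n_bits) array of {0, 1} bits.
    """
    # Gaussian affinity W[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))

    # Eigenvectors of the symmetric affinity matrix, largest eigenvalue first.
    eigvals, eigvecs = np.linalg.eigh(W)
    order = np.argsort(eigvals)[::-1]
    # Skip the leading eigenvector: it is roughly constant in sign,
    # so thresholding it would give the same bit for every point.
    top = eigvecs[:, order[1:n_bits + 1]]

    # Threshold each eigenvector at zero to get one bit per point.
    return (top > 0).astype(np.uint8)

# Usage: codes for nearby points should agree on more bits.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
codes = toy_spectral_codes(X)
hamming = np.count_nonzero(codes[0] != codes, axis=1)  # distances to point 0
```

Nearby points tend to fall on the same side of the low-frequency eigenvectors, so their codes agree on more bits; this agreement is the correlation between Hamming distance and affinity that the formulation asks for.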

Keywords

Binary Code · Distance Threshold · Optimal Code · Factorization Problem · Kernel Trick

References

  1. Gionis, A., Indyk, P., Motwani, R.: Similarity Search in High Dimensions via Hashing. In: Proc. Intl. Conf. on Very Large Data Bases (1999)
  2. Andoni, A., Indyk, P.: Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In: FOCS, pp. 459–468 (2006)
  3. Raginsky, M., Lazebnik, S.: Locality-sensitive binary codes from shift-invariant kernels. In: NIPS (2009)
  4. Salakhutdinov, R.R., Hinton, G.E.: Semantic hashing. In: SIGIR Workshop on Information Retrieval and Applications of Graphical Models (2007)
  5. Torralba, A., Fergus, R., Weiss, Y.: Small Codes and Large Image Databases for Recognition. In: CVPR (2008)
  6. Weiss, Y., Torralba, A., Fergus, R.: Spectral hashing. In: NIPS (2008)
  7. Xu, H., Wang, J., Li, Z., Zeng, G., Li, S., Yu, N.: Complementary hashing for approximate nearest neighbor search. In: ICCV, pp. 1631–1638 (2011)
  8. Wang, J., Kumar, S., Chang, S.F.: Sequential projection learning for hashing with compact codes. In: ICML, pp. 1127–1134 (2010)
  9. Kulis, B., Darrell, T.: Learning to Hash with Binary Reconstructive Embeddings. In: NIPS (2009)
  10. Norouzi, M., Fleet, D.: Minimal loss hashing for compact binary codes. In: ICML (2011)
  11. Lin, R.-S., Ross, D.A., Yagnik, J.: SPEC hashing: Similarity preserving algorithm for entropy-based coding. In: CVPR (2010)
  12. Gong, Y., Lazebnik, S.: Iterative quantization: A procrustean approach to learning binary codes. In: CVPR (2011)
  13. Srebro, N., Jaakkola, T.: Weighted low-rank approximations. In: ICML, pp. 720–727 (2003)
  14. Kumar, S., Mohri, M., Talwalkar, A.: Sampling techniques for the Nyström method. In: AISTATS (2009)
  15. Coifman, R.R., Lafon, S., Lee, A., Maggioni, M., Nadler, B., Warner, F., Zucker, S.: Geometric diffusions as a tool for harmonic analysis and structure definition of data, part I: Diffusion maps. PNAS 102, 7426–7431 (2005)
  16. Fergus, R., Weiss, Y., Torralba, A.: Semi-supervised learning in gigantic image collections. In: NIPS (2009)
  17. Belkin, M., Niyogi, P.: Towards a theoretical foundation for Laplacian-based manifold methods. J. Comput. Syst. Sci. 74, 1289–1308 (2008)
  18. Nadler, B., Srebro, N., Zhou, X.: Statistical analysis of semi-supervised learning: The limit of infinite unlabelled data. In: NIPS, pp. 1330–1338 (2009)
  19. Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE PAMI 22, 888–905 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Yair Weiss (1)
  • Rob Fergus (2)
  • Antonio Torralba (3)

  1. School of Computer Science, Hebrew University, Israel
  2. Dept. of Computer Science, Courant Institute, New York University, USA
  3. CSAIL, MIT, USA
