Revisiting Additive Quantization

  • Julieta Martinez (corresponding author)
  • Joris Clement
  • Holger H. Hoos
  • James J. Little
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9906)


We revisit Additive Quantization (AQ), an approach to vector quantization that uses multiple, full-dimensional, and non-orthogonal codebooks. Despite its elegant and simple formulation, AQ has failed to achieve state-of-the-art performance on standard retrieval benchmarks, because the encoding problem, which amounts to MAP inference in multiple fully-connected Markov Random Fields (MRFs), has proven hard to solve. We demonstrate that the performance of AQ can be improved to surpass the state of the art by leveraging iterated local search, a stochastic local search approach known to work well for a range of NP-hard combinatorial problems. We further show a direct application of our approach to a recent formulation of vector quantization that enforces sparsity of the codebooks. Unlike previous work, which required specialized optimization techniques, our formulation can be plugged directly into state-of-the-art lasso optimizers. This results in a conceptually simple, easily implemented method that outperforms the previous state of the art in solving sparse vector quantization. Our implementation is publicly available.
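To make the encoding problem concrete, the following is a minimal sketch (not the authors' implementation) of iterated local search for AQ encoding: a vector x is approximated by a sum of one codeword from each of m codebooks, and the discrete code assignment is refined by greedy coordinate updates (ICM-style local search) interleaved with random perturbations. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def encoding_cost(x, codebooks, codes):
    """Squared error of approximating x by the sum of the chosen codewords."""
    approx = sum(cb[c] for cb, c in zip(codebooks, codes))
    return float(np.sum((x - approx) ** 2))

def local_search(x, codebooks, codes):
    """ICM-style pass: update one code at a time, keeping the others fixed,
    until no single-code change strictly reduces the error."""
    m = len(codebooks)
    improved = True
    while improved:
        improved = False
        for i in range(m):
            # Residual that codebook i must explain, given the other codes.
            residual = x - sum(cb[c] for j, (cb, c)
                               in enumerate(zip(codebooks, codes)) if j != i)
            errs = np.sum((codebooks[i] - residual) ** 2, axis=1)
            best = int(np.argmin(errs))
            if errs[best] < errs[codes[i]]:  # strict improvement => terminates
                codes[i] = best
                improved = True
    return codes

def encode_ils(x, codebooks, n_iters=8, n_perturb=2, rng=None):
    """Iterated local search: local search plus random perturbations,
    always keeping the best assignment found so far."""
    rng = np.random.default_rng(rng)
    m, K = len(codebooks), codebooks[0].shape[0]
    codes = [int(c) for c in rng.integers(0, K, size=m)]
    codes = local_search(x, codebooks, codes)
    best_codes, best_cost = codes[:], encoding_cost(x, codebooks, codes)
    for _ in range(n_iters):
        trial = best_codes[:]
        # Perturb a few codes at random, then re-optimize locally.
        for i in rng.choice(m, size=min(n_perturb, m), replace=False):
            trial[i] = int(rng.integers(0, K))
        trial = local_search(x, codebooks, trial)
        cost = encoding_cost(x, codebooks, trial)
        if cost < best_cost:
            best_codes, best_cost = trial[:], cost
    return best_codes, best_cost
```

The perturbation step is what distinguishes iterated local search from plain ICM: it lets the search escape the local minima that make this MAP inference problem hard in fully-connected MRFs.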





We thank NVIDIA for the donation of some of the GPUs used in this project. Joris Clement was supported by DAAD while doing an internship at the University of British Columbia. This research was supported in part by NSERC.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Julieta Martinez (corresponding author), University of British Columbia, Vancouver, Canada
  • Joris Clement, University of British Columbia, Vancouver, Canada
  • Holger H. Hoos, University of British Columbia, Vancouver, Canada
  • James J. Little, University of British Columbia, Vancouver, Canada
