Cybernetics and Systems Analysis, Volume 53, Issue 5, pp. 799–820

Index Structures for Fast Similarity Search for Binary Vectors

  • D. A. Rachkovskij
This article reviews index structures for fast similarity search over objects represented by binary vectors (vectors whose components are 0 or 1). Structures for both exact and approximate search by Hamming distance and other similarity measures are considered. The index structures presented are based mainly on hash tables and similarity-preserving hashing, as well as on tree structures, neighborhood graphs, and distributed neural autoassociative memory. The ideas behind both well-known algorithms and algorithms proposed in recent years are outlined.
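To make the hash-table approach concrete, the following is a minimal sketch (not the article's reference implementation) of multi-index hashing for exact Hamming-range search: each b-bit code is split into m disjoint substrings, each substring is indexed in its own hash table, and by the pigeonhole principle any code within Hamming distance r of the query must agree with it to within floor(r/m) bits on at least one substring. All names below (`MultiIndexHash`, `hamming`, etc.) are illustrative, not from the article.

```python
from itertools import combinations
from collections import defaultdict

def hamming(a, b):
    # Hamming distance between two equal-length bit strings
    return sum(x != y for x, y in zip(a, b))

class MultiIndexHash:
    """Exact r-neighbor search over binary codes via substring hashing.

    If two codes differ in at most r bits, then at least one of their m
    disjoint substrings differs in at most floor(r / m) bits, so probing
    each substring table within that smaller radius yields a candidate
    superset that is then verified with exact Hamming distance.
    """
    def __init__(self, codes, m):
        self.codes = list(codes)
        self.m = m
        b = len(self.codes[0])
        step = b // m
        # substring boundaries (the last chunk absorbs any remainder)
        self.bounds = [(i * step, (i + 1) * step if i < m - 1 else b)
                       for i in range(m)]
        self.tables = [defaultdict(list) for _ in range(m)]
        for idx, code in enumerate(self.codes):
            for t, (lo, hi) in enumerate(self.bounds):
                self.tables[t][code[lo:hi]].append(idx)

    def _flips(self, s, radius):
        # enumerate all bit strings within Hamming distance `radius` of s
        yield s
        for d in range(1, radius + 1):
            for pos in combinations(range(len(s)), d):
                chars = list(s)
                for p in pos:
                    chars[p] = '1' if chars[p] == '0' else '0'
                yield ''.join(chars)

    def query(self, q, r):
        sub_r = r // self.m  # per-substring radius from the pigeonhole bound
        cand = set()
        for t, (lo, hi) in enumerate(self.bounds):
            for probe in self._flips(q[lo:hi], sub_r):
                cand.update(self.tables[t].get(probe, []))
        # exact verification of the candidate set
        return sorted(i for i in cand if hamming(self.codes[i], q) <= r)
```

For example, indexing `['00000000', '11111111', '00001111', '00000011']` with `m=4` and querying `'00000001'` at radius 2 returns indices `[0, 3]`, the two codes within Hamming distance 2 of the query.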


Keywords: similarity search, Hamming distance, nearest neighbor, near neighbor, index structure, multi-index hashing, locality-sensitive hashing, tree structure, neighborhood graph, neural autoassociative memory.
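Among the similarity-preserving hashing schemes the survey covers for measures other than Hamming distance, minwise hashing for Jaccard similarity admits a very short sketch. A sparse binary vector is treated as the set of its 1-bit positions; each of k random hash functions contributes the minimum hash value over the set, and the fraction of agreeing minima estimates the Jaccard similarity. The affine hash family and all names below are illustrative assumptions, not the article's own code.

```python
import random

def make_hashers(k, seed=0, prime=2_147_483_647):
    # k random affine functions h(x) = (a*x + b) mod prime over the
    # Mersenne prime 2^31 - 1; a simple stand-in for a min-wise family
    rng = random.Random(seed)
    params = [(rng.randrange(1, prime), rng.randrange(prime)) for _ in range(k)]
    return [lambda x, a=a, b=b: (a * x + b) % prime for a, b in params]

def minhash_signature(items, hashers):
    # per hash function, keep the minimum hash over the item set
    return tuple(min(h(x) for x in items) for h in hashers)

def estimated_jaccard(sig_a, sig_b):
    # P[min-hashes agree] equals the Jaccard similarity of the two sets,
    # so the agreement rate across k functions is an unbiased estimate
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
```

For two sets with true Jaccard similarity 2/3 (say, 80 shared items out of 120 total), a 256-function signature typically estimates the similarity to within a few percent, and identical signatures give exactly 1.0.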




References

  1. C. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval, Cambridge University Press, New York (2008).
  2. R. Datta, D. Joshi, J. Li, and J. Wang, “Image retrieval: Ideas, influences, and trends of the new age,” ACM Computing Surveys, Vol. 40, No. 2, 1–60 (2008).
  3. M. M. Fouad, “Content-based search for image retrieval,” Int. J. Image, Graphics and Signal Processing, Vol. 5, No. 11, 46–52 (2013).
  4. D. A. Rachkovskij, “Distance-based index structures for fast similarity search,” Cybernetics and Systems Analysis, Vol. 53, No. 4, 636–658 (2017).
  5. J. Heinly, E. Dunn, and J.-M. Frahm, “Comparative evaluation of binary features,” in: Proc. ECCV’12 (2012), pp. 759–773.
  6. F. A. Khalifa, N. A. Semary, H. M. El-Sayed, and M. M. Hadhoud, “Local detectors and descriptors for object class recognition,” Int. J. Intelligent Systems and Applications, Vol. 7, No. 10, 12–18 (2015).
  7. Y. Uchida, Local Feature Detectors, Descriptors, and Image Representations: A Survey, arXiv:1607.08368 (2016).
  8. M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet classification using binary convolutional neural networks,” in: Proc. ECCV’16 (2016), pp. 525–542.
  9. I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized neural networks,” in: Proc. NIPS’16 (2016), pp. 4107–4115.
  10. W. Tang, G. Hua, and L. Wang, “How to train a compact binary neural network with high accuracy?” in: Proc. AAAI’17 (2017), pp. 2625–2631.
  11. S. Kumar, J. V. Desai, and S. Mukherjee, “Copy move forgery detection in contrast variant environment using binary DCT vectors,” Int. J. Image, Graphics and Signal Processing, Vol. 7, No. 6, 38–44 (2015).
  12. M. Faruqui and C. Dyer, “Non-distributional word vector representations,” in: Proc. ACL-IJCNLP’15, Vol. 2 (2015), pp. 464–469.
  13. S. Ren, X. Cao, Y. Wei, and J. Sun, “Face alignment at 3000 fps via regressing local binary features,” in: Proc. CVPR’14 (2014), pp. 1685–1692.
  14. D. N. Pavlov, H. Mannila, and P. Smyth, “Beyond independence: Probabilistic models for query approximation on binary transaction data,” IEEE TKDE, Vol. 15, No. 6, 1409–1421 (2003).
  15. J. Wang, H. T. Shen, J. Song, and J. Ji, Hashing for Similarity Search: A Survey, arXiv:1408.2927 (2014).
  16. D. A. Rachkovskij, E. M. Kussul, and T. N. Baidyk, “Building a world model with structure-sensitive sparse binary distributed representations,” BICA, Vol. 3, 64–86 (2013).
  17. D. A. Rachkovskij, “Binary vectors for fast distance and similarity estimation,” Cybernetics and Systems Analysis, Vol. 53, No. 1, 138–156 (2017).
  18. J. Wang, W. Liu, S. Kumar, and S.-F. Chang, “Learning to hash for indexing big data: A survey,” Proc. IEEE, Vol. 104, No. 1, 34–57 (2016).
  19. J. Wang, T. Zhang, J. Song, N. Sebe, and H. T. Shen, “A survey on learning to hash,” IEEE Trans. PAMI (in press).
  20. D. A. Rachkovskij, “Real-valued vectors for fast distance and similarity estimation,” Cybernetics and Systems Analysis, Vol. 52, No. 6, 967–988 (2016).
  21. V. Gaede and O. Günther, “Multidimensional access methods,” ACM Comput. Surv., Vol. 30, No. 2, 170–231 (1998).
  22. C. Böhm, S. Berchtold, and D. A. Keim, “Searching in high-dimensional spaces: Index structures for improving the performance of multimedia databases,” ACM Comput. Surv., Vol. 33, No. 3, 322–373 (2001).
  23. H. Samet, Foundations of Multidimensional and Metric Data Structures, Morgan Kaufmann, San Francisco (2006).
  24. I. S. Haque, V. S. Pande, and W. P. Walters, “Anatomy of high-performance 2D similarity calculations,” J. Chemical Information and Modeling, Vol. 51, No. 9, 2345–2351 (2011).
  25. R. Donaldson, A. Gupta, Y. Plan, and T. Reimer, Random Mappings Designed for Commercial Search Engines, arXiv:1507.05929 (2015).
  26. G. Brodal and L. Gasieniec, “Approximate dictionary queries,” in: Proc. CPM’96 (1996), pp. 65–74.
  27. L. Carter and M. N. Wegman, “Universal classes of hash functions,” J. Computer and System Sciences, Vol. 18, No. 2, 143–154 (1979).
  28. M. L. Fredman, J. Komlós, and E. Szemerédi, “Storing a sparse table with O(1) worst case access time,” J. ACM, Vol. 31, No. 3, 538–544 (1984).
  29. I. Chegrane and D. Belazzougui, “Simple, compact and robust approximate string dictionary,” J. Discrete Algorithms, Vol. 28, 49–60 (2014).
  30. R. Pagh, “Locality-sensitive hashing without false negatives,” in: Proc. SODA’16 (2016), pp. 1–9.
  31. A. Andoni and P. Indyk, “Nearest neighbors in high-dimensional spaces,” in: Handbook of Discrete and Computational Geometry, 3rd ed., Ch. 43 (2017), pp. 1133–1153.
  32. J. Zobel and A. Moffat, “Inverted files for text search engines,” ACM Comput. Surv., Vol. 38, No. 2, 6:1–6:56 (2006).
  33. D. A. Rachkovskij and S. V. Slipchenko, “Similarity-based retrieval with structure-sensitive sparse binary distributed representations,” Computational Intelligence, Vol. 28, No. 1, 106–129 (2012).
  34. S. Ferdowsi, S. Voloshynovskiy, D. Kostadinov, and T. Holotyak, “Fast content identification in high-dimensional feature spaces using sparse ternary codes,” in: Proc. WIFS’16 (2016), pp. 1–6.
  35. R. Weber, H. Schek, and S. Blott, “A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces,” in: Proc. VLDB’98 (1998), pp. 194–205.
  36. N. Tatti, T. Mielikäinen, A. Gionis, and H. Mannila, “What is the dimension of your binary data?” in: Proc. ICDM’06 (2006), pp. 603–612.
  37. J. Alman and R. Williams, “Probabilistic polynomials and Hamming nearest neighbors,” in: Proc. FOCS’15 (2015), pp. 136–150.
  38. D. M. W. Powers, “Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation,” J. Machine Learning Technologies, Vol. 2, No. 1, 37–63 (2011).
  39. M. Muja and D. G. Lowe, “Scalable nearest neighbor algorithms for high dimensional data,” IEEE Trans. PAMI, Vol. 36, No. 11, 2227–2240 (2014).
  40. A. C. Yao and F. F. Yao, “Dictionary look-up with one error,” J. Algorithms, Vol. 25, No. 1, 194–202 (1997).
  41. G. S. Brodal and V. Srinivasan, “Improved bounds for dictionary look-up with one error,” Information Processing Letters, Vol. 75, Nos. 1–2, 57–59 (2000).
  42. R. Cole, L.-A. Gottlieb, and M. Lewenstein, “Dictionary matching and indexing with errors and don’t cares,” in: Proc. STOC’04 (2004), pp. 91–100.
  43. H.-L. Chan, T.-W. Lam, W.-K. Sung, S.-L. Tam, and S.-S. Wong, “A linear size index for approximate pattern matching,” J. Discrete Algorithms, Vol. 9, No. 4, 358–364 (2011).
  44. H. Chan, T. W. Lam, W. Sung, S. Tam, and S. Wong, “Compressed indexes for approximate string matching,” Algorithmica, Vol. 58, No. 2, 263–281 (2010).
  45. D. Greene, M. Parnas, and F. Yao, “Multi-index hashing for information retrieval,” in: Proc. FOCS’94 (1994), pp. 722–731.
  46. S. Wu and U. Manber, “Fast text searching allowing errors,” Communications of the ACM, Vol. 35, No. 10, 83–91 (1992).
  47. G. S. Manku, A. Jain, and A. D. Sarma, “Detecting near-duplicates for web crawling,” in: Proc. WWW’07 (2007), pp. 141–150.
  48. A. X. Liu, S. Ke, and E. Torng, “Large scale Hamming distance query processing,” in: Proc. ICDE’11 (2011), pp. 553–564.
  49. S. Gog and R. Venturini, “Fast and compact Hamming distance index,” in: Proc. SIGIR’16 (2016), pp. 285–294.
  50. X. Zhang, J. Qin, W. Wang, Y. Sun, and J. Lu, “HmSearch: An efficient Hamming distance query processing algorithm,” in: Proc. SSDBM’13 (2013), pp. 19:1–19:12.
  51. M. Norouzi, A. Punjani, and D. J. Fleet, “Fast exact search in Hamming space with multi-index hashing,” IEEE Trans. PAMI, Vol. 36, No. 6, 1107–1119 (2014).
  52. J. Wan, S. Tang, Y. Zhang, L. Huang, and J. Li, “Data driven multi-index hashing,” in: Proc. ICIP’13 (2013), pp. 2670–2673.
  53. Y. Ma, H. Zou, H. Xie, and Q. Su, “Fast search with data-oriented multi-index hashing for multimedia data,” KSII TIIS, Vol. 9, No. 7, 2599–2613 (2015).
  54. M. Wang, X. Feng, and J. Cui, “Multi-index hashing with repeat-bits in Hamming space,” in: Proc. FSKD’15 (2015), pp. 1307–1313.
  55. J. Song, H. T. Shen, J. Wang, Z. Huang, N. Sebe, and J. Wang, “A distance-computation-free search scheme for binary code databases,” IEEE Trans. Multimedia, Vol. 18, No. 3, 484–495 (2016).
  56. E.-J. Ong and M. Bober, “Improved Hamming distance search using variable length hashing,” in: Proc. CVPR’16 (2016), pp. 2000–2008.
  57. S. Eghbali and L. Tahvildari, Cosine Similarity Search with Multi-Index Hashing, arXiv:1610.00574 (2016).
  58. A. Andoni and P. Indyk, “Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions,” Communications of the ACM, Vol. 51, No. 1, 117–122 (2008).
  59. S. Har-Peled, P. Indyk, and R. Motwani, “Approximate nearest neighbor: Towards removing the curse of dimensionality,” Theory Comput., Vol. 8, 321–350 (2012).
  60. A. Shrivastava and P. Li, “Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS),” in: Proc. NIPS’14 (2014), pp. 2321–2329.
  61. M. Charikar, “Similarity estimation techniques from rounding algorithms,” in: Proc. STOC’02 (2002), pp. 380–388.
  62. A. Shrivastava and P. Li, “Asymmetric minwise hashing for indexing binary inner products and set containment,” in: Proc. WWW’15 (2015), pp. 981–991.
  63. A. Andoni, M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni, “Locality-sensitive hashing using stable distributions,” in: Nearest Neighbor Methods for Learning and Vision: Theory and Practice, MIT Press, Cambridge (2006), pp. 61–72.
  64. R. O’Donnell, Y. Wu, and Y. Zhou, “Optimal lower bounds for locality sensitive hashing (except when q is tiny),” ACM TOCT, Vol. 6, No. 1, 5:1–5:13 (2014).
  65. A. Z. Broder, “On the resemblance and containment of documents,” in: Proc. SEQUENCES’97 (1997), pp. 21–29.
  66. A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig, “Syntactic clustering of the web,” Computer Networks and ISDN Systems, Vol. 29, Nos. 8–13, 1157–1166 (1997).
  67. A. Z. Broder, M. Charikar, A. M. Frieze, and M. Mitzenmacher, “Min-wise independent permutations,” J. Comput. System Sci., Vol. 60, 327–336 (1998).
  68. J. Tang and Y. Tian, “A systematic review on minwise hashing algorithms,” Annals of Data Science, Vol. 3, No. 4, 445–468 (2016).
  69. S. Dahlgaard, M. B. T. Knudsen, and M. Thorup, Fast Similarity Sketching, arXiv:1704.04370 (2017).
  70. P. Li and A. C. König, “Theory and applications of b-bit minwise hashing,” Communications of the ACM, Vol. 54, No. 8, 101–109 (2011).
  71. A. Shrivastava, Optimal Densification for Fast and Accurate Minwise Hashing, arXiv:1703.04664 (2017).
  72. A. Shrivastava and P. Li, “In defense of minhash over simhash,” in: Proc. AISTATS’14 (2014), pp. 886–894.
  73. T. D. Ahle, R. Pagh, I. Razenshteyn, and F. Silvestri, “On the complexity of inner product similarity join,” in: Proc. PODS’16 (2016), pp. 151–164.
  74. D. Bera and R. Pratap, “Frequent-itemset mining using locality-sensitive hashing,” in: Proc. COCOON’16 (2016), pp. 143–155.
  75. T. Trzcinski, V. Lepetit, and P. Fua, “Thick boundaries in binary space and their influence on nearest-neighbor search,” Pattern Recognition Letters, Vol. 33, No. 16, 2173–2180 (2012).
  76. M. M. Esmaeili, R. K. Ward, and M. Fatourechi, “A fast approximate nearest neighbor search algorithm in the Hamming space,” IEEE Trans. PAMI, Vol. 34, No. 12, 2481–2488 (2012).
  77. S. Har-Peled and S. Mahabadi, “Proximity in the age of distraction: Robust approximate nearest neighbor search,” in: Proc. SODA’17 (2017), pp. 1–15.
  78. T. D. Ahle, M. Aumüller, and R. Pagh, “Parameter-free locality sensitive hashing for spherical range reporting,” in: Proc. SODA’17 (2017), pp. 239–256.
  79. N. Pham, “Hybrid LSH: Faster near neighbors reporting in high-dimensional space,” in: Proc. EDBT’17 (2017), pp. 454–457.
  80. P. Flajolet, E. Fusy, O. Gandouet, and F. Meunier, “HyperLogLog: The analysis of a near-optimal cardinality estimation algorithm,” in: Proc. AofA’07 (2007), pp. 127–146.
  81. N. Pham and R. Pagh, “Scalability and total recall with fast CoveringLSH,” in: Proc. CIKM’16 (2016), pp. 1109–1118.
  82. A. Becker, L. Ducas, N. Gama, and T. Laarhoven, “New directions in nearest neighbor searching with applications to lattice sieving,” in: Proc. SODA’16 (2016), pp. 10–24.
  83. A. Andoni, T. Laarhoven, I. Razenshteyn, and E. Waingarten, “Optimal hashing-based time-space trade-offs for approximate near neighbors,” in: Proc. SODA’17 (2017), pp. 47–66.
  84. T. Christiani and R. Pagh, “Set similarity search beyond MinHash,” in: Proc. STOC’17 (2017), pp. 1094–1107.
  85. T. D. Ahle, Optimal Las Vegas Locality Sensitive Data Structures, arXiv:1704.02054 (2017).
  86. A. Andoni and I. Razenshteyn, “Optimal data-dependent hashing for approximate near neighbors,” in: Proc. STOC’15 (2015), pp. 793–801.
  87. A. Andoni, I. Razenshteyn, and N. Shekel Nosatzki, “LSH forest: Practical algorithms made theoretical,” in: Proc. SODA’17 (2017), pp. 67–78.
  88. M. Bawa, T. Condie, and P. Ganesan, “LSH forest: Self-tuning indexes for similarity search,” in: Proc. WWW’05 (2005), pp. 651–660.
  89. G. Qian, Q. Zhu, Q. Xue, and S. Pramanik, “Dynamic indexing for multidimensional non-ordered discrete data spaces using a data-partitioning approach,” ACM TODS, Vol. 31, No. 2, 439–484 (2006).
  90. G. Qian, Q. Zhu, Q. Xue, and S. Pramanik, “A space-partitioning-based indexing method for multidimensional non-ordered discrete data spaces,” ACM TOIS, Vol. 23, 79–110 (2006).
  91. C. C. Yan, H. Xie, B. Zhang, Y. Ma, Q. Dai, and Y. Liu, “Fast approximate matching of binary codes with distinctive bits,” Front. Comput. Sci., Vol. 9, No. 5, 741–750 (2015).
  92. D. Galvez-Lopez and J. D. Tardos, “Bags of binary words for fast place recognition in image sequences,” IEEE Trans. Robotics, Vol. 28, No. 5, 1188–1197 (2012).
  93. Q. Luo, S. Zhang, T. Huang, W. Gao, and Q. Tian, “Scalable mobile search with binary phrase,” in: Proc. ICIMCS’13 (2013), pp. 66–70.
  94. J. Niedermayer and P. Kröger, “Retrieval of binary features in image databases: A study,” in: Proc. SISAP’14 (2014), pp. 151–163.
  95. V. Kryzhanovsky, M. Malsagov, J. A. C. Tomas, and I. Zhelavskaya, “On error probability of search in high-dimensional binary space with scalar neural network tree,” in: Proc. NCTA’14 (2014).
  96. M. Tang, Y. Yu, W. G. Aref, Q. M. Malluhi, and M. Ouzzani, “Efficient processing of Hamming-distance-based similarity-search queries over MapReduce,” in: Proc. EDBT’15 (2015), pp. 361–372.
  97. Y. Tao, K. Yi, C. Sheng, and P. Kalnis, “Efficient and accurate nearest neighbor and closest pair search in high-dimensional space,” ACM Trans. Database Syst., Vol. 35, No. 3, 20:1–20:46 (2010).
  98. Z. Jiang, L. Xie, X. Deng, W. Xu, and J. Wang, “Fast nearest neighbor search in the Hamming space,” in: Proc. MMM’16 (2016), pp. 325–336.
  99. Yu. A. Malkov and D. A. Yashunin, Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs, arXiv:1603.09320 (2016).
  100. V. I. Gritsenko, D. A. Rachkovskij, A. A. Frolov, R. Gayler, D. Kleyko, and E. Osipov, “Neural distributed autoassociative memories: A survey,” Cybernetics and Computer Engineering, No. 2 (188), 5–35 (2017).
  101. L. G. Valiant, “Functionality in neural nets,” in: Proc. AAAI’88, Vol. 2 (1988), pp. 629–634.
  102. J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proc. Natl. Acad. Sci. USA, Vol. 79, No. 8, 2554–2558 (1982).
  103. M. Tsodyks and M. Feigelman, “The enhanced storage capacity in neural networks with low activity level,” Europhysics Letters, Vol. 6, No. 2, 101–105 (1988).
  104. A. A. Frolov, D. Husek, and I. P. Muraviev, “Information capacity and recall quality in sparsely encoded Hopfield-like neural network: Analytical approaches and computer simulation,” Neural Networks, Vol. 10, No. 5, 845–855 (1997).
  105. A. A. Frolov, D. Husek, and I. P. Muraviev, “Informational efficiency of sparsely encoded Hopfield-like associative memory,” Optical Memory & Neural Networks, Vol. 12, No. 3, 177–197 (2003).
  106. S. Amari, “Characteristics of sparsely encoded associative memory,” Neural Networks, Vol. 2, No. 6, 451–457 (1989).
  107. J. Heusel, M. Löwe, and F. Vermet, “On the capacity of an associative memory model based on neural cliques,” Statist. Probab. Lett., Vol. 106, 256–261 (2015).
  108. V. Gripon, J. Heusel, M. Löwe, and F. Vermet, “A comparative study of sparse associative memories,” J. Statistical Physics, Vol. 164, 105–129 (2016).
  109. A. A. Frolov, D. Husek, and D. A. Rachkovskij, “Time of searching for similar binary vectors in associative memory,” Cybernetics and Systems Analysis, Vol. 42, No. 5, 615–623 (2006).
  110. G. Palm, “On associative memory,” Biological Cybernetics, Vol. 36, 19–31 (1980).
  111. M. V. Tsodyks, “Associative memory in neural networks with binary synapses,” Mod. Phys. Lett. B, Vol. 4, 713–716 (1990).
  112. A. Frolov, A. Kartashov, A. Goltsev, and R. Folk, “Quality and efficiency of retrieval for Willshaw-like autoassociative networks. I. Correction,” Network, Vol. 6, 513–534 (1995).
  113. F. Schwenker, F. T. Sommer, and G. Palm, “Iterative retrieval of sparsely coded associative memory patterns,” Neural Networks, Vol. 9, 445–455 (1996).
  114. A. A. Frolov, D. A. Rachkovskij, and D. Husek, “On information characteristics of Willshaw-like auto-associative memory,” Neural Network World, Vol. 12, No. 2, 141–157 (2002).
  115. I. Kanter, “Potts-glass models of neural networks,” Physical Review A, Vol. 37, No. 7, 2739–2742 (1988).
  116. M. Löwe and F. Vermet, “The capacity of q-state Potts neural networks with parallel retrieval dynamics,” Statistics and Probability Letters, Vol. 77, No. 4, 1505–1514 (2007).
  117. N. Onizawa, H. Jarollahi, T. Hanyu, and W. J. Gross, “Hardware execution of associative memories based on multiple-valued sparse clustered networks,” IEEE J. Emerging and Selected Topics in Circuits and Systems, Vol. 6, No. 1, 13–24 (2016).
  118. A. Kartashov, A. Frolov, A. Goltsev, and R. Folk, “Quality and efficiency of retrieval for Willshaw-like autoassociative networks. III. Willshaw–Potts model,” Network, Vol. 8, No. 1, 71–86 (1997).
  119. V. Gripon and C. Berrou, “Sparse neural networks with large learning diversity,” IEEE Trans. Neural Networks, Vol. 22, No. 7, 1087–1096 (2011).
  120. M. Tsodyks, “Associative memory in asymmetric diluted network with low level of activity,” Europhysics Letters, Vol. 7, No. 3, 203–208 (1988).
  121. J. Buckingham and D. Willshaw, “On setting unit thresholds in an incompletely connected associative net,” Network, Vol. 4, 441–459 (1993).
  122. C. Yu, V. Gripon, X. Jiang, and H. Jegou, “Neural associative memories as accelerators for binary vector search,” in: Proc. COGNITIVE’15 (2015), pp. 85–89.
  123. A. A. Frolov, D. Husek, I. P. Muraviev, and P. Polyakov, “Boolean factor analysis by attractor neural network,” IEEE Trans. Neural Networks, Vol. 18, No. 3, 698–707 (2007).
  124. P. Peretto and J. J. Niez, “Long term memory storage capacity of multiconnected neural networks,” Biol. Cybern., Vol. 54, No. 1, 53–63 (1986).
  125. P. Baldi and S. S. Venkatesh, “Number of stable points for spin-glasses and neural networks of higher orders,” Physical Review Letters, Vol. 58, No. 9, 913–916 (1987).
  126. D. Krotov and J. J. Hopfield, “Dense associative memory for pattern recognition,” in: Proc. NIPS’16 (2016), pp. 1172–1180.
  127. D. Krotov and J. Hopfield, Dense Associative Memory is Robust to Adversarial Inputs, arXiv:1701.00939 (2017).
  128. M. Demircigil, J. Heusel, M. Löwe, S. Upgang, and F. Vermet, “On a model of associative memory with huge storage capacity,” J. Stat. Phys., Vol. 168, No. 2, 288–299 (2017).
  129. A. Karbasi, A. H. Salavati, and A. Shokrollahi, “Iterative learning and denoising in convolutional neural associative memories,” in: Proc. ICML’13 (2013), pp. 445–453.
  130. A. H. Salavati, K. R. Kumar, and A. Shokrollahi, “Nonbinary associative memory with exponential pattern retrieval capacity and iterative learning,” IEEE Trans. Neural Networks and Learning Systems, Vol. 25, No. 3, 557–570 (2014).
  131. A. Mazumdar and A. S. Rawat, “Associative memory via a sparse recovery model,” in: Proc. NIPS’15 (2015), pp. 2683–2691.
  132. A. Mazumdar and A. S. Rawat, “Associative memory using dictionary learning and expander decoding,” in: Proc. AAAI’17 (2017), pp. 267–273.
  133. M. A. Mansor, M. S. M. Kasihmuddin, and S. Sathasivam, “VLSI circuit configuration using satisfiability logic in Hopfield network,” Int. J. Intelligent Systems and Applications (IJISA), Vol. 8, No. 9, 22–29 (2016).
  134. M. A. Mansor, M. S. M. Kasihmuddin, and S. Sathasivam, “Enhanced Hopfield network for pattern satisfiability optimization,” Int. J. Intelligent Systems and Applications (IJISA), Vol. 8, No. 11, 27–33 (2016).

Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  1. International Scientific-Educational Center of Information Technologies and Systems, NAS of Ukraine and MES of Ukraine, Kyiv, Ukraine
