Cybernetics and Systems Analysis, Volume 42, Issue 5, pp 615–623

Time of searching for similar binary vectors in associative memory

  • A. A. Frolov
  • D. Husek
  • D. A. Rachkovskii
Cybernetics

Abstract

The times required to find similar binary vectors in neural-net and in traditional associative memories are investigated and compared. The neural-net approach is shown to outperform the traditional ones, even when implemented on a serial computer, once the entropy of the signal space is on the order of several hundred bits and the number of stored vectors vastly exceeds that entropy.
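The comparison described in the abstract can be sketched in miniature (this is an illustration, not the paper's implementation: the dense bipolar coding, network size, and pattern count below are assumptions chosen for a small runnable example). The "neural-net" route stores vectors in a Hebbian Hopfield network and retrieves by iterated thresholding; the "traditional" route scans every stored vector for the nearest one in Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 3  # illustrative network size and number of stored vectors

# Store m random bipolar (+1/-1) vectors with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(m, n))
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0)  # no self-connections

def hopfield_recall(x, steps=10):
    """Neural-net search: synchronous Hopfield updates drive the probe
    toward a stored vector similar to it."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

def exhaustive_search(x):
    """Traditional search: scan all stored vectors and return the one
    nearest to the probe in Hamming distance."""
    dists = np.count_nonzero(patterns != x, axis=1)
    return patterns[np.argmin(dists)]

# Corrupt a stored vector in a few positions and retrieve it both ways.
noisy = patterns[0].copy()
flip = rng.choice(n, size=4, replace=False)
noisy[flip] *= -1
recalled = hopfield_recall(noisy)
nearest = exhaustive_search(noisy)
```

The exhaustive scan costs time proportional to the number of stored vectors, whereas each Hopfield iteration costs a fixed matrix-vector product; the paper's point is that with sparse coding the neural-net variant wins when very many vectors are stored.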

Keywords

associative memory, neural network, Hopfield network, binary vector, indexing, hashing

Copyright information

© Springer Science+Business Media, Inc. 2006

Authors and Affiliations

  • A. A. Frolov (1)
  • D. Husek (2)
  • D. A. Rachkovskii (3)
  1. Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
  2. Institute of Informatics, Academy of Sciences of the Czech Republic, Prague, Czechia
  3. International Scientific and Training Center of Information Technologies and Systems, National Academy of Sciences of Ukraine and Ministry of Education and Science of Ukraine, Kiev, Ukraine