Cognitive Computation, Volume 1, Issue 2, pp 139–159

Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors

Abstract

The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
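
As a concrete illustration of the style of computing the abstract describes, here is a minimal sketch in the spirit of the binary Spatter Code, one of the models named above. Every particular here is an illustrative assumption rather than the paper's single definitive method: the 10,000-bit dimensionality, XOR as the binding operation, bitwise majority vote as the bundling (superposition) operation, and fraction-of-agreeing-bits as the similarity measure. Other models in this family (e.g., Holographic Reduced Representations) use real- or complex-valued vectors with different operations.

```python
# A minimal sketch of hyperdimensional computing with binary vectors,
# in the spirit of the Spatter Code. Dimensionality, XOR binding,
# majority-vote bundling, and Hamming-based similarity are assumptions
# chosen for illustration.
import numpy as np

D = 10_000                      # "hyperdimensional": thousands of bits
rng = np.random.default_rng(0)

def rand_hv():
    """A random hypervector: each bit independently 0 or 1."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(x, y):
    """Bind two hypervectors with XOR; the result resembles neither input."""
    return x ^ y

def bundle(*vs):
    """Superpose hypervectors by bitwise majority vote (use an odd count
    to avoid ties); the result stays similar to every input."""
    return (np.sum(vs, axis=0, dtype=np.int32) * 2 > len(vs)).astype(np.uint8)

def similarity(x, y):
    """Fraction of agreeing bits: ~0.5 for unrelated vectors, 1.0 for equal."""
    return float(np.mean(x == y))

# Encode the record {x = a, y = b, z = c} holistically as one hypervector:
x, y, z = rand_hv(), rand_hv(), rand_hv()   # roles (variables)
a, b, c = rand_hv(), rand_hv(), rand_hv()   # fillers (values)
record = bundle(bind(x, a), bind(y, b), bind(z, c))

# Unbinding the record with role x yields a noisy copy of filler a,
# identifiable by comparing it against the known fillers:
probe = bind(record, x)
print(similarity(probe, a))   # ~0.75, far above chance
print(similarity(probe, b))   # ~0.5, chance level
```

The very high dimensionality is what makes the decoding reliable: two independent random 10,000-bit vectors agree on almost exactly half their bits, so the roughly 75% agreement of the decoded filler stands out unmistakably from the 50% chance level.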

Keywords

Holographic reduced representation · Holistic record · Holistic mapping · Random indexing · Cognitive code · von Neumann architecture

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

Pentti Kanerva, Center for the Study of Language and Information, Stanford University, Stanford, USA
