
Cognitive Computation, Volume 6, Issue 1, pp 74–88

Analogical Mapping with Sparse Distributed Memory: A Simple Model that Learns to Generalize from Examples

  • Blerim Emruli
  • Fredrik Sandin

Abstract

We present a computational model for the analogical mapping of compositional structures that combines two existing ideas known as holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type \(x_i \rightarrow y_i\) and computing analogies \(x_j \rightarrow y_j\) for novel inputs \(x_j\). The model has a one-shot learning process, is randomly initialized, and has three exogenous parameters: the dimensionality \(\mathcal{D}\) of representations, the memory size S, and the probability χ for activation of the memory. After learning three examples, the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of χ, S, and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order \(10^4\), which is consistent with a known analytical estimate and the synapse count for most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships, and we calculate the error probabilities for recall and generalization.
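The two ingredients named above can be illustrated with a minimal Python sketch. This is not the authors' implementation: the dimensions are reduced for the demo, the memory activates a fixed number k of nearest locations (standing in for the activation probability χ, with k/S ≈ χ), and all names are illustrative. The mapping vector is formed by XOR binding in the style of binary spatter codes, where XOR is its own inverse, so a stored mapping m = x ⊕ y can be applied holistically as x ⊕ m = y.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 256   # dimensionality of binary vectors (the paper suggests ~10^4)
S = 200   # number of memory locations
k = 10    # activated locations per access (k/S plays the role of chi)

class SDM:
    """Minimal sparse distributed memory: fixed random binary addresses,
    integer counters, activation of the k nearest addresses."""
    def __init__(self, size, dim):
        self.addresses = rng.integers(0, 2, (size, dim), dtype=np.int8)
        self.counters = np.zeros((size, dim), dtype=np.int32)

    def _active(self, query):
        # Hamming distance from the query to every address row;
        # activate the k closest locations.
        dist = np.count_nonzero(self.addresses != query, axis=1)
        return np.argpartition(dist, k)[:k]

    def write(self, addr, data):
        # One-shot storage: add the data in bipolar (+1/-1) form to the
        # counters of every activated location.
        self.counters[self._active(addr)] += 2 * data.astype(np.int32) - 1

    def read(self, addr):
        # Sum the counters of the activated locations and threshold at 0.
        return (self.counters[self._active(addr)].sum(axis=0) > 0).astype(np.int8)

# A mapping example x -> y, encoded as a holistic mapping vector m = x XOR y.
x = rng.integers(0, 2, D, dtype=np.int8)
y = rng.integers(0, 2, D, dtype=np.int8)
m = x ^ y

mem = SDM(S, D)
mem.write(x, m)               # store the mapping vector, addressed by x

recovered = x ^ mem.read(x)   # retrieve the mapping and apply it holistically
print(np.array_equal(recovered, y))   # prints True
```

With a single stored pair, recall is exact: the same k locations activate at read time, so thresholding the summed counters recovers m bit for bit. The generalization studied in the article arises when several structurally related mapping examples are superposed in overlapping locations, so that a novel input \(x_j\) near the stored addresses retrieves a noisy but usable mapping vector.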

Keywords

Analogical mapping · Compositional structures · Distributed representations · Holistic processing · Sparse distributed memory

Notes

Acknowledgments

We thank the anonymous reviewers for their constructive suggestions that helped us to improve this article, Serge Thill for reviewing and commenting on an early version of the manuscript, Ross Gayler for helping us to improve the final text, and Jerker Delsing for comments and encouraging support. This work was partially supported by the Swedish Foundation for International Cooperation in Research and Higher Education (STINT) and the Kempe Foundations.

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. EISLAB, Luleå University of Technology, Luleå, Sweden
