Faster Learning with Overlapping Neural Assemblies

  • Andrei Kursin
  • Dušan Húsek
  • Roman Neruda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)


Cell assemblies in a neural network are often assumed to overlap, i.e. a neuron may belong to several assemblies simultaneously. We argue that network structures with overlapping cell assemblies can learn faster than non-overlapping ones: a newly trained assembly takes advantage of its overlaps with already trained neighbors, yet assemblies learned in this manner preserve the ability to fire separately afterwards. We discuss the implications this may have for speeding up neural network training methods, and we propose to view this learning speed-up in the broader context of inter-assembly cooperation, which is useful for modeling concept formation in human thinking.
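
The mechanism described in the abstract, a new assembly reaching ignition faster because its overlap with an already trained neighbor is pre-strengthened, can be illustrated with a minimal toy model. The sketch below is our illustration, not the authors' model: the network size, assembly size, overlap, learning rate, and ignition threshold (`N`, `SIZE`, `OVERLAP`, `ETA`, `THETA`) are assumed values, and "learned" is operationalized as the mean within-assembly weight reaching an ignition threshold under a saturating Hebbian rule.

```python
import numpy as np

# Toy accounting model of the speed-up: an assembly counts as "learned"
# (able to ignite and sustain its own firing) once the mean weight among
# its members reaches THETA. All sizes, rates and thresholds below are
# illustrative assumptions, not parameters taken from the paper.
N, SIZE, OVERLAP = 200, 30, 20
ETA, W_MAX, THETA = 0.1, 1.0, 0.8

rng = np.random.default_rng(1)
a = rng.choice(N, SIZE, replace=False)                      # assembly A
b = np.concatenate([rng.choice(a, OVERLAP, replace=False),  # B reuses part of A
                    rng.choice(np.setdiff1d(np.arange(N), a),
                               SIZE - OVERLAP, replace=False)])

W = np.zeros((N, N))

def mean_internal(idx):
    """Average recurrent weight an assembly member receives from the others."""
    sub = W[np.ix_(idx, idx)]
    return sub.sum() / (len(idx) * (len(idx) - 1))  # diagonal is kept at zero

def presentations_to_learn(idx):
    """Repeated Hebbian co-activation until the assembly can ignite."""
    steps = 0
    while mean_internal(idx) < THETA:
        sub = W[np.ix_(idx, idx)]
        W[np.ix_(idx, idx)] = sub + ETA * (W_MAX - sub)  # saturating potentiation
        W[idx, idx] = 0.0                                # no self-connections
        steps += 1
    return steps

print("presentations for A (no trained neighbors):", presentations_to_learn(a))
print("presentations for B (overlaps A):          ", presentations_to_learn(b))

# Separate firing remains possible: only within-assembly links were
# potentiated, so A-only and B-only neurons stay unconnected.
a_only, b_only = np.setdiff1d(a, b), np.setdiff1d(b, a)
print("mean weight between A-only and B-only neurons:",
      W[np.ix_(a_only, b_only)].mean())
```

Because B's overlap block is already potentiated from training A, B reaches the ignition criterion in noticeably fewer presentations, while the untouched (zero) weights between A-only and B-only neurons leave the two assemblies able to fire separately, mirroring the claim in the abstract.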


Keywords: Input Pattern · Excitatory Neuron · Faster Learning · Connection Matrix · Input Link


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Andrei Kursin (1)
  • Dušan Húsek (2)
  • Roman Neruda (3)

  1. Information Systems Department, Kharkiv Polytechnic Institute, Kharkiv, Ukraine
  2. Institute of Computer Science, Neural Networks and Nonlinear Systems Department, Prague, Czech Republic
  3. Institute of Computer Science, Theoretical Computer Science Department, Prague, Czech Republic
