Abstract
The brain is without doubt the most complex adaptive system known to humanity, yet arguably also one about which we know very little. Throughout this book we have developed general guiding principles for understanding complex networks and their dynamical properties; principles and concepts that transcend the details of the specific layouts realized in real-world complex systems. We follow the same approach here, considering the brain as a prominent example of what is called a cognitive system, a specific instance of what one may denote, cum grano salis, a living dynamical system.
Notes
- 1.
- 2. Humans can distinguish cognitively about 10–12 objects per second.
- 3. See Edelman and Tononi (2000).
- 4.
- 5. See Crick and Koch (2003).
- 6. Note that neuromodulators are typically released into the intercellular medium, from where they diffuse physically towards the surrounding neurons.
- 7. This is a standard result for so-called Hopfield neural networks; see, e.g., Ballard (2000).
- 8. A neural network is denoted “recurrent” when loops dominate the network topology.
- 9. For a mathematically precise definition, a memory is termed fading when forgetting is scale-invariant, viz. when it follows a power law in time.
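The scale invariance of power-law forgetting can be illustrated numerically. In the sketch below (the exponent and time points are arbitrary example values, not taken from the text), the fraction of a memory trace retained over a doubling of elapsed time is the same at every age for a power law, but age-dependent for exponential decay:

```python
import math

def power_law(t, gamma=0.5):
    """Power-law memory trace w(t) = t**(-gamma); gamma is an example value."""
    return t ** (-gamma)

def exponential(t, tau=10.0):
    """Exponential memory trace, for comparison."""
    return math.exp(-t / tau)

# Retained fraction over a doubling of time, evaluated at different ages t:
# constant (= 2**-gamma) for the power law, but t-dependent for the exponential.
pl_ratios = [power_law(2 * t) / power_law(t) for t in (1.0, 10.0, 100.0)]
ex_ratios = [exponential(2 * t) / exponential(t) for t in (1.0, 10.0, 100.0)]
```

No characteristic timescale can be read off the power-law trace, which is exactly what "scale-invariant forgetting" means.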
- 10. We note that general n-point interactions could additionally be generated when eliminating the interneurons. “n-point interactions” are terms entering the time evolution of dynamical systems that depend on (n − 1) variables. Normal synaptic interactions are 2-point interactions, as they involve two neurons, the presynaptic and the postsynaptic neuron. When a degree of freedom, such as the activity of the interneurons, is integrated out, n-point interactions are generated generically. The postsynaptic neuron is then influenced only when (n − 1) presynaptic neurons are active simultaneously. n-point interactions are normally not considered in neural-network theory, as they complicate the analysis of the network dynamics considerably.
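As a minimal sketch of a 3-point interaction (the function names and the product form of the coupling are our own illustrative choices, not taken from the text), the input to a postsynaptic neuron i contains terms w[i][j][k]·x[j]·x[k], which contribute only when both presynaptic neurons j and k are active simultaneously:

```python
def three_point_drive(w3, x):
    """Total 3-point input to each neuron i: sum over j, k of w3[i][j][k] * x[j] * x[k].
    A term contributes only when both presynaptic activities x[j] and x[k] are nonzero."""
    n = len(x)
    return [sum(w3[i][j][k] * x[j] * x[k] for j in range(n) for k in range(n))
            for i in range(n)]

# Toy network of three neurons with a single 3-point coupling onto neuron 0.
w3 = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
w3[0][1][2] = 1.0
drive_both_active = three_point_drive(w3, [0.0, 1.0, 1.0])  # both presynaptic neurons active
drive_one_active = three_point_drive(w3, [0.0, 1.0, 0.0])   # only one presynaptic neuron active
```

The first evaluation drives neuron 0; the second produces no drive at all, mirroring the statement that (n − 1) presynaptic neurons must be simultaneously active.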
- 11. Here we use the term “transient attractor” as a synonym for “attractor ruin”, an alternative terminology from dynamical systems theory.
- 12. A possible mathematical implementation for the reservoir functions, with α = w, z, is \(f_{\alpha }(\varphi )\ =\ f_{\alpha }^{(\min )}\, +\, \left (1 - f_{\alpha }^{(\min )}\right ) \frac{\mathrm{atan}[(\varphi -\varphi _{c}^{(\alpha )})/\Gamma _{\varphi }]-\mathrm{atan}[(0-\varphi _{c}^{(\alpha )})/\Gamma _{\varphi }]} {\mathrm{atan}[(1-\varphi _{c}^{(\alpha )})/\Gamma _{\varphi }]-\mathrm{atan}[(0-\varphi _{c}^{(\alpha )})/\Gamma _{\varphi }]}\). Suitable values are \(\varphi _{c}^{(z)} = 0.15\), \(\varphi _{c}^{(w)} = 0.7\), \(\Gamma _{\varphi } = 0.05\), \(f_{w}^{(\min )} = 0.1\), and \(f_{z}^{(\min )} = 0\).
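This reservoir function transcribes directly into code. The following sketch (function and argument names are our own) implements the rescaled-arctan form, which by construction satisfies f(0) = f^(min) and f(1) = 1, and instantiates it with the parameter values quoted above:

```python
import math

def reservoir_fn(phi, phi_c, gamma=0.05, f_min=0.0):
    """Reservoir function: a rescaled arctan sigmoid with threshold phi_c and
    width gamma, normalized so that f(0) = f_min and f(1) = 1."""
    def a(x):
        return math.atan((x - phi_c) / gamma)
    return f_min + (1.0 - f_min) * (a(phi) - a(0.0)) / (a(1.0) - a(0.0))

# The parameter values quoted in the note, for alpha = z and alpha = w:
f_z = lambda phi: reservoir_fn(phi, phi_c=0.15, f_min=0.0)
f_w = lambda phi: reservoir_fn(phi, phi_c=0.70, f_min=0.1)
```

Both functions rise monotonically through their respective thresholds φc, with the width Γφ = 0.05 controlling the sharpness of the crossover.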
- 13. A Kohonen network is an example of a neural classifier via one-winner-takes-all architecture; see, e.g., Ballard (2000).
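A minimal sketch of the one-winner-takes-all step underlying such a classifier (an illustrative toy, not Ballard's formulation): the unit whose prototype lies closest to the input wins, and in a Kohonen-style update the winner's prototype is then pulled a small step toward that input:

```python
def winner(x, prototypes):
    """Index of the prototype closest to input x (squared Euclidean distance)."""
    def d2(p):
        return sum((xi - pi) ** 2 for xi, pi in zip(x, p))
    return min(range(len(prototypes)), key=lambda i: d2(prototypes[i]))

def kohonen_update(x, prototypes, eta=0.1):
    """One-winner-takes-all learning step: move only the winning prototype
    a fraction eta of the way toward the input x. Returns the winner index."""
    i = winner(x, prototypes)
    prototypes[i] = [pi + eta * (xi - pi) for xi, pi in zip(x, prototypes[i])]
    return i

protos = [[0.0, 0.0], [1.0, 1.0]]
w = kohonen_update([0.9, 0.8], protos)  # unit 1 wins and moves toward the input
```

Only the winning unit learns; the losing prototype is left untouched, which is the defining feature of the one-winner-takes-all architecture.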
References
Abeles, M., et al. 1995 Cortical activity flips among quasi-stationary states. Proceedings of the National Academy of Sciences, USA 92, 8616–8620.
Arbib, M.A. 2002 The Handbook of Brain Theory and Neural Networks. MIT Press, Cambridge, MA.
Baars, B.J., Franklin, S. 2003 How conscious experience and working memory interact. Trends in Cognitive Sciences 7, 166–172.
Ballard, D.H. 2000 An Introduction to Natural Computation. MIT Press, Cambridge, MA.
Carpenter, G.A. 2001 Neural-network models of learning and memory: Leading questions and an emerging framework. Trends in Cognitive Sciences 5, 114–118.
Chechik, G., Meilijson, I., Ruppin, E. 2001 Effective neuronal learning with ineffective Hebbian learning rules. Neural Computation 13, 817.
Chialvo, D.R., Bak, P. 1999 Learning from mistakes. Neuroscience 90, 1137–1148.
Crick, F.C., Koch, C. 2003 A framework for consciousness. Nature Neuroscience 6, 119–126.
Dayan, P., Abbott, L.F. 2001 Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, Cambridge, MA.
Dehaene, S., Naccache, L. 2003 Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 79, 1–37.
Dorffner, G. 1996 Neural networks for time series processing. Neural Network World 6, 447–468.
Doya, K. 1999 What are the computations of the cerebellum, the basal ganglia and the cerebral cortex? Neural Networks 12, 961–974.
Edelman, G.M., Tononi, G.A. 2000 A Universe of Consciousness. Basic Books, New York.
Elman, J.L. 1990 Finding structure in time. Cognitive Science 14, 179–211.
Elman, J.L. 2004 An alternative view of the mental lexicon. Trends in Cognitive Sciences 8, 301–306.
Gros, C. 2007 Neural networks with transient state dynamics. New Journal of Physics 9, 109.
Gros, C. 2009a Cognitive computation with autonomously active neural networks: An emerging field. Cognitive Computation 1, 77.
Gros, C. 2009b Emotions, diffusive emotional control and the motivational problem for autonomous cognitive systems. In: Vallverdu, J., Casacuberta, D. (eds.) Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence. IGI-Global, Hershey, NJ.
Gros, C. 2010 Cognition and Emotion: Perspectives of a Closing Gap. Cognitive Computation 2, 78–85.
Gros, C. 2012 Emotional control - conditio sine qua non for advanced artificial intelligences? In: Müller, V.C. (ed.) Philosophy and Theory of Artificial Intelligence. Springer.
Kaelbling, L.P., Littman, M.L., Moore, A. 1996 Reinforcement learning: A survey. Journal of Artificial Intelligence Research 4, 237–285.
Kenet, T., Bibitchkov, D., Tsodyks, M., Grinvald, A., Arieli, A. 2003 Spontaneously emerging cortical representations of visual attributes. Nature 425, 954–956.
Liu, H., Singh, P. 2004 ConceptNet: A practical commonsense reasoning tool-kit. BT Technology Journal 22, 211–226.
Marković, D., Gros, C. 2010 Self-organized chaos through polyhomeostatic optimization. Physical Review Letters 105, 068702.
Marković, D., Gros, C. 2012 Intrinsic Adaptation in Autonomous Recurrent Neural Networks. Neural Computation 24, 523–540.
McLeod, P., Plunkett, K., Rolls, E.T. 1998 Introduction to Connectionist Modelling. Oxford University Press, New York.
Nelson, D.L., McEvoy, C.L., Schreiber, T.A. 1998 The University of South Florida Word Association, Rhyme, and Word Fragment Norms. Homepage: http://w3.usf.edu/FreeAssociation.
O’Reilly, R.C., Munakata, Y. 2000 Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. MIT Press, Cambridge.
Rabinovich, M.I., Varona, P., Selverston, A.I., Abarbanel, H.D.I. 2006 Dynamical principles in neuroscience. Reviews of Modern Physics 78, 1213–1256.
Russell, S.J., Norvig, P. 1995 Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ.
© 2013 Springer-Verlag Berlin Heidelberg
Gros, C. (2013). Elements of Cognitive Systems Theory. In: Complex and Adaptive Dynamical Systems. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36586-7_8
Print ISBN: 978-3-642-36585-0
Online ISBN: 978-3-642-36586-7