Abstract
As a phenomenon in dynamical systems that allows autonomous switching between stable behaviors, chaotic itinerancy has gained interest in neurorobotics research. In this study, we draw a connection between this phenomenon and predictive coding theory by showing how a recurrent neural network implementing predictive coding can, in the presence of input noise, generate neural trajectories similar to chaotic itinerancy. We propose two scenarios in which our model generates random, history-independent attractor-switching trajectories.
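The core phenomenon, input noise triggering transitions between otherwise stable behaviors, can be illustrated with a minimal toy system that is unrelated to the paper's architecture: a particle in a double-well potential driven by noise. All names and parameter values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def simulate_double_well(steps=20000, dt=0.01, sigma=0.7, seed=0):
    """Euler-Maruyama integration of dx = (x - x^3) dt + sigma dW.

    A toy bistable system used here only to illustrate noise-induced
    switching between coexisting stable states (x = +1 and x = -1);
    it is NOT the predictive-coding network from the paper.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = 1.0  # start at the stable fixed point x = +1
    for t in range(1, steps):
        drift = x[t - 1] - x[t - 1] ** 3  # negative gradient of the double-well potential
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

traj = simulate_double_well()
# Count how many times the trajectory crosses between the two wells.
n_switches = int(np.count_nonzero(np.diff(np.sign(traj)) != 0))
```

With sufficient noise the state visits both wells and switches between them at random, unpredictable times, a (much simpler) analogue of the attractor-switching trajectories studied in the paper.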
This work was funded by the CY Cergy-Paris University Foundation (Facebook grant) and partially by Labex MME-DII, France (ANR11-LBX-0023-01).
References
Buckley, C.L., Kim, C.S., McGregor, S., Seth, A.K.: The free energy principle for action and perception: a mathematical review. J. Math. Psychol. 81, 55–79 (2017). https://doi.org/10.1016/j.jmp.2017.09.004
Clark, A.: Whatever next? predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36(3), 181–204 (2013). https://doi.org/10.1017/S0140525X12000477
Friston, K., Kiebel, S.: Predictive coding under the free-energy principle. Philos. Trans. Roy. Soc. Lond. Ser. B Biol. Sci. 364, 1211–1221 (2009)
Ikeda, K., Otsuka, K., Matsumoto, K.: Maxwell-Bloch turbulence. Prog. Theoret. Phys. Suppl. 99, 295–324 (1989). https://doi.org/10.1143/PTPS.99.295
Inoue, K., Nakajima, K., Kuniyoshi, Y.: Designing spontaneous behavioral switching via chaotic itinerancy. Sci. Adv. 6(46) (2020). https://doi.org/10.1126/sciadv.abb3989
Kaneko, K.: Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements. Physica D Nonlinear Phenomena 41(2), 137–172 (1990). https://doi.org/10.1016/0167-2789(90)90119-A
Kaneko, K., Tsuda, I.: Chaotic itinerancy. Chaos Interdisc. J. Nonlinear Sci. 13(3), 926–936 (2003). https://doi.org/10.1063/1.1607783
Laje, R., Buonomano, D.: Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16(7), 925–935 (2013)
Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3(3), 127–149 (2009). https://doi.org/10.1016/j.cosrev.2009.03.005
Namikawa, J., Nishimoto, R., Tani, J.: A neurodynamic account of spontaneous behaviour. PLoS Comput. Biol. 7(10), 1–13 (2011). https://doi.org/10.1371/journal.pcbi.1002221
Ororbia, A., Mali, A., Giles, C.L., Kifer, D.: Continual learning of recurrent neural networks by locally aligning distributed representations. IEEE Trans. Neural Netw. Learn. Syst. 31(10), 4267–4278 (2020)
Rao, R., Ballard, D.: Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87 (1999)
Taylor, G.W., Hinton, G.E.: Factored conditional restricted Boltzmann machines for modeling motion style. In: Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, pp. 1025–1032. Association for Computing Machinery, New York (2009)
Tsuda, I.: Chaotic itinerancy as a dynamical basis of hermeneutics in brain and mind. World Futures 32(2–3), 167–184 (1991). https://doi.org/10.1080/02604027.1991.9972257
Yamashita, Y., Tani, J.: Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment. PLoS Comput. Biol. 4(11), 1–18 (2008). https://doi.org/10.1371/journal.pcbi.1000220
Appendices
A Free-Energy Derivations
In this section, we provide the derivations for Eq. 5. We start from the following probabilistic graphical model:
where f and g correspond to the top-down predictions described in Eq. 2 and Eq. 4, respectively. Note that here, \(\mathbf {c}\), \(\mathbf {h}\) and \(\mathbf {x}\) denote random variables, and should not be confused with the variables of the computational model presented in the main text. Since free energy will be used to perform inference on the hidden variables, and since it is not possible to update the past hidden variable \(\mathbf {h}_{t-1}\), we treat it as a parameter rather than a random variable of the probabilistic model. We only perform inference on \(\mathbf {c}\) and \(\mathbf {h} = \mathbf {h}_t\).
We introduce approximate posterior density functions \(q(\mathbf {h})\) and \(q(\mathbf {c})\), assumed to be Gaussian distributions with means \(\mathbf {m}_h\) and \(\mathbf {m}_c\). Given a target for \(\mathbf {x}\), denoted \(\mathbf {x}^*\), the variational free energy is defined as:
The second term of Eq. 15 is the entropy of the approximate posterior distribution and, under the Gaussian assumption, does not depend on \(\mathbf {m}_h\) and \(\mathbf {m}_c\). As such, this term is of no interest for deriving the update rules of \(\mathbf {m}_h\) and \(\mathbf {m}_c\), and is replaced by the constant \(C_1\) in the remainder of the derivations. Using the Gaussian assumption, we can also simplify the first term of Eq. 15; grouping the terms that do not depend on \(\mathbf {m}_h\) and \(\mathbf {m}_c\) under the constant \(C_2\), we obtain the following result:
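The resulting expression is not reproduced in this excerpt. Under the Gaussian assumptions above (and additionally assuming unit variances, which is an assumption of this sketch rather than something stated here), the simplified free energy takes the standard predictive-coding form of summed squared prediction errors:

```latex
\mathcal{F}(\mathbf{m}_h, \mathbf{m}_c)
  \;=\; \tfrac{1}{2}\,\bigl\lVert \mathbf{x}^* - g(\mathbf{m}_h) \bigr\rVert^2
  \;+\; \tfrac{1}{2}\,\bigl\lVert \mathbf{m}_h - f(\mathbf{m}_c, \mathbf{h}_{t-1}) \bigr\rVert^2
  \;+\; C
```

where a possible prior term on \(\mathbf{m}_c\) is omitted for brevity in this sketch.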
where \(C = C_1 + C_2 + C_3\) and \(C_3\) corresponds to the additional terms obtained when developing \(\log p(\mathbf {x^*}|\mathbf {h})\) and \(\log p(\mathbf {m}_h|\mathbf {c})\).
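Given such a quadratic form, inference by gradient descent on the free energy yields prediction-error-driven updates of the posterior means. For instance, for \(\mathbf{m}_h\) (with \(\kappa\) a hypothetical step size introduced only for this sketch):

```latex
\Delta \mathbf{m}_h
  \;=\; -\kappa\,\frac{\partial \mathcal{F}}{\partial \mathbf{m}_h}
  \;=\; \kappa \left(
      \frac{\partial g}{\partial \mathbf{m}_h}^{\!\top}
      \bigl(\mathbf{x}^* - g(\mathbf{m}_h)\bigr)
      \;-\;
      \bigl(\mathbf{m}_h - f(\mathbf{m}_c, \mathbf{h}_{t-1})\bigr)
    \right)
```

Each mean is thus pulled both toward explaining the target observation (first term) and toward the top-down prediction coming from the level above (second term).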
More detailed derivations and deeper insight into the subject can be found in [1].
B Linked Videos
Here is the link to a video showing animated example trajectories in modes A and B (https://youtu.be/LRJQr8RmeCY).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Annabi, L., Pitti, A., Quoy, M. (2021). A Predictive Coding Account for Chaotic Itinerancy. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. Lecture Notes in Computer Science, vol 12891. Springer, Cham. https://doi.org/10.1007/978-3-030-86362-3_47
DOI: https://doi.org/10.1007/978-3-030-86362-3_47
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86361-6
Online ISBN: 978-3-030-86362-3