
A Predictive Coding Account for Chaotic Itinerancy

  • Conference paper

Artificial Neural Networks and Machine Learning – ICANN 2021 (ICANN 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12891)

Abstract

As a phenomenon in dynamical systems allowing autonomous switching between stable behaviors, chaotic itinerancy has gained interest in neurorobotics research. In this study, we draw a connection between this phenomenon and the predictive coding theory by showing how a recurrent neural network implementing predictive coding can generate neural trajectories similar to chaotic itinerancy in the presence of input noise. We propose two scenarios generating random and past-independent attractor switching trajectories using our model.

This work was funded by the CY Cergy-Paris University Foundation (Facebook grant) and partially by Labex MME-DII, France (ANR11-LBX-0023-01).


References

  1. Buckley, C.L., Kim, C.S., McGregor, S., Seth, A.K.: The free energy principle for action and perception: a mathematical review. J. Math. Psychol. 81, 55–79 (2017). https://doi.org/10.1016/j.jmp.2017.09.004


  2. Clark, A.: Whatever next? predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36(3), 181–204 (2013). https://doi.org/10.1017/S0140525X12000477


  3. Friston, K., Kiebel, S.: Predictive coding under the free-energy principle. Philos. Trans. Roy. Soc. Lond. Ser. B Biol. Sci. 364, 1211–1221 (2009)


  4. Ikeda, K., Otsuka, K., Matsumoto, K.: Maxwell-Bloch turbulence. Prog. Theoret. Phys. Suppl. 99, 295–324 (1989). https://doi.org/10.1143/PTPS.99.295

  5. Inoue, K., Nakajima, K., Kuniyoshi, Y.: Designing spontaneous behavioral switching via chaotic itinerancy. Sci. Adv. 6(46) (2020). https://doi.org/10.1126/sciadv.abb3989

  6. Kaneko, K.: Clustering, coding, switching, hierarchical ordering, and control in a network of chaotic elements. Physica D Nonlinear Phenomena 41(2), 137–172 (1990). https://doi.org/10.1016/0167-2789(90)90119-A


  7. Kaneko, K., Tsuda, I.: Chaotic itinerancy. Chaos Interdisc. J. Nonlinear Sci. 13(3), 926–936 (2003). https://doi.org/10.1063/1.1607783

  8. Laje, R., Buonomano, D.: Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat. Neurosci. 16(7), 925–935 (2013)


  9. Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3(3), 127–149 (2009). https://doi.org/10.1016/j.cosrev.2009.03.005

  10. Namikawa, J., Nishimoto, R., Tani, J.: A neurodynamic account of spontaneous behaviour. PLoS Comput. Biol. 7(10), 1–13 (2011). https://doi.org/10.1371/journal.pcbi.1002221

  11. Ororbia, A., Mali, A., Giles, C.L., Kifer, D.: Continual learning of recurrent neural networks by locally aligning distributed representations. IEEE Trans. Neural Netw. Learn. Syst. 31(10), 4267–4278 (2020)


  12. Rao, R., Ballard, D.: Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87 (1999)


  13. Taylor, G.W., Hinton, G.E.: Factored conditional restricted Boltzmann machines for modeling motion style. In: Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, pp. 1025–1032. Association for Computing Machinery, New York (2009)


  14. Tsuda, I.: Chaotic itinerancy as a dynamical basis of hermeneutics in brain and mind. World Futures 32(2–3), 167–184 (1991). https://doi.org/10.1080/02604027.1991.9972257


  15. Yamashita, Y., Tani, J.: Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment. PLoS Comput. Biol. 4(11), 1–18 (2008). https://doi.org/10.1371/journal.pcbi.1000220


Author information


Corresponding author

Correspondence to Louis Annabi.


Appendices

A Free-Energy Derivations

In this section, we provide the derivations for Eq. 5. We start from the following probabilistic graphical model:

$$\begin{aligned} p(\mathbf {c}) = \sum _{k=1}^p \pi _k \mathcal {N}(\mathbf {c} ; \mathbf {\mu }_k, \sigma _c^2 \mathbb {I}_p) \end{aligned}$$
(11)
$$\begin{aligned} p(\mathbf {h}|\mathbf {c}) = \mathcal {N}(\mathbf {h} ; f(\mathbf {c}, \mathbf {h}_{t-1}), \sigma _h^2 \mathbb {I}_n) \end{aligned}$$
(12)
$$\begin{aligned} p(\mathbf {x}|\mathbf {h}) = \mathcal {N}(\mathbf {x} ; g(\mathbf {h}), \sigma _x^2 \mathbb {I}_2) \end{aligned}$$
(13)

where f and g correspond to the top-down predictions described in Eqs. 2 and 4, respectively. Note that here \(\mathbf {c}\), \(\mathbf {h}\) and \(\mathbf {x}\) denote random variables, and should not be confused with the variables of the computational model presented in the main text. Since the free energy will be used to perform inference on the hidden variables, and it is not possible to update the past hidden variable \(\mathbf {h}_{t-1}\), we treat it as a parameter rather than a random variable of the probabilistic model. We only perform inference on \(\mathbf {c}\) and \(\mathbf {h} = \mathbf {h}_t\).
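
To make the generative model concrete, ancestral sampling through Eqs. 11–13 can be sketched as follows. This is a minimal illustration only: the dimensions, weight matrices, and the tanh/linear stand-ins for f and g below are placeholders, not the paper's actual network (f and g are defined in Eqs. 2 and 4 of the main text).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and noise levels (not the paper's values).
p, n = 3, 8                                # mixture components / hidden units
sigma_c, sigma_h, sigma_x = 0.1, 0.05, 0.01

pi = np.full(p, 1.0 / p)                   # mixture weights pi_k
mu = rng.standard_normal((p, p))           # component means mu_k
W_ch = rng.standard_normal((n, p)) * 0.3   # placeholder weights for f
W_hh = rng.standard_normal((n, n)) * 0.3
W_hx = rng.standard_normal((2, n)) * 0.3   # placeholder weights for g

def f(c, h_prev):
    # Stand-in for the top-down transition of Eq. 2 (here a tanh RNN step).
    return np.tanh(W_ch @ c + W_hh @ h_prev)

def g(h):
    # Stand-in for the output mapping of Eq. 4 (here a linear readout).
    return W_hx @ h

# Ancestral sampling through Eqs. (11)-(13).
k = rng.choice(p, p=pi)                               # pick a mixture component
c = mu[k] + sigma_c * rng.standard_normal(p)          # c ~ N(mu_k, sigma_c^2 I_p)
h_prev = np.zeros(n)
h = f(c, h_prev) + sigma_h * rng.standard_normal(n)   # h ~ N(f(c, h_{t-1}), sigma_h^2 I_n)
x = g(h) + sigma_x * rng.standard_normal(2)           # x ~ N(g(h), sigma_x^2 I_2)
```

Sampling top-down like this is only the generative direction; the derivations below concern the inverse problem of inferring \(\mathbf {c}\) and \(\mathbf {h}\) from an observed \(\mathbf {x}^*\).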

We introduce approximate posterior density functions \(q(\mathbf {h})\) and \(q(\mathbf {c})\), assumed to be Gaussian distributions with means \(\mathbf {m}_h\) and \(\mathbf {m}_c\). Given a target for \(\mathbf {x}\), denoted \(\mathbf {x}^*\), the variational free energy is defined as:

$$\begin{aligned} \mathcal {E}(\mathbf {x}^*, \mathbf {m}_h,\mathbf {m}_c)&= \text {KL}(q(\mathbf {c}, \mathbf {h}) \,||\, p(\mathbf {c}, \mathbf {h}, \mathbf {x}^*)) \end{aligned}$$
(14)
$$\begin{aligned}&= - \mathbb {E}_q[\log p(\mathbf {c}, \mathbf {h}, \mathbf {x^*})] + \mathbb {E}_q[\log q(\mathbf {c}, \mathbf {h})] \end{aligned}$$
(15)

The second term of Eq. 15 is the entropy of the approximate posterior distribution, which, under the Gaussian assumption, does not depend on \(\mathbf {m}_h\) and \(\mathbf {m}_c\). As such, this term plays no role in the derivation of the update rules for \(\mathbf {m}_h\) and \(\mathbf {m}_c\), and is replaced by the constant \(C_1\) in the remainder of the derivations. Using the Gaussian assumption, we can also simplify the first term of Eq. 15; grouping the terms that do not depend on \(\mathbf {m}_h\) and \(\mathbf {m}_c\) under the constant \(C_2\), we obtain the following result:

$$\begin{aligned} \mathcal {E}(\mathbf {x}^*, \mathbf {m}_h,\mathbf {m}_c)&= -\log p(\mathbf {x}^*|\mathbf {m}_h) - \log p(\mathbf {m}_h|\mathbf {m}_c) - \log p(\mathbf {m}_c) + C_1 + C_2 \end{aligned}$$
(16)
$$\begin{aligned}&= \frac{\Vert \mathbf {x}^* - g(\mathbf {m}_h)\Vert ^2}{2\sigma _x^2} + \frac{\Vert \mathbf {m}_h - f(\mathbf {m}_c, \mathbf {h}_{t-1})\Vert ^2}{2\sigma _h^2} - \log p(\mathbf {m}_c) + C \end{aligned}$$
(17)

where \(C = C_1 + C_2 + C_3\), and \(C_3\) collects the additional constant terms obtained when expanding the two Gaussian log-likelihoods.
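
Inference then amounts to gradient descent on this free energy with respect to \(\mathbf {m}_h\) and \(\mathbf {m}_c\). The following is a minimal numerical sketch under stated assumptions: toy linear/tanh stand-ins for g and f (the real ones are Eqs. 2 and 4), illustrative dimensions and precisions, and a flat prior on \(\mathbf {m}_c\) so that the \(-\log p(\mathbf {m}_c)\) term is dropped.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy shapes and noise levels (illustrative, not the paper's values).
p, n = 3, 8
sigma_h, sigma_x = 0.5, 0.1
W_ch = rng.standard_normal((n, p)) * 0.3   # placeholder weights for f
W_hh = rng.standard_normal((n, n)) * 0.3
W_hx = rng.standard_normal((2, n)) * 0.3   # placeholder weights for g

h_prev = rng.standard_normal(n) * 0.1      # fixed past hidden state h_{t-1}
x_star = np.array([0.5, -0.2])             # target observation x*

def free_energy(m_c, m_h):
    # Eq. (17) with the -log p(m_c) prior term dropped (flat-prior assumption).
    e_x = x_star - W_hx @ m_h
    e_h = m_h - np.tanh(W_ch @ m_c + W_hh @ h_prev)
    return e_x @ e_x / (2 * sigma_x**2) + e_h @ e_h / (2 * sigma_h**2)

m_c, m_h, lr = np.zeros(p), np.zeros(n), 1e-3
E0 = free_energy(m_c, m_h)
for _ in range(200):
    pre = W_ch @ m_c + W_hh @ h_prev
    e_x = x_star - W_hx @ m_h              # sensory prediction error
    e_h = m_h - np.tanh(pre)               # hidden-state prediction error
    grad_h = -W_hx.T @ e_x / sigma_x**2 + e_h / sigma_h**2
    grad_c = -W_ch.T @ ((1 - np.tanh(pre)**2) * e_h) / sigma_h**2
    m_h -= lr * grad_h                     # descend the free energy
    m_c -= lr * grad_c
```

Each update moves the posterior means so as to reduce the precision-weighted prediction errors, which is the predictive-coding reading of free-energy minimization.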

Buckley et al. [1] provide more detailed derivations and deeper insight into the subject.

B Linked Videos

A video showing animated example trajectories in modes A and B is available at https://youtu.be/LRJQr8RmeCY.

Rights and permissions

Reprints and permissions

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Annabi, L., Pitti, A., Quoy, M. (2021). A Predictive Coding Account for Chaotic Itinerancy. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. ICANN 2021. Lecture Notes in Computer Science, vol 12891. Springer, Cham. https://doi.org/10.1007/978-3-030-86362-3_47

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-86362-3_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86361-6

  • Online ISBN: 978-3-030-86362-3

  • eBook Packages: Computer Science (R0)
