Abstract
Nonlinear network dynamics are notoriously difficult to understand. Here we study a class of recurrent neural networks called combinatorial threshold-linear networks (CTLNs) whose dynamics are determined by the structure of a directed graph. They are a special case of TLNs, a popular framework for modeling neural activity in computational neuroscience. In prior work, CTLNs were found to be surprisingly tractable mathematically. For small networks, the fixed points of the network dynamics can often be completely determined via a series of graph rules that can be applied directly to the underlying graph. For larger networks, it remains a challenge to understand how the global structure of the network interacts with local properties. In this work, we propose a method of covering graphs of CTLNs with a set of smaller directional graphs that reflect the local flow of activity. While directional graphs may or may not have a feedforward architecture, their fixed point structure is indicative of feedforward dynamics. The combinatorial structure of the graph cover is captured by the nerve of the cover. The nerve is a smaller, simpler graph that is more amenable to graphical analysis. We present three nerve theorems that provide strong constraints on the fixed points of the underlying network from the structure of the nerve. We then illustrate the power of these theorems with some examples. Remarkably, we find that the nerve not only constrains the fixed points of CTLNs, but also gives insight into the transient and asymptotic dynamics. This is because the flow of activity in the network tends to follow the edges of the nerve.
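As a concrete illustration of the model class, the following is a minimal NumPy sketch of CTLN dynamics under the standard parametrization from the authors' prior work: W_ii = 0, W_ij = -1 + ε when j → i is an edge of the graph, W_ij = -1 - δ otherwise, with dynamics dx/dt = -x + [Wx + θ]_+. The function names, the adjacency convention adj[j, k] = 1 iff j → k, and the forward-Euler integrator are our own choices, not the chapter's.

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Build the CTLN weight matrix from a directed graph.

    adj[j, k] = 1 means the graph has edge j -> k. Parameters follow
    the standard CTLN convention, with 0 < eps < delta / (delta + 1).
    """
    # W_ij = -1 + eps if j -> i, else -1 - delta; adj.T[i, j] tests j -> i.
    W = np.where(adj.T == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def simulate(W, theta=1.0, x0=None, dt=1e-3, steps=50_000):
    """Forward-Euler integration of dx/dt = -x + [Wx + theta]_+."""
    n = W.shape[0]
    x = np.full(n, 0.1) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
    return x

def is_fixed_point(W, x, theta=1.0, tol=1e-6):
    """Check the fixed point condition dx/dt = 0 at x."""
    return np.allclose(-x + np.maximum(0.0, W @ x + theta), 0.0, atol=tol)
```

For example, on the 2-clique (a bidirectional edge between two vertices), the trajectory converges to the symmetric fixed point x* = (1/(2 - ε), 1/(2 - ε)), whose support is the full clique.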
The authors Katherine Morrison and Carina Curto contributed equally to this work.
Notes
1. A graph is simple if it has no loops and no multiple edges between any pair of vertices.
2. A fixed point, x∗, of a TLN is a solution that satisfies \(dx_i/dt|_{x^*} = 0\) for each i ∈ [n].
3.
4. Recall that we write 123 to denote the fixed point support {1, 2, 3}.
5. We say that {ν1, …, νn} is a simply-embedded partition if every vertex in νi is treated identically by the rest of the graph. In other words, for every j ∈ V(G) ∖ νi, if j → k for some k ∈ νi, then j → ℓ for all ℓ ∈ νi.
6. Each vertex k of the 5-star has exactly two outgoing edges, k → k + 1 and k → k + 2 (indexing mod 5).
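The 5-star construction in Note 6 and the simply-embedded condition in Note 5 are both straightforward to check computationally. The sketch below does so under the assumed adjacency convention adj[j, k] = 1 iff j → k; the function names are illustrative, not from the chapter.

```python
import numpy as np

def five_star():
    """Adjacency matrix of the 5-star: each vertex k sends edges
    k -> k+1 and k -> k+2 (indexing mod 5). adj[j, k] = 1 iff j -> k."""
    adj = np.zeros((5, 5), dtype=int)
    for k in range(5):
        adj[k, (k + 1) % 5] = 1
        adj[k, (k + 2) % 5] = 1
    return adj

def is_simply_embedded(adj, parts):
    """Check that `parts` (a list of vertex lists covering all vertices)
    is a simply-embedded partition: every vertex outside a part sends
    edges either to all vertices of that part or to none of them."""
    for part in parts:
        outside = [j for j in range(adj.shape[0]) if j not in part]
        for j in outside:
            fan_out = {adj[j, k] for k in part}
            if fan_out == {0, 1}:  # j treats vertices of this part differently
                return False
    return True
```

On the 5-star, singleton parts are trivially simply-embedded, whereas a part such as {1, 2} is not, since vertex 4 sends an edge to 1 but not to 2.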
Acknowledgements
This research is a product of one of the working groups at the Workshop for Women in Computational Topology (WinCompTop) in Canberra, Australia (1–5 July 2019). We thank the organizers of this workshop, as well as NSF award CCF-1841455, the Mathematical Sciences Institute at ANU, the Australian Mathematical Sciences Institute (AMSI), and the Association for Women in Mathematics, whose funding supported participants' travel. We thank Caitlyn Parmelee for fruitful discussions that helped set the foundation for this work. We would also like to thank Joan Licata for valuable conversations at the WinCompTop workshop. CC and KM acknowledge funding from NIH R01 EB022862, NIH R01 NS120581, NSF DMS-1951165, and NSF DMS-1951599.
Copyright information
© 2022 The Author(s) and the Association for Women in Mathematics
Cite this chapter
Santander, D.E. et al. (2022). Nerve Theorems for Fixed Points of Neural Networks. In: Gasparovic, E., Robins, V., Turner, K. (eds) Research in Computational Topology 2. Association for Women in Mathematics Series, vol 30. Springer, Cham. https://doi.org/10.1007/978-3-030-95519-9_6
Print ISBN: 978-3-030-95518-2
Online ISBN: 978-3-030-95519-9
eBook Packages: Mathematics and Statistics (R0)