Nerve Theorems for Fixed Points of Neural Networks

Research in Computational Topology 2

Abstract

Nonlinear network dynamics are notoriously difficult to understand. Here we study a class of recurrent neural networks called combinatorial threshold-linear networks (CTLNs) whose dynamics are determined by the structure of a directed graph. They are a special case of TLNs, a popular framework for modeling neural activity in computational neuroscience. In prior work, CTLNs were found to be surprisingly tractable mathematically. For small networks, the fixed points of the network dynamics can often be completely determined via a series of graph rules that can be applied directly to the underlying graph. For larger networks, it remains a challenge to understand how the global structure of the network interacts with local properties. In this work, we propose a method of covering graphs of CTLNs with a set of smaller directional graphs that reflect the local flow of activity. While directional graphs may or may not have a feedforward architecture, their fixed point structure is indicative of feedforward dynamics. The combinatorial structure of the graph cover is captured by the nerve of the cover. The nerve is a smaller, simpler graph that is more amenable to graphical analysis. We present three nerve theorems that provide strong constraints on the fixed points of the underlying network from the structure of the nerve. We then illustrate the power of these theorems with some examples. Remarkably, we find that the nerve not only constrains the fixed points of CTLNs, but also gives insight into the transient and asymptotic dynamics. This is because the flow of activity in the network tends to follow the edges of the nerve.
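As a concrete illustration of the nerve construction mentioned above: the classical nerve of a cover has one node per cover element, with simplices recording intersections. The minimal Python sketch below computes only the 1-skeleton (nodes and edges) for a hypothetical cover of a vertex set; the directional covers and directed nerves studied in this chapter carry additional structure not captured here.

```python
from itertools import combinations

def nerve_edges(cover):
    """Return the 1-skeleton of the nerve of a cover.

    cover: list of sets of vertices covering a graph's vertex set.
    Nodes of the nerve are indices into `cover`; two nodes are
    joined whenever the corresponding cover elements intersect.
    """
    return [(i, j) for i, j in combinations(range(len(cover)), 2)
            if cover[i] & cover[j]]

# Hypothetical cover of vertices {1,...,6} by three overlapping sets.
cover = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]
print(nerve_edges(cover))  # every pair intersects, so the nerve is a triangle
```

Because each pair of sets in this cover shares a vertex, the nerve here is a single triangle on three nodes, a much smaller object than the covered graph.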

Katherine Morrison and Carina Curto contributed equally to this work.


Notes

  1. A graph is simple if it does not have loops or multiple edges between a pair of vertices.

  2. A fixed point, \(x^*\), of a TLN is a solution that satisfies \(dx_i/dt|_{x^*} = 0\) for each \(i \in [n]\).

  3. This is analogous to the definition of a “good” cover of a topological space, which also requires well-behaved intersections. Nerves of good covers reflect the topology of the underlying space [5, 13].

  4. Recall that we write 123 to denote the fixed point support {1, 2, 3}.

  5. We say that \(\{\nu_1, \ldots, \nu_n\}\) is a simply-embedded partition if every vertex in \(\nu_i\) is treated identically by the rest of the graph. In other words, for every \(j \in V_G \setminus \nu_i\), if \(j \to k\) for some \(k \in \nu_i\), then \(j \to \ell\) for all \(\ell \in \nu_i\).

  6. Each vertex \(k\) in the 5-star has two outgoing edges: \(k \to k+1\) and \(k \to k+2\) (indexing mod 5).
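The fixed point condition of footnote 2 can be checked numerically. The standard CTLN construction from the authors' earlier work (e.g. [8, 14]) takes \(W_{ij} = -1 + \varepsilon\) when \(j \to i\) is an edge of the graph, \(W_{ij} = -1 - \delta\) when it is not, and \(W_{ii} = 0\), with dynamics \(dx_i/dt = -x_i + [\sum_j W_{ij} x_j + \theta]_+\). The sketch below is an illustration (not code from this chapter): it simulates a 3-clique, which is known to support a stable fixed point, using the common parameter choice \(\varepsilon = 0.25\), \(\delta = 0.5\), \(\theta = 1\).

```python
import numpy as np

def ctln_W(edges, n, eps=0.25, delta=0.5):
    """Standard CTLN weight matrix: W[i,j] = -1+eps if j->i is an edge,
    -1-delta if it is not, and 0 on the diagonal."""
    W = np.full((n, n), -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    for (j, i) in edges:          # edge j -> i
        W[i, j] = -1.0 + eps
    return W

# A 3-clique: edges in both directions between every pair of vertices.
n = 3
edges = [(i, j) for i in range(n) for j in range(n) if i != j]
W = ctln_W(edges, n)

# Forward-Euler integration of dx/dt = -x + [W x + theta]_+ .
theta, dt = 1.0, 0.01
x = np.random.default_rng(0).uniform(0, 0.1, n)
for _ in range(20000):
    x = x + dt * (-x + np.maximum(0.0, W @ x + theta))

# A fixed point x* satisfies dx_i/dt|_{x*} = 0 for every i (footnote 2).
# For the symmetric 3-clique this gives x_i = theta/(3 - 2*eps) = 0.4.
residual = np.linalg.norm(-x + np.maximum(0.0, W @ x + theta))
print(x, residual)
```

Running the same simulation on the 5-star of footnote 6 instead would not converge to a stable fixed point; as the chapter discusses, such graphs give rise to dynamic attractors rather than stable equilibria.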

References

  1. Abeles, M.: Local Cortical Circuits: An Electrophysiological Study. Springer, Berlin (1982)


  2. Aviel, Y., Pavlov, E., Abeles, M., Horn, D.: Synfire chain in a balanced network. Neurocomputing 44, 285–292 (2002)


  3. Bel, A., Cobiaga, R., Reartes, W., Rotstein, H.G.: Periodic solutions in threshold-linear networks and their entrainment. SIAM J. Appl. Dyn. Syst. 20(3), 1177–1208 (2021)


  4. Biswas, T., Fitzgerald, J.E.: A geometric framework to predict structure from function in neural networks (2020). Available at https://arxiv.org/abs/2010.09660

  5. Borsuk, K.: On the imbedding of systems of compacta in simplicial complexes. Fundam. Math. 35(1), 217–234 (1948)


  6. Curto, C., Morrison, K.: Pattern completion in symmetric threshold-linear networks. Neural Comput. 28, 2825–2852 (2016)


  7. Curto, C., Degeratu, A., Itskov, V.: Encoding binary neural codes in networks of threshold-linear neurons. Neural Comput. 25, 2858–2903 (2013)


  8. Curto, C., Geneson, J., Morrison, K.: Fixed points of competitive threshold-linear networks. Neural Comput. 31(1), 94–155 (2019)


  9. Curto, C., Geneson, J., Morrison, K.: Stable fixed points of combinatorial threshold-linear networks (2019). Available at https://arxiv.org/abs/1909.02947

  10. Hahnloser, R.H., Sarpeshkar, R., Mahowald, M.A., Douglas, R.J., Seung, H.S.: Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405, 947–951 (2000)


  11. Hahnloser, R.H., Seung, H.S., Slotine, J.J.: Permitted and forbidden sets in symmetric threshold-linear networks. Neural Comput. 15(3), 621–638 (2003)


  12. Hayon, G., Abeles, M., Lehmann, D.: A model for representing the dynamics of a system of synfire chains. J. Comput. Neurosci. 18, 41–53 (2005)


  13. Leray, J.: Sur la forme des espaces topologiques et sur les points fixes des représentations. J. Math. Pures Appl. 9, 95–248 (1945)


  14. Morrison, K., Curto, C.: Predicting neural network dynamics via graphical analysis. In: Robeva, R., Macaulay, M. (eds.) Algebraic and Combinatorial Computational Biology, pp. 241–277. Elsevier, Amsterdam (2018)


  15. Morrison, K., Degeratu, A., Itskov, V., Curto, C.: Diversity of emergent dynamics in competitive threshold-linear networks: a preliminary report (2016). Available at https://arxiv.org/abs/1605.04463

  16. Parmelee, C., Londono Alvarez, J., Curto, C., Morrison, K.: Sequential attractors of combinatorial threshold-linear networks. To appear in SIAM J. Appl. Dyn. Syst. (2022). Available at https://arxiv.org/abs/2107.10244

  17. Parmelee, C., Moore, S., Morrison, K., Curto, C.: Core motifs predict dynamic attractors in combinatorial threshold-linear networks. PLoS ONE 17(3), e0264456 (2022). https://doi.org/10.1371/journal.pone.0264456


  18. Seung, H.S., Yuste, R.: Neural networks. In: Principles of Neural Science, 5th edn., Appendix E, pp. 1581–1600. McGraw-Hill Education/Medical, New York (2012)


  19. Xie, X., Hahnloser, R.H., Seung, H.S.: Selectively grouping neurons in recurrent networks of lateral inhibition. Neural Comput. 14, 2627–2646 (2002)


Acknowledgements

This research is a product of one of the working groups at the Workshop for Women in Computational Topology (WinCompTop) in Canberra, Australia (1–5 July 2019). We thank the organizers of this workshop, and we gratefully acknowledge the funding from NSF award CCF-1841455, the Mathematical Sciences Institute at ANU, the Australian Mathematical Sciences Institute (AMSI), and the Association for Women in Mathematics that supported participants’ travel. We thank Caitlyn Parmelee for fruitful discussions that helped set the foundation for this work. We would also like to thank Joan Licata for valuable conversations at the WinCompTop workshop. CC and KM acknowledge funding from NIH R01 EB022862, NIH R01 NS120581, NSF DMS-1951165, and NSF DMS-1951599.

Author information

Corresponding author

Correspondence to Carina Curto.

Copyright information

© 2022 The Author(s) and the Association for Women in Mathematics

About this chapter

Cite this chapter

Santander, D.E. et al. (2022). Nerve Theorems for Fixed Points of Neural Networks. In: Gasparovic, E., Robins, V., Turner, K. (eds) Research in Computational Topology 2. Association for Women in Mathematics Series, vol 30. Springer, Cham. https://doi.org/10.1007/978-3-030-95519-9_6
