
Stimulus-induced sequential activity in supervisely trained recurrent networks of firing rate neurons

  • Original paper
  • Journal: Nonlinear Dynamics

Abstract

In this work, we consider recurrent neural networks of firing rate neurons trained in a supervised manner to generate multidimensional sequences of given configurations. We study the dynamical objects in the network's multidimensional phase space that underlie successfully trained outputs, and we analyze the spatiotemporal neural activity and its features in three cases. First, we consider the autonomous generation of complex sequences by output units driven by a recurrent network. Second, we study how input pulses can trigger different output units. Third, we explore the case where input pulses allow us to switch between different sequential activities of output units.
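
To make the setup concrete, below is a minimal sketch of the kind of system the abstract describes: a recurrent network of firing-rate (tanh) neurons whose linear readout is trained online, FORCE-style, by recursive least squares to reproduce a target sequence. This is an illustration under our own assumptions, not the authors' exact model: the network size, gain, target function, and all names (N, g, tau, w_out, w_fb) are ours.

```python
# Minimal sketch of a firing-rate recurrent network with a trained linear
# readout. Assumptions: standard tanh rate dynamics with readout feedback,
# and FORCE-style recursive least squares (RLS) for the output weights;
# all parameter values and the target sequence are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, tau = 500, 1.5, 1e-3, 1e-2              # size, gain, time step, time constant
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights
w_out = np.zeros(N)                               # readout weights (learned)
w_fb = rng.uniform(-1.0, 1.0, N)                  # feedback from readout to network

P = np.eye(N)                                     # RLS inverse correlation matrix
x = 0.5 * rng.standard_normal(N)                  # network state
T = 20000
target = np.sin(2 * np.pi * 5.0 * dt * np.arange(T))  # illustrative target sequence

for t in range(T):
    r = np.tanh(x)                                # firing rates
    z = w_out @ r                                 # readout (output unit)
    # Euler step of tau * dx/dt = -x + J r + w_fb z
    x += dt / tau * (-x + J @ r + w_fb * z)
    if t % 2 == 0:                                # RLS update every other step
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)                   # gain vector
        P -= np.outer(k, Pr)
        w_out -= (z - target[t]) * k              # reduce instantaneous output error
```

In the stimulus-driven cases the abstract goes on to describe, one would additionally inject brief input pulses, e.g. an extra term B @ u(t) in the state update, so that different pulses trigger, or switch between, different output sequences.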



Acknowledgements

The study of autonomous dynamics in this work was carried out as part of the state assignment of the IAP RAS, Project No. 0035-2019-0011. The study of stimulus-driven dynamics was supported by the Russian Science Foundation, Project No. 19-72-00112.

Author information

Corresponding author

Correspondence to Oleg V. Maslennikov.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



About this article


Cite this article

Maslennikov, O.V., Nekorkin, V.I. Stimulus-induced sequential activity in supervisely trained recurrent networks of firing rate neurons. Nonlinear Dyn 101, 1093–1103 (2020). https://doi.org/10.1007/s11071-020-05787-0

