Expressive Power of Evolving Neural Networks Working on Infinite Input Streams

  • Jérémie Cabessa
  • Olivier Finkel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10472)


Abstract

Evolving recurrent neural networks represent a natural model of computation beyond the Turing limits. Here, we consider evolving recurrent neural networks working on infinite input streams. The expressive power of these networks is related to their attractor dynamics and is measured by the topological complexity of their underlying neural \(\omega\)-languages. In this context, deterministic and nondeterministic evolving neural networks recognize the (boldface) topological classes of \(BC(\boldsymbol{\Pi}^0_2)\) and \(\boldsymbol{\Sigma}^1_1\) \(\omega\)-languages, respectively. These results can be significantly refined: the deterministic and nondeterministic evolving networks which employ \(\alpha \in 2^\omega\) as their sole binary evolving weight recognize the (lightface) relativized topological classes of \(BC(\Pi^0_2)(\alpha)\) and \(\Sigma^1_1(\alpha)\) \(\omega\)-languages, respectively. As a consequence, a proper hierarchy of classes of evolving neural networks, based on the complexity of their underlying evolving weights, can be obtained. This hierarchy contains chains of length \(\omega_1\) as well as uncountable antichains.


Keywords

Neural networks · Attractors · Formal languages · \(\omega\)-languages · Borel sets · Analytic sets · Effective Borel and analytic sets



Copyright information

© Springer-Verlag GmbH Germany 2017

Authors and Affiliations

  1. Laboratoire d’économie mathématique – LEMMA, Paris, France
  2. Institut de Mathématiques de Jussieu – Paris Rive Gauche, CNRS et Université Paris Diderot, Paris Cedex 13, France
