
Expressive Power of Evolving Neural Networks Working on Infinite Input Streams

  • Conference paper
Fundamentals of Computation Theory (FCT 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10472)


Abstract

Evolving recurrent neural networks represent a natural model of computation beyond the Turing limits. Here, we consider evolving recurrent neural networks working on infinite input streams. The expressive power of these networks is related to their attractor dynamics and is measured by the topological complexity of their underlying neural \(\omega\)-languages. In this context, the deterministic and nondeterministic evolving neural networks recognize the (boldface) topological classes of \(BC(\boldsymbol{\Pi}^0_2)\) and \(\boldsymbol{\Sigma}^1_1\) \(\omega\)-languages, respectively. These results can be significantly refined: the deterministic and nondeterministic evolving networks which employ \(\alpha \in 2^\omega\) as their sole binary evolving weight recognize the (lightface) relativized topological classes of \(BC(\Pi^0_2)(\alpha)\) and \(\Sigma^1_1(\alpha)\) \(\omega\)-languages, respectively. As a consequence, a proper hierarchy of classes of evolving neural nets, based on the complexity of their underlying evolving weights, is obtained. This hierarchy contains chains of length \(\omega_1\) as well as uncountable antichains.
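
For concreteness, a minimal sketch of the network dynamics assumed in this line of work, following the formulation of evolving networks in [4] (and of the static analog case in [19]) rather than the paper's exact notation: an evolving recurrent neural network \(\mathcal{N}\) with \(N\) cells and \(M\) input lines updates each cell by

\(x_i(t+1) = \sigma\left(\sum_{j=1}^{N} a_{ij}(t)\, x_j(t) + \sum_{j=1}^{M} b_{ij}(t)\, u_j(t) + c_i(t)\right), \qquad i = 1, \dots, N,\)

where \(u(t)\) is the symbol of the infinite input stream read at time \(t\), the weights \(a_{ij}(t)\), \(b_{ij}(t)\), \(c_i(t)\) may change over time (the "evolving" feature; in the refined results above, a single binary weight evolves according to \(\alpha \in 2^\omega\)), and \(\sigma\) is the saturated-linear sigmoid \(\sigma(x) = \max(0, \min(1, x))\); by Note 1 below, any sigmoidal activation satisfying the properties of [13, Sect. 4] would do.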


Notes

  1. The results of the paper remain valid for any other kind of sigmoidal activation function satisfying the properties mentioned in [13, Sect. 4].

  2. In words, an attractor of \(\mathcal{N}\) is a set of output states into which the Boolean computation of the network could become forever trapped, yet not necessarily in a periodic manner (a sketch of the induced acceptance condition is given below).
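
A hedged formalization of this acceptance condition, in the Muller style used in the related work [7, 9] (the notation \(\mathcal{A}\), \(c_s\), and \(\mathrm{inf}\) is assumed here for illustration, not taken from the paper): a network \(\mathcal{N}\) equipped with a set \(\mathcal{A}\) of accepting attractors recognizes the neural \(\omega\)-language

\(L(\mathcal{N}) = \{\, s \in \Sigma^\omega : \mathrm{inf}(c_s) \in \mathcal{A} \,\},\)

where \(c_s\) denotes the Boolean output computation of \(\mathcal{N}\) on input stream \(s\), and \(\mathrm{inf}(c_s)\) the set of output states occurring infinitely often along \(c_s\). The topological complexity of these \(\omega\)-languages is what the expressive-power results above measure.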

References

  1. Apt, K.R.: \(\omega\)-models in analytical hierarchy. Bulletin de l'Académie Polonaise des Sciences XX(11), 901–904 (1972)

  2. Balcázar, J.L., Gavaldà, R., Siegelmann, H.T.: Computational power of neural networks: a characterization in terms of Kolmogorov complexity. IEEE Trans. Inf. Theory 43(4), 1175–1183 (1997)

  3. Cabessa, J., Duparc, J.: Expressive power of nondeterministic recurrent neural networks in terms of their attractor dynamics. IJUC 12(1), 25–50 (2016)

  4. Cabessa, J., Siegelmann, H.T.: Evolving recurrent neural networks are super-Turing. In: Proceedings of IJCNN 2011, pp. 3200–3206. IEEE (2011)

  5. Cabessa, J., Siegelmann, H.T.: The computational power of interactive recurrent neural networks. Neural Comput. 24(4), 996–1019 (2012)

  6. Cabessa, J., Siegelmann, H.T.: The super-Turing computational power of plastic recurrent neural networks. Int. J. Neural Syst. 24(8), 1450029 (2014)

  7. Cabessa, J., Villa, A.E.P.: The expressive power of analog recurrent neural networks on infinite input streams. Theor. Comput. Sci. 436, 23–34 (2012)

  8. Cabessa, J., Villa, A.E.P.: An attractor-based complexity measurement for Boolean recurrent neural networks. PLoS ONE 9(4), e94204+ (2014)

  9. Cabessa, J., Villa, A.E.P.: Expressive power of first-order recurrent neural networks determined by their attractor dynamics. J. Comput. Syst. Sci. 82(8), 1232–1250 (2016)

  10. Cabessa, J., Villa, A.E.P.: Recurrent neural networks and super-Turing interactive computation. In: Koprinkova-Hristova, P., Mladenov, V., Kasabov, N.K. (eds.) Artificial Neural Networks. SSB, vol. 4, pp. 1–29. Springer, Cham (2015). doi:10.1007/978-3-319-09903-3_1

  11. Finkel, O.: Ambiguity of omega-languages of Turing machines. Log. Methods Comput. Sci. 10(3), 1–18 (2014)

  12. Kechris, A.S.: Classical Descriptive Set Theory. Graduate Texts in Mathematics, vol. 156. Springer, New York (1995)

  13. Kilian, J., Siegelmann, H.T.: The dynamic universality of sigmoidal neural networks. Inf. Comput. 128(1), 48–56 (1996)

  14. Kleene, S.C.: Representation of events in nerve nets and finite automata. In: Shannon, C., McCarthy, J. (eds.) Automata Studies, pp. 3–41. Princeton University Press, Princeton (1956)

  15. McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943)

  16. Minsky, M.L.: Computation: Finite and Infinite Machines. Prentice-Hall Inc., Englewood Cliffs (1967)

  17. Moschovakis, Y.N.: Descriptive Set Theory. Mathematical Surveys and Monographs, 2nd edn. American Mathematical Society, Providence (2009)

  18. Siegelmann, H.T.: Recurrent neural networks and finite automata. Comput. Intell. 12, 567–574 (1996)

  19. Siegelmann, H.T., Sontag, E.D.: Analog computation via neural networks. Theor. Comput. Sci. 131(2), 331–360 (1994)

  20. Siegelmann, H.T., Sontag, E.D.: On the computational power of neural nets. J. Comput. Syst. Sci. 50(1), 132–150 (1995)

  21. Šíma, J., Orponen, P.: General-purpose computation with neural networks: a survey of complexity theoretic results. Neural Comput. 15(12), 2727–2778 (2003)

  22. Staiger, L.: \(\omega \)-languages. In: Rozenberg, G., Salomaa, A. (eds.) Handbook of Formal Languages: Beyond Words, vol. 3, pp. 339–387. Springer, New York (1997)

  23. Turing, A.M.: Intelligent machinery. Technical report, National Physical Laboratory, Teddington, UK (1948)

Author information


Correspondence to Jérémie Cabessa or Olivier Finkel.

Copyright information

© 2017 Springer-Verlag GmbH Germany

About this paper

Cite this paper

Cabessa, J., Finkel, O. (2017). Expressive Power of Evolving Neural Networks Working on Infinite Input Streams. In: Klasing, R., Zeitoun, M. (eds) Fundamentals of Computation Theory. FCT 2017. Lecture Notes in Computer Science, vol 10472. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-55751-8_13

  • DOI: https://doi.org/10.1007/978-3-662-55751-8_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-55750-1

  • Online ISBN: 978-3-662-55751-8

  • eBook Packages: Computer Science, Computer Science (R0)
