Three Analog Neurons Are Turing Universal

  • Jiří Šíma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11324)

Abstract

The languages accepted online by binary-state neural networks with rational weights have been shown to be context-sensitive when an extra analog neuron is added (1ANNs). In this paper, we provide an upper bound on the number of additional analog units to achieve Turing universality. We prove that any Turing machine can be simulated by a binary-state neural network extended with three analog neurons (3ANNs) having rational weights, with a linear-time overhead. Thus, the languages accepted offline by 3ANNs with rational weights are recursively enumerable, which refines the classification of neural networks within the Chomsky hierarchy.
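The construction itself is given in the paper; as background, the key reason a few analog neurons suffice is the classic Siegelmann–Sontag-style observation that a single rational-valued activation can store an unbounded binary stack. The sketch below illustrates that encoding idea only (it is not the paper's 3ANN construction; the `push`/`pop`/`top` helper names are mine): a stack of bits is held as the rational number q = Σ (2·sᵢ + 1)/4ⁱ, so pushing and popping are affine updates of the kind a neuron's weighted sum can realize.

```python
from fractions import Fraction

def push(q, bit):
    # Shift the current encoding one digit deeper and place the new
    # top symbol (2*bit + 1)/4 in the most significant position.
    return q / 4 + Fraction(2 * bit + 1, 4)

def top(q):
    # Under this encoding, q >= 3/4 exactly when the top bit is 1
    # (a threshold test, as a binary-state neuron could perform).
    return 1 if q >= Fraction(3, 4) else 0

def pop(q):
    # Remove the top symbol by the inverse affine map.
    b = top(q)
    return b, 4 * q - (2 * b + 1)

# Push 1, 0, 1, 1; popping recovers them in reverse (LIFO) order.
q = Fraction(0)
for b in [1, 0, 1, 1]:
    q = push(q, b)
bits = []
while q != 0:
    b, q = pop(b and q or q) if False else pop(q)
    bits.append(b)
print(bits)  # [1, 1, 0, 1]
```

Exact rational arithmetic (`fractions.Fraction`) mirrors the rational weights assumed in the paper; with floating point, the encoding would degrade after a few dozen pushes.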

Keywords

Neural computing · Turing machine · Chomsky hierarchy

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Institute of Computer Science, Czech Academy of Sciences, Prague 8, Czech Republic