Finite-State Computation in Analog Neural Networks: Steps towards Biologically Plausible Models?

  • Mikel L. Forcada
  • Rafael C. Carrasco

Abstract

Finite-state machines are the most pervasive models of computation, not only in theoretical computer science but also in all of its applications to real-life problems, and they constitute the best-characterized computational model. On the other hand, neural networks, proposed almost sixty years ago by McCulloch and Pitts as a simplified model of nervous activity in living beings, have evolved into a great variety of so-called artificial neural networks. Artificial neural networks have become a very successful tool for modelling and problem solving because of their built-in learning capability, but most of the progress in this field has occurred with models that are far removed from the behaviour of real, that is, biological, neural networks. This paper surveys the work that has established a connection between finite-state machines and (mainly discrete-time recurrent) neural networks, and suggests possible ways to construct finite-state models in biologically plausible neural networks.
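
To make the surveyed connection concrete, the following minimal sketch (ours, not taken from the chapter) shows how a deterministic finite automaton can be stably encoded in a discrete-time recurrent neural network with sigmoid units, in the spirit of the constructions of Omlin and Giles [28] and Carrasco et al. [22]: the automaton state is kept as a nearly one-hot activation vector, second-order weights of strength H realize the transition table, and a bias of -H/2 together with a sufficiently large H keeps the units saturated near 0 and 1. The example automaton and all names (delta, H, run) are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Illustrative DFA over the alphabet {a=0, b=1} that accepts strings
    # containing an even number of a's. State 0 is initial and accepting.
    n_states, n_symbols = 2, 2
    delta = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}  # (state, symbol) -> next state
    accepting = {0}

    # Second-order weights W[j, i, k] connect state unit i and input unit k
    # to next-state unit j: +H where delta(q_i, a_k) = q_j, 0 elsewhere.
    # The bias -H/2 drives every non-selected unit towards 0; a large enough
    # H keeps the state vector saturated near the one-hot corners.
    H = 10.0
    W = np.zeros((n_states, n_states, n_symbols))
    for (i, k), j in delta.items():
        W[j, i, k] = H
    b = np.full(n_states, -H / 2)

    def run(symbols):
        x = np.zeros(n_states)
        x[0] = 1.0  # one-hot encoding of the initial state
        for k in symbols:
            u = np.zeros(n_symbols)
            u[k] = 1.0  # one-hot encoding of the current input symbol
            x = sigmoid(np.einsum('jik,i,k->j', W, x, u) + b)
        return any(x[j] > 0.5 for j in accepting)

    print(run([0, 0]))  # 'aa': even number of a's -> True
    print(run([0, 1]))  # 'ab': odd number of a's  -> False

With H chosen large enough, the network state remains within a small neighbourhood of the one-hot corners for input strings of any length; this is precisely the stability property studied in the encoding papers cited above.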

References

  1. W.S. McCulloch and W.H. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133, 1943.
  2. J.E. Hopcroft and J.D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, MA, 1979.
  3. S.C. Kleene. Representation of events in nerve nets and finite automata. In C.E. Shannon and J. McCarthy, editors, Automata Studies, pages 3–42. Princeton University Press, Princeton, NJ, 1956.
  4. M.L. Minsky. Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, NJ, 1967. Chapter: Neural Networks. Automata Made up of Parts.
  5. N. Alon, A.K. Dewdney, and T.J. Ott. Efficient simulation of finite automata by neural nets. Journal of the Association for Computing Machinery, 38(2):495–514, 1991.
  6. P. Indyk. Optimal simulation of automata by neural nets. In Proceedings of the 12th Annual Symposium on Theoretical Aspects of Computer Science, pages 337–348, Berlin, 1995. Springer-Verlag.
  7. B.G. Horne and D.R. Hush. Bounds on the complexity of recurrent neural network implementations of finite state machines. Neural Networks, 9(2):243–252, 1996.
  8. T. Robinson and F. Fallside. A recurrent error propagation network speech recognition system. Computer Speech and Language, 5:259–274, 1991.
  9. J.L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.
  10. A. Cleeremans, D. Servan-Schreiber, and J.L. McClelland. Finite state automata and simple recurrent networks. Neural Computation, 1(3):372–381, 1989.
  11. J.B. Pollack. The induction of dynamical recognizers. Machine Learning, 7:227–252, 1991.
  12. C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, and Y.C. Lee. Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4(3):393–405, 1992.
  13. R.L. Watrous and G.M. Kuhn. Induction of finite-state languages using second-order recurrent networks. Neural Computation, 4(3):406–414, 1992.
  14. A. Maskara and A. Noetzel. Forcing simple recurrent neural networks to encode context. In Proceedings of the 1992 Long Island Conference on Artificial Intelligence and Computer Graphics, 1992.
  15. A. Sanfeliu and R. Alquézar. Active grammatical inference: a new learning methodology. In D. Dori and A. Bruckstein, editors, Shape and Structure in Pattern Recognition: Proceedings of the IAPR International Workshop on Structural and Syntactic Pattern Recognition SSPR’94 (Nahariya, Israel). World Scientific, Singapore, 1994.
  16. P. Manolios and R. Fanelli. First order recurrent neural networks and deterministic finite state automata. Neural Computation, 6(6):1154–1172, 1994.
  17. M.L. Forcada and R.C. Carrasco. Learning the initial state of a second-order recurrent neural network during regular-language inference. Neural Computation, 7(5):923–930, 1995.
  18. P. Tiňo and J. Sajda. Learning and extracting initial Mealy automata with a modular neural network model. Neural Computation, 7(4), 1995.
  19. R.P. Ñeco and M.L. Forcada. Beyond Mealy machines: learning translators with recurrent neural networks. In Proceedings of the World Congress on Neural Networks ’96, pages 408–411, San Diego, California, September 15–18, 1996.
  20. M. Gori, M. Maggini, E. Martinelli, and G. Soda. Inductive inference from noisy examples using the hybrid finite state filter. IEEE Transactions on Neural Networks, 9(3):571–575, 1998.
  21. M.W. Goudreau, C.L. Giles, S.T. Chakradhar, and D. Chen. First-order vs. second-order single layer recurrent neural networks. IEEE Transactions on Neural Networks, 5(3):511–513, 1994.
  22. R.C. Carrasco, M.L. Forcada, M.Á. Valdés-Muñoz, and R.P. Ñeco. Stable encoding of finite-state machines in discrete-time recurrent neural nets with sigmoid units. Neural Computation, 12, 2000. In press.
  23. Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
  24. J.F. Kolen and J.B. Pollack. The observer’s paradox: apparent computational complexity in physical systems. Journal of Experimental and Theoretical Artificial Intelligence, 7:253–277, 1995.
  25. J.F. Kolen. Fool’s gold: extracting finite state machines from recurrent network dynamics. In J.D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 501–508, San Mateo, CA, 1994. Morgan Kaufmann.
  26. M. Casey. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8(6):1135–1178, 1996.
  27. A. Blair and J.B. Pollack. Analysis of dynamical recognizers. Neural Computation, 9(5):1127–1142, 1997.
  28. C.W. Omlin and C.L. Giles. Constructing deterministic finite-state automata in recurrent neural networks. Journal of the ACM, 43(6):937–972, 1996.
  29. R. Alquézar and A. Sanfeliu. An algebraic framework to represent finite state automata in single-layer recurrent neural networks. Neural Computation, 7(5):931–949, 1995.
  30. S.C. Kremer. A Theory of Grammatical Induction in the Connectionist Paradigm. PhD thesis, Department of Computer Science, University of Alberta, Edmonton, Alberta, 1996.
  31. P. Frasconi, M. Gori, M. Maggini, and G. Soda. Representation of finite-state automata in recurrent radial basis function networks. Machine Learning, 23:5–32, 1996.
  32. J. Šíma. Analog stable simulation of discrete neural networks. Neural Network World, 7:679–686, 1997.
  33. J. Šíma and J. Wiedermann. Theory of neuromata. Journal of the ACM, 45(1):155–178, 1998.
  34. R.P. Ñeco, M.L. Forcada, R.C. Carrasco, and M.Á. Valdés-Muñoz. Encoding of sequential translators in discrete-time recurrent neural nets. In Proceedings of the European Symposium on Artificial Neural Networks ESANN’99, pages 375–380, 1999.
  35. R.C. Carrasco, J. Oncina, and M.L. Forcada. Efficient encodings of finite automata in discrete-time recurrent neural networks. In Proceedings of ICANN’99, International Conference on Artificial Neural Networks, 1999. In press.
  36. B.A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: a survey. IEEE Transactions on Neural Networks, 6(5):1212–1228, 1995.
  37. F.J. Pineda. Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 59(19):2229–2232, 1987.
  38. L.B. Almeida. Backpropagation in perceptrons with feedback. In R. Eckmiller and Ch. von der Malsburg, editors, Neural Computers (Neuss, 1987), pages 199–208. Springer-Verlag, Berlin, 1988.
  39. S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520–522, 1996.
  40. T. Natschläger and W. Maass. Fast analog computation in networks of spiking neurons using unreliable synapses. In Proceedings of ESANN’99, European Symposium on Artificial Neural Networks, pages 417–422, 1999.
  41. T. Wennekers. Synfire graphs: from spike patterns to automata of spiking neurons. Technical Report Ulmer Informatik-Berichte Nr. 98-08, Universität Ulm, Fakultät für Informatik, 1998.

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Mikel L. Forcada (1)
  • Rafael C. Carrasco (1)

  1. Departament de Llenguatges i Sistemes Informàtics, Universitat d’Alacant, Alacant, Spain
