Finite-State Computation in Analog Neural Networks: Steps towards Biologically Plausible Models?
Finite-state machines are the most pervasive models of computation, not only in theoretical computer science but also in its applications to real-life problems, and they constitute the best-characterized computational model. Neural networks, proposed almost sixty years ago by McCulloch and Pitts as a simplified model of nervous activity in living beings, have since evolved into a great variety of so-called artificial neural networks. Artificial neural networks have become a very successful tool for modelling and problem solving because of their built-in learning capability, but most of the progress in this field has occurred with models that are far removed from the behaviour of real, i.e. biological, neural networks. This paper surveys the work that has established a connection between finite-state machines and (mainly discrete-time recurrent) neural networks, and suggests possible ways to construct finite-state models in biologically plausible neural networks.
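To make the connection concrete, here is a minimal sketch (an illustration under assumed parameters, not a construction taken from the paper) of the classic idea of encoding a finite-state machine in a second-order discrete-time recurrent network with sigmoid units: states and input symbols are represented by near-one-hot vectors, transition weights are set to a large gain `H` where the automaton's transition function fires, and a bias of `-H/2` keeps the units saturated near 0 or 1. The example automaton (parity of `a`'s over the alphabet `{a, b}`) and the gain value `H = 16` are chosen here purely for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Example DFA: accepts strings over {a, b} with an even number of a's.
# State q0 = "even" (accepting), q1 = "odd".
symbols = {'a': 0, 'b': 1}
delta = {(0, 0): 1, (0, 1): 0,   # from q0: 'a' -> q1, 'b' -> q0
         (1, 0): 0, (1, 1): 1}   # from q1: 'a' -> q0, 'b' -> q1

H = 16.0  # gain: large enough that unit activations stay saturated near 0/1
# Second-order weights: W[j][i][k] = +H iff delta(q_i, symbol_k) == q_j.
W = [[[H if delta[(i, k)] == j else 0.0 for k in range(2)]
      for i in range(2)] for j in range(2)]
b = -H / 2  # bias pushes units toward 0 unless a transition fires

def step(s, sym):
    """One network update: next state unit j sums products s_i * x_k."""
    x = [0.0, 0.0]
    x[symbols[sym]] = 1.0
    return [sigmoid(b + sum(W[j][i][k] * s[i] * x[k]
                            for i in range(2) for k in range(2)))
            for j in range(2)]

def accepts(word):
    s = [1.0, 0.0]  # start in (near-)one-hot encoding of q0
    for c in word:
        s = step(s, c)
    return s[0] > 0.5  # accept if the unit encoding q0 is high
```

For instance, `accepts("abba")` is true (two `a`'s) while `accepts("bab")` is false. The stability of such encodings for sufficiently large gain, i.e. the guarantee that activations never drift away from their saturated values on arbitrarily long inputs, is precisely the kind of result the surveyed literature establishes.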