# Finite State Automata and Connectionist Machines: A survey

## Abstract

This survey reviews work in the literature relating Finite State Automata (FSAs) and Neural Networks (NNs). These studies address Grammatical Inference tasks as well as the representation of FSAs by neural models. The inference of Regular Grammars through NNs has focused either on the acceptance or rejection of strings generated by the grammar or on the prediction of the possible successor(s) of each character in a string. Different neural architectures using first- and second-order connections have been adopted. Several techniques for extracting the FSA inferred by a trained net have been described in the literature and are also reported here. Finally, theoretical work on the relationship between NNs and FSAs is outlined and discussed.
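
The second-order connections mentioned above, used for example in [Giles,92a] and [Watrous,92], compute the next state from a bilinear combination of the current state and the current input symbol. A minimal sketch in Python (the network size, random weights, and acceptance convention below are illustrative assumptions, not taken from any particular surveyed paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def second_order_step(W, state, symbol_onehot):
    """One transition of a second-order recurrent net:
    S_i(t+1) = g(sum_{j,k} W[i,j,k] * S_j(t) * I_k(t))."""
    return sigmoid(np.einsum('ijk,j,k->i', W, state, symbol_onehot))

# Illustrative sizes: 4 state units, binary input alphabet.
rng = np.random.default_rng(0)
n_states, n_symbols = 4, 2
W = rng.normal(size=(n_states, n_states, n_symbols))

state = np.zeros(n_states)
state[0] = 1.0                        # designated start state

for sym in [0, 1, 1, 0]:              # process the string "0110"
    state = second_order_step(W, state, np.eye(n_symbols)[sym])

accept = bool(state[0] > 0.5)         # one unit conventionally signals acceptance
```

Because each weight `W[i,j,k]` directly gates a (state, symbol) → state transition, such networks map naturally onto FSA transition tables, which is one reason they recur throughout the surveyed work.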

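The abstract also refers to techniques for extracting the FSA inferred by a trained net; a common family of approaches in the surveyed papers quantizes the continuous hidden-state space into discrete states and records the transitions observed while the net processes sample strings. A hedged sketch (the bin count, the toy parity "network", and all function names are illustrative assumptions, not a specific paper's algorithm):

```python
import numpy as np
from itertools import product

def quantize(state, q=2):
    """Map a state vector in [0, 1]^n to a discrete state: a tuple of bin indices."""
    return tuple(np.minimum((state * q).astype(int), q - 1))

def extract_fsa(step_fn, start_state, strings, q=2):
    """Run the net over sample strings and collect a transition table
    keyed by (discrete state, input symbol)."""
    transitions = {}
    for s in strings:
        state = start_state
        for sym in s:
            src = quantize(state, q)
            state = step_fn(state, sym)
            transitions[(src, sym)] = quantize(state, q)
    return transitions

# A hand-coded parity machine over {0, 1} stands in for a trained net:
# its single "hidden unit" flips on input 1 and holds on input 0.
def toy_step(state, sym):
    return np.array([1.0 - state[0]]) if sym == 1 else state

strings = [list(s) for s in product([0, 1], repeat=3)]
fsa = extract_fsa(toy_step, np.array([0.0]), strings)
```

On this toy machine the procedure recovers the two-state parity automaton exactly; on a real trained net the same quantization can yield spurious or merged states, which is one reason the cited extraction techniques vary mainly in how they partition and minimize the state space.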

## Bibliography

- [Alquezar,93] *Representation and Recognition of Regular Grammars by means of Second-Order Recurrent Neural Networks*. R. Alquézar, A. Sanfeliu. In New Trends in Neural Computation, eds. J. Mira, J. Cabestany, A. Prieto. Springer-Verlag, Lecture Notes in Computer Science, vol. 686, pp. 143–148. 1993.
- [Castaño,93a] *Simulation of Stochastic Regular Grammars through Simple Recurrent Networks*. M.A. Castaño, F. Casacuberta, E. Vidal. In New Trends in Neural Computation, eds. J. Mira, J. Cabestany, A. Prieto. Springer-Verlag, Lecture Notes in Computer Science, vol. 686, pp. 210–215. 1993.
- [Castaño,93b] *Inference of Stochastic Regular Languages through Simple Recurrent Networks*. M.A. Castaño, E. Vidal, F. Casacuberta. In Procs. of the First International Conference on Grammatical Inference. 1993.
- [Chen,95] *Constructive Learning of Recurrent Neural Networks: Limitations of Recurrent Cascade Correlation and a Simple Solution*. D. Chen, C.L. Giles, G.Z. Sun, H.H. Chen, Y.C. Lee, M.W. Goudreau. IEEE Transactions on Neural Networks. 1995. In press.
- [Cleeremans,89] *Finite State Automata and Simple Recurrent Networks*. A. Cleeremans, D. Servan-Schreiber, J.L. McClelland. Neural Computation, vol. 1, pp. 372–381. 1989.
- [Das,93] *Using Hints to Successfully Learn Context-Free Grammars with a Neural Network Pushdown Automaton*. S. Das, C.L. Giles, G.Z. Sun. In Advances in Neural Information Processing Systems 5, eds. C.L. Giles, R.P. Lippmann. 1993.
- [Elman,88] *Finding Structure in Time*. J.L. Elman. Technical Report No. 8801, Center for Research in Language, University of California, La Jolla. 1988.
- [Fahlman,91] *The Recurrent Cascade-Correlation Architecture*. S.E. Fahlman. Technical Report CMU-CS-91-100, School of Computer Science, Carnegie Mellon University, Pittsburgh. 1991.
- [Giles,92a] *Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks*. C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee. Neural Computation, vol. 4, pp. 393–405. 1992.
- [Giles,92b] *Extracting and Learning an Unknown Grammar with Recurrent Neural Networks*. C.L. Giles, C.B. Miller, D. Chen, G.Z. Sun, H.H. Chen, Y.C. Lee. In Advances in Neural Information Processing Systems 4, eds. J.E. Moody, S.J. Hanson, R.P. Lippmann. 1992.
- [Giles,92c] *Inserting Rules into Recurrent Neural Networks*. C.L. Giles, C.W. Omlin. In Procs. of the 1992 IEEE Workshop on Neural Networks for Signal Processing, pp. 13–22. 1992.
- [Giles,93a] *Rule Refinement with Recurrent Neural Networks*. C.L. Giles, C.W. Omlin. In Procs. of the 1993 IEEE International Conference on Neural Networks. 1993.
- [Giles,93b] *Extraction, Insertion and Refinement of Symbolic Rules in Dynamically-Driven Recurrent Neural Networks*. C.L. Giles, C.W. Omlin. Connection Science, vol. 5, no. 3, pp. 307–337. 1993.
- [Goudreau,94] *First-Order vs. Second-Order Single Layer Recurrent Neural Networks*. M.W. Goudreau, C.L. Giles, S.T. Chakradhar, D. Chen. IEEE Transactions on Neural Networks, vol. 5, no. 3, pp. 511–513. 1994.
- [Jordan,88] *Serial Order: A Parallel Distributed Processing Approach*. M.I. Jordan. Technical Report No. 8604, Institute for Cognitive Science, University of California, San Diego. 1988.
- [Lucas,93] *Algebraic Grammatical Inference*. S.M. Lucas. In Procs. of the First International Conference on Grammatical Inference. 1993.
- [Manolios,93] *First Order Recurrent Neural Networks and Deterministic Finite State Automata*. P. Manolios, R. Fanelli. Technical Report NNRG-930625A, Department of Computer Science and Physics, Brooklyn College of the City University of New York, Brooklyn. 1993.
- [Maskara,92] *Forcing Simple Recurrent Neural Networks to Encode Context*. In Procs. of the 1992 Long Island Conference on Artificial Intelligence and Computer Graphics. 1992.
- [McCulloch,43] *A Logical Calculus of the Ideas Immanent in Nervous Activity*. W.S. McCulloch, W. Pitts. Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133. 1943.
- [Maxwell,89] *Generalization in Neural Networks: The Contiguity Problem*. T. Maxwell, C.L. Giles, Y.C. Lee. In Procs. of the International Joint Conference on Neural Networks, vol. 2, pp. 41–46. 1989.
- [Miller,93] *Experimental Comparison of the Effect of Order in Recurrent Neural Networks*. C.B. Miller, C.L. Giles. International Journal of Pattern Recognition and Artificial Intelligence. 1993.
- [Minsky,67] *Computation: Finite and Infinite Machines*. M.L. Minsky. Chap. 3.5. Prentice-Hall, Englewood Cliffs, N.J. 1967.
- [Omlin,92] *Training Second-Order Recurrent Neural Networks using Hints*. C.W. Omlin, C.L. Giles. In Procs. of the Ninth International Conference on Machine Learning. 1992.
- [Omlin,93] *Pruning Recurrent Neural Networks for Improved Generalization Performance*. C.W. Omlin, C.L. Giles. Technical Report No. 93-6, Computer Science Department, Rensselaer Polytechnic Institute, Troy, N.Y. 1993.
- [Pollack,91] *The Induction of Dynamical Recognizers*. J.B. Pollack. Machine Learning, vol. 7, pp. 227–252. 1991.
- [Rumelhart,86] *Learning sequential structure in simple recurrent networks*. D.E. Rumelhart, G. Hinton, R. Williams. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, eds. D.E. Rumelhart, J.L. McClelland, and the PDP Research Group. MIT Press, Cambridge. 1986.
- [Sanfeliu,92] *Understanding Neural Networks for Grammatical Inference and Recognition*. A. Sanfeliu, R. Alquézar. In Advances in Structural and Syntactic Pattern Recognition, ed. H. Bunke, pp. 75–948. 1992.
- [Servan,88] *Encoding Sequential Structure in Simple Recurrent Networks*. D. Servan-Schreiber, A. Cleeremans, J.L. McClelland. Technical Report CMU-CS-183, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA. 1988.
- [Servan,91] *Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks*. D. Servan-Schreiber, A. Cleeremans, J.L. McClelland. Machine Learning, vol. 7, pp. 161–193. 1991.
- [Smith,89] *Learning Sequential Structure with the Real-Time Recurrent Learning Algorithm*. A.W. Smith, D. Zipser. International Journal of Neural Systems, vol. 1, no. 2, pp. 125–131. 1989.
- [Sun,90] *Connectionist Pushdown Automata that Learn Context-Free Grammars*. G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee, D. Chen. In Procs. of the International Joint Conference on Neural Networks, vol. 1, pp. 577–580. 1990.
- [Tomita,82] *Dynamic Construction of Finite-State Automata from Examples using Hill-Climbing*. M. Tomita. In Procs. of the Fourth Annual Cognitive Science Conference, pp. 105–108. 1982.
- [Watrous,92] *Induction of Finite-State Languages Using Second-Order Recurrent Networks*. R.L. Watrous, G.M. Kuhn. Neural Computation, vol. 4, pp. 406–414. 1992.
- [Williams,89] *Experimental Analysis of the Real-time Recurrent Learning Algorithm*. R.J. Williams, D. Zipser. Connection Science, vol. 1, no. 1, pp. 87–111. 1989.
- [Zeng,94] *Discrete Recurrent Neural Networks for Grammatical Inference*. Z. Zeng, R.M. Goodman, P. Smyth. IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 320–330. 1994.