Representation and recognition of regular grammars by means of second-order recurrent neural networks

  • R. Alquézar
  • A. Sanfeliu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 686)


Recently, recurrent neural networks, together with their associated learning schemes, have been used to infer regular grammars from sets of sample strings. The representation of the inferred automaton is hidden in the weights and connections of the net, a common feature of emergent subsymbolic representations. In order to relate the symbolic and connectionist approaches to grammatical inference and recognition, we address and solve a basic problem: how to build a neural network recognizer for a given regular language specified by a deterministic finite-state automaton. A second-order recurrent network model is employed, which makes it possible to formulate the problem as one of solving a linear system of equations. These equations represent the automaton transitions directly in terms of static linear approximations of the network running equations, and can be viewed as constraints to be satisfied by the network weights. A description is given both of the weight-computation step and of the string-recognition procedure.
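The construction can be sketched concretely. In a second-order network the state update is S_j(t+1) = g(sum_{i,k} W_jik S_i(t) I_k(t)), where g is the logistic function and I(t) is a one-hot encoding of the current input symbol. If each automaton state q_i is coded by a near-saturated activation vector, then requiring that one network step map the code of q_i under symbol a_k onto the code of the successor state yields one linear equation per transition and per neuron, in the spirit of the constraints described above. The Python sketch below illustrates this idea under assumptions of our own (one state neuron per automaton state, high/low activation codes of 0.9/0.1, acceptance read off the most active final neuron); it is a minimal illustration, not the paper's exact formulation.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(y):
    return np.log(y / (1.0 - y))

def build_weights(delta, n_states, n_symbols, hi=0.9, lo=0.1):
    """Solve the linear constraints that encode the DFA transitions.

    State q_i is coded by the near-saturated vector with `hi` in
    position i and `lo` elsewhere.  For every symbol k and state i we
    require the pre-activation of neuron j to equal logit(hi) if
    delta[i][k] == j and logit(lo) otherwise: a linear system in the
    weights W[j, :, k], solved here by least squares.
    """
    codes = np.full((n_states, n_states), lo)
    np.fill_diagonal(codes, hi)                  # codes[i] = code of q_i
    W = np.zeros((n_states, n_states, n_symbols))
    for k in range(n_symbols):
        # T[i, j] = desired pre-activation of neuron j after reading k in q_i.
        T = np.array([[logit(hi if delta[i][k] == j else lo)
                       for j in range(n_states)] for i in range(n_states)])
        # codes @ W[:, :, k].T = T  =>  least-squares solve for the weights.
        Wk_T, *_ = np.linalg.lstsq(codes, T, rcond=None)
        W[:, :, k] = Wk_T.T
    return W

def recognize(symbols, W, accepting, start=0, hi=0.9, lo=0.1):
    """Run the second-order net on a symbol-index sequence and accept
    if the most active neuron at the end marks an accepting state."""
    n_states = W.shape[0]
    S = np.full(n_states, lo)
    S[start] = hi
    for k in symbols:
        # S_j(t+1) = g(sum_i W[j, i, k] * S_i(t)) for one-hot input k.
        S = logistic(W[:, :, k] @ S)
    return int(np.argmax(S)) in accepting

# Demo: DFA over {a=0, b=1} accepting strings with an even number of a's.
delta = [[1, 0],   # from q0: a -> q1, b -> q0
         [0, 1]]   # from q1: a -> q0, b -> q1
W = build_weights(delta, n_states=2, n_symbols=2)
print(recognize([0, 0, 1], W, accepting={0}))   # "aab" -> True
print(recognize([0, 1], W, accepting={0}))      # "ab"  -> False
```

Because the state codes themselves appear in the linear system, one network step maps each state code onto the next one exactly (up to floating-point error), so thresholding the final activations recovers the automaton's decision.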



Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • R. Alquézar, Institut de Cibernètica (UPC-CSIC), Barcelona, Spain
  • A. Sanfeliu, Institut de Cibernètica (UPC-CSIC), Barcelona, Spain
