Second-order recurrent neural networks can learn regular grammars from noisy strings

  • Rafael C. Carrasco
  • Mikel L. Forcada
Cognitive Science and AI
Part of the Lecture Notes in Computer Science book series (LNCS, volume 930)

Abstract

Recent work has shown that second-order recurrent neural networks (2ORNNs) may be used to infer deterministic finite automata (DFA) when trained with positive and negative string examples. This paper shows that 2ORNNs can also learn DFA from samples consisting of pairs (W, μ_W), where W is a noisy string of input vectors describing the degree of resemblance of every input to the symbols in the alphabet, and μ_W is the degree of acceptance of the noisy string, computed with a DFA whose behavior has been extended to deal with noisy strings.
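To make the training targets concrete, the degree of acceptance μ_W of a noisy string can be computed by propagating a distribution over DFA states. The sketch below is illustrative only: the two-symbol alphabet, the two-state automaton, and the sum-product update rule are assumptions for the example, not necessarily the exact formulation used in the paper.

```python
# Hedged sketch: extending a DFA to noisy strings. Each position of the
# string is a vector of resemblance degrees over the alphabet; the
# automaton tracks the degree mu[q] to which it occupies each state q.
# Alphabet, states, and transition table are hypothetical.

ALPHABET = ["a", "b"]
STATES = [0, 1]
DELTA = {  # crisp transition table: (state, symbol) -> next state
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 0, (1, "b"): 1,
}
ACCEPTING = {1}
START = 0

def degree_of_acceptance(noisy_string):
    """noisy_string: list of dicts mapping each alphabet symbol to its
    degree of resemblance at that position (assumed to sum to 1)."""
    mu = {q: 1.0 if q == START else 0.0 for q in STATES}
    for resemblance in noisy_string:
        nxt = {q: 0.0 for q in STATES}
        for q in STATES:
            for sym in ALPHABET:
                # mass flows from q to DELTA[(q, sym)] in proportion to
                # how much the current input resembles sym
                nxt[DELTA[(q, sym)]] += mu[q] * resemblance[sym]
        mu = nxt
    # degree of acceptance: total mass in accepting states
    return sum(mu[q] for q in ACCEPTING)
```

For a noiseless input (one symbol with resemblance 1.0) this reduces to ordinary DFA acceptance; for a genuinely noisy input such as `[{"a": 0.7, "b": 0.3}]` it yields an intermediate degree, here 0.7.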


References

  1. Giles, C.L., Miller, C.B., Chen, D., Chen, H.H., Sun, G.Z., and Lee, Y.C. (1992a) “Learning and extracting finite state automata with second-order recurrent neural networks”, Neural Computation 4, 393–405.
  2. Giles, C.L., Miller, C.B., Chen, D., Sun, G.Z., Chen, H.H., and Lee, Y.C. (1992b) “Extracting and learning an unknown grammar with recurrent neural networks”, Advances in Neural Information Processing Systems, vol. 4 (J. Moody et al., eds.; Morgan-Kaufmann, San Mateo, Calif., U.S.A.), 317–324.
  3. Siegelmann, H.T., Sontag, E.D., and Giles, C.L. (1992) “The complexity of language recognition by neural networks”, Information Processing 92, vol. 1 (Elsevier/North-Holland), 329–335.
  4. Watrous, R.L. and Kuhn, G.M. (1992a) “Induction of Finite-State Automata Using Second-Order Recurrent Networks”, Advances in Neural Information Processing Systems, vol. 4 (J. Moody et al., eds.; Morgan-Kaufmann, San Mateo, Calif., U.S.A.), 306–316.
  5. Watrous, R.L. and Kuhn, G.M. (1992b) “Induction of Finite-State Languages Using Second-Order Recurrent Networks”, Neural Computation 4, 406–414.
  6. Steimann, F. and Adlassnig, K.-P. (1994) “Clinical monitoring with fuzzy automata”, Fuzzy Sets and Systems 61, 37–42.
  7. Forcada, M.L. and Carrasco, R.C. (1994) “Learning the initial state of a second-order recurrent neural network during regular-language inference”, Neural Computation, in press.
  8. Williams, R.J. and Zipser, D. (1989) “A learning algorithm for continually running fully recurrent neural networks”, Neural Computation 1, 270.
  9. Tomita, M. (1982) “Dynamic construction of finite-state automata from examples, using hill-climbing”, Proceedings of the Fourth Annual Cognitive Science Conference (Ann Arbor, Mich., U.S.A.), 105–108.
  10. Carrasco, R.C. and Oncina, J. (1994) “Learning stochastic regular grammars by means of a state merging method”, in Grammatical Inference and Applications, Proc. of the 2nd Intl. Colloq. on Grammatical Inference ICGI-94 (Alicante, Spain, September 1994) (Carrasco, R. and Oncina, J., eds.), Lecture Notes in Artificial Intelligence 862 (Springer-Verlag), 139–152.

Copyright information

© Springer-Verlag 1995

Authors and Affiliations

  • Rafael C. Carrasco (1)
  • Mikel L. Forcada (1)

  1. Departament de Tecnologia Informàtica i Computació, Universitat d'Alacant, Alacant, Spain
