
Simulation of stochastic regular grammars through simple recurrent networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 686)

Abstract

Formal grammars have been successfully simulated with artificial neural networks, and this has established a new approach to the problem of grammatical inference. First, [Pollack,91], [Giles,92] and [Watrous,92] trained network architectures on positive (or positive and negative) samples generated by regular grammars, so that the networks accept or reject new strings. In contrast, [Servan,88] and [Smith,89] used nets into which strings were fed character by character, so that the possible successors of each character were predicted. Later, [Servan,91] suggested that such networks could also predict the generation probability of each character in strings generated by stochastic regular grammars. The present work provides empirical evidence supporting this suggestion.
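The prediction setup described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' original experiment: the toy two-state stochastic regular grammar, the network size and the learning rate are all invented for the example. An Elman-style simple recurrent network is trained one step at a time (previous hidden state treated as a fixed context input, no backpropagation through time) to output a probability distribution over the possible successors of each character.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stochastic regular grammar over {a, b, $} ($ = end marker):
# state 0 emits 'a' (p=0.7, stay) or 'b' (p=0.3, go to state 1);
# state 1 emits 'b' (p=0.5, stay) or '$' (p=0.5, stop).
SYMS = ['a', 'b', '$']

def sample_string():
    s, state = [], 0
    while True:
        if state == 0:
            sym = 'a' if rng.random() < 0.7 else 'b'
            state = 0 if sym == 'a' else 1
        else:
            sym = 'b' if rng.random() < 0.5 else '$'
        s.append(sym)
        if sym == '$':
            return s

H, V = 8, len(SYMS)                      # hidden units, alphabet size
Wxh = rng.normal(0, 0.1, (H, V)); Whh = rng.normal(0, 0.1, (H, H))
Why = rng.normal(0, 0.1, (V, H)); bh = np.zeros(H); by = np.zeros(V)

def one_hot(i):
    v = np.zeros(V); v[i] = 1.0
    return v

def step(x, h_prev):
    """One recurrent step: new hidden state and softmax over successors."""
    h = np.tanh(Wxh @ x + Whh @ h_prev + bh)
    z = Why @ h + by
    p = np.exp(z - z.max()); p /= p.sum()
    return h, p

def train_string(syms, lr=0.1):
    # Elman-style truncated training: backprop one time step only,
    # treating the previous hidden state as a constant context.
    global Wxh, Whh, Why, bh, by
    h, loss = np.zeros(H), 0.0
    idx = [SYMS.index(c) for c in syms]
    for t in range(len(idx) - 1):
        x, h_prev = one_hot(idx[t]), h
        h, p = step(x, h_prev)
        loss -= np.log(p[idx[t + 1]])            # cross-entropy on next char
        dz = p - one_hot(idx[t + 1])             # d(loss)/d(logits)
        dh = Why.T @ dz * (1 - h * h)            # back through tanh
        Why -= lr * np.outer(dz, h); by -= lr * dz
        Wxh -= lr * np.outer(dh, x); Whh -= lr * np.outer(dh, h_prev)
        bh -= lr * dh
    return loss / max(len(idx) - 1, 1)

# Average per-step cross-entropy over 10 passes of 200 sampled strings each.
losses = [np.mean([train_string(sample_string()) for _ in range(200)])
          for _ in range(10)]
```

After training, the softmax output for a given prefix approximates the grammar's conditional generation probabilities (e.g. roughly 0.7 for 'a' after an initial run of 'a's), which is the quantity [Servan,91] conjectured these networks can estimate.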

Work supported in part by the Spanish CICYT, under grant TIC-1026/92-CO2.

Supported by a Spanish MEC postgraduate grant.


References

  1. Bourlard, H.: Continuous Speech Recognition: From Hidden Markov Models to Neural Networks. EUSIPCO'92, vol. 1, pp. 63–70, 1992.

  2. Elman, J.L.: Finding Structure in Time. Technical Report 8801, Center for Research in Language, University of California, La Jolla, 1988.

  3. Giles, C.L., et al.: Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks. Neural Computation, no. 4, pp. 393–405, 1992.

  4. Jordan, M.I.: Serial order: A parallel distributed processing approach. Technical Report No. 8604, Institute of Cognitive Science, University of California, San Diego, 1988.

  5. Minsky, M.L.: Computation: Finite and Infinite Machines, Chap. 3.5. Prentice-Hall, Englewood Cliffs, NJ, 1967.

  6. Pineda, F.J.: Generalization of Backpropagation to Recurrent and Higher Order Neural Networks. In: Anderson, D.Z. (ed.) Neural Information Processing Systems. American Institute of Physics, New York, 1988.

  7. Pollack, J.B.: The Induction of Dynamical Recognizers. Machine Learning, no. 7, pp. 227–252, 1991.

  8. Rumelhart, D.E., Hinton, G., Williams, R.: Learning sequential structure in simple recurrent networks. In: Rumelhart, D.E., McClelland, J.L. and the PDP Research Group (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. MIT Press, Cambridge, 1986.

  9. Servan-Schreiber, D., Cleeremans, A., McClelland, J.L.: Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-183, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1988.

  10. Servan-Schreiber, D., Cleeremans, A., McClelland, J.L.: Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks. Machine Learning, no. 7, pp. 161–193, 1991.

  11. Smith, A.W., Zipser, D.: Learning Sequential Structure with the Real-Time Recurrent Learning Algorithm. International Journal of Neural Systems, vol. 1, no. 2, pp. 125–131, 1989.

  12. Watrous, R.L., Kuhn, G.M.: Induction of Finite-State Languages Using Second-Order Recurrent Networks. Neural Computation, no. 4, pp. 406–414, 1992.

  13. Williams, R.J., Zipser, D.: Experimental Analysis of the Real-Time Recurrent Learning Algorithm. Connection Science, vol. 1, no. 1, pp. 87–111, 1989.


Editor information

José Mira, Joan Cabestany, Alberto Prieto


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

Cite this paper

Castaño, M.A., Casacuberta, F., Vidal, E. (1993). Simulation of stochastic regular grammars through simple recurrent networks. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_149

  • DOI: https://doi.org/10.1007/3-540-56798-4_149

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56798-1

  • Online ISBN: 978-3-540-47741-9

  • eBook Packages: Springer Book Archive
