
Encoding Nondeterministic Finite-State Tree Automata in Sigmoid Recursive Neural Networks

  • Mikel L. Forcada
  • Rafael C. Carrasco
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1876)

Abstract

Recently, a number of authors have explored the use of recursive neural networks (RNN) for the adaptive processing of trees or tree-like structures. One of the most important language-theoretical formalizations of the processing of tree-structured data is that of finite-state tree automata (FSTA). In many cases, the number of states of a nondeterministic FSTA (NFSTA) recognizing a tree language may be smaller than that of the corresponding deterministic FSTA (DFSTA) (for example, the language of binary trees in which the label of the leftmost k-th-order grandchild of the root node is the same as the label on the leftmost leaf). This paper describes a scheme that directly encodes NFSTA in sigmoid RNN.
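
The construction itself is developed in the paper; purely as an illustration of the underlying idea, the sketch below builds, by hand, a high-gain sigmoid recursive network whose state vector at each node marks which states a hypothetical three-state nondeterministic bottom-up binary-tree automaton could reach there: saturated sigmoid units compute an AND over the children's state components for each applicable transition, and an OR over those units for each next state. The automaton, the gain H, and all helper names are assumptions made for this example, not the authors' construction or parameters.

```python
# Illustrative sketch only: a hand-built, high-gain sigmoid recursive network
# that propagates the *set* of states a nondeterministic bottom-up binary-tree
# automaton (NFSTA) could reach at each node.  The automaton, its transition
# table and the gain H are hypothetical examples, not the paper's construction.
import numpy as np

H = 20.0                                      # gain; drives activations to ~0/~1
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Hypothetical NFSTA over labels {'a', 'b'} with states 0..2; state 2 accepts.
LEAF = {'a': {0, 1}, 'b': {1}}                # leaf label -> possible states
RULES = {('a', 0, 1): {2}, ('a', 1, 1): {0},
         ('b', 0, 0): {1}, ('b', 2, 1): {2}}  # (label, q_left, q_right) -> states
N_STATES, ACCEPT = 3, {2}

def leaf_vector(label):
    """Component q saturates near 1 iff state q is reachable at this leaf."""
    reachable = np.array([q in LEAF.get(label, set())
                          for q in range(N_STATES)], float)
    return sigmoid(H * (reachable - 0.5))

def node_vector(label, left, right):
    """One recursive step: an AND unit per matching transition over the children's
    state components, then an OR unit per next state over those AND units."""
    ands = [(sigmoid(H * (left[q1] + right[q2] - 1.5)), nxt)   # AND(left q1, right q2)
            for (lab, q1, q2), nxt in RULES.items() if lab == label]
    out = np.zeros(N_STATES)
    for q in range(N_STATES):
        total = sum(a for a, nxt in ands if q in nxt)
        out[q] = sigmoid(H * (total - 0.5))                    # OR over matching ANDs
    return out

def state_vector(tree):
    """A tree is a label string (leaf) or a tuple (label, left_subtree, right_subtree)."""
    if isinstance(tree, str):
        return leaf_vector(tree)
    label, left, right = tree
    return node_vector(label, state_vector(left), state_vector(right))

def accepts(tree):
    """Accept iff some accepting state's activation saturates above 0.5 at the root."""
    return any(state_vector(tree)[q] > 0.5 for q in ACCEPT)

print(accepts(('a', 'a', 'b')))   # True: leaves give {0,1} and {1}; rule ('a',0,1) -> 2
print(accepts(('b', 'a', 'a')))   # False: rule ('b',0,0) -> 1, but 1 is not accepting
```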

Keywords

Input Symbol, Tree Automaton, Tree Language, Tree Transducer, State Port

Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Mikel L. Forcada (1)
  • Rafael C. Carrasco (1)

  1. Departament de Llenguatges i Sistemes Informàtics, Universitat d’Alacant, Alacant, Spain
