
A stochastic search approach to grammar induction

  • Hugues Juillé
  • Jordan B. Pollack
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1433)

Abstract

This paper describes SAGE, a new sampling-based heuristic for tree search, and presents an analysis of its performance on the problem of grammar induction. This work was inspired by the Abbadingo DFA learning competition [14], which ran from March to November 1997 and in which SAGE ended up as one of the two winners. The other winning algorithm, first proposed by Rodney Price, implements a new evidence-driven heuristic for state merging. Our own version of this heuristic is also described in this paper and compared with SAGE.
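
The abstract names the evidence-driven state-merging heuristic without describing it. As a rough, illustrative sketch of the general idea only (not the authors' implementation; the function names, scoring details, and example data below are invented for the illustration), the following Python code folds labeled strings into a prefix tree acceptor and scores a candidate merge of two states by counting how many labeled states agree when the subtrees rooted at them are collapsed together:

```python
# Illustrative sketch only (not the authors' code): the core idea behind an
# evidence-driven state-merging heuristic for DFA induction. Labeled strings
# are folded into a prefix tree acceptor (PTA); a candidate merge of two
# states is scored by how many labeled states would agree if the subtrees
# rooted at them were collapsed together.

from collections import defaultdict

def build_pta(samples):
    """samples: list of (string, bool) pairs (string, accepted?).
    Returns (transitions, labels) where transitions[q][symbol] -> state and
    labels[q] is True/False for labeled states, None for interior states."""
    transitions = defaultdict(dict)
    labels = {0: None}                    # state 0 is the root (empty prefix)
    next_state = 1
    for string, accepted in samples:
        q = 0
        for symbol in string:
            if symbol not in transitions[q]:
                transitions[q][symbol] = next_state
                labels[next_state] = None
                next_state += 1
            q = transitions[q][symbol]
        labels[q] = accepted
    return transitions, labels

def merge_score(q1, q2, transitions, labels):
    """Evidence for merging q1 and q2: +1 for each pair of labeled states
    that would be identified with matching labels, -infinity on a conflict."""
    score = 0
    stack, seen = [(q1, q2)], set()
    while stack:
        a, b = stack.pop()
        if (a, b) in seen:
            continue
        seen.add((a, b))
        if labels[a] is not None and labels[b] is not None:
            if labels[a] != labels[b]:
                return float("-inf")      # merge contradicts the training data
            score += 1
        for symbol in set(transitions[a]) & set(transitions[b]):
            stack.append((transitions[a][symbol], transitions[b][symbol]))
    return score

if __name__ == "__main__":
    data = [("ab", True), ("abab", True), ("b", False), ("ba", False)]
    trans, labs = build_pta(data)
    # Score merging the root with the state reached on "ab" (purely illustrative).
    q_ab = trans[trans[0]["a"]]["b"]
    print(merge_score(0, q_ab, trans, labs))
```

A complete state-merging learner would repeatedly apply the highest-scoring consistent merge and fold the corresponding subtrees until no merge remains; the details of the competition-winning heuristic and of the SAGE search procedure are given in the body of the paper.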

Keywords

Processing Element, Recurrent Neural Network, Construction Procedure, Finite State Automaton, Beam Search

References

  [1] Dana Angluin and Carl H. Smith. Inductive inference: Theory and methods. Computing Surveys, 15:237–269, September 1983.
  [2] Thomas Bäck, Frank Hoffmeister, and Hans-Paul Schwefel. A survey of evolution strategies. In Richard K. Belew and Lashon B. Booker, editors, Proceedings of the Fourth International Conference on Genetic Algorithms, pages 2–9, San Mateo, California, 1991. Morgan Kaufmann.
  [3] Pang C. Chen. Heuristic sampling: A method for predicting the performance of tree searching programs. SIAM Journal on Computing, 21:295–315, April 1992.
  [4] S. Das and M. C. Mozer. A unified gradient-descent/clustering architecture for finite state machine induction. In Neural Information Processing Systems, volume 6, pages 19–26, 1994.
  [5] Lawrence J. Fogel. Autonomous automata. Industrial Research, 4:14–19, 1962.
  [6] M. L. Forcada and R. C. Carrasco. Learning the initial state of a second-order recurrent neural network during regular-language inference. Neural Computation, 7(5):923–930, 1995.
  [7] E. Mark Gold. Complexity of automaton identification from given data. Information and Control, 37:302–320, 1978.
  [8] David E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989.
  [9] Hugues Juillé. Evolution of non-deterministic incremental algorithms as a new approach for search in state spaces. In Larry J. Eshelman, editor, Proceedings of the Sixth International Conference on Genetic Algorithms, San Mateo, California, 1995. Morgan Kaufmann.
  [10] Hugues Juillé and Jordan B. Pollack. SAGE: A sampling-based heuristic for tree search. 1998. Submitted to Machine Learning.
  [11] Donald E. Knuth. Estimating the efficiency of backtracking programs. Mathematics of Computation, 29:121–136, 1975.
  [12] John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, 1992.
  [13] Kevin J. Lang. Random DFA's can be approximately learned from sparse uniform examples. In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory, pages 45–52, 1992.
  [14] Kevin J. Lang and Barak A. Pearlmutter. Abbadingo One: DFA learning competition. http://abbadingo.cs.unm.edu, 1997.
  [15] Kevin J. Lang, Barak A. Pearlmutter, and Rodney Price. Results of the Abbadingo One DFA learning competition and a new evidence-driven state merging algorithm. 1998. Submitted to Machine Learning.
  [16] Jordan B. Pollack. The induction of dynamical recognizers. Machine Learning, 7:227–252, 1991.
  [17] B. A. Trakhtenbrot and Ya. M. Barzdin. Finite Automata: Behavior and Synthesis. North Holland Publishing Company, 1973.
  [18] R. L. Watrous and G. M. Kuhn. Induction of finite state languages using second-order recurrent networks. Neural Computation, 4(3):406–414, 1992.
  [19] Z. Zeng, R. M. Goodman, and P. Smyth. Learning finite state machines with self-clustering recurrent networks. Neural Computation, 5(6):976–990, 1994.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Hugues Juillé (1)
  • Jordan B. Pollack (1)

  1. Computer Science Department, Brandeis University, Waltham, USA
