A minimum description length approach to grammar inference

  • Peter Grünwald
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1040)

Abstract

We describe a new abstract model for the computational learning of grammars. The model deals with a learning process in which an algorithm is given as input a large set of training sentences generated by some unknown grammar; the algorithm then tries to infer this grammar. Our model is based on the well-known Minimum Description Length Principle. It is quite close to, but more general than, several other existing approaches. We have shown that one of these approaches (based on n-gram statistics) coincides exactly with a restricted version of our own model. We have used a restricted version of the algorithm implied by the model to find classes of related words in natural language texts. It turns out that for this task, which can be seen as a ‘degenerate’ case of grammar learning, our approach gives quite good results. Unlike many other approaches, it also provides a clear ‘stopping criterion’ indicating at what point the learning process should stop.
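
The core of the approach is the two-part MDL trade-off: the total description length of a hypothesis is the number of bits needed to encode the hypothesis itself plus the number of bits needed to encode the training data with the help of that hypothesis, and learning stops as soon as no step lowers this sum. The Python sketch below illustrates this for the word-classification task mentioned above. It is a minimal illustration under assumptions of our own, not the paper's actual algorithm: every word starts in its own class, classes are greedily merged under a class-bigram likelihood (in the spirit of class-based n-gram models), each transition parameter is charged 0.5·log2(n) bits, and merging stops exactly when no merge reduces the total description length.

    import math
    from collections import Counter
    from itertools import combinations

    def description_length(tokens, assign):
        """Two-part code length in bits: L(model) + L(data | model).

        `assign` maps each word to a class id. The data term is the
        negative log-likelihood of the token sequence under a class-bigram
        model, P(class_i | class_{i-1}) * P(word_i | class_i); the model
        term charges 0.5 * log2(n) bits per class-transition parameter.
        Both encoding choices are illustrative assumptions, not the
        paper's exact code.
        """
        n = len(tokens)
        cls = [assign[w] for w in tokens]
        trans = Counter(zip(cls, cls[1:]))      # class-bigram counts
        ctx = Counter(cls[:-1])                 # previous-class totals
        emit = Counter(zip(cls, tokens))        # (class, word) counts
        ctot = Counter(cls)                     # class totals
        data = 0.0
        for (a, b), c in trans.items():
            data -= c * math.log2(c / ctx[a])   # -log2 P(b | a)
        for (k, w), c in emit.items():
            data -= c * math.log2(c / ctot[k])  # -log2 P(w | class k)
        n_classes = len(set(assign.values()))
        return 0.5 * math.log2(n) * n_classes ** 2 + data

    def mdl_word_classes(tokens):
        """Greedily apply the class merge that lowers the total
        description length the most; stop as soon as no merge lowers
        it -- this is the MDL 'stopping criterion'."""
        assign = {w: w for w in set(tokens)}    # start: one class per word
        best = description_length(tokens, assign)
        while True:
            trials = []
            for a, b in combinations(sorted(set(assign.values())), 2):
                t = {w: (a if c == b else c) for w, c in assign.items()}
                trials.append((description_length(tokens, t), t))
            if not trials:                      # only one class left
                return assign, best
            dl, t = min(trials, key=lambda x: x[0])
            if dl >= best:                      # no merge helps: stop here
                return assign, best
            assign, best = t, dl

    toy = ("the cat sat on the mat . the dog sat on the rug . "
           "a cat ran on a mat . a dog ran on a rug .").split()
    assign, bits = mdl_word_classes(toy)
    print(f"final description length: {bits:.1f} bits")
    print(assign)

The per-parameter charge of 0.5·log2(n) bits is the standard asymptotic MDL cost for a real-valued parameter estimated from n observations; any other consistent encoding would serve equally well for the illustration.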

Keywords

Parse Tree · Minimum Description Length · Restricted Version · Word Classification · Grammar Rule

Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Peter Grünwald (1)
  1. CWI, Amsterdam, The Netherlands
