IJCAI 1995: Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing, pp. 203–216
A minimum description length approach to grammar inference
Abstract
We describe a new abstract model for the computational learning of grammars. The model concerns a learning process in which an algorithm receives as input a large set of training sentences belonging to some unknown grammar, and then tries to infer that grammar. Our model is based on the well-known Minimum Description Length (MDL) Principle. It is quite close to, but more general than, several existing approaches; we show that one of these approaches (based on n-gram statistics) coincides exactly with a restricted version of our model. We have used a restricted version of the algorithm implied by the model to find classes of related words in natural language texts. For this task, which can be seen as a 'degenerate' case of grammar learning, our approach gives quite good results. Unlike many other approaches, it also provides a clear 'stopping criterion' indicating at what point the learning process should stop.
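The MDL idea underlying the abstract can be sketched concretely for the word-classification case: choose the word-to-class mapping that minimizes the combined length of the model and of the data encoded with it, and stop learning when no change lowers that length. The class-bigram scoring below is an illustrative stand-in of our own (in the spirit of class-based n-gram models), not the paper's actual algorithm; all names and the exact model-cost term are hypothetical.

```python
import math
from collections import Counter

def description_length(tokens, word_class):
    """Two-part MDL score (in bits) for a word-to-class mapping:
    model cost (one class label per vocabulary word) plus data cost
    (-log2 likelihood of the corpus under a class-bigram model with
    class-conditional word emissions).  Illustrative sketch only."""
    classes = set(word_class.values())
    vocab = set(tokens)
    # Model cost: encode one of |C| class labels for each vocabulary word.
    model_bits = len(vocab) * math.log2(len(classes))

    cls_seq = [word_class[w] for w in tokens]
    bigrams = Counter(zip(cls_seq, cls_seq[1:]))
    emits = Counter(zip(cls_seq, tokens))
    cls_count = Counter(cls_seq)

    data_bits = 0.0
    for c1, c2 in zip(cls_seq, cls_seq[1:]):   # class-to-class transitions
        data_bits -= math.log2(bigrams[(c1, c2)] / cls_count[c1])
    for c, w in zip(cls_seq, tokens):          # class-conditional emissions
        data_bits -= math.log2(emits[(c, w)] / cls_count[c])
    return model_bits + data_bits

tokens = "the cat sat the dog sat".split()
separate = {w: w for w in tokens}              # one class per word
merged = {"the": "D", "cat": "N", "dog": "N", "sat": "V"}
dl_separate = description_length(tokens, separate)
dl_merged = description_length(tokens, merged)
# Merging 'cat' and 'dog' into one class shortens the total description;
# in a greedy learner, merging stops once no merge lowers the total length.
```

The two-part score makes the stopping criterion explicit: a merge is accepted only while it reduces model bits plus data bits, so the process halts by itself rather than requiring an externally chosen number of classes.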
Keywords
Parse Tree · Minimum Description Length · Restricted Version · Word Classification · Grammar Rule