Machine Learning, Volume 34, Issue 1–3, pp 71–105

An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery

  • Michael R. Brent

Abstract

This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that our algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances.
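The abstract's core idea, scoring candidate segmentations so that words already in the lexicon are cheap to reuse while positing novel words is expensive, can be sketched with a toy dynamic program. This is a minimal illustration, not the paper's actual model: the unigram cost, the fixed novel-word penalty (`novel_cost`), the per-symbol penalty (`alpha`), and the incremental count updates are all simplifying assumptions introduced here for clarity.

```python
import math

def best_segmentation(utterance, lexicon, total, alpha=1.0, novel_cost=20.0):
    """Viterbi-style DP: find the lowest-cost segmentation of `utterance`.
    Known words cost -log(relative frequency); unseen strings pay a fixed
    per-word penalty plus a per-symbol penalty (an illustrative stand-in
    for a real prior over novel words)."""
    n = len(utterance)
    best = [0.0] + [math.inf] * n      # best[i] = min cost of utterance[:i]
    back = [0] * (n + 1)               # back[i] = start of the last word
    for i in range(1, n + 1):
        for j in range(i):
            w = utterance[j:i]
            if w in lexicon:
                cost = -math.log(lexicon[w] / total)
            else:
                cost = novel_cost + alpha * len(w)
            if best[j] + cost < best[i]:
                best[i] = best[j] + cost
                back[i] = j
    # Recover the word sequence from the back-pointers.
    words, i = [], n
    while i > 0:
        words.append(utterance[back[i]:i])
        i = back[i]
    return list(reversed(words))

def segment_corpus(utterances):
    """Process utterances incrementally, updating word counts after each
    segmentation so that recurring substrings become cheap reusable words."""
    lexicon, total, out = {}, 0, []
    for u in utterances:
        words = best_segmentation(u, lexicon, max(total, 1))
        for w in words:
            lexicon[w] = lexicon.get(w, 0) + 1
            total += 1
        out.append(words)
    return out
```

With short utterances first, as in the child-directed speech the paper describes, isolated words seed the lexicon and later concatenations are split correctly: `segment_corpus(["the", "dog", "thedog"])` segments the third utterance as `["the", "dog"]`.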

Bayesian grammar induction · probability models · minimum description length (MDL) · unsupervised learning · language acquisition · segmentation

References

  1. Aslin, R.N., Woodward, J.Z., LaMendola, N.P., & Bever, T.G. (1996). Models of word segmentation in fluent maternal speech to infants. In J.L. Morgan & K. Demuth (Eds.), Signal to syntax: Bootstrapping from speech to grammar in early acquisition (pp. 117–134). Mahwah, NJ: Lawrence Erlbaum Associates.
  2. Baayen, H. (1991). A stochastic process for word frequency distributions. Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, CA.
  3. Bernstein-Ratner, N. (1987). The phonology of parent-child speech. In K. Nelson & A. van Kleeck (Eds.), Children's language (Vol. 6). Hillsdale, NJ: Erlbaum.
  4. Brent, M.R. (1996). Advances in the computational study of language acquisition. Cognition, 61, 1–38.
  5. Brent, M.R. (1997). Toward a unified model of lexical acquisition and lexical access. Journal of Psycholinguistic Research, 26, 363–375.
  6. Brent, M.R., & Cartwright, T.A. (1996). Distributional regularity and phonotactic constraints are useful for segmentation. Cognition, 61, 93–125.
  7. Cartwright, T.A., & Brent, M.R. (1994). Segmenting speech without a lexicon: Evidence for a bootstrapping model of lexical acquisition. Proceedings of the 16th Annual Meeting of the Cognitive Science Society. Hillsdale, NJ: Erlbaum.
  8. Cartwright, T.A., & Brent, M.R. (1997). Syntactic categorization in early language acquisition: Formalizing the role of distributional analysis. Cognition, 63, 121–170.
  9. Christiansen, M.H., Allen, J., & Seidenberg, M. (1998). Learning to segment speech using multiple cues: A connectionist model. Language and Cognitive Processes, 13, 221–268.
  10. Church, K.W., & Gale, W.A. (1991). A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5, 19–54.
  11. Dahan, D., & Brent, M.R. (1999). On the discovery of novel word-like units from utterances: An artificial-language study with implications for native-language acquisition. Journal of Experimental Psychology: General (in press).
  12. Elman, J.L. (1990). Finding structure in time. Cognitive Science, 14, 179–211.
  13. Gale, W.A., & Church, K.W. (1994). What is wrong with adding one? In N. Oostdijk & P. de Haan (Eds.), Corpus-based research into language (pp. 189–198). Amsterdam: Rodopi.
  14. Hankerson, D., Harris, G.A., & Johnson, P.D., Jr. (1998). Introduction to information theory and data compression. New York: CRC Press.
  15. Harris, Z.S. (1954). Distributional structure. Word, 10, 146–162.
  16. Jelinek, F. (1997). Statistical methods for speech recognition. Cambridge: MIT Press.
  17. Kraft, L.G. (1949). A device for quantizing, grouping and coding amplitude modulated pulses. Unpublished Master's thesis, Massachusetts Institute of Technology.
  18. Li, M., & Vitányi, P.M.B. (1993). An introduction to Kolmogorov complexity and its applications. New York: Springer-Verlag.
  19. MacWhinney, B., & Snow, C. (1985). The child language data exchange system. Journal of Child Language, 12, 271–296.
  20. Mandelbrot, B. (1953). An informational theory of the statistical structure of language. In W. Jackson (Ed.), Communication theory. London: Butterworths.
  21. de Marcken, C. (1995). The unsupervised acquisition of a lexicon from continuous speech. AI Memo No. 1558, Massachusetts Institute of Technology.
  22. Miller, G.A. (1957). Some effects of intermittent silence. The American Journal of Psychology, 70, 311–314.
  23. Nevill-Manning, C.G., & Witten, I.H. (1997). Compression and explanation using hierarchical grammars. Computer Journal, 40, 103–116.
  24. Olivier, D.C. (1968). Stochastic grammars and language acquisition mechanisms. Unpublished doctoral dissertation, Harvard University.
  25. Quinlan, J.R., & Rivest, R.L. (1989). Inferring decision trees using the minimum description length principle. Information and Computation, 80, 227–248.
  26. Redlich, A.N. (1993). Redundancy reduction as a strategy for unsupervised learning. Neural Computation, 5, 289–304.
  27. Rissanen, J. (1989). Stochastic complexity in statistical inquiry. Singapore: World Scientific Publishing.
  28. Saffran, J.R., Newport, E.L., & Aslin, R.N. (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35, 606–621.
  29. Stolcke, A. (1994). Bayesian learning of probabilistic language models. Unpublished doctoral dissertation, University of California at Berkeley.
  30. Wallace, C.S., & Boulton, D.M. (1968). An information measure for classification. Computer Journal, 11, 185–194.
  31. Witten, I.H., & Bell, T.C. (1991). The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory, 37, 1085–1094.
  32. Wolff, J.G. (1982). Language acquisition, data compression, and generalization. Language and Communication, 2, 57–89.
  33. Zipf, G.K. (1935). The psycho-biology of language. Boston: Houghton Mifflin.

Copyright information

© Kluwer Academic Publishers 1999

Authors and Affiliations

  • Michael R. Brent
  1. Department of Cognitive Science, Johns Hopkins University, Baltimore
