Performance and Improvements of a Language Model Based on Stochastic Context-Free Grammars

  • José García-Hernandez
  • Joan Andreu Sánchez
  • José Miguel Benedí
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2652)

Abstract

This paper describes a hybrid language model that combines a word-based n-gram, which captures local relations between words, with a category-based stochastic context-free grammar (SCFG) together with a word-to-category distribution, which models long-term relations between categories. Experiments on the UPenn Treebank corpus are reported, evaluated in terms of test-set perplexity and word error rate in a speech recognition experiment.
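The combination described in the abstract can be illustrated with a small sketch. The paper's exact combination scheme is not given here, so this assumes a simple linear interpolation of the two component models; `p_ngram`, `p_scfg`, the toy probability tables, and the weight `alpha` are all hypothetical placeholders, not the authors' trained models.

```python
import math

def p_ngram(word, history):
    # Placeholder word-based bigram probability (assumed toy values;
    # in the paper this model captures local word relations).
    table = {("the", "cat"): 0.2, ("cat", "sat"): 0.3}
    return table.get((history, word), 0.05)

def p_scfg(word, history):
    # Placeholder for the category-based SCFG contribution (in the paper
    # this comes from an SCFG over categories combined with a
    # word-given-category distribution, modeling long-term relations).
    table = {("the", "cat"): 0.15, ("cat", "sat"): 0.25}
    return table.get((history, word), 0.04)

def hybrid_prob(word, history, alpha=0.7):
    """Assumed linear interpolation of the two component models."""
    return alpha * p_ngram(word, history) + (1 - alpha) * p_scfg(word, history)

def perplexity(sentence, alpha=0.7):
    """Test-set perplexity: inverse geometric mean of word probabilities."""
    log_prob = 0.0
    history = "<s>"  # sentence-start marker
    for word in sentence:
        log_prob += math.log(hybrid_prob(word, history, alpha))
        history = word
    return math.exp(-log_prob / len(sentence))
```

Lower perplexity on held-out text indicates a better model fit; the paper reports this measure alongside word error rate.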



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • José García-Hernandez¹
  • Joan Andreu Sánchez¹
  • José Miguel Benedí¹

  1. Depto. Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Valencia, Spain