Integrating High and Low Smoothed LMs in a CSR System

  • Amparo Varona
  • Ines Torres
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2905)


In Continuous Speech Recognition (CSR) systems, acoustic and Language Models (LM) must be integrated. It is well known that several heuristic factors must be tuned to obtain optimum CSR performance; the most important of these, owing to its strong effect on final recognition results, is the exponential scaling factor applied to the LM probabilities. LM probabilities are obtained by applying a smoothing technique, and the scaling factor implies a redistribution of the smoothed LM probabilities, i.e., a new smoothing is obtained. In this work, the relationship between the amount of smoothing of the LM and the new smoothing introduced by the scaling factor is studied. High- and low-smoothed LMs, built with well-known discounting techniques, were integrated into the CSR system. The experimental evaluation was carried out on two Spanish speech application tasks with very different levels of difficulty. The strong relationship observed between the two redistributions of the LM probabilities was independent of the task. When an adequate value of the scaling factor was applied, the optimum CSR performances obtained were not very different, in spite of the large differences between perplexity values.
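The redistribution effect described in the abstract can be illustrated with a small sketch. Raising smoothed LM probabilities to an exponential scaling factor and renormalising changes how "smooth" the distribution is: a factor above 1 sharpens it (concentrating mass on likely events, as when compensating for over-smoothing), while a factor below 1 flattens it. This is a minimal illustration with toy probability values, not the authors' experimental setup:

```python
import math

def rescale(probs, alpha):
    """Raise each probability to the power alpha and renormalise.

    Mimics the effect of the LM exponential scaling factor:
    alpha > 1 sharpens the distribution (less smooth),
    alpha < 1 flattens it (more smooth).
    """
    scaled = [p ** alpha for p in probs]
    total = sum(scaled)
    return [s / total for s in scaled]

def entropy(probs):
    """Shannon entropy in bits; higher entropy = smoother distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy smoothed unigram distribution (illustrative values only).
p = [0.5, 0.3, 0.15, 0.05]

print(entropy(p))                # entropy of the original distribution
print(entropy(rescale(p, 4.0)))  # alpha > 1: lower entropy (sharper)
print(entropy(rescale(p, 0.5)))  # alpha < 1: higher entropy (smoother)
```

The entropy comparison makes the "new smoothing" concrete: the scaling factor moves the effective distribution along the same sharp-to-smooth axis that discounting techniques control at training time.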


Keywords: Hidden Markov Model · Language Model · Acoustic Model · Smoothing Technique · Word Error Rate
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Amparo Varona (1)
  • Ines Torres (1)
  1. Departamento de Electricidad y Electrónica, Facultad de Ciencias, UPV/EHU, Bilbao, Spain
