Distant Co-occurrence Language Model for ASR in Loose Word Order Languages

  • Jerzy Sas
  • Andrzej Zolnierek
Conference paper
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 95)


In the paper the problem of language modeling for automatic speech recognition in loose word order languages is considered. In such languages classical n-gram language models are less effective, because the ordered word sequences encountered in the corpus used to build the language model are less specific than in strict word order languages. Because a word set appearing in a phrase is likely to appear in other permutations, all permutations of word sequences encountered in the corpus should be given additional likelihood in the language model. We propose a method of n-gram language model construction that assigns additional probability to word tuples that are permutations of word sequences found in the training corpus. The backoff bigram language model paradigm is adapted: the modification of the typical construction method consists in increasing the backed-off probability of bigrams that never appeared in the corpus but whose elements appeared in the same phrases, separated by other words. The proposed modification can be applied to any method of language model construction based on ML probability discounting. The performance of various LM creation methods adapted in the proposed way was compared in application to Polish speech recognition.
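The core idea from the abstract — boosting the backed-off probability of unseen bigrams whose words co-occurred, possibly separated, in the same training phrase — can be sketched roughly as follows. This is a minimal illustration, not the authors' exact formulation: the function and parameter names (`boost`, `discount`), the choice of absolute discounting, and the omission of renormalization are all assumptions made for the sake of a runnable example.

```python
from collections import Counter
from itertools import permutations

def build_distant_cooc_bigram_lm(phrases, boost=0.5, discount=0.5):
    """Sketch of a backoff bigram LM that additionally credits 'distant'
    co-occurrences: ordered word pairs seen in the same phrase but never
    as an adjacent bigram. Illustrative only; the boosted probabilities
    are not renormalized here, unlike in a proper backoff scheme."""
    unigrams, bigrams, cooc = Counter(), Counter(), set()
    for phrase in phrases:
        words = phrase.split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
        # record every ordered pair from the phrase, regardless of distance
        cooc.update(permutations(set(words), 2))
    total = sum(unigrams.values())

    def prob(w1, w2):
        if (w1, w2) in bigrams:
            # absolute discounting of the ML estimate (one of many options)
            return (bigrams[(w1, w2)] - discount) / unigrams[w1]
        # mass freed by discounting, redistributed via unigram backoff
        n_seen = sum(1 for b in bigrams if b[0] == w1)
        alpha = discount * n_seen / unigrams[w1] if w1 in unigrams else 1.0
        backed_off = alpha * unigrams[w2] / total
        # the proposed modification: boost unseen bigrams whose words
        # co-occurred (possibly separated by other words) in some phrase
        if (w1, w2) in cooc:
            backed_off *= 1.0 + boost
        return backed_off

    return prob
```

On the toy corpus `["a b c", "d e"]`, the seen bigram `(a, b)` keeps its discounted ML probability, the unseen but co-occurring pair `(a, c)` receives a boosted backoff probability, and the never-co-occurring pair `(a, d)` receives only the plain backoff mass.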


Hidden Markov Model, Speech Recognition, Language Model, Automatic Speech Recognition, Word Error Rate


References

  1. Jelinek, F., Merialdo, B., Roukos, S., Strauss, M.: A dynamic language model for speech recognition. In: Proceedings of the Workshop on Speech and Natural Language, HLT 1991, pp. 293–295. Association for Computational Linguistics (1991)
  2. Piasecki, M., Broda, B.: Correction of medical handwriting OCR based on semantic similarity. In: Yin, H., Tino, P., Corchado, E., Byrne, W., Yao, X. (eds.) IDEAL 2007. LNCS, vol. 4881, pp. 437–446. Springer, Heidelberg (2007)
  3. Devine, E.G., Gaehde, S.A., Curtis, A.C.: Comparative Evaluation of Three Continuous Speech Recognition Software Packages in the Generation of Medical Reports. Journal of the American Medical Informatics Association 7(5), 462–468 (2000)
  4. Chen, S.F., Goodman, J.: An empirical study of smoothing techniques for language modeling. Computer Speech and Language 13, 359–394 (1999)
  5. Ziolko, B., Skurzok, D., Ziolko, M.: Word n-grams for Polish. In: Proc. of 10th IASTED Int. Conf. on Artificial Intelligence and Applications (AIA 2010), pp. 197–201 (2010)
  6. Maucec, M., Rotovnik, T., Zemljak, M.: Modelling Highly Inflected Slovenian Language. International Journal of Speech Technology 6, 254–257 (2003)
  7. Whittaker, E.W.D., Woodland, P.C.: Language modelling for Russian and English using words and classes. Computer Speech and Language 17, 87–104 (2003)
  8. Goodman, J.T.: A Bit of Progress in Language Modeling, Extended Version. Microsoft Research Technical Report MSR-TR-2001-72 (2001)
  9. Katz, S.: Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing ASSP-35(3), 400–401 (1987)
  10. Jurafsky, D., Martin, J.: Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition. Pearson Prentice Hall, New Jersey (2009)
  11. Gale, W.A., Sampson, G.: Good-Turing frequency estimation without tears. Journal of Quantitative Linguistics 2, 217–239 (1995)
  12. Lee, A., Kawahara, T., Shikano, K.: Julius - an Open Source Real-Time Large Vocabulary Recognition Engine. In: Proc. of European Conference on Speech Communication and Technology (EUROSPEECH), pp. 1691–1694 (2001)
  13. Young, S., Evermann, G.: The HTK Book (for HTK Version 3.4). Cambridge University Engineering Department (2009)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Jerzy Sas (1)
  • Andrzej Zolnierek (2)
  1. Institute of Informatics, Wroclaw University of Technology, Wroclaw, Poland
  2. Faculty of Electronics, Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
