Speaker Adaptation on Myanmar Spontaneous Speech Recognition

  • Hay Mar Soe Naing
  • Win Pa Pa
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 781)


This paper presents work on automatic speech recognition (ASR) of Myanmar spontaneous speech. The recognizer is based on Gaussian Mixture Models and Hidden Markov Models (GMM-HMM). A baseline ASR system is developed from a 20.5-hour spontaneous speech corpus and refined with several speaker adaptation methods. Five kinds of adapted acoustic models are explored: Maximum A Posteriori (MAP), Maximum Mutual Information (MMI), Minimum Phone Error (MPE), Maximum Mutual Information in both feature space and model space (fMMI), and Subspace GMM (SGMM). These adapted models are evaluated on a spontaneous evaluation set of 100 utterances from 61 speakers, totaling 23 min 19 s. Experiments on this corpus show that speaker-adaptive training yields significant improvements, and that the SGMM-based acoustic model performs best among the adaptive models, reducing WER by 3.16% compared with the baseline GMM model. Deep Neural Network (DNN) training on the same corpus, evaluated with the same evaluation set, is also investigated; the DNN system reaches a WER of 31.5%.
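The word error rate (WER) figures quoted above are the standard ASR evaluation metric: the minimum number of word substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch of that computation is shown below; the function name `wer` is illustrative, not the authors' scoring tool (papers of this kind typically score with a toolkit such as Kaldi's built-in scripts).

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: Levenshtein distance over words, normalized
    by the number of reference words (illustrative sketch)."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,              # substitution (or match)
                          d[i - 1][j] + 1,  # deletion
                          d[i][j - 1] + 1)  # insertion
    return d[len(ref)][len(hyp)] / len(ref)

# One word dropped out of six reference words:
print(round(wer("the cat sat on the mat", "the cat sat on mat") * 100, 2))  # → 16.67
```

A reported "3.16% WER reduction" in this setting is an absolute difference between two such scores computed over the whole 100-utterance evaluation set.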


Spontaneous speech · ASR · Myanmar · GMM · Adaptive training · DNN



We would like to thank all members of the Spoken Language Communication Laboratory at the National Institute of Information and Communications Technology (NICT), Kyoto, Japan, for the use of the Myanmar spontaneous speech corpus presented in this paper.



Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Natural Language Processing Laboratory, UCSY, Yangon, Myanmar
