Exploring GMM-derived Features for Unsupervised Adaptation of Deep Neural Network Acoustic Models

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9811)


In this paper we investigate GMM-derived features, recently introduced for the adaptation of context-dependent deep neural network HMM (CD-DNN-HMM) acoustic models. We present an initial attempt to improve the previously proposed adaptation algorithm by applying lattice scores and confidence measures in the traditional maximum a posteriori (MAP) adaptation algorithm. The modified MAP adaptation is performed on the auxiliary GMM model used in the speaker adaptation procedure for the DNN. In addition, we introduce two approaches, data augmentation and data selection, for improving the regularization of MAP adaptation for the DNN. Experimental results on the Wall Street Journal (WSJ0) corpus show that the proposed adaptation technique can provide, on average, up to 9.9% relative word error rate (WER) reduction in an unsupervised adaptation setup, compared to speaker-independent DNN-HMM systems built on conventional features.
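The confidence-weighted MAP adaptation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the standard MAP mean-update formula, with each frame's Gaussian occupation counts scaled by a per-frame confidence score so that likely misrecognized frames contribute less; the function name, array shapes, and the prior weight `tau` are illustrative choices.

```python
import numpy as np

def map_adapt_means(prior_means, posteriors, frames, confidences, tau=5.0):
    """Confidence-weighted MAP adaptation of GMM component means.

    prior_means : (M, D) speaker-independent component means
    posteriors  : (T, M) per-frame occupation probabilities gamma_t(m)
    frames      : (T, D) adaptation feature vectors
    confidences : (T,)   per-frame confidence scores in [0, 1]
    tau         : MAP prior weight (regularization strength)
    """
    # Scale each frame's soft counts by its confidence score.
    weighted = posteriors * confidences[:, None]          # (T, M)
    occ = weighted.sum(axis=0)                            # (M,)  soft counts
    first_order = weighted.T @ frames                     # (M, D) weighted sums
    # Standard MAP interpolation between the prior means and the
    # (confidence-weighted) data estimate: large tau keeps the model
    # close to the speaker-independent prior when adaptation data is scarce.
    return (tau * prior_means + first_order) / (tau + occ)[:, None]
```

With `tau = 0` the update reduces to the (confidence-weighted) ML estimate, while a large `tau` leaves the means near the speaker-independent prior, which is the regularizing behavior the data-augmentation and data-selection approaches aim to improve.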


Keywords: Speaker adaptation · Deep neural networks (DNN) · MAP · CD-DNN-HMM · GMM-derived (GMMD) features · Speaker adaptive training (SAT) · Confidence scores



This work was financially supported in part by the Ministry of Education and Science of the Russian Federation (Contract 14.579.21.0057, ID RFMEFI57914X0057).



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. STC-Innovations Ltd., Saint Petersburg, Russia
  2. ITMO University, Saint Petersburg, Russia
  3. University of Le Mans, Le Mans, France
