Application of l1 Estimation of Gaussian Mixture Model Parameters for Language Identification

  • Danila Doroshin
  • Maxim Tkachenko
  • Nikolay Lubimov
  • Mikhail Kotov
Conference paper

DOI: 10.1007/978-3-319-01931-4_6

Part of the Lecture Notes in Computer Science book series (LNCS, volume 8113)
Cite this paper as:
Doroshin D., Tkachenko M., Lubimov N., Kotov M. (2013) Application of l1 Estimation of Gaussian Mixture Model Parameters for Language Identification. In: Železný M., Habernal I., Ronzhin A. (eds) Speech and Computer. SPECOM 2013. Lecture Notes in Computer Science, vol 8113. Springer, Cham

Abstract

In this paper we explore the use of l1 optimization for parameter estimation of Gaussian mixture models (GMM) applied to language identification. To train the Universal Background Model (UBM), at each step of the Expectation-Maximization (EM) algorithm the estimation of the GMM means is stated as an l1 optimization problem, which is solved by Iteratively Reweighted Least Squares (IRLS). We also present the corresponding solution for Maximum a posteriori (MAP) adaptation. The results of the above UBM-MAP system combined with a Support Vector Machine (SVM) are reported on the LDC and GlobalPhone datasets.
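The core idea of IRLS for l1 estimation can be illustrated on the simplest case, a one-dimensional location estimate: the l1-optimal center (the minimizer of the summed absolute residuals) is approached by repeatedly solving a weighted least-squares problem whose weights are the inverse residual magnitudes. The following sketch is an illustrative assumption, not the paper's implementation; the paper applies the same principle to GMM mean estimation inside EM.

```python
import numpy as np

def irls_l1_mean(x, n_iter=50, eps=1e-8):
    """Approximate the l1-optimal location argmin_mu sum_i |x_i - mu|
    by iteratively reweighted least squares (IRLS)."""
    mu = x.mean()  # initialize with the l2 (least-squares) mean
    for _ in range(n_iter):
        # Inverse-residual weights; eps guards against division by zero.
        w = 1.0 / np.maximum(np.abs(x - mu), eps)
        # Weighted least-squares update: a weighted mean of the data.
        mu = np.sum(w * x) / np.sum(w)
    return mu

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier
print(irls_l1_mean(x))  # close to the median 3.0, robust to the outlier
```

Unlike the l2 mean (here pulled up to 22 by the outlier), the IRLS iterate converges toward the sample median, which is what makes the l1 criterion attractive for robust GMM mean estimation.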

Keywords

Language identification · IRLS · MAP · Robust GMM estimation

Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Danila Doroshin (1)
  • Maxim Tkachenko (1)
  • Nikolay Lubimov (1)
  • Mikhail Kotov (1)

  1. STEL Computer Systems Ltd., Moscow, Russia