Application of l1 Estimation of Gaussian Mixture Model Parameters for Language Identification
- Cite this paper as:
- Doroshin D., Tkachenko M., Lubimov N., Kotov M. (2013) Application of l1 Estimation of Gaussian Mixture Model Parameters for Language Identification. In: Železný M., Habernal I., Ronzhin A. (eds) Speech and Computer. SPECOM 2013. Lecture Notes in Computer Science, vol 8113. Springer, Cham
In this paper we explore the use of l1 optimization for parameter estimation of Gaussian mixture models (GMM) applied to language identification. When training the universal background model (UBM), at each step of the expectation-maximization (EM) algorithm the estimation of the GMM means is stated as an l1 optimization problem, which is solved by iteratively reweighted least squares (IRLS). The corresponding solution for maximum a posteriori (MAP) adaptation is also presented. Results of this UBM-MAP system combined with a support vector machine (SVM) are reported on the LDC and GlobalPhone datasets.
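As context for the IRLS step mentioned in the abstract, here is a minimal sketch of l1 mean estimation via IRLS in the simple unweighted, single-component case (the function name and this simplified setting are our illustration, not the paper's implementation; the full method applies this inside the EM M-step with posterior responsibilities as weights):

```python
import numpy as np

def irls_l1_mean(x, n_iter=100, eps=1e-8):
    """Estimate a mean vector under a per-coordinate l1 criterion via IRLS.

    Each iteration solves a weighted least-squares problem whose weights
    1/|x_i - mu| re-express the l1 objective; the fixed point is the
    coordinate-wise median of the data, which is robust to outliers.
    """
    x = np.asarray(x, dtype=float)
    mu = x.mean(axis=0)  # l2 (ordinary mean) initialization
    for _ in range(n_iter):
        # eps guards against division by zero when a point hits mu exactly
        w = 1.0 / np.maximum(np.abs(x - mu), eps)
        mu = (w * x).sum(axis=0) / w.sum(axis=0)
    return mu
```

Unlike the ordinary (l2) mean, this estimate is pulled only weakly by outlying frames, which is the robustness property the paper exploits for UBM training.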
Keywords: language identification, IRLS, MAP, robust GMM estimation