
Robust Heteroscedastic Linear Discriminant Analysis and LCRC Posterior Features in Meeting Data Recognition

  • Martin Karafiát
  • František Grézl
  • Petr Schwarz
  • Lukáš Burget
  • Jan Černocký
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4299)

Abstract

This paper investigates feature extraction for meeting recognition. Three robust variants of the popular HLDA transform are investigated, and the influence of adding posterior features to the PLP feature stream is studied. Experimental results are obtained on two data sets: CTS (conversational telephone speech) and meeting data from the NIST RT’05 evaluations. Silence-reduced HLDA and LCRC phoneme-state posterior features are found to be suitable for both recognition tasks.
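For orientation, HLDA generalizes classical LDA by relaxing the assumption that all classes share one covariance matrix. A minimal sketch of the plain LDA projection that HLDA extends is shown below; the data, dimensions, and function name are illustrative only and are not the paper's actual feature setup.

```python
import numpy as np

def lda_projection(X, y, p):
    """Project d-dim features X (N x d) with labels y onto p discriminant dims."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb v = lambda Sw v and keep the
    # p directions with the largest discriminative power.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-vals.real)
    return vecs[:, order[:p]].real  # d x p projection matrix

# Toy example: two 3-dim Gaussian classes separated along the first axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)),
               rng.normal(0, 1, (50, 3)) + np.array([3.0, 0.0, 0.0])])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, 1)
print(W.shape)  # (3, 1)
```

HLDA replaces the single shared within-class scatter with per-class covariances and estimates the transform by maximum likelihood, which has no closed-form eigen-solution; the sketch above only illustrates the baseline criterion.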

Keywords

Linear Discriminant Analysis · Meeting Data · Phoneme Recognition · Large Vocabulary Continuous Speech Recognition · Telephone Speech



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Martin Karafiát (1)
  • František Grézl (1)
  • Petr Schwarz (1)
  • Lukáš Burget (1)
  • Jan Černocký (1)
  1. Speech@FIT, Faculty of Information Technology, Brno University of Technology
