The AMI Meeting Transcription System: Progress and Performance

  • Thomas Hain
  • Lukas Burget
  • John Dines
  • Giulia Garau
  • Martin Karafiat
  • Mike Lincoln
  • Jithendra Vepa
  • Vincent Wan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4299)

Abstract

We present the AMI 2006 system for the transcription of speech in meetings. The system was developed jointly by multiple sites on the basis of the 2005 system built for participation in the NIST RT’05 evaluations. The paper describes the major developments: improvements in automatic segmentation, cross-domain model adaptation, the inclusion of MLP-based features, improvements in decoding, language modelling and vocal tract length normalisation, the use of a new decoder, and a new system architecture. This is followed by a comprehensive description of the final system and its performance in the NIST RT’06s evaluations. Compared with the previous year, word error rates on the individual headset microphone task were reduced by 20% relative.
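
For context, a 20% relative reduction means the new word error rate is 20% lower as a fraction of the previous rate, not 20 absolute percentage points. A minimal sketch of the calculation in Python, using hypothetical WER figures rather than the paper's actual numbers:

    def relative_wer_reduction(old_wer: float, new_wer: float) -> float:
        """Relative WER reduction, expressed as a percentage of the old WER."""
        return 100.0 * (old_wer - new_wer) / old_wer

    # Hypothetical illustration only: dropping from 30.0% to 24.0% WER is a
    # 6.0-point absolute reduction, but a 20% relative reduction.
    print(relative_wer_reduction(30.0, 24.0))  # -> 20.0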

Keywords

Language Model · Manual Segmentation · Acoustic Model · Frame Error Rate · Word Error Rate

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Thomas Hain (1)
  • Lukas Burget (2)
  • John Dines (3)
  • Giulia Garau (4)
  • Martin Karafiat (2)
  • Mike Lincoln (4)
  • Jithendra Vepa (3)
  • Vincent Wan (1)

  1. Department of Computer Science, University of Sheffield, Sheffield, UK
  2. Faculty of Information Engineering, Brno University of Technology, Brno, Czech Republic
  3. IDIAP Research Institute, Martigny, Switzerland
  4. Centre for Speech Technology Research, University of Edinburgh, Edinburgh, UK
