
An Investigation of Single-Pass ASR System Combination for Spoken Language Understanding

  • Fethi Bougares
  • Mickael Rouvier
  • Nathalie Camelin
  • Paul Deléglise
  • Yannick Estève
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7978)

Abstract

This paper studies the benefits provided by a single-pass Automatic Speech Recognition (ASR) exchange-based combination approach for spoken dialog systems. Three well-known open-source ASR systems are used to experiment with this approach in the framework of Spoken Language Understanding (SLU). On the ASR side, single-pass ASR systems are used with online acoustic model adaptation based on the previous utterances spoken by a speaker. On the SLU side, a competitive CRF-based SLU system is applied to the ASR outputs to extract the semantic concepts. The evaluation is carried out on the French Port-MEDIA test data in terms of both Word Error Rate (WER) and Concept Error Rate (CER). While the best single-pass system used alone shows a CER of 29.8% for a WER of 22.8%, the single-pass ASR exchange-based combination reaches a CER of 27.3% for a WER of 26%. This CER is only slightly higher than the one reached by a five-pass ASR system, which obtained a CER of 26.8% for a WER of 22.8% under better conditions, i.e. better acoustic model adaptation performed on all the speech utterances spoken by a speaker, advanced feature extraction techniques, and search graph rescoring with a higher-order language model.
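For readers unfamiliar with the evaluation metrics quoted above, the sketch below (a simplified illustration in Python, not the scoring tool used by the authors) shows how both WER and CER reduce to a Levenshtein edit-distance computation: over word sequences for WER, and over semantic concept sequences for CER.

    # Minimal sketch: WER and CER as edit-distance-based error rates.
    # Function names and the calling convention are illustrative assumptions,
    # not the paper's actual evaluation code.

    def edit_distance(ref, hyp):
        """Levenshtein distance between two token sequences."""
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        return d[len(ref)][len(hyp)]

    def error_rate(references, hypotheses):
        """Total edit distance divided by the total number of reference tokens."""
        errors = sum(edit_distance(r, h) for r, h in zip(references, hypotheses))
        tokens = sum(len(r) for r in references)
        return errors / tokens

    # Usage: pass word sequences to obtain a WER,
    # and concept sequences to obtain a CER.
    # wer = error_rate(ref_word_seqs, hyp_word_seqs)
    # cer = error_rate(ref_concept_seqs, hyp_concept_seqs)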

Keywords

Automatic speech recognition · Spoken dialog understanding · ASR system combination



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Fethi Bougares¹
  • Mickael Rouvier¹
  • Nathalie Camelin¹
  • Paul Deléglise¹
  • Yannick Estève¹

  1. LIUM - University of Le Mans, Le Mans, France
