A Decision Tree-Based Method for Speech Processing: Question Sentence Detection

  • Vũ Minh Quang
  • Eric Castelli
  • Phạm Ngọc Yên
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4223)


Retrieving the pertinent parts of a meeting or conversation recording can help with automatic summarization or indexing of the document. In this paper, we deal with an original task, almost never presented in the literature, which consists in automatically extracting question utterances from a recording. As a first step, we developed and evaluated a question extraction system that uses only acoustic parameters and does not need any textual information from the output of a speech-to-text system (called an ASR system, for Automatic Speech Recognition, in the speech processing domain). The parameters are extracted from the intonation curve of the speech utterance, and the classifier is a decision tree. Our first experiments on French meeting recordings led to a classification rate of approximately 75%. An experiment to find the best set of acoustic parameters for this task is also presented in this paper. Finally, data analysis and experiments on another French dialog database show the need for other cues, such as lexical information from ASR output, to improve question detection performance on spontaneous speech.
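The core idea can be sketched as follows. This is a minimal illustration, not the authors' system: a one-level decision "stump" over a single hypothetical prosodic feature, the slope of the F0 (pitch) contour over the final part of an utterance, since rising terminal pitch often signals a question. The feature values and the stump learner are illustrative assumptions; the paper uses a full decision tree over several parameters extracted from the intonation curve.

```python
# Illustrative sketch only (not the authors' implementation): learn a single
# threshold on the final F0 slope that best separates questions from
# non-questions, then classify new utterances with it.

def train_stump(samples):
    """Find the threshold on final-F0 slope (Hz/s) that maximizes training
    accuracy. samples: list of (final_f0_slope, is_question) pairs.
    Slopes strictly above the returned threshold are classified as questions."""
    best_thr, best_correct = 0.0, -1
    for thr in sorted(slope for slope, _ in samples):
        correct = sum((slope > thr) == is_q for slope, is_q in samples)
        if correct > best_correct:
            best_thr, best_correct = thr, correct
    return best_thr

def classify(slope, threshold):
    """True = question, False = non-question."""
    return slope > threshold

# Hypothetical training data: rising contours labeled as questions.
train = [(45.0, True), (60.0, True), (-30.0, False), (-20.0, False)]
thr = train_stump(train)
print(classify(50.0, thr))   # rising final contour -> classified as question
print(classify(-25.0, thr))  # falling final contour -> classified as statement
```

In the paper itself, such splits are learned jointly over a whole set of acoustic parameters by a decision-tree induction algorithm rather than hand-picked one feature at a time.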


Keywords: Automatic Speech Recognition, Acoustic Parameter, Spontaneous Speech, Speech Corpus, Pitch Period





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Vũ Minh Quang (1)
  • Eric Castelli (1)
  • Phạm Ngọc Yên (1)
  1. International Research Center MICA, IP Hanoi – CNRS/UMI-2954, INP Grenoble, Hanoi, Viet Nam
