Multimedia Corpus of In-Car Speech Communication

  • Nobuo Kawaguchi
  • Kazuya Takeda
  • Fumitada Itakura


An ongoing project to construct a multimedia corpus of dialogues recorded under driving conditions is reported. More than 500 subjects have been enrolled in the corpus development, and more than 2 gigabytes of signals have been collected during approximately 60 minutes of driving per subject. Twelve microphones and three video cameras are installed in a car to capture audio and video data. In addition, five car-control signals and the location of the car, provided by the Global Positioning System (GPS), are recorded. All signals are recorded simultaneously and directly onto the hard disks of PCs onboard a specially designed data collection vehicle (DCV). The in-car dialogues are initiated by a human operator, an automatic speech recognition (ASR) system, and a Wizard of Oz (WOZ) system so as to elicit as many speech disfluencies as possible.
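As a rough illustration of the recording setup just described, the sketch below defines a channel manifest for one drive session. The channel counts (twelve microphones, three cameras, five car-control signals plus GPS) come from the text; the channel names, sample rates, and all identifiers are hypothetical assumptions for illustration, not the project's actual data format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Channel:
    name: str        # hypothetical channel identifier
    kind: str        # "audio", "video", or "control"
    rate_hz: float   # sampling/logging rate (assumed values)

@dataclass
class SessionManifest:
    subject_id: str
    channels: List[Channel] = field(default_factory=list)

def build_manifest(subject_id: str) -> SessionManifest:
    """Enumerate the synchronously recorded channels of one DCV session."""
    m = SessionManifest(subject_id)
    # Twelve microphones (16 kHz is an assumption).
    m.channels += [Channel(f"mic_{i:02d}", "audio", 16_000.0) for i in range(12)]
    # Three video cameras (29.97 fps is an assumption).
    m.channels += [Channel(f"cam_{i}", "video", 29.97) for i in range(3)]
    # Five car-control signals plus GPS location (10 Hz is an assumption).
    for name in ("speed", "engine_rpm", "brake", "accelerator", "steering", "gps"):
        m.channels.append(Channel(name, "control", 10.0))
    return m

if __name__ == "__main__":
    manifest = build_manifest("subject_001")
    print(f"{len(manifest.channels)} channels for {manifest.subject_id}")
```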

In addition to the details of data collection, this paper presents preliminary results on intermedia signal conversion as an example of corpus-based in-car speech signal processing research.
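The conversion method itself is detailed in the body of the paper. As a generic, hedged sketch of what intermedia (channel-to-channel) conversion can look like, the snippet below estimates a least-squares FIR filter that maps one synchronously recorded microphone channel onto another. This is a common baseline technique, not necessarily the authors' method, and all names and signals are illustrative.

```python
import numpy as np

def estimate_fir(x: np.ndarray, y: np.ndarray, taps: int = 64) -> np.ndarray:
    """Least-squares FIR filter h so that convolving x with h approximates y.

    x, y are two synchronously recorded channels (e.g., a distant and a
    close-talking microphone). Generic baseline, not the paper's exact method.
    """
    n = len(x) - taps + 1
    # Column i holds x delayed by i samples, so X @ h realizes a convolution.
    X = np.stack([x[taps - 1 - i: taps - 1 - i + n] for i in range(taps)], axis=1)
    h, *_ = np.linalg.lstsq(X, y[taps - 1: taps - 1 + n], rcond=None)
    return h

# Usage on synthetic stand-in channels (illustrative only):
rng = np.random.default_rng(0)
x = rng.standard_normal(16_000)                              # "source" channel
y = np.convolve(x, [0.8, 0.2, -0.1], mode="full")[:len(x)]   # "target" channel
h = estimate_fir(x, y, taps=8)
y_hat = np.convolve(x, h, mode="full")[:len(x)]              # converted signal
```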


Keywords: Global Positioning System · Engine Speed · Automatic Speech Recognition · Mean Opinion Score · Speech Corpus





Copyright information

© Springer Science+Business Media New York 2004

Authors and Affiliations

  • Nobuo Kawaguchi¹
  • Kazuya Takeda¹
  • Fumitada Itakura¹

  1. Center for Integrated Acoustic Information Research, Nagoya University, Nagoya, Japan
