Language Resources and Evaluation

Volume 44, Issue 3, pp. 205–219

WOZ acoustic data collection for interactive TV

  • Alessio Brutti, Fondazione Bruno Kessler (FBK)–irst
  • Luca Cristoforetti (corresponding author), Fondazione Bruno Kessler (FBK)–irst
  • Walter Kellermann, Multimedia Communications and Signal Processing, University of Erlangen-Nuremberg (FAU)
  • Lutz Marquardt, Multimedia Communications and Signal Processing, University of Erlangen-Nuremberg (FAU)
  • Maurizio Omologo, Fondazione Bruno Kessler (FBK)–irst


This paper describes a multichannel acoustic data collection recorded under the European DICIT project during Wizard of Oz (WOZ) experiments carried out at the FAU and FBK-irst laboratories. The application of interest in DICIT is a distant-talking interface for controlling interactive TV in a typical living room with many interfering devices. The objective of the experiments was to collect a database supporting the efficient development and tuning of acoustic processing algorithms for signal enhancement. In DICIT, techniques for sound source localization, multichannel acoustic echo cancellation, blind source separation, speech activity detection, speaker identification and verification, as well as beamforming, are combined to reduce as far as possible the speech impairments typical of distant-talking interfaces. The collected database made it possible to simulate a realistic scenario at a preliminary stage and to tailor the algorithms involved to the observed user behaviors. To match the project requirements, the WOZ experiments were recorded in three languages: English, German and Italian. Besides the user inputs, the database also contains non-speech acoustic events, room impulse response measurements and video data, the latter used to compute the three-dimensional position of each subject. Sessions were manually transcribed and segmented at the word level, also introducing specific labels for acoustic events.


Keywords: Multimodal · Corpus annotation · Audio