Meeting Segmentation Using Two-Layer Cascaded Subband Filters

  • Manuel Giuliani
  • Tin Lay Nwe
  • Haizhou Li
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4274)

Abstract

The extraction of information from recorded meetings is an important yet challenging task. The problem lies in the inability of speech recognition systems to be applied directly to meeting speech data, mainly because meeting participants speak concurrently and head-mounted microphones record more than just their wearers’ utterances – crosstalk from neighbouring speakers is inevitably recorded as well. As a result, a degree of preprocessing of these recordings is needed. The current work presents an approach to segmenting meetings into four audio classes: single speaker, crosstalk, single speaker plus crosstalk, and silence. For this purpose, we propose Two-Layer Cascaded Subband Filters, which are spread according to the pitch and formant frequency scales. These filters are able to detect the presence or absence of pitch and formants in an audio signal. In addition, the filters can determine how many pitches and formants are present in an audio signal based on the output subband energies. Experiments conducted on the ICSI meeting corpus show that, although the overall recognition rate reaches only 57%, the rates for the crosstalk and silence classes are as high as 80%. This indicates the positive effect and potential of this subband feature in meeting segmentation tasks.
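The abstract describes the subband-energy feature only at a high level; the exact filter design is not given here. The following Python sketch illustrates the general idea of extracting per-band energies from two filter layers, one spread over a typical pitch range and one over a typical formant range. The sampling rate, band edges, filter order, and synthetic test frame are illustrative assumptions, not the authors' actual Two-Layer Cascaded Subband Filter design.

```python
# Sketch of subband-energy features from two assumed filter layers
# (pitch-range bands and formant-range bands). All parameters are
# illustrative, not taken from the paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000  # assumed sampling rate (Hz)

def subband_energies(x, band_edges, fs=FS, order=4):
    """Energy of x in each band defined by consecutive edge pairs."""
    energies = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfiltfilt(sos, x)
        energies.append(float(np.sum(y ** 2)))
    return np.array(energies)

# Layer 1: bands covering an assumed pitch range (60-400 Hz).
pitch_edges = np.linspace(60, 400, 9)
# Layer 2: bands covering an assumed formant range (300-3400 Hz).
formant_edges = np.linspace(300, 3400, 9)

# Synthetic 32 ms frame: a 120 Hz tone plus noise, standing in for meeting audio.
t = np.arange(0, 0.032, 1 / FS)
frame = np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.randn(t.size)

# Concatenated subband energies form one feature vector per frame;
# energy concentrated in few vs. many bands hints at one vs. several voices.
feature = np.concatenate([subband_energies(frame, pitch_edges),
                          subband_energies(frame, formant_edges)])
print(feature.shape)  # (16,)
```

In the paper's setting, such frame-level features would feed a classifier that assigns each segment to one of the four audio classes; the classifier itself is outside the scope of this abstract.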



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Manuel Giuliani (1)
  • Tin Lay Nwe (1)
  • Haizhou Li (1)

  1. Institute for Infocomm Research, Republic of Singapore
