Speech/Music Discrimination via Energy Density Analysis

  • Stanisław Kacprzak
  • Mariusz Ziółko
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7978)

Abstract

In this paper we propose a new feature, called Minimum Energy Density (MED), for discriminating between speech and music in audio signals. Our method is based on analyzing the local energy of 1 s or 2.5 s audio segments. An elementary probabilistic analysis of the power distribution provides an effective tool to support the decision-making system. We compare our feature with the Percentage of Low Energy Frames (LEF) and the Modified Low Energy Ratio (MLER), and examine their efficiency on two separate speech/music corpora.
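The abstract does not give MED's exact formula, but the idea of a minimum-energy-density-style feature built from local frame energies can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the frame length, hop size, and min/mean normalization are all assumptions made here for the example. The intuition is that speech contains pauses, so its minimum local energy (relative to the segment's average energy) tends toward zero, while continuous music stays higher.

```python
import numpy as np

def frame_energies(signal, frame_len=512, hop=256):
    """Short-time energy of overlapping frames of the signal."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])

def min_energy_density(signal, frame_len=512, hop=256):
    """Hypothetical MED-like feature (assumed definition, for
    illustration only): minimum frame energy normalized by the
    mean frame energy of the segment. Values near 0 suggest
    speech (pauses present); values near 1 suggest music."""
    e = frame_energies(signal, frame_len, hop)
    return float(e.min() / (e.mean() + 1e-12))

# Toy comparison: a continuous tone vs. the same tone with a pause.
fs = 16000
music_like = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
speech_like = music_like.copy()
speech_like[4000:6000] = 0.0  # insert a silent gap, as in a speech pause
```

With this assumed definition, `min_energy_density(speech_like)` is driven to zero by the silent gap, while the continuous tone stays close to one, which is the kind of separation a threshold classifier could exploit.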

Keywords

speech/music discrimination · sound classification · audio content analysis



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Stanisław Kacprzak (1)
  • Mariusz Ziółko (1)

  1. Department of Electronics, AGH University of Science and Technology, Kraków, Poland
