Information Rate for Fast Time-Domain Instrument Classification

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9617)

Abstract

In this paper, we propose a novel feature set for instrument classification based on the information rate of the signal in the time domain. The features are extracted by computing the Shannon entropy over a sliding short-time energy frame and binning the resulting statistics into a single feature vector. Experimental results are presented, including a comparison with frequency-domain feature sets. The proposed entropy features are shown to be faster to compute than popular frequency-domain methods while maintaining comparable accuracy on an instrument classification task.
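The abstract does not spell out the extraction procedure, so the following is a minimal sketch of one plausible reading in Python/NumPy: histogram-bin the short-time energy within each sliding frame, take the Shannon entropy of that distribution, and summarize the per-frame entropies with simple statistics. The function name entropy_features and the frame length, hop size, bin count, and choice of summary statistics are all illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def entropy_features(signal, frame_len=1024, hop=512, n_bins=32):
    """Sketch of a time-domain "information rate" feature vector.

    One plausible reading of the abstract: per frame, histogram-bin
    the short-time energy, estimate the Shannon entropy of that
    distribution, then summarize the entropy sequence with simple
    statistics. All parameter values here are illustrative, not the
    authors' settings.
    """
    x = np.asarray(signal, dtype=np.float64)
    entropies = []
    for start in range(0, len(x) - frame_len + 1, hop):
        # Short-time energy of the current frame (stays in the time domain).
        energy = x[start:start + frame_len] ** 2
        # Bin the per-sample energies and normalize to a probability mass.
        counts, _ = np.histogram(energy, bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]
        # Shannon entropy (in bits) of the framed energy distribution.
        entropies.append(-np.sum(p * np.log2(p)))
    e = np.asarray(entropies)
    # Collect summary statistics of the entropy sequence as the feature vector.
    return np.array([e.mean(), e.std(), e.min(), e.max(), np.median(e)])

# Example: 1 s of a 440 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
print(entropy_features(np.sin(2 * np.pi * 440 * t)))
```

Because everything stays in the time domain, each frame costs work linear in its length with no transform step, which is consistent with the speed advantage the abstract claims over frequency-domain features.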

Keywords

Audio classification · Audio features · Audio signal processing · Time-domain methods

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

1. Department of Computer Science, University of Regina, Regina, Canada
