Piano and Guitar Tone Distinction Based on Extended Feature Analysis

Conference paper
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

In this work, single piano and guitar tones are distinguished by means of various features of the music time series. In a first study, three different kinds of high-level features and MFCCs (mel-frequency cepstral coefficients) are taken into account to classify the piano and guitar tones. The features are called high-level because they attempt to reflect the physical structure of a musical instrument on the temporal and spectral levels. In our study, three spectral features and one temporal feature are used for the classification task: the spectral features characterize the distribution of overtones, and the temporal feature characterizes the energy of a tone. In a second study, as many of the low-level and high-level features proposed in the literature as possible are combined for the classification task.
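The sketch below (not the authors' implementation) illustrates the kind of feature extraction the abstract describes: three spectral features summarizing the overtone distribution and one temporal feature summarizing the energy decay of a tone, followed by a simple decision rule. The synthetic tones, the specific feature definitions, and the nearest-mean classifier are illustrative assumptions only; a real experiment would use recorded instrument samples and a properly trained classifier.

```python
# Minimal sketch (assumptions throughout): distinguish piano-like from
# guitar-like single tones via three spectral features and one temporal
# feature. Synthetic tones stand in for recorded samples.
import numpy as np

SR = 44100  # sampling rate in Hz

def synth_tone(f0, partial_amps, decay, dur=1.0, sr=SR):
    """Synthesize a decaying harmonic tone (stand-in for a recorded sample)."""
    t = np.arange(int(dur * sr)) / sr
    x = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
            for k, a in enumerate(partial_amps))
    return x * np.exp(-decay * t)

def features(x, f0, n_partials=8, sr=SR):
    """Three spectral features (overtone distribution) and one temporal
    feature (energy decay); loosely mirrors the paper's feature groups."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    # Magnitude at each harmonic: pick the spectral bin nearest k * f0.
    amps = np.array([spec[np.argmin(np.abs(freqs - (k + 1) * f0))]
                     for k in range(n_partials)])
    rel = amps / amps.sum()
    # Spectral feature 1: centroid over harmonic numbers.
    centroid = np.sum(np.arange(1, n_partials + 1) * rel)
    # Spectral feature 2: number of partials covering 85% of harmonic energy.
    rolloff = np.searchsorted(np.cumsum(rel), 0.85) + 1
    # Spectral feature 3: odd-to-even harmonic energy ratio.
    odd_even = rel[::2].sum() / max(rel[1::2].sum(), 1e-12)
    # Temporal feature: slope of log frame energy (decay rate).
    frames = x[: len(x) // 1024 * 1024].reshape(-1, 1024)
    log_e = np.log(np.maximum((frames ** 2).sum(axis=1), 1e-12))
    slope = np.polyfit(np.arange(len(log_e)), log_e, 1)[0]
    return np.array([centroid, rolloff, odd_even, slope])

# Toy reference tones; amplitudes and decay rates are made up for the demo.
piano = features(synth_tone(220, [1.0, 0.5, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01], 4.0), 220)
guitar = features(synth_tone(220, [1.0, 0.8, 0.2, 0.4, 0.05, 0.1, 0.02, 0.03], 2.0), 220)

def classify(x, f0):
    """Nearest-mean decision in feature space (placeholder classifier)."""
    f = features(x, f0)
    return "piano" if np.linalg.norm(f - piano) < np.linalg.norm(f - guitar) else "guitar"

print(classify(synth_tone(220, [1.0, 0.5, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01], 3.8), 220))
```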


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Chair of Computational Statistics, TU Dortmund, Dortmund, Germany
  2. Chair of Algorithm Engineering, TU Dortmund, Dortmund, Germany
