
Recognition of Musical Instruments in Intervals and Chords

  • Markus Eichhoff
  • Claus Weihs
Conference paper
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Abstract

Recognition of musical instruments in pieces of polyphonic music given as MP3 or WAV files is a difficult task because the onsets are unknown; using source-filter models for sound separation is one approach. In this study, intervals and chords played by instruments from four families of musical instruments (strings, wind, piano, plucked strings) are used to build statistical models that recognize the instruments playing them. Four groups of high-level audio features are used: the Absolute Amplitude Envelope (AAE), Mel-Frequency Cepstral Coefficients (MFCC) in windowed and non-windowed form, and Linear Predictive Coding (LPC) coefficients, the latter to take physical properties of the instruments into account (Fletcher, The physics of musical instruments, 2008). These feature groups are calculated for consecutive time blocks. Supervised statistical classification methods such as LDA, MDA, support vector machines, random forests, and boosting are applied, combined with variable selection (sequential forward selection).
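For readers who want to experiment with a comparable pipeline, the sketch below is a minimal illustration, not the authors' implementation: it cuts a recording into consecutive time blocks, describes each block by mean MFCCs and LPC coefficients (stand-ins for the feature groups above), and trains a random forest with sequential forward selection using librosa and scikit-learn. All file paths, block lengths, and parameter values are assumptions.

```python
# Minimal sketch of block-wise feature extraction plus classification with
# forward feature selection. Requires librosa and scikit-learn.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

def block_features(path, block_sec=0.5, n_mfcc=13, lpc_order=12):
    """Split a recording into consecutive time blocks and describe each block
    by its mean MFCCs and its LPC coefficients (placeholder feature groups)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    block = int(block_sec * sr)
    feats = []
    for start in range(0, len(y) - block + 1, block):
        seg = y[start:start + block]
        mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
        lpc = librosa.lpc(seg, order=lpc_order)[1:]  # drop the leading 1.0
        feats.append(np.concatenate([mfcc, lpc]))
    return np.vstack(feats)

# Hypothetical usage with a labelled set of interval/chord recordings:
# `paths` and `labels` (one instrument-family label per block) are placeholders.
# X = np.vstack([block_features(p) for p in paths])
# clf = RandomForestClassifier(n_estimators=500, random_state=0)
# sfs = SequentialFeatureSelector(clf, n_features_to_select=10, direction="forward")
# sfs.fit(X, labels)
# clf.fit(sfs.transform(X), labels)
```

Swapping the random forest for an SVM or a boosting classifier, as in the classifier comparison described above, only requires replacing the estimator passed to the selector.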

References

  1. Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723. doi:10.1109/TAC.1974.1100705.
  2. Brown, J. C. (1999). Computer identification of musical instruments using pattern recognition with cepstral coefficients as features. Journal of the Acoustical Society of America, 105(3), 1933–1941.
  3. Eichhoff, M., & Weihs, C. (2010). Musical instrument recognition by high-level features. In Proceedings of the 34th Annual Conference of the German Classification Society (GfKl), July 21–23 (pp. 373–381). Berlin: Springer.
  4. Fletcher, N. H. (2008). The physics of musical instruments. New York: Springer.
  5. Goto, M., Hashigushi, H., Nishimura, T., & Oka, R. (2003). RWC music database: Music genre database and musical instrument sound database. In ISMIR 2003 Proceedings (pp. 229–230). Baltimore: Johns Hopkins University Press.
  6. Hall, D. E. (2001). Musical acoustics (3rd ed.). Belmont: Brooks Cole.
  7. Krey, S., & Ligges, U. (2009). SVM based instrument and timbre classification. In H. Locarek-Junge & C. Weihs (Eds.), Classification as a tool for research. Berlin: Springer.
  8. McGill University. (2010). Master samples collection on DVD. http://www.music.mcgill.ca/resources/mums/html.
  9. Rabiner, L., & Juang, B. H. (1993). Fundamentals of speech recognition. Englewood Cliffs: Prentice Hall PTR.
  10. University of Iowa. (2011). Electronic music studios: Musical instrument samples. http://theremin.music.uiowa.edu.
  11. Wold, E., Blum, T., Keislar, D., & Wheaton, J. (1999). Classification, search and retrieval of audio. In Handbook of multimedia computing (pp. 207–226). Boca Raton: CRC Press.
  12. Zheng, F., Zhang, G., & Song, Z. (2001). Comparison of different implementations of MFCC. Journal of Computer Science & Technology, 16(6), 582–589.

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Computational Statistics, Faculty of Statistics, TU Dortmund, Dortmund, Germany
