Applied Intelligence, Volume 23, Issue 3, pp. 267–275

Pitch-Dependent Identification of Musical Instrument Sounds

  • Tetsuro Kitahara
  • Masataka Goto
  • Hiroshi G. Okuno
Article

Abstract

This paper describes a musical instrument identification method that takes into consideration the pitch dependency of the timbres of musical instruments. The difficulty in musical instrument identification resides in the pitch dependency of musical instrument sounds; that is, the acoustic features of most musical instruments vary according to the pitch (fundamental frequency, F0). To cope with this difficulty, we propose an F0-dependent multivariate normal distribution, in which each element of the mean vector is represented by a function of F0. Our method first extracts 129 features (e.g., the spectral centroid and the gradient of the straight line approximating the power envelope) from a musical instrument sound and then reduces the dimensionality of the feature space to 18 dimensions. In the 18-dimensional feature space, it calculates an F0-dependent mean function and an F0-normalized covariance, and finally applies the Bayes decision rule. Experimental results on identifying 6,247 solo tones of 19 musical instruments show that the proposed method improved the recognition rate from 75.73% to 79.73%.
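
To make the idea concrete, below is a minimal sketch (in Python, assuming NumPy) of an F0-dependent multivariate normal classifier: for each instrument class, every element of the mean vector is a low-order polynomial of F0, the covariance is estimated from the F0-normalized residuals, and classification follows the Bayes decision rule. This is not the authors' implementation; the cubic polynomial order, the class and function names, and the training interface are illustrative assumptions, and the 129-feature extraction and reduction to 18 dimensions are presumed to have happened upstream.

    # Minimal sketch of an F0-dependent multivariate normal classifier.
    # Assumptions (not from the paper): cubic mean polynomials, user-supplied
    # priors, and that X already holds the 18-dimensional reduced features.
    import numpy as np

    class F0DependentGaussian:
        """One instrument class: the mean is a polynomial function of F0;
        the covariance is estimated from F0-normalized residuals."""

        def __init__(self, degree=3):
            self.degree = degree  # order of the per-dimension mean polynomial

        def fit(self, X, f0, prior=1.0):
            # X: (n_tones, n_dims) feature vectors; f0: (n_tones,) pitches.
            # Fit one least-squares polynomial in F0 per feature dimension.
            self.coefs = np.polyfit(f0, X, self.degree)   # (degree+1, n_dims)
            residuals = X - self.mean(f0)                 # F0-normalized features
            self.cov = np.cov(residuals, rowvar=False)    # F0-normalized covariance
            self.cov_inv = np.linalg.inv(self.cov)
            self.logdet = np.linalg.slogdet(self.cov)[1]
            self.log_prior = np.log(prior)
            return self

        def mean(self, f0):
            # Evaluate the F0-dependent mean function mu(f0) at each pitch.
            powers = np.vander(np.atleast_1d(f0), self.degree + 1)
            return powers @ self.coefs                    # (n, n_dims)

        def log_posterior(self, x, f0):
            # log N(x; mu(f0), Sigma) + log prior, with constants dropped.
            d = x - self.mean(f0)[0]
            return -0.5 * (d @ self.cov_inv @ d + self.logdet) + self.log_prior

    def classify(x, f0, models):
        # Bayes decision rule: pick the instrument with the highest posterior.
        return max(models, key=lambda name: models[name].log_posterior(x, f0))

Usage would be to fit one model per instrument on its training tones and call classify(x, f0, models) on each test tone's feature vector and estimated F0. The point of the construction is that the F0-dependent mean absorbs the pitch dependency of each feature, so the covariance is estimated from residuals that no longer vary systematically with pitch.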

Keywords

musical instrument identification · pitch dependency · fundamental frequency · automatic music transcription · computational auditory scene analysis

Copyright information

© Springer Science + Business Media, Inc. 2005

Authors and Affiliations

  • Tetsuro Kitahara (1)
  • Masataka Goto (2)
  • Hiroshi G. Okuno (3)
  1. Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto, Japan
  2. “Information and Human Activity”, PRESTO, JST / National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
  3. Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto, Japan
