Hierarchical Tree for Dissemination of Polyphonic Noise

  • Rory Lewis
  • Amanda Cohen
  • Wenxin Jiang
  • Zbigniew Raś
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5306)

Abstract

In our continuing investigation of identifying musical instruments in a polyphonic domain, we present a system that identifies an instrument in polyphonic audio despite the added noise of numerous interacting and conflicting instruments in an orchestra. A hierarchical tree designed specifically for the breakdown of polyphonic sounds is used to enhance the training of classifiers that estimate an unknown polyphonic sound. This paper shows how we achieved our goals of determining which hierarchical levels and which combinations of mix levels are most effective. Learning which instrument classes should supply the interfering noise, and at what mix levels that noise best optimizes the training sets, is crucial in the quest to discover instruments in noise. Herein we present a novel system that disambiguates instruments in a polyphonic domain.
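As a rough sketch of the two mechanisms described above (this is not the authors' implementation; the instrument hierarchy, the 10%/25%/50% mix levels, and all names below are illustrative assumptions, with scikit-learn-style classifiers assumed), the following Python fragment builds noise-mixed training examples and then descends a classifier hierarchy to label an unknown sound:

```python
import numpy as np

def mix_at_level(target, accompaniment, mix_level):
    """Blend a target instrument signal with accompanying 'noise'.

    mix_level is the fraction of the accompaniment's amplitude added to
    the target (e.g. 0.25 for a 25% mix). Both signals are assumed to be
    mono arrays sampled at the same rate.
    """
    n = min(len(target), len(accompaniment))
    mixed = target[:n] + mix_level * accompaniment[:n]
    peak = np.max(np.abs(mixed))          # normalize to avoid clipping
    return mixed / peak if peak > 0 else mixed

def build_training_set(targets, accompaniments, levels=(0.1, 0.25, 0.5)):
    """One mixed example per (target, accompaniment, level) triple."""
    return [(label, mix_at_level(sig, acc, lvl))
            for label, sig in targets     # (instrument name, signal)
            for acc in accompaniments     # interfering orchestra sounds
            for lvl in levels]            # candidate mix levels

# Hypothetical hierarchy: internal nodes map to their children; anything
# absent from the dict is a leaf, i.e. a single instrument.
TREE = {
    "all": ["strings", "winds"],
    "strings": ["violin", "cello"],
    "winds": ["flute", "clarinet"],
}

def classify_hierarchically(features, classifiers, node="all"):
    """Descend the tree, applying the classifier trained at each internal
    node (assumed to predict one of that node's child names)."""
    while node in TREE:
        node = classifiers[node].predict([features])[0]
    return node
```

Under this scheme each internal node's classifier only has to separate that node's own children, trained on examples mixed at the chosen noise levels, which is the sense in which the hierarchical breakdown eases training.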

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Rory Lewis (1)
  • Amanda Cohen (2)
  • Wenxin Jiang (2)
  • Zbigniew Raś (2)

  1. University of Colorado at Colorado Springs, Colorado Springs, USA
  2. University of North Carolina at Charlotte, Charlotte, USA
