
Computer-Aided Musical Orchestration Using an Artificial Immune System

  • José Abreu
  • Marcelo Caetano
  • Rui Penha
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9596)

Abstract

The aim of computer-aided musical orchestration is to find a combination of musical instrument sounds that approximates a target sound. The difficulty arises from the complexity of timbre perception and the combinatorial explosion of all possible instrument mixtures. Estimating the perceptual similarity between sounds requires a model capable of capturing the multidimensional perception of timbre, among other perceptual qualities of sound. In this work, we use an artificial immune system (AIS) called opt-aiNet to search for combinations of musical instrument sounds that minimize the distance to a target sound, encoded in a fitness function. Opt-aiNet is capable of finding multiple solutions in parallel while preserving diversity, thereby proposing alternative orchestrations for the same target sound that differ from one another. We performed a listening test to evaluate the subjective similarity and diversity of the orchestrations.
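
To make the role of the fitness function concrete, the sketch below shows one way such a function could be written: it mixes a candidate set of instrument sounds, compares the long-term average magnitude spectrum of the mixture with that of the target, and returns the inverse of their Euclidean distance. This is a minimal illustration, not the authors' implementation; the names average_spectrum and orchestration_fitness, the spectral representation, and the distance measure are assumptions made here for clarity.

```python
# Minimal sketch (not the authors' implementation) of a spectral-distance
# fitness function of the kind opt-aiNet could maximize when searching for
# instrument combinations that approximate a target sound.
# All names and the choice of features are illustrative assumptions.

import numpy as np


def average_spectrum(signal: np.ndarray, n_fft: int = 2048) -> np.ndarray:
    """Long-term average magnitude spectrum of a mono signal."""
    n_frames = max(1, len(signal) // n_fft)
    frames = signal[: n_frames * n_fft].reshape(n_frames, n_fft)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)


def orchestration_fitness(target: np.ndarray,
                          candidate_sounds: list[np.ndarray],
                          n_fft: int = 2048) -> float:
    """Higher is better: inverse spectral distance between the target
    and the mixture (sum) of the candidate instrument sounds."""
    length = min(len(target), *(len(s) for s in candidate_sounds))
    mixture = np.sum([s[:length] for s in candidate_sounds], axis=0)
    t_spec = average_spectrum(target[:length], n_fft)
    m_spec = average_spectrum(mixture, n_fft)
    # Normalize so overall loudness differences do not dominate the distance.
    t_spec /= np.linalg.norm(t_spec) + 1e-12
    m_spec /= np.linalg.norm(m_spec) + 1e-12
    distance = np.linalg.norm(t_spec - m_spec)
    return 1.0 / (1.0 + distance)
```

In an opt-aiNet-style search, each candidate solution (antibody) would encode a set of instrument samples drawn from a database, and a fitness of this kind would be evaluated during clonal selection, with the network's suppression step preserving several distinct high-fitness orchestrations rather than a single best one.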

Keywords

Genetic Algorithm · Fitness Function · Singular Value Decomposition · Musical Instrument · Artificial Immune System

Acknowledgments

This work is financed by the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project “UID/EEA/50014/2013.” The authors would like to thank the integrated masters program in Electrical and Computer Engineering (MIEEC) from the University of Porto (FEUP) for the financial support.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Faculty of Engineering, University of Porto, Porto, Portugal
  2. Sound and Music Computing Group, INESC TEC, Porto, Portugal
