An Automated Music Improviser Using a Genetic Algorithm Driven Synthesis Engine

  • Matthew John Yee-King
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4448)

Abstract

This paper describes an automated computer improviser that attempts to follow and improvise against the frequencies and timbres found in an incoming audio stream. The improviser is controlled by an ever-changing set of sequences, generated by analysing the incoming audio stream (which may be a feed from a live musician) for physical and musical properties such as pitch and amplitude. Control data from these sequences is passed to the synthesis engine, where it is used to configure sonic events. These sonic events are generated by sound synthesis algorithms designed by an unsupervised genetic algorithm whose fitness function compares snapshots of the incoming audio to snapshots of the audio output of the evolving synthesizers in the spectral domain, driving the population to match the incoming sounds. The sound-generating performance system and the sound-designing evolutionary system run in parallel in real time to produce an interactive stream of synthesised sound. An overview of related systems is provided, the system is described, and some preliminary results are presented.
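The core of the evolutionary component is a fitness function that compares spectral-domain snapshots of the input audio with the output of each evolving synthesizer. The abstract does not specify the distance metric or windowing used, so the sketch below assumes a simple Euclidean distance between normalised FFT magnitude spectra; the function name `spectral_fitness` and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def spectral_fitness(target, candidate, n_fft=1024):
    """Score a candidate synthesizer output against a target audio snapshot.

    Illustrative assumption: compare normalised magnitude spectra with a
    Euclidean distance, mapped so that a perfect match scores 1.0.
    """
    # Magnitude spectra of one snapshot (phase is discarded)
    t = np.abs(np.fft.rfft(target[:n_fft], n=n_fft))
    c = np.abs(np.fft.rfft(candidate[:n_fft], n=n_fft))
    # Normalise so fitness reflects spectral shape rather than loudness
    t = t / (np.linalg.norm(t) + 1e-12)
    c = c / (np.linalg.norm(c) + 1e-12)
    dist = np.linalg.norm(t - c)
    return 1.0 / (1.0 + dist)

# A candidate with the same frequency content as the target should
# outscore one at a different pitch, regardless of phase.
sr, n = 8000, 1024
time = np.arange(n) / sr
target = np.sin(2 * np.pi * 440 * time)
good = np.sin(2 * np.pi * 440 * time + 0.5)  # same pitch, phase-shifted
bad = np.sin(2 * np.pi * 1230 * time)        # different pitch
assert spectral_fitness(target, good) > spectral_fitness(target, bad)
```

Because magnitude spectra are phase-invariant, a phase-shifted copy of the target still scores highly, which matches the goal of steering the population toward the incoming *timbre* rather than an exact waveform.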



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Matthew John Yee-King
  1. Creative Systems Lab, Department of Informatics, University of Sussex, Brighton, UK
