Toward Certain Sonic Properties of an Audio Feedback System by Evolutionary Control of Second-Order Structures

  • Seunghun Kim
  • Juhan Nam
  • Graham Wakefield
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9027)


Aiming for high-level intentional control of audio feedback through microphones, loudspeakers, and digital signal processing, we present a system that adapts toward chosen sonic features. Users control the system by selecting and changing feature objectives in real time. The system has a second-order structure in which the internal signal-processing algorithms are developed through an evolutionary process: genotypes develop into signal-processing algorithms, and fitness is measured by analysis of the incoming audio feedback. A prototype is evaluated experimentally to measure how the audio feedback changes depending on the chosen target conditions. By enhancing the interactivity of audio feedback through intentional control, we expect that feedback systems can be used more effectively in musical interaction, finding a balance between nonlinearity and interactivity.
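The second-order loop the abstract describes (genotypes develop into signal-processing algorithms; fitness is the closeness of an analysed feature of the resulting audio to a user-chosen objective) can be illustrated with a minimal sketch. Everything below is hypothetical and greatly simplified: the genotype is a bare parameter vector, "development" and the measured feature are toy stand-ins, and `TARGET_CENTROID`, `develop`, and `evolve` are illustrative names, not the authors' implementation.

```python
import random

# User-selected sonic feature objective (assumed to lie on a 0..1 scale).
TARGET_CENTROID = 0.8

def develop(genotype):
    """Toy 'development': the genotype stands in for a signal-processing
    algorithm, and its measured feature is simply the mean of its parameters.
    In the real system this would be the analysed feature of the live
    feedback signal produced by the evolved algorithm."""
    return sum(genotype) / len(genotype)

def fitness(genotype):
    # Higher fitness = measured feature closer to the chosen target.
    return -abs(develop(genotype) - TARGET_CENTROID)

def mutate(genotype, rate=0.1):
    # Perturb each parameter slightly, clipped to the assumed 0..1 range.
    return [min(1.0, max(0.0, g + random.uniform(-rate, rate))) for g in genotype]

def evolve(pop_size=20, generations=50, genome_len=4):
    random.seed(0)  # deterministic for illustration
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # selection
        pop = elite + [mutate(g) for g in elite]  # reproduction with mutation
    return max(pop, key=fitness)

best = evolve()
print(round(develop(best), 2))  # measured feature converges toward TARGET_CENTROID
```

Changing `TARGET_CENTROID` at run time corresponds to the user switching feature objectives: the population simply continues evolving toward the new target, which is what makes the control "second-order" (the user steers the objective, not the signal processing itself).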


Keywords: Audio feedback · Evolutionary algorithm



This work was supported by the BK21 plus program through the National Research Foundation (NRF) funded by the Ministry of Education of Korea.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology (KAIST), Yuseong-gu, Daejeon, Republic of Korea
  2. Digital Media, Visual Art and Art History, York University, Toronto, Canada
