Artificial Evolution of Expressive Performance of Music: An Imitative Multi-Agent Systems Approach

Abstract

This chapter introduces an imitative multi-agent systems approach to generating expressive performances of music, in which each agent performs according to its own set of parameterized musical rules. We have developed a system called IMAP (Imitative Multi-Agent Performer). Beyond investigating the usefulness of the imitative multi-agent paradigm for this application, we also examine a feature inherent in the methodology: the generation and control of diversity, which is desirable in a creative application such as musical performance. To support this control of diversity, the parameterized rules draw on previous expressive performance research and are implemented in the agents using previously developed musical analysis algorithms. Experiments show that the agents express their preferences through their musical performances, and that diversity can be both generated and controlled.
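To make the imitative mechanism concrete, the following is a minimal Python sketch under stated assumptions: the Agent class, the rule names in RULES, the weight-distance evaluation, and the imitate step are hypothetical illustrations of an imitative multi-agent performer, not the published IMAP implementation.

```python
import random

# Hypothetical sketch (not the authors' IMAP code): each agent holds weights for
# a few parameterized expressive-performance rules, renders a score as per-note
# expressive deviations, and imitates agents whose rule settings it prefers to
# its own. Rule names, the evaluation heuristic and the imitation step are
# illustrative assumptions.

RULES = ["phrase_arch", "melodic_accent", "metrical_stress"]  # illustrative rule names


class Agent:
    def __init__(self, rng):
        self.rng = rng
        # Rule weights parameterize how strongly each rule shapes the performance.
        self.weights = {r: rng.uniform(0.0, 1.0) for r in RULES}
        # A fixed personal preference against which heard performances are rated.
        self.preference = {r: rng.uniform(0.0, 1.0) for r in RULES}

    def perform(self, n_notes=16):
        # Toy "performance": one expressive deviation per note, driven by the
        # rule weights plus noise (a real system would apply the rules to an
        # analysed score, producing timing and dynamics deviations).
        base = sum(self.weights.values()) / len(RULES)
        return [base + self.rng.uniform(-0.05, 0.05) for _ in range(n_notes)]

    def evaluate(self, other):
        # Rate another agent's rule settings against this agent's preferences
        # (for brevity we compare weights directly rather than analysing the
        # heard performance; a higher score means more preferred).
        return -sum((self.preference[r] - other.weights[r]) ** 2 for r in RULES)

    def imitate(self, other, rate=0.2):
        # Move rule weights a fraction of the way toward the admired agent's.
        for r in RULES:
            self.weights[r] += rate * (other.weights[r] - self.weights[r])


def run(n_agents=8, n_cycles=200, imitation_rate=0.2, seed=1):
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    for _ in range(n_cycles):
        listener, performer = rng.sample(agents, 2)
        performer.perform()  # the performance is "heard" by the listener
        # Imitate only when the performer is rated above the listener's own style.
        if listener.evaluate(performer) > listener.evaluate(listener):
            listener.imitate(performer, imitation_rate)
    return agents


if __name__ == "__main__":
    for i, a in enumerate(run()):
        print(i, {r: round(w, 2) for r, w in a.weights.items()})
```

In a sketch like this, the imitation rate and the strictness of the preference test act as the handles on diversity: a low rate (or a strict test) keeps the population heterogeneous, while a high rate drives the agents' rule weights toward convergence.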

Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  • Eduardo R. Miranda (1)
  • Alexis Kirke (1)
  • Qijun Zhang (1)

  1. Interdisciplinary Centre for Computer Music Research, Faculty of Arts, Plymouth University, Plymouth, UK