
1 Introduction

Music creation and performance have undergone a tremendous evolution with new technological tools. In music styles such as electroacoustic and electronic music, methods like digital signal processing and algorithmic composition (creating music through a computer program) have become the core of the composition process, shaping the future of music creation. Various tools are used to support such creative endeavors, one of which is SuperCollider, an open-source programming language and environment created in 1996 by James McCartney (McCartney, 2002). SuperCollider is widely used by artists for algorithmic composition and live coding (live-scripting an algorithmic composition), but it also provides various libraries for researchers interested in manipulating and/or analyzing sound (Collins, 2011). This tool is therefore helpful for both the creation and the study of musical sound. The syntax of SuperCollider resembles that of C-family languages such as C++, but it has its own commands, adapted to the needs of sound manipulation and design. This tool was used to create the interactive script presented in this chapter: SonicDesignHistory (Christodoulou, 2023). The reasons behind selecting this tool, apart from its open-source nature and its wide range of sound-control possibilities, include the large community of artists and academics behind it. By creating an interactive script within this community, the aim is to start a conversation about exploring music history through algorithmic composition and to provide a useful tool that will inspire and assist other SuperCollider users.

Music can be used to convey information (Shelemay, 2006). More specifically, this chapter assumes that music, and in particular an interactive music script, can be an effective way to present the historical evolution of a music genre. The exploration begins with electroacoustic music history as a starting point. To gain a deeper understanding of the electroacoustic style, music theory and analysis are incorporated. In this endeavor, music technology is employed as the core method, with the script's development facilitated by SuperCollider. The outcome could also be characterized as a "lecture-recital," since it presents music history through a musical composition. More specifically, SonicDesignHistory was presented during the International Seminar of Sonic Design (2022), where a selection of composers and techniques from the electroacoustic music scene was displayed in a historical sonic-design walkthrough, scripted live. Interactive music notebooks have been created before for educational purposes (Horn, Banerjee, & Brucker, 2022) and for data science (Hermann & Reinsch, 2021), but this was, to my knowledge, the first creative sonic-design attempt at an overview of music history. Here, a notebook means an interactive script that contains both code and text.

It is important to understand the primary intention of this composition and the reasoning behind selecting an algorithm to present a summary of music history. First, a deeper understanding of the algorithmic composition techniques can be gained by investigating multiple sonic outcomes and testing different ways to implement a particular strategy. This investigation also makes clear that, even when the composition process involves a large amount of computer automation, the resulting musical structure remains human-controlled. Furthermore, interaction and experience are expected to be more effective for understanding a concept and maintaining the audience's attention. It is therefore assumed that, through such a presentation, the audience or the users of SonicDesignHistory will gain a clearer understanding of what electroacoustic music consists of and how it evolved over time. For me, as the composer and creator of the script, this attempt helps me understand how the techniques work and identify the distinctive elements of particular composers, while also recognizing the elements they carry from their predecessors.

The composition strategies presented in SonicDesignHistory were selected by exploring various pieces and considering their musicological importance and contribution. There is also an attempt to detect the elements of the various techniques that distinguish and unite the musical styles. This chapter describes the electroacoustic music composition process and the techniques I have used to achieve a historically faithful result. It also discusses the various components of the electroacoustic techniques and how each one inspired its successors, as well as the challenges faced in such an attempt and the prospective applications that could be developed further.

2 Methodology

I have chosen to work with sonic design and sonification to get a different grasp on music history than the one offered by historiography, that is, a verbal presentation of music history accompanied by music examples. The emphasis is not on the historical evolution of music as such, but mostly on the techniques and the artists that formed this history. The main method adopted for this project is the combination of various composition and sound-processing techniques, taking advantage of multiple SuperCollider functions (as developed by James McCartney), useful examples from scientific sources (Karamanlis, 2021), and related work (LaFleur, 2020). SonicDesignHistory is based on the concept of unfolding musical sounds, each created by borrowing elements from previous sounds or sonic events. After integrating these elements, each develops its own unique sonic result, essentially a new musical event. In this case, a music event is an example of a specific electroacoustic music technique, and the music elements are the sound components of this technique.

SonicDesignHistory was created using the SuperCollider IDE. During execution, blocks of code are scripted and/or activated live, while explanatory comments guide the audience, providing information about the processes and techniques displayed during the performance, as seen in Fig. 1. The composition is divided into three main categories, based on the historical era presented each time, with a special focus on the music of Europe and the USA. There have been multiple attempts to divide electroacoustic music into categories before, mostly based on the musical techniques (Manning, 2004) and on the major computer developments that accompanied the music's evolution (Holmes & Pender, 1985). Based on these divisions, I decided to categorize the historical eras as follows: Early Electroacoustic Music (1948–1960), Electroacoustic Music Evolution (1960–1990), and the Digital Age (1990–today).

The main reason behind selecting composers from certain parts of Europe and the USA was my familiarity with the literature and the composing techniques, as well as my previous interaction with the related compositions. The selection of composers was based on their influence on the evolution of Western electroacoustic music and on their originality. On the other hand, many techniques were integrated into this script because they were influenced by previous composition norms (such as Stockhausen's integration of Schoenberg's 12-tone technique). In terms of the coding implementation of the techniques, some commands could be reused from one technique to the next, allowing a live-coded manipulation of sound that embodies the concept of a sound being "born" from its predecessor (Fig. 1).

Fig. 1. Live presentation of SonicDesignHistory during the International Seminar of Sonic Design, 2022. The picture shows the script being executed live in the SuperCollider IDE, with comments on the screen guiding the audience through the process (Photo: Léo Migotti).

3 The Early Electroacoustic Music Era (1948–1960)

In the live version of SonicDesignHistory, an introductory sound is the first element of the composition. This sound does not add to the overall display of electroacoustic music history; it is produced only to accompany the activation of the SynthDefs, making this process more interesting for the audience. SynthDefs are synthesizer definitions used as sound-producing units. These classes are widely used in the script and, as will be mentioned later, they do not produce sound until they are activated. This first element of the algorithmic composition consists of a simple sound generated by a fast sine oscillator (FSinOsc). The sound output was assigned to a NodeProxy, a placeholder for sound playing on the SuperCollider server. NodeProxies are used many times in the composition; they let the user smoothly activate and deactivate the sound output and change the sonic result in real time. This initial sound is the basis for the first technique, additive synthesis, one of the oldest and most studied composition strategies of this music genre (Karamanlis, 2021). The main goal of additive synthesis is the formation of a complex waveform from multiple simple, usually sinusoidal, waveforms (Karamanlis, 2021). Its concept originates from pipe organs and their multiple register stops (Roads, 1995), while the underlying idea comes from the Fourier transform, which allows a complex waveform to be decomposed into multiple simple periodic waveforms (Karamanlis, 2021). The imitation of additive synthesis was achieved by using a class (Mix.ar) that mixed an array of four channels into one, creating a complex signal consisting of four different sinewave oscillators. To create a faster and easier-to-follow presentation, the SystemClock class was used to schedule automatic playback of the complex signal after a certain number of seconds.
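
The following is a minimal sketch of this idea, assuming a booted server; the frequencies, levels, and timing are illustrative and not taken from the original script. It mixes four fast sine oscillators into one complex signal, holds it in a NodeProxy (here an Ndef), and uses SystemClock to start playback automatically after two seconds.

```supercollider
(
// four partials mixed down to a single complex waveform
Ndef(\additive, {
    var sig = Mix([220, 440, 660, 880].collect { |f, i|
        FSinOsc.ar(f, 0, 0.1 / (i + 1))   // higher partials are softer
    });
    sig ! 2                                // duplicate to stereo
});
Ndef(\additive).fadeTime = 3;              // smooth activation and deactivation
SystemClock.sched(2, { Ndef(\additive).play; nil });   // auto-start after 2 s
)
```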

The first music era was dedicated to Europe and, more specifically, to the Studio for Electronic Music (WDR) in Germany. During the Early Electroacoustic Music era, Elektronische Musik was born in Cologne, introducing a new set of composition techniques, all of which involved electronics. The first music event selected for this era is Herbert Eimert's approach to serialism, based on Schoenberg's 12-tone technique. This music segment was inspired by Eimert's "Klangstudie II" (1952). More precisely, one of the musical elements introduced is a set of delayed "bubbly" sounds, created by combining delayed non-band-limited sawtooth and sinewave oscillators. A wall of reverberated, low-frequency sounds was created in the background, consisting of manipulated noise signals. Furthermore, a SynthDef was created to resemble the sound of a piano that could play 12 specific notes.
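
A hedged sketch of two of these elements follows: the delayed "bubbly" texture and a piano-like SynthDef, which then plays an assumed 12-tone row. All parameter values, the SynthDef design, and the row are illustrative assumptions rather than the original code.

```supercollider
(
// delayed "bubbly" texture: non-band-limited saw + sine through a comb delay
Ndef(\bubbles, {
    var src = LFSaw.ar(LFNoise0.kr(4).range(200, 900), 0, 0.05)
            + SinOsc.ar(LFNoise0.kr(4).range(300, 1200), 0, 0.05);
    src = src * Decay2.kr(Dust.kr(6), 0.01, 0.2);          // short bursts
    CombN.ar(Pan2.ar(src, LFNoise1.kr(1)), 0.5, 0.25, 3);  // delay tail
}).play;

// piano-like SynthDef; silent until instantiated by a Synth or a pattern
SynthDef(\pianoish, { |freq = 440, amp = 0.2|
    var env = EnvGen.kr(Env.perc(0.01, 1.5), doneAction: 2);
    var sig = SinOsc.ar(freq) + SinOsc.ar(freq * 2, 0, 0.3);
    Out.ar(0, Pan2.ar(sig * env * amp, 0));
}).add;
)

(
// play an assumed 12-tone row once, in order
~row = [60, 61, 66, 65, 70, 64, 63, 68, 67, 71, 69, 62];
Pbind(\instrument, \pianoish, \midinote, Pseq(~row, 1), \dur, 0.5).play;
)
```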

It is important to mention that in the live version of the composition, all SynthDefs were evaluated at the beginning of the script; defining them produces no sound, as they only sound once they are instantiated later. This is a common practice in live coding; it saves time and provides the audience with a simpler, more comprehensible algorithm to watch during the performance. The piece of code that contains the SynthDefs could be completely hidden from the audience during the live presentation. On the one hand, the goal is to be able to present this script to audiences from backgrounds unrelated to live coding or environments like SuperCollider, so it is important not to spend a lot of time showcasing these functions and risk confusion. Furthermore, since the interactive notebook is meant to be openly available to the audience, it is important not to require any domain-specific knowledge to interact with it. On the other hand, a brief presentation of the SynthDefs was necessary for those curious to take a quick look at the composition's elements.

The second music event created for the Early Electroacoustic Music era in Cologne was based on Karlheinz Stockhausen's aleatoric techniques. This part of the script was therefore inspired by the concept that some musical aspects are left to chance. Stockhausen used aleatoric techniques to give performers freedom in the sequencing of musical fragments, and one of the most notable examples of such an attempt is "Klavierstück XI" (1956). Here, only the concept of sequence freedom is borrowed: the twelve tones selected for the previous music event are played randomly, using a pattern object (Prand) that randomly selects an item from a defined list. In this case, there was one list of twelve tones (frequencies) and another list of three SynthDefs (piano, string, and bell instruments). This enabled the creation of a chaotic, random-sounding event of the kind that characterized some compositions of Elektronische Musik.
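
Below is a hedged sketch of such an aleatoric event. The SynthDef designs, the tone row, and the durations are illustrative assumptions (the original script's piano, string, and bell instruments are not reproduced here); Prand chooses both the pitch and the instrument at random for every event.

```supercollider
(
// three minimal stand-ins for the piano, string, and bell SynthDefs
SynthDef(\pno, { |freq = 440, amp = 0.2|
    var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, 1.2), doneAction: 2);
    Out.ar(0, Pan2.ar(sig * amp, 0));
}).add;
SynthDef(\str, { |freq = 440, amp = 0.2|
    var sig = LPF.ar(Saw.ar(freq), freq * 4)
            * EnvGen.kr(Env.linen(0.3, 0.5, 0.4), doneAction: 2);
    Out.ar(0, Pan2.ar(sig * amp, 0));
}).add;
SynthDef(\bell, { |freq = 440, amp = 0.2|
    var sig = SinOsc.ar(freq) + SinOsc.ar(freq * 2.76, 0, 0.4);
    sig = sig * EnvGen.kr(Env.perc(0.001, 2.5), doneAction: 2);
    Out.ar(0, Pan2.ar(sig * amp, 0));
}).add;
)

(
// the same assumed 12-tone row as before, now chosen at random
~tones = [60, 61, 66, 65, 70, 64, 63, 68, 67, 71, 69, 62].midicps;
Pbind(
    \instrument, Prand([\pno, \str, \bell], inf),
    \freq, Prand(~tones, inf),
    \dur, Prand([0.125, 0.25, 0.5], inf)
).play;
)
```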

The next part of the sonic design moved to another important European city: Paris, France. Here, Musique Concrète was born, often considered the 'polar opposite' of Elektronische Musik, mostly because the artists of Musique Concrète used recorded sounds as the input for their compositions, whereas the artists of Elektronische Musik created their sounds electronically. Musique Concrète also refers to how the composers would work with and directly manipulate the so-called "sound objects" (Schaeffer, 1966). It was therefore important to create a music event that did not emerge from the previous one but contrasted with it in an obvious way. Chronologically, Musique Concrète was developed first in the history of electroacoustic music, but for aesthetic reasons it is presented after Elektronische Musik in my composition.

The only composer selected to represent Musique Concrète in this attempt was one of its most important figures: Pierre Schaeffer. The technique selected for this music event was tape manipulation, and the piece of code created as an example was inspired by his composition "Symphonie pour un homme seul" (1949). Here, a pre-recorded sound file of female vocals was used, and random slices of the file were selected using SuperCollider's pseudo-random generators (TWChoose). At the same time, Schaefferian typology was taken into consideration for the introduction of three main facture types (the energy envelopes of the sound objects): impulsive sounds (fast and short), sustained sounds (prolonged with steady energy), and iterative sounds (a stream of impulses) (Godøy, 2021).
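
The sketch below illustrates the general approach under several assumptions: the sound file path is a placeholder for a mono recording, the slice rate and envelope shapes are invented, and the mapping of TWChoose onto three envelope types only loosely follows the impulsive/sustained/iterative typology; it is not the original code.

```supercollider
(
// load a placeholder mono sound file (path is an assumption)
~buf = Buffer.read(s, "path/to/vocals.wav");
)

(
Ndef(\slices, {
    var trig  = Impulse.kr(2);                                // new slice twice per second
    var start = TRand.kr(0, BufFrames.kr(~buf) - 1, trig);    // random start position
    var sig   = PlayBuf.ar(1, ~buf, BufRateScale.kr(~buf), trig, start, loop: 1);
    // weighted random choice between three envelope types,
    // loosely after Schaeffer's typology
    var env = TWChoose.kr(trig, [
        EnvGen.kr(Env.perc(0.005, 0.15), trig),                     // impulsive
        EnvGen.kr(Env.new([0, 1, 1, 0], [0.05, 0.35, 0.1]), trig),  // sustained
        EnvGen.kr(Env.perc(0.01, 0.05), Impulse.kr(12))             // iterative
    ], [0.4, 0.3, 0.3]);
    Pan2.ar(sig * env, 0);
}).play;
)
```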

As mentioned earlier, composers were selected from certain regions of Europe and the USA. After exploring some main aspects of Musique Concrète and Elektronische Musik, the next part displays certain elements of the Early Electroacoustic Music era in the USA. The composer selected here was Steve Reich, represented by his phase-shifting composition technique. More specifically, a modified version of a pre-existing attempt to algorithmically recreate Steve Reich's "Piano Shift" (LaFleur, 2020) was implemented in SonicDesignHistory. A SynthDef was created to resemble a piano sound, different from the one used by LaFleur (2020), while the note-playback strategy remained the same. Two global variables were defined, one storing the MIDI values of the notes and one storing their timing. These notes are played every second by the SuperCollider routine ~steady (LaFleur, 2020). In Reich's composition, one pianist gradually speeds up, drifting out of phase with the second pianist until the two are synchronized again. To recreate this technique computationally, the instrument is enclosed in a routine called ~phasing (LaFleur, 2020).
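
A minimal phasing sketch in the spirit of these routines is shown below; the SynthDef, the note loop, and the step durations are illustrative assumptions, and this is not LaFleur's original code. One routine keeps a steady pulse while the other runs slightly faster and slowly drifts out of phase.

```supercollider
(
// a simple stand-in instrument (hypothetical name \phasetone)
SynthDef(\phasetone, { |freq = 440, amp = 0.15|
    var sig = SinOsc.ar(freq) * EnvGen.kr(Env.perc(0.01, 0.3), doneAction: 2);
    Out.ar(0, Pan2.ar(sig * amp, 0));
}).add;
)

(
~notes = [64, 66, 71, 73, 74, 66, 64, 73, 71, 66, 74, 73];   // assumed 12-note loop
~player = { |stepDur|
    Routine {
        var i = 0;
        loop {
            Synth(\phasetone, [\freq, ~notes.wrapAt(i).midicps]);
            i = i + 1;
            stepDur.wait;   // beats (seconds at the default tempo)
        }
    }
};
~steady  = ~player.value(0.25).play;    // constant tempo
~phasing = ~player.value(0.245).play;   // slightly faster: drifts out of phase
)

// ~steady.stop; ~phasing.stop;         // stop both players when done
```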

The final music event of the Early Electroacoustic Music era was dedicated to John Cage, with a reference to his work "4′33″" (1952), which was also used as a creative transition between the two eras. All the previous sounds and sonic events were therefore gradually silenced using fade-out and release functions. This passage did not last four minutes and thirty-three seconds as in the original, but thirty seconds, allowing the audience and the room to take part in the composition by letting their "unintended" sounds be heard, in accordance with the original concept of Cage's work (Davies, 1997). This transition was also practically useful when creating the script, since it was challenging to find common elements that would allow a smooth transition from the complicated wall of sound of the Early Electroacoustic Music era to the simple, low-frequency sound that initiates its Evolution era.
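
A small hedged sketch of such a transition is shown below; the proxy names come from the earlier sketches in this chapter, not from the original script. Each proxy is released over ten seconds, leaving roughly thirty seconds of "silence" for the room to fill.

```supercollider
(
Ndef(\additive).release(10);   // fade out the additive-synthesis sketch
Ndef(\slices).release(10);     // fade out the tape-slicing sketch
)
```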

4 The Electroacoustic Music Evolution Era (1960–1990)

The Electroacoustic Music Evolution era is characterized by the integration of computers into the composition process, occasionally giving these computers the freedom to make decisions about music creation and production (Serra, 1993). Performers and composers of this time took advantage of digital processes both for creating modern instruments and for sound transformation, providing endless opportunities for novelty and original creation (Emmerson, 2001). Electroacoustic music performances gradually integrated digital devices that allowed performers to encode and process note information in real time, resulting in so-called "interactive compositions" (Emmerson, 2001). One of the composers who lived during this era and took advantage of the new technologies in his work is Iannis Xenakis, who is selected here as its representative. I believe he epitomizes this era through his original way of composing and his integration of the natural sciences into his work, while his stochastic approach to music can serve as an illustration of the musical advancements of this time.

Stochastic music was created with the composer's aim of organizing musical structure using probability calculus (Manning, 2004). Xenakis' composition "Diamorphoses" (1957–8) was the main inspiration for this era. First, a wall of sound was built by combining a fast sinewave oscillator (FSinOsc) and a low-frequency noise generator with random frequency values in a loop. Dynamic stochastic synthesis is a process in which probabilistic waveforms are generated by stochastically recalculating their breakpoints. In SuperCollider, there are three dynamic stochastic synthesis generators named after Xenakis' GENDY model: Gendy1, Gendy2, and Gendy3. These allow the user to initialize a memory of a given number of control points, which are modified one by one with each new period. Here, the Gendy2 generator was used, with one sinewave oscillator modulating the a parameter and another modulating the c parameter of its Lehmer-style random number generator, the kind of generator also used by Xenakis.
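
A hedged sketch of dynamic stochastic synthesis follows; the parameter values are assumptions loosely modeled on the Gendy2 help-file examples rather than the original script, with sine oscillators modulating the a and c parameters of the Lehmer generator.

```supercollider
(
Ndef(\gendy, {
    var sig = Gendy2.ar(
        ampdist: 1, durdist: 1,
        adparam: 1.0, ddparam: 1.0,
        minfreq: 50, maxfreq: 400,
        ampscale: 0.3, durscale: 0.1,
        initCPs: 12, knum: 12,
        a: SinOsc.kr(0.4, 0, 0.05, 0.05),  // modulates the Lehmer 'a' parameter
        c: SinOsc.kr(0.3, 0, 0.1, 0.5),    // modulates the Lehmer 'c' parameter
        mul: 0.2
    );
    Pan2.ar(sig, 0);
}).play;
)
```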

5 The Digital Age (1990–Today)

After a brief display of the Electroacoustic Music Evolution, the last era in my script is the Digital Age. The integration of noise as a major part of the composition process is a well-known element of this era, although it is not an innovative practice: Russolo (1885–1947) had already worked on mechanical noise-producing instruments decades prior to the Digital Age (Holmes & Pender, 1985). Noise is one of the elements shared among all the eras. In the Early Electroacoustic Music era, Schaeffer aspired to combine music and noise in his work, while ambient noise was a common ingredient of Elektronische Musik (Holmes & Pender, 1985). In fact, white-noise generators were quite common in the NWDR studio. A good example of a musical work integrating noisy fragments is Eimert's "Klangstudie I," where "noises appear into washes of echo frizz" (Holmes & Pender, 1985). Cage also audibly blended noise into his work (as in "Fontana Mix"). It is important to mention that, until the Digital Age and the general shift to digital recording, the presence of noise was sometimes unavoidable. This could be one of the reasons behind the various attempts to integrate noise creatively into music. It should be noted, though, that the amount of noise was mostly controlled by the composers.

With the digitalization of music recordings, noise manipulation remained an important composition technique. In the script, white noise (WhiteNoise.ar) is generated, initiating the first music event of the Digital Age. This signal is transformed into a hi-hat sound by applying a resonant high-pass filter (RHPF.ar) (Karamanlis, 2021). Using this as a beat, the next music event introduces microsounds and glitches. More specifically, a SynthDef (Rumush, 2015) designed to imitate a chaotic wall of glitch sounds was adapted to match the aesthetics of the current work. This instrument consisted of one bass-like sound from a sinewave oscillator; three tone-like sounds generated by sinewave oscillators, one of which was placed in the stereo field; one pink-noise generator (PinkNoise.ar); and an impulse oscillator (Impulse.ar), to which a resonant low-pass filter was applied.
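
A hedged sketch of the noise-to-hi-hat step is given below; the cutoff, resonance, envelope, and tempo are illustrative assumptions.

```supercollider
(
SynthDef(\hat, { |amp = 0.3|
    var env = EnvGen.kr(Env.perc(0.001, 0.08), doneAction: 2);
    var sig = RHPF.ar(WhiteNoise.ar, 6000, 0.3);   // resonant high-pass on white noise
    Out.ar(0, Pan2.ar(sig * env * amp, 0));
}).add;
)

// a simple repeating beat; stop it with ~beat.stop
~beat = Pbind(\instrument, \hat, \dur, 0.25).play;
```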

The next music event of the Digital Age covers the creation of ambient sounds. Ambient sounds were introduced as early as the 1960s by artists such as De Maria and Varèse, and later, during the Electroacoustic Music Evolution, by Brian Eno and Harold Budd (Holmes & Pender, 1985). Ambient music was associated with atmospheric soundscapes and the background decoration of public spaces, such as airports (Holmes & Pender, 1985). The main idea was to create a musical piece that would make the audience pay attention to everyday sounds that are otherwise ignored (Manning, 2004). It was fused with styles such as jazz and electronic music, but it gradually acquired its own identity as a separate style, known as "ambient" or "space music." With the technological developments in music creation, other genres, such as pop and rock, became more popular, but Aphex Twin made the style relevant again during the 1990s with the release of "Selected Ambient Works 85–92" and "Classics" (1995) (Manning, 2004). In this interactive algorithmic composition, the ambience is presented using an instrument designed by Karamanlis (2021) to imitate a "relaxed" sound environment. The main element of this instrument is a bank of fixed-frequency resonators (Klank), which can be used to simulate the resonant modes of an object.
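
The sketch below shows a Klank-based ambient texture in that spirit; the resonator frequencies, ring times, and excitation signal are assumptions rather than Karamanlis' original instrument.

```supercollider
(
Ndef(\ambient, {
    // sparse impulses plus faint pink noise excite the resonator bank
    var exciter = (Dust.ar(4) * 0.02) + PinkNoise.ar(0.001);
    var sig = Klank.ar(
        `[[200, 301, 452, 677, 903], nil, [4, 4, 4, 4, 4]],  // freqs, amps, ring times
        exciter
    );
    Pan2.ar(sig, LFNoise1.kr(0.1));   // slow drift in the stereo field
}).play;
)
```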

The final example of this era, and of the interactive script in general, was the creation of a soundscape. A sinewave oscillator and a dynamic stochastic synthesis generator incorporating three other sinewave oscillators were selected and placed in the stereo field, creating a windy soundscape that gradually became the only element of the composition. As this music event was activated through a NodeProxy, the rest of the instruments and players faded out. The overall structure thus began with simple sounds that gradually led to a chaotic burst of sound, which was abruptly cut by silence; it then built up to yet another chaotic wall of sound that faded into an ambient environment, was reduced to a windy soundscape, and led naturally to the termination of SonicDesignHistory.
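
A simplified, hedged sketch of such a closing soundscape is given below; all values are assumptions, and only two sine oscillators modulate the stochastic generator instead of the three described above.

```supercollider
(
Ndef(\wind).fadeTime = 8;   // long fade so the soundscape takes over gradually
Ndef(\wind, {
    var low  = SinOsc.ar(60, 0, 0.05);
    var gust = Gendy1.ar(
        1, 1, 1.0, 1.0,
        minfreq: SinOsc.kr(0.05).range(80, 200),
        maxfreq: SinOsc.kr(0.07).range(300, 900),
        ampscale: 0.3, durscale: 0.05, mul: 0.1
    );
    Pan2.ar(LPF.ar(low + gust, 1200), SinOsc.kr(0.03));   // slow stereo movement
}).play;
)
```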

6 Similarities and Differences

The presentation of the methods and techniques used in the interactive system shows that there are plenty of elements that both unite and distinguish the different music styles and eras. To create something unique and innovative, many composers borrowed earlier techniques and respectfully advanced them. One example is the use of filters, which were employed for sound manipulation throughout the whole of electroacoustic music history, earlier in the form of delays and reverbs and later as low-pass filters for manipulating low-frequency and noise signals. Between the Early Electroacoustic Music era and the Electroacoustic Music Evolution there is also the common element of randomness. Stockhausen let performers select the sequence in which they would play musical fragments, constrained to a predefined collection (Manning, 2004). Schaeffer selected random pieces of an audio file to achieve the desired sound montage. Xenakis used Lehmer's random number generator to control the musical structure based on mathematical sequences. None of them used randomness in a way that would create chaos or absolute freedom.

Despite these commonalities, many new strategies resulted in the creation of new eras characterized by novel sound identities. For example, Schaeffer's tape manipulation was a unique practice, as were Reich's phase shifting and Cage's emphasis on silence. Of course, even between artists of the same era there were contradictions and completely opposite compositional directions, as happened with Musique Concrète and Elektronische Musik. Even so, they shared the common element of electronic sound, either "pure" or combined with acoustic elements.

My exploration aimed to achieve a historically faithful result by reading multiple sources, such as books and journals, and consulting historical musicologists. I believe it was indeed a successful attempt, providing a clear walkthrough of some of the main parts of electroacoustic music history. The presentation received positive comments during the Sonic Design Seminar, but in the future it would be highly interesting to gather user feedback and perform a scientific evaluation of the script. The composers of electroacoustic music are not restricted to the ones selected here, and neither are the composition methods, but the goal was to present a brief overview of a selection of strategies from Europe and the USA. The relationship between sound and algorithms also became clear through investigating techniques that define the sound by manipulating it in a certain way, as did the complexity of this form of synthesis and how close it is to human-made art.

There were plenty of challenges throughout the conception of this creative attempt. When the purpose is only creative activity, one can justify one's choices mainly by one's vision and personal expression. Here, this was not the case: apart from the crucial creative aspect, it was also important to present a meaningful structure and convey substantive information about the evolution of electroacoustic music. An important challenge was the selection of composers and techniques, as well as their meaningful categorization within a clear algorithmic structure. Furthermore, even though many of the composition techniques seem, in theory, to share commonalities, this is not always the case in their algorithmic implementation, and the same applies to the resulting sound. Important decisions therefore had to be made regarding the placement of several techniques within the script. Another important challenge was the simulation of analogue techniques in a digital environment. There were sometimes differences in sound, and it was difficult to achieve certain results. However, the resulting sound was very close to the desired outcome, and it proved feasible to imitate both the techniques and the sounds of the electroacoustic music scene using algorithmic methods.

As far as the live coding was concerned, interaction was an important factor. The script was made for audiences unfamiliar with algorithmic composition and live coding, so all the information had to be presented concisely. For this reason, I developed an oral presentation preceding the live coding session to guide the audience's attention and help them understand the script's syntax.

7 Conclusion

This project has led to a deeper understanding of various electroacoustic music composition strategies, including their historical development and implementation. Familiarization with the techniques was essential in order to gain the confidence to interact with them creatively. The interactive script was structured so that it could be conveyed easily to the general public. It is also important to address the aesthetics that this particular form of music technology brings to music creation; the techniques and styles that were imitated were therefore put together in a way that would be aesthetically pleasing for the audience.

I believe there is high musicological value in creating such interactive scripts. It is a new way to present musical information, and the interactivity makes it both playful and helpful. While my project was mainly analytical in nature, the approach could also expand musical creativity, combining music theory and history.

In the future, it would be interesting to explore more aspects of electroacoustic music history. For example, more composers and techniques could be included, such as Stockhausen's envelope design and techniques like ring modulation. It would also be interesting to include music from countries beyond the current European and North American examples, as well as to present multiple ways of algorithmically implementing the same technique and to examine all the possibilities and sound outcomes. The notebook will hopefully be a source of inspiration to other SuperCollider users and live coders, and it could start a discussion about, and an evolution of, technologically mediated music history studies by enabling open collaboration and the sharing of ideas.

Finally, future work includes creating an online interactive application to assist education. More specifically, the aim is to develop a way to teach music history, algorithmic composition, and interactive sonic design through descriptive comments and valuable sources. This could be a new way to teach music history and the theory behind the various techniques. Many of the techniques are theoretically complex, so practice and interaction help overall comprehension.