Generic Additive Synthesis. Hints from the Early Foundational Crisis in Mathematics for Experiments in Sound Ontology

  • Julian Rohrhuber
  • Juan Sebastián Lach Lau
Part of the Computational Music Science book series (CMS)


Motivated by an investigation of the historical roots of set theory in analysis, this paper proposes a generalisation of existing spectral synthesis methods, complemented by the idea of an experimental algorithmic composition. The background is the following argument: ever since 19th-century sound research, the idea of a frequency spectrum has been constitutive for the ontology of sound. Despite many alternatives, the cosine function thus still serves as a preferred basis of analysis and synthesis. This possibility has shaped what is taken as the most immediate and self-evident attributes of sound, be it in the form of sense-data and their temporal synthesis or the aesthetic compositional possibilities of algorithmic sound synthesis. Against this background, our article considers the early phase of the foundational crisis in mathematics (Krise der Anschauung), where the concept of continuity began to lose its self-evidence. This permits us to reread the historical link between the Fourier decomposition of an arbitrary function and Cantor’s early work on set theory as an opportunity to open up the limiting dichotomy between time and frequency attributes. With reference to Alain Badiou’s ontological understanding of the praxis of axiomatics and genericity, we propose to take the search for a specific sonic situation as an experimental search for its conditions or inner logic, here in the form of a decompositional basis function without postulated properties. In particular, this search cannot be reduced to the task of finding the right parameters of a given formal frame. Instead, the formalisation process itself becomes a necessary part of a dialectics that unfolds at the interstices between conceptual and perceptual, synthetic and analytic moments, a praxis that we call musique axiomatique.
Generalising the simple schema of additive synthesis, we contribute an algorithmic method for experimentally opening up the question of what an attribute of sound might be, in a way that we hope will inspire mathematicians, composers, and philosophers alike.



1 Spectres of Accumulation

Adding up two numbers, adding up many numbers – this appears to be a most unquestionable and intuitive activity. Cutting a number in two, grouping its parts, is no less self-evident. The concept of natural number itself suggests a definite idea of accumulation, and thereby serves as a blueprint for other domains. Be that as it may – anyone who has ever worked with sound knows that understanding what happens in adding up and mixing, separating and analysing, is far from trivial. The addition of one element may cancel out another; one part may interfere with, or recontextualise, others, become indistinguishable or unrecognisable – or may, for no apparent reason, suddenly turn out to be entirely separable, untouched by the whole it coexists with. This is ultimately the reason why harmonic and rhythmic relationships have never ceased to provide an interesting and endless topic for investigation. It could also be part of the reason why it is so difficult to specify sound.

The simplest correlate in the realm of elementary arithmetic (we could call it Pythagorean) is the fact that addition entails multiplication: one cannot in general tell in advance, for example, whether an unknown even number will result in a prime when one is added to it. In other words, the properties of a sum are non-trivial, and they are so already in the truly elementary case. For the inventory of mathematical entities that has grown and shifted over the history of mathematics, such as infinitesimals, functions, sets, groups, and categories, it does not become much easier; even more, the very notion of addition becomes a matter that needs, depending on the subject matter, a separate justification.

Adding up, taken in full generality, does indeed entail both mathematical and philosophical challenges. In particular, in absence of an immediately given continuous grounding, the consequences of “making the next step” may be unforeseeable in the most general sense. Alain Badiou notes:

To understand and endure the test of the additional step, such is the true necessity of time. [...] There is nothing more to think in the limit than in that which precedes it. But in the successor there is a crossing. The audacity of thought is not to repeat ‘to the limit’ that which is already entirely retained within the situation which the limit limits; the audacity of thought consists in crossing a space where nothing is given. We must learn once more how to succeed. [1, 81f.]

2 The “Birth Place of Set Theory” and Its Potential Relevance to the Ontology of Sound

Spectrum and multiplicity are historically related concepts. The 19th- and early 20th-century attempts to gain a better understanding of the concept of function are among the most telling in this respect. Let’s briefly recapitulate1:

The idea of ‘being the function of something’, of a linear continuum in particular, was at the source of the concept of function, which entailed ideas of dependent change (derivatives) and cumulative volume (integrals) that made it possible to ask questions about the specific properties and laws of functions. Thereby, the infinite series, the possibility to understand a function as a sum of other functions, became one of the most indispensable as well as problematic devices in the then emerging branch of mathematics, real analysis. But this idea of a ‘spectrum’ of a function also led to the radical rethinking of its domain, the continuum.

From a very general point of view, one can say that the idea of prismatic composition/decomposition exposes the possibility of looking at one and the same thing from different perspectives. Its effectiveness lies in the fact that some perspectives reveal properties that could never have been understood from any other. This is also what explains the ontological gravity of the spectrum: if its partials are mere devices to approach the whole of an intuitively continuous shape, what does it mean if, for some points, their sum does not converge to a single number? Or if, for some spectra, a rearrangement of their terms leads to a different result? Has one chosen the wrong ‘alphabet’ to form the ‘words’ of a given relation?

The decomposition of a function into trigonometric functions had its beginnings in the problem of understanding the movement of a plucked string, and because of the potential of the Fourier Series for calculating the ‘image’ of any function whatsoever, over the 19th century, harmonic analysis became a paradigmatic medium for the understanding of functions. Of course not only of mathematical functions in general, but also of sound. Even if the qualities of sound may escape immediate understanding, once the partial is assumed to be intuitive and self-evident, should it not be possible to finally access the totality of all possible sounds in one spectral world image? Should not the knowledge of the principal dimensions of sound allow access to every one of its instances?

Even though additive synthesis and harmonic analysis approach completeness sufficiently in many cases and can thus indeed be helpful, the harmonic spectrum is not nearly as productive as one may think in finding and understanding unknown sounds. As it turns out, the difficulty lies in the interrelation between the coefficients and in finding the law that describes them best. The case of transients (or discontinuities) illustrates that the sum of partials does not converge well to some wave forms, and that the rules according to which it does are not helpful for understanding and, by implication, for finding interesting variations.

Over the course of the 19th century, by establishing alternative and operatively adequate perspectives on the properties of functions, the non-trivial domain of partials and the limits of their series stabilised a process that slowly eroded the intuitive geometric image of a function. In the face of so-called “monstrous” functions (today they are less dramatically called “pathological”), many obvious concepts had to be reviewed, an important one among them being the hitherto rather unsuspicious identity between the continuous and the differentiable. Essentially, the early “crisis of intuition”2 was an ontological one: should those monsters be admitted as properly existing, even though they contradicted the most basic spatial intuitions and could not be clearly visualised? Inspired by his senior colleague in Halle, Eduard Heine, Georg Cantor endeavoured from 1869 onward to extend the possibility of representing (and thus making sense of) arbitrary functions in terms of infinite sums of trigonometric functions. He succeeded in showing that the series is unique (and its coefficients thus irreplaceable by another set of parameters) even for functions for which infinitely many points fail to converge to a single number. Sums of harmonic oscillations can indeed represent extremely discontinuous functions.

In subsequent years, the mathematical devices that Cantor developed in the course of these proofs were to become the impulse for his development of transfinite numbers, and were to motivate his conception of actual infinite and transfinite sets: accepting infinite series of rational numbers as properly existing entities (rather than mere approximations), allowed him to convey access to the extremely rich, but also disputed, structure of the continuum. In such a way, what is now called the Fourier Series is the entry point into modern set theory. Ernst Zermelo, who in 1932 edited the collection of Cantor’s papers, writes:

In the concept of “higher order derivations” of a point set, we thus should behold the proper nucleus, and in the theory of trigonometric series the birth place, of the Cantorian “set theory” (p. 102).

Considering the significance that set theory and harmonic analysis each hold for their respective fields, making sense of this transitory moment should be of interest to those who work at the intersection between mathematics and sound. So, how do we understand this fact from the perspective of sound? As Alain Badiou has emphasised, Cantor’s affirmation of the transfinite is an essential step in the history of ontology, because it departs from the idea of the “unity of being as such”—the continuum, rather than being a lawless or tensionless matter that serves as a medium of inscription for the arbitrary cuts enacted by thought, turns out to be a cloven, abstracted and non-unifiable landscape of structures. The idea is not, however, a total rule of orderless noise over each local part that renders it unintelligible. Monsters, even though counterintuitive, always constitute some new laws.
An aspect, or property, that cannot be described with the given means, a subset that is therefore indiscernible from its background with the means given in this horizon, has been called “generic”. According to the reconceptualisation of Paul Cohen’s notion by Badiou, the generic set is

neither a known or recognized multiple, nor an ineffable singularity, but that which detains in its multiple-being all the common traits of the collective in question: in this sense, it is the truth of the collective’s being. [2, p.17]

It is in this sense that mathematical monsters are generic: they have no proper place in the given order, so, if one chooses to accept their existence nevertheless, they make it necessary to find a new analytical apparatus instead of relying on the generality of the existing one. Such a process cannot proceed from a full understanding, a transparent intuition of a space for a free unfolding of self-evident laws. Finding an appropriate description, conversely, requires an incomplete process of experimentation and conjecture, which in the following we shall call partial understanding. We are now in the position to ask: how can we find new laws of sound and how can we enter a process of partially understanding their consequences? As one possible step in this direction, we propose a generalised, or better generic, form of additive synthesis that is inspired by the so far discussed “birth place” of set theory.

3 The Epistemic Value of Base Functions

In general, Fourier’s most celebrated contribution is widely applicable because it provides a method (the Fourier Transform) that calculates the coefficients for each partial and works in many cases. It also serves as an intuitive model of breaking a complex signal into more accessible parts. The cosine function (or its equivalents), parameterised in phase and amplitude, effectively is a coordinate system that gives access to every point in the space of possible (and thus arbitrary) functions.

Since its discovery, many other functions have been found that serve as ‘equally general’ basis functions for linear combinations, e.g. the Chebyshev polynomials (1854) and spectral modeling [16]. Perhaps most influential today is Dennis Gabor’s application of the uncertainty principle from quantum physics to sound, with its information-theoretical approach, which explicates possible trade-offs between frequency and time representation [6, 13, 14] in the form of acoustical quanta. Among others, Gabor’s ideas inspired wavelet analysis [10], which uses distributions of suitably windowed and translated partials in order to render the decomposition more adequate to certain sound qualities. Even in the ideal lossless case, however, each method still may convey or obscure given properties. As the authors of yet another decomposition, the Chirplet Transform (introduced for radar image processing), argue:

[e]ach of the chirplets essentially models the underlying physics of motion of a floating object. Because it so closely captures the essence of the physical phenomena, the transform is near optimal for the problem of detecting floating objects.3

A decompositional basis is an observational paradigm: the choice of a coordinate system determines how an object can be understood, and the very coordinates themselves constitute which properties, or aspects, become apparent and what kind of transformations are thinkable. By consequence, despite the universality of the Fourier series, its partials may be more or less well suited to construct or understand a given waveform, its decomposition being more or less able to convey its hidden inner logic. Hence, a more general perspective on the idea of a ‘spectrum’ may be practically helpful, and ontologically necessary.

4 Generic Additive Synthesis

Most transforms mentioned so far have inspired specific methods in sound synthesis. Chebyshev polynomials, for example, are typically used in waveshaping [9], wavelet-like sound functions in granular synthesis and microsound [5, 13]. The difference between analysis and synthesis is gradual: just as each method has a distinct sound character, it equivalently has its own domain of sonic investigation.

The general method underlying all of the above is additive in a broader sense: fixing a number of simple functions (pseudo-partials) that can be transformed in some systematic way, and then combining them (by pseudo-addition). So one starts with a list of functions that obey a common law, then combines them in some systematic way, usually with coefficients regulating ‘how much’ a certain partial contributes to the whole.

Expressed as a function of time, such generic additive synthesis can be written in terms of partials g and combinator G:
$$\begin{aligned} f(t) = \underset{\scriptscriptstyle {i = 1}}{\overset{\scriptscriptstyle {n}}{G}} g_i(t, c_i) \end{aligned}$$
where \(g_i\) represents a partial (each different, depending on i). Every partial is a function of time t and takes a coefficient \(c_i\), conveniently chosen such that the partial contributes only if \(c_i \ne 0\).4 Finally, G is the combinator, a generalised map5 that joins n partials into \(\mathbb {R}^m\), in a way that entirely depends on the method chosen.
The basic schema at work here is an interweaving of two perspectives: the partial function describes the ‘horizontal’ dependence on time t (e.g. the shape of a harmonic oscillation), as well as the ‘vertical’ dependence on the partial number i (e.g. the frequency). By consequence, only a minor shift is needed for both \(c_i\) and \(g_i\) to be understood as functions of i (‘vertical order’) and t (‘horizontal order’). Thus it is sometimes adequate to treat the generic spectrum as factored into a new basis function \(g^{\times }_i(t)\):
$$\begin{aligned} f(t) = \underset{\scriptscriptstyle {i = 1}}{\overset{\scriptscriptstyle {n}}{G}} g^{\times }_i(t) \end{aligned}$$
Apart from its temporal evolution, each of the n partials is determined by its place i in the spectrum, and \(g^{\times }\) thereby is the name of the crossing point between the specifications of partial and spectrum. Rather than a fixed space and a variable set of coordinates, both are here on the same level, and may equally be subject to variation.6
Factoring the other way round, the combinator and the partials can be seen as a single function that takes a sequence of coefficients, a generic spectrum:
$$\begin{aligned} f(t) = G^{\times }(t, s) \end{aligned}$$
As it is the case with conventional additive synthesis, each instance of a generic spectrum s is an ordered tuple of coefficients \(\langle c_1, c_2 \dots c_n \rangle \). We shall come back to this formulation later.
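To make the schema concrete, here is a minimal Python sketch (the paper’s own experiments use SuperCollider; all names and the 110 Hz cosine family below are our own illustrative choices). Conventional additive synthesis appears as the special case where the combinator G is ordinary addition:

```python
import math
from functools import reduce

def generic_additive(partials, coefficients, combinator):
    """Build f(t) = G_{i=1..n} g_i(t, c_i): the partials g_i are
    combined pairwise by a binary combinator interpreting '+'."""
    def f(t):
        values = [g(t, c) for g, c in zip(partials, coefficients)]
        return reduce(combinator, values)
    return f

# Special case: conventional additive synthesis, where G is addition
# and g_i(t, c_i) = c_i * cos(2*pi*i*110*t).
partials = [lambda t, c, i=i: c * math.cos(2 * math.pi * i * 110 * t)
            for i in range(1, 9)]
coefficients = [1.0 / i for i in range(1, 9)]
f = generic_additive(partials, coefficients, lambda a, b: a + b)

# Replacing only the combinator reinterprets the 'one more step':
# the same partials combined by multiplication yield a different f.
f_mul = generic_additive(partials, coefficients, lambda a, b: a * b)
```

Exchanging only the combinator, or only the partial family, changes the whole observational paradigm while the schema itself stays fixed.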

Before we discuss some consequences, a note on terminology. In related methods like additive synthesis, the basis function is assumed to be known—it is the ‘type’ of the dimensions of the space, and, for a given function, it is really the coefficients that are unknown. Movement is understood as a movement through a fixed space. In this narrow sense, a distribution can be taken as general in so far as it completely and uniquely represents any arbitrary function given in another well-defined domain—the main task is to find the right coefficients. In the broader sense of decomposition proposed here, however, there is no given basis function with reliable properties, and thus no ‘type’ given in advance.7 Instead of being general, it is generic.8

The task is now to show how the two schemata of generic additivity become productive under the specific conditions of algorithmic sound synthesis.

5 Musique Axiomatique

It is well known that the immediacy of the visualisation of a wave form or a spectrum is misleading: sound can be very difficult to specify. That is, the relation between some formal or causal description of a sound and its aesthetic or even physical consequences is non-trivial.

In such cases, the classical method is to make a clear divide between what is given in advance (e.g. the instrument or synthesis method) and what is subject to variation (e.g. the score or parameters). The instrumentation is then, first of all, the choice of a suitable relation between those two parts, the given and the unknown. Or, in the context of our present discussion, one can say that it is the search for a relation between a given basis function and an unknown set of coefficients.

The above mentioned foundational discourses in 19th and early 20th century mathematics not only brought about the discovery of new subject matters, but also affected the relation between known and unknown: while in the classical understanding, the axiom was to be understood as that which is self-evident and indubitably true, it increasingly became that of a posit, a starting point, even a counterintuitive precondition necessary for a certain fabric of investigation. Questioning the self-evidence of the continuum was one of them. Since then, axiomatic thought has become a back and forth movement between conditions and consequences rather than simply a construction from first premises.

In such a movement, formal languages have attained the role of a medium, pretty much like that of measurement instruments in a laboratory. And while today algorithmic proof systems slowly enter mathematical reasoning, high level programming languages are already a well established medium for sound synthesis and algorithmic composition. Having a common language for instrument and score has decisively blurred their distinction. Being able to modify code at runtime (interactive programming or live coding) further allows us to reconsider the temporal distinction between precondition and consequences. Therefore we are well equipped to embark on an experimental praxis of modern axiomatics that neither denies the sensual and situational qualities of sound nor the possibility of its mathematical and algorithmic formalisation—a praxis which we like to call musique axiomatique. Here, there is no need to keep the order between first devising a fixed synthesis method and then looking for the appropriate parameters. Rather, it becomes the very principle for interweaving algorithms that unfold in time and algorithms that specify their mutual relations, so that the path to finding a new sound moves back and forth between the rewriting of the one or the other.

6 Experiments in Partial Understanding

The existing and widespread decompositions—what Mazzola has called “omnibus-decompositions” [12, 899]—obey constraints that are necessary to address specific domains. These domains are inhabited by certain properties, in particular the complementary pair of frequency spectrum and points in time. This is why the laws of such decompositions can be seen as epistemological consequences of the ontological structure of the sound that they investigate. If we want to investigate other domains, by experimenting and reasoning, we may find other decompositions which are adequate to them, in particular implying properties that do not have to be ‘located’ in time and frequency as with the others. Axiomatics in the modern sense is, as we have seen, not the positing of self-evident properties; here, it means the search for a basis function and its logic of combinations.

So how to start such an investigation, how to set up a generic additive synthesis experiment? How to ‘proceed’? Here we can only mention a few elements that serve as one possible starting point among many, keeping in mind that the aim is to develop a partial understanding—in the double sense of the word—of the procedures involved. In the experiments so far, we have worked with the SuperCollider programming language, which—given the necessity of dealing with multidimensional signals and arbitrary functions—is most suitable for the task at hand.9

6.1 A Comparison of Two Examples

Here are two very simple examples. The first is a sum of harmonically related cosines (multiples of 110 Hz) whose coefficients are a composite modulo function:

\(c_i = 1 / ((i \mod 7) + (i \mod 8) + (i \mod 11) + 1)\).
In the second example we instead have a product of pulsed frequency modulated cosines, where \(c_i = 1/i\).

From a conventional point of view, these two examples combine very different synthesis methods. The main difference lies in the function of each partial and the method of combining them. They implement the same structure in so far as both define the three components—partial, spectral coefficient, and combinator—separately, and then combine them according to the schema of generic additive synthesis.
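The two examples can be sketched in Python (the originals are SuperCollider programs; the exact ‘pulsed frequency modulated cosine’ of the second example is not spelled out in the text, so the FM partial and its parameters below are our own placeholders):

```python
import math
from functools import reduce

n = 32  # number of partials (our choice)

# Example 1: sum of harmonically related cosines (multiples of 110 Hz)
# whose coefficients follow the composite modulo law from the text.
def c1(i):
    return 1.0 / ((i % 7) + (i % 8) + (i % 11) + 1)

def f1(t):
    return sum(c1(i) * math.cos(2 * math.pi * i * 110 * t)
               for i in range(1, n + 1))

# Example 2: product combinator with c_i = 1/i. The partial is offset
# by 1 so that c_i = 0 leaves the product unchanged (the neutral
# element of multiplication, cf. footnote 4). The FM parameters
# (carrier i*110 Hz, modulator 7*i Hz) are hypothetical.
def g2(t, i):
    c = 1.0 / i
    return 1.0 + c * math.cos(2 * math.pi * i * 110 * t
                              + math.sin(2 * math.pi * 7 * i * t))

def f2(t):
    return reduce(lambda a, b: a * b,
                  (g2(t, i) for i in range(1, n + 1)))
```

Both functions instantiate the same three separately defined components: partial, spectral coefficient, and combinator.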


A few observations and remarks from the experiments so far:
  1.

    From conventional additive synthesis we expect that a large number of partials is necessary. This is often not so with a different basis function. In such cases, we can say that the series converges almost ‘too quickly’. Looking closer, the situation is this: the inherently polyphonic character of generic additive synthesis becomes interesting because of the interference between the partials: adding two waveforms may well result in cancellation or other unexpected but characteristic effects. For example, in the low frequency range and with sparse functions, the resulting sounds resemble percussion ensembles. Thus, it is sometimes useful to start with the minimal case in which only two partials are combined. This minimal constellation can then be extended by finding new laws for both the coefficients and the basis functions (i.e. the intersection between horizontal and vertical features). Here, partial understanding implies a search for spectral basis functions in conjunction with their parametrisation law.

  2.

    The resulting function need not be used directly as an audio output signal. It may well be sonified by different means, e.g. by modulating a parameter of a carrier wave.

  3.

    Allowing a certain distance from the predominant idea of the ‘preset’, axiomatic composition does not always need to externalise the parameters. This is the justification for the unusual inclusion of the coefficients into the partial in Eq. (2). At any stage the spectrum can be factored out again (1), moving to and fro between the first two equations.

  4.

    In many cases, the norm of what it means to have found a solution cannot be given in general (this is somewhat unsurprising as it applies to music in particular). One basic method of algorithmic composition responds to this challenge by superposing the algorithmic description as much as possible with its temporal unfolding, and thus with its perceptual and aesthetic qualities. As a program is by definition a future process, this superposition is necessarily incomplete. By consequence, rewriting code at runtime makes it necessary to delimit the relation of changes in the description to changes in the process. Proxies are an approach to solve this problem [3, 15]. Partial understanding means here to understand the relations between a partially changed description and its corresponding partial change in sound.


7 One More Step: Two Meanings of ‘Concatenating Combinators’

A generic combinator G can consist of any ordered sequence of operations. Having defined the operation of ‘addition’ in the most generic sense—namely as a binary operation of a ‘next step’—suggests cases where the operands are composed, rather than accumulated. In other words, the result of combining \(g_i\), \(g_j\), and \(g_h\) is no longer e.g. \(g_i + g_j + g_h\) or \(g_i g_j g_h\), but instead \(g_i(g_j(g_h))\). In the simplest case, this can be written as:
$$\begin{aligned} f(t) = g_1(t, c_1) \circ g_2(t, c_2) \dots \circ g_n(t, c_n) \end{aligned}$$
Here, the sum operator becomes the function composition operator,10 and the coefficients \(c_i\) of the spectrum determine the contribution of each partial in a series of nested function applications.
Thereby, e.g. a kind of spectral modulation, ‘concatenative phase modulation’, can be formulated. In the example that follows, each partial takes the previous one as phase input, and each partial’s carrier frequency depends on its index i in the series. The spectrum is slowly modulated by linear triangular oscillators.
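A minimal Python sketch of such a composition combinator follows (our own approximation: the slow modulation of the spectrum by triangular oscillators is omitted, and the carrier frequencies are hypothetical):

```python
import math

def phase_partial(i, c):
    """Partial g_i: a cosine whose carrier frequency depends on the
    index i and whose phase input is the next partial's output."""
    def g(t, phase):
        return c * math.cos(2 * math.pi * i * 110 * t + phase)
    return g

def concatenative_pm(coefficients):
    """f(t) = g_1(t, c_1) o g_2(t, c_2) o ... o g_n(t, c_n):
    the combinator is function composition, so the innermost
    partial g_n is evaluated first, fed with zero phase."""
    partials = [phase_partial(i, c)
                for i, c in enumerate(coefficients, start=1)]
    def f(t):
        phase = 0.0
        for g in reversed(partials):   # g_n first, g_1 last
            phase = g(t, phase)
        return phase
    return f

# Five nested partials with the spectrum c_i = 1/i:
f = concatenative_pm([1.0 / i for i in range(1, 6)])
```

Here each coefficient scales how strongly its partial's output modulates the phase of the partial enclosing it, so the ‘spectrum’ still determines the contribution of each nesting level.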

But note that function composition is indeed only the first of two possible interpretations of a concatenating combinator. The second interpretation one might call spectrum composition. It changes from an internal to an external perspective of concatenation: instead of combining a sequence of elementary partials, it concatenates a sequence of spectra.

For this, we re-expose the generic spectrum \(s = \langle c_1, c_2 \dots c_n \rangle \) (see Eq. 3), consisting of the coefficients of each partial, in the form of \(g_i(t, c_i)\). Treating the coefficients as m extra parameters of G, the spectrum can itself become a time varying argument of a function \(G^{\times }(t, s)\). Because the original combinator G can in principle map any number of partials into any number of ‘channels’ in \(\mathbb {R}^m\), we can interpret the output (codomain) of one as the spectrum (domain) of another.

This requires considering the combined signal as a set of functions. A sequence of G is then ‘horizontally’ combined by concatenation:
$$\begin{aligned} f(t) = G^{\times }_1 \circ G^{\times }_2 \circ G^{\times }_3 \dots \circ G^{\times }_n \end{aligned}$$
Such a string of concatenated generic terms \(G^{\times }_n\) essentially represents an ordered set of mappings between generic spectra. In terms of sound synthesis, we simply have an m-channel signal chain, where each node maps one spectrum to the next. The mappings can be conveniently arranged so that they form a monoid: they can be combined arbitrarily, because each output can serve as input for any other. The composition operation could, in turn, also be expressed by a second order combinator, and a corresponding second order spectrum that encodes the contribution of each operand. Instead, we have devised a domain specific language11 that is useful for experimenting with heterogeneous mappings that do not follow from a single definition by variation. In favour of a concluding summary, we leave this topic to future discussion.
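The external reading, spectrum composition, can likewise be sketched in Python: each stage \(G^{\times }_k\) maps one m-channel spectrum to the next, and because domain and codomain coincide, the stages compose freely. The particular stage function below is our own hypothetical choice:

```python
import math

def make_stage(k):
    """Hypothetical stage G_k: each output channel is a cosine whose
    carrier depends on stage and channel index, phase-modulated by
    the corresponding channel of the incoming spectrum."""
    def G(t, spectrum):
        m = len(spectrum)
        return tuple(math.cos(2 * math.pi * (k + j + 1) * t + spectrum[j])
                     for j in range(m))
    return G

def chain(stages):
    """'Horizontal' concatenation f = G_1 o G_2 o ... o G_n over
    m-channel spectra: each node maps one spectrum to the next."""
    def f(t, spectrum):
        for G in reversed(stages):     # innermost stage first
            spectrum = G(t, spectrum)
        return spectrum
    return f

# A 2-channel signal chain of three stages, evaluated at t = 0:
f = chain([make_stage(k) for k in range(3)])
out = f(0.0, (0.0, 0.0))
```

Since every stage maps \(\mathbb {R}^m\) to \(\mathbb {R}^m\), any output can feed any input, which is what makes the set of mappings a monoid under composition (with the identity mapping as neutral element).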

8 A Final Note on the Ontology of Sound

Formally, our proposal is indeed very minimal, little more than a spectral skeleton. We hope, however, that the historical and conceptual analysis has oriented it in such a way that it inspires new ideas at the intersection between mathematics, philosophy, and music.

Generic additive synthesis results in sounds that are on the verge between singularity and plurality. It starts from the multiple without presupposing unity, arising from a common law without presupposing that the result will cover a given domain completely. Being much less specific than other forms of ‘additive’ synthesis, it comes with no guarantees of completeness, and, paradoxically perhaps, enforces a much more specific treatment. Intertwining an observational paradigm (consisting of a decompositional basis function and the combinator map) and the law that parametrises a singular sound object, this synthesis method makes a good example, but only one example, of musique axiomatique.

In many contemporary treatments of Fourier analysis, a strong opposition is made between the frequency and the time perspective, where the frequency and phase spectrum are shown to be insufficient with regard to capturing the discontinuous structure of temporal evolution. The spectrum appears as an illegitimate ‘eternalist’ rationalisation of the anomalies of noise. It is interesting, however, that historically, the harmonic decomposition had precisely the opposite role, namely to provide a way to find and convey ever larger sets of discontinuous points in the seemingly smooth continuum. The experience of an insufficiency of the Fourier series may thus merely result from the projection of an infinite series onto a finite one, and from the difficulty of actually finding the laws that allow us to understand the spectrum of a given function. In this sense, the experimentation with alternative basis functions assumes the role of opening up new methods for conveying a mix of the continuous and the discontinuous, and of escaping the false choice between immediacy and eternity. More than that, perhaps, it permits a focus on the particularly difficult problem of choosing the right partial: as we have seen in the experiments so far, generic additive synthesis is not so much a question of convergence at a high number of partials anymore—it is less a matter of the limit than a matter of finding the adequate successor.

In truth, the ordinal limit does not contain anything more than that which precedes it, and whose union it operates. It is thus determined by the inferior quantities. The successor, on the other hand, is in a position of genuine excess, since it must locally surpass what precedes it. As such – and this is a teaching of great political value, or aesthetic value – it is not the global gathering together ‘at the limit’ which is innovative and complex, it is rather the realization, on the basis of a point at which one finds oneself, of the one-more of a step. Intervention is an instance of the point, not of the place. [2, Appendix 3, p. 451]

Sound is a domain that matches this description surprisingly well.


  1.

    In the historical description, we largely follow [7], as well as Cantor’s collected papers [4].

  2.

    The “crisis” of intuition was called the “Krise der Anschauung” in the German discourse [17].

  3.

    They continue with acoustic examples: “Besides applying it to our radar image processing interests, we also found the transform provided a very good analysis of actual sampled sounds, such as bird chirps and police sirens, which have a chirplike nonstationarity, as well as Doppler sounds from people entering a room, and from swimmers amid sea clutter” [11].

  4.

    In the general case, these partials need not be linearly independent, and the coefficients need not be unique for a given resulting function. It is convenient, however, if we know a coefficient that cancels the contribution of the respective partial (typically zero). This means that, depending on the combinator G, we need different scaling functions for each partial. With an explicit generalised scaling function and a neutral element e with regard to G (i.e. zero for addition and one for multiplication), we can write: \(f(t) = \underset{\scriptscriptstyle {i = 1}}{\overset{\scriptscriptstyle {n}}{G}} \; c_i g_i(t, 1) + (1 - c_i) e\)
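    As a minimal sketch of this scaling scheme (our own illustration, not from the original text; the function names, the cosine basis, and the choice of combinator are all hypothetical), the following uses multiplication as the combinator G, whose neutral element is e = 1, so that a zero coefficient turns its partial into the neutral element and thereby cancels its contribution:

    ```python
    from functools import reduce
    from math import cos, pi

    def generic_synthesis(t, partials, coeffs, combine, e):
        """Blend each partial towards the neutral element e of the
        combinator, then fold: G_i [ c_i g_i(t) + (1 - c_i) e ]."""
        terms = [c * g(t) + (1 - c) * e for g, c in zip(partials, coeffs)]
        return reduce(combine, terms)

    # Two cosine partials; a multiplicative combinator with e = 1.
    g = [lambda t, k=k: cos(2 * pi * k * t) for k in (1, 2)]
    out = generic_synthesis(0.5, g, [1.0, 0.0], lambda a, b: a * b, 1.0)
    # The second coefficient is zero, so its term is the neutral
    # element 1 and only the first partial remains: cos(pi) = -1.
    ```

    With `lambda a, b: a + b` and `e = 0.0` in place of the multiplicative pair, the same function reduces to the familiar additive case.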

  5.

    In all ‘conventional’ series, the combinator is just iterated addition: \(f(t) = g_1(t, c_1) + g_2(t, c_2) + \dots + g_n(t, c_n)\), or conveniently \(\sum _{i = 1}^{n}{g_i(t, c_i)}\), where usually \(g_i(t, c_i) = c_i g_i(t)\). In the general form, however, a combinator is thought of as any interpretation of ‘\(+\)’, thus any form of ‘one more’.
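    The conventional case can be sketched in a few lines (again our own illustration; the harmonic sine basis is just one possible choice of \(g_i\)):

    ```python
    from math import sin, pi

    def additive(t, coeffs):
        """Iterated addition as the combinator: f(t) = sum_i c_i g_i(t),
        with g_i(t) = sin(2 pi i t) as an illustrative harmonic basis."""
        return sum(c * sin(2 * pi * (i + 1) * t)
                   for i, c in enumerate(coeffs))

    sample = additive(0.25, [1.0])  # single partial: sin(pi / 2) = 1
    ```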

  6.

    Operations on the spectrum will in this case be operations on the mapping \(i \rightarrow c_i\). Because both coefficient and partial depend on the same index i, the two terms (1) and (2) can be used interchangeably.

  7.

    This general schema neither leads to a method for calculating the coefficients in a given case, nor does it guarantee that the decomposition is orthogonal, unique, or linearly independent. But as we shall see more clearly in the next section, these properties need not be secured in advance where no type can be given anyhow.

  8.

    We are aware that the term generic may lead to misunderstandings, in particular due to the existing terminology in topology. We use the term to mark a distance from the idea of ‘generalisation’, following Alain Badiou’s and Paul Cohen’s concept of a generic set, as briefly explained in the last part of section 2. We have to leave open to what degree the precise ramifications of this concept remain adequate to its origin.

  9.

    Note that in the SuperCollider signal semantics, the time parameter t is usually factored out: UGens are essentially arrows, similar to the description given by Hughes [8].

  10.

    ‘One more step’ here simply means ‘one more function applied’. Note that this is a case where the order in which the partials are combined influences the outcome (function composition is in general noncommutative). Furthermore, the coefficient scaling function is a little more complicated: a coefficient of zero must result in the identity function \(f(x) = x\) when applied to a partial g.
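    A hypothetical sketch of such a composition combinator (the linear blend towards the identity for a zero coefficient is our own choice, not prescribed by the text):

    ```python
    from math import sin

    def scaled(g, c):
        # c = 1 yields g itself; c = 0 yields the identity function.
        return lambda x: c * g(x) + (1 - c) * x

    def compose_partials(partials, coeffs):
        """'One more step' = one more function applied; order matters."""
        f = lambda x: x
        for g, c in zip(partials, coeffs):
            h = scaled(g, c)
            f = lambda x, h=h, f=f: h(f(x))
        return f

    square = lambda x: x * x
    # Noncommutativity: applying sin after squaring differs from
    # squaring after sin.
    a = compose_partials([square, sin], [1.0, 1.0])(1.0)  # sin(1 ** 2)
    b = compose_partials([sin, square], [1.0, 1.0])(1.0)  # sin(1) ** 2
    ```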

  11.

    The concatenative language Steno is embedded in SuperCollider. For examples of generic additive synthesis, see:



In the process of experimenting with generic additive synthesis in a multichannel laboratory environment, the inspiring contributions by Hans W. Koch and Florian Zeeh were essential. We would also like to thank Guerino Mazzola for his ideas on frequency modulation in the present context. This paper would have lacked much of what we like about it without the continuing exchanges with Gabriel Catren, Maarten Bullynck, Renate Wieser, Tzuchien Tho and Alberto de Campo. The clarity of James McCartney’s programming language design choices made it easy to develop these ideas. Last but not least, Frank Pasemann and Till Bovermann have given extremely valuable comments on the terminology and formalisation used – it goes without saying that we take full responsibility for remaining errors.


  1. Badiou, A.: Number and Numbers (Le Nombre et les nombres). Des Travaux/Seuil (1990). English translation by Robin Mackay (2005)
  2. Badiou, A.: Being and Event. Continuum International Publishing Group, London (2007)
  3. Bovermann, T., Rohrhuber, J., de Campo, A.: Laboratory methods for experimental sonification. In: The Sonification Handbook. Logos Publishing House, Berlin (2011)
  4. Cantor, G.: Gesammelte Abhandlungen mathematischen und philosophischen Inhalts. Julius Springer, Berlin (1932)
  5. de Campo, A.: Microsound. In: Wilson, S., Cottle, D., Collins, N. (eds.) The SuperCollider Book, pp. 463–504. MIT Press, Cambridge (2008)
  6. Gabor, D.: Acoustical quanta and the theory of hearing. Nature 4044, 591–594 (1947)
  7. Grattan-Guinness, I. (ed.): From the Calculus to Set Theory, 1630–1910. An Introductory History. Princeton University Press, Princeton (1980)
  8. Hughes, J.: Generalising monads to arrows. Sci. Comput. Program. 37, 67–111 (2000)
  9. Le Brun, M.: Digital waveshaping synthesis. J. Audio Eng. Soc. 27(4), 250 (1979)
  10. Hernández, E., Weiss, G.: A First Course on Wavelets. Studies in Advanced Mathematics. CRC Press, Boca Raton (1996)
  11. Mann, S., Haykin, S.: The chirplet transform: a generalization of Gabor’s logon transform. In: Vision Interface ’91. Communications Research Laboratory, McMaster University, Hamilton, Ontario (1991)
  12. Mazzola, G.: The Topos of Music. Geometric Logic of Concepts, Theory, and Performance. Birkhäuser, Basel (2002)
  13. Roads, C.: Microsound. The MIT Press, Cambridge (2004)
  14. Rohrhuber, J., de Campo, A.: Waiting and uncertainty in computer music networks. In: Proceedings of ICMC 2004: the 30th Annual International Computer Music Conference (2004)
  15. Rohrhuber, J., de Campo, A., Wieser, R.: Algorithms today – notes on language design for just in time programming. In: Proceedings of the International Computer Music Conference, pp. 455–458. ICMC, Barcelona (2005)
  16. Serra, X.: A System for Sound Analysis/Transformation/Synthesis based on a Deterministic plus Stochastic Decomposition. Ph.D. thesis, Stanford University, Stanford, California (1989)
  17. Volkert, K.T.: Die Krise der Anschauung. Studien zur Wissenschafts-, Sozial- und Bildungsgeschichte der Mathematik, vol. 3. Vandenhoeck & Ruprecht, Göttingen (1986)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Institute for Music and Media, Robert Schumann Hochschule, Duesseldorf, Germany
  2. Conservatorio de Las Rosas, Morelia, Mexico
