1 Introduction

Applications of force feedback to the design of musical instruments have been studied since as early as 1978 at ACROE [14, 17, 21, 36]; Chap. 8 reports on recent advancements, and these works are described in detail in a preceding chapter. They provide a crucial reference for understanding the role that haptic technology can play in music. The wider computer music community has also demonstrated a sustained interest in incorporating force-feedback technology into musical works and projects, as evidenced by a series of projects over recent decades.

Gillespie et al. have created high-quality custom force-feedback devices and used them to simulate the action of a piano key [24, 26]. Verplank and colleagues, as well as Oboe et al., have led separate efforts to repurpose old hard drives into force-feedback devices for music [43, 55]. More recently, the work by Verplank and colleagues has been extended through a collaboration with Bak and Gauthier [2]. Several human–computer interface researchers have experimented with motorized faders for rendering force feedback [48], including for audio applications [1, 23, 54]. The implementation of a force-feedback bowed string has also been studied in detail using various force-feedback devices [21, 37, 42, 49].

More recently, Kontogeorgakopoulos et al. have studied how to realize digital audio effects with physics-based models, for the purpose of creating force-feedback musical instruments [32, 33]. Also, Hayes has endowed digital musical instruments (DMIs) with force feedback using the NovInt Falcon [28]. Most recently, Battey et al. have studied how to realize generative music systems using force-feedback controllers [3].

1.1 Multisensory Feedback for Musical Instruments

As described in Chap. 2, when a performer plays a traditional musical instrument, he or she typically receives auditory, visual, and haptic feedback from the instrument. By integrating information from these feedback modalities together [15, 39], the performer can more precisely control the effect of the mechanical excitation that he or she provides to the instrument (see Fig. 9.1).

Most digital musical instruments have primarily aimed at providing auditory and visual feedback [40]. However, haptic force feedback is an intriguing additional modality that can provide performers with enhanced feedback from a DMI. It has advantages such as the following:

  • It can provide information separately from the auditory and visual modalities as depicted in Fig. 9.1—for example, a performer may be busy looking at a score and want to be able to feel the instrument to find the specific buttons or keys to press.

  • Haptic information can be delivered directly to locally relevant parts of the human body.

  • Digital interactions can potentially be made more intuitive (potentially preventing sensory overload [31]) by providing feedback resembling familiar interactions in the real world.

  • Haptic devices are highly reconfigurable, so the feel of a haptic musical instrument can be customized widely depending on its current mode.

  • Based on what is reported in Chap. 5 for traditional instruments, when applied carefully, haptic feedback can provide further benefits such as enhanced user satisfaction, enhanced comfort/aesthetics, and/or a channel for sending private communications [31].

  • The human reaction time can be shorter for haptic feedback than for any other feedback modality [47].

  • Accordingly, due to the decreased phase lag in the reaction time, feedback control theory predicts that musicians could potentially play digital musical interfaces more accurately at faster speeds when provided with appropriately designed haptic feedback [22].

  • A similar increase in accuracy has been observed in some prior experiments in music technology [10, 45].

Fig. 9.1
figure 1

When a performer plays a traditional musical instrument, he or she receives auditory, visual, and haptic feedback. The performer integrates information together from these “multisensory” feedback channels [15, 39] while giving a mechanical excitation back to the musical instrument in response

1.2 Additional Force-Feedback Device Designs from the Haptics Community

Outside the realm of computer music, a wide variety of (historically often very expensive) haptic devices have been created and researched. Many of these have been used for scientific visualization and/or applications in telerobotic surgery or surgical training [12, 16, 29, 35, 38]. The expense of these devices makes it unlikely that they will trickle down to large numbers of practicing musicians, but they remain useful for research in haptics.

For instructional purposes, several universities have made simpler, less expensive haptic force-feedback devices. For example, the “Haptic Paddles” are single degree-of-freedom devices based upon a cable connection to an off-the-shelf DC motor [44]. However, such designs tend to be problematic because of the unreliable supply of surplus high-performance DC motors [25]. In contrast, the iTouch device at the University of Michigan contains a voice coil motor that is hand-wound by students [25]. However, making a large number of devices is time intensive, and the part specifications are not currently available in an open-source hardware format.

1.3 Open-Source Technology for the Design of Haptic Musical Instruments

Force-feedback technologies tend to be rather complex. Consequently, small-scale projects have been hampered because the technological necessities have demanded so much attention that little time remained for aesthetic concerns. Furthermore, the practical knowledge needed for prototyping haptic musical instruments has not been widely available, which has made it even more challenging for composers to access the technology.

In response, Berdahl et al. have created an open-source repository,Footnote 1 which contains simple examples that provide insight into the design of haptic musical instruments. These examples are built upon a series of open-source tools that can be used to rapidly prototype new haptic musical instruments. The main projects within the repository are the following:

  • The FireFader is an extensible and open-source force-feedback device design based on two motorized faders (see Fig. 9.2) [6]. Typically, the faders are feedback-controlled by a laptop: the faders’ positions are sent to a host computer via a low-latency USB connection, and force-feedback signals are rapidly sent back to the faders in turn (a minimal host-side control loop is sketched after this list). Drivers are provided for controlling the FireFader from Max, Pure Data, and Faust. Because the design is based on the Arduino framework, it can easily be repurposed into other designs.

  • The Haptic Signal Processing (HSP) objects from 2010 are a series of abstractions in Max that enable rapid prototyping of physics-based sound synthesis models [7], with an emphasis on pedagogy. Several core abstractions are provided for building such models.Footnote 2 Notably, physics-based models in HSP can be freely intermixed with other Max objects, which is useful for studying how physics-based models and traditional signal-based models can be combined. Vibrotactile haptics can also be explored in HSP simply by connecting audio signals to the appropriate HSP object.

  • Synth-A-Modeler [9, 11] is another tool for creating physics-based models. Table 9.1 summarizes the Synth-A-Modeler objects referred to in the rest of the chapter. Compared with HSP, the models created with Synth-A-Modeler are more efficient and can be compiled into a wider variety of target architectures using Faust [46]. However, HSP provides a gentler introduction to haptic technology.
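
The repository provides FireFader drivers for Max, Pure Data, and Faust. For readers who prefer a scripting environment, the host-side control loop can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the serial port name and the ASCII message format (comma-separated normalized positions in, comma-separated normalized forces out) are hypothetical placeholders and do not document the actual FireFader firmware protocol.

```python
import serial  # pyserial

# Hypothetical protocol (for illustration only): the device streams lines
# such as "0.42,0.77\n" with normalized fader positions, and accepts lines
# such as "-0.10,0.25\n" with normalized motor forces. The real FireFader
# is normally driven from Max, Pure Data, or Faust instead.
PORT = "/dev/ttyUSB0"   # placeholder serial port name
STIFFNESS = 4.0         # virtual spring constant (arbitrary units)
CENTER = 0.5            # rest position of the virtual spring

with serial.Serial(PORT, 115200, timeout=0.01) as dev:
    while True:
        line = dev.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            positions = [float(v) for v in line.split(",")]
        except ValueError:
            continue  # skip malformed packets
        # Render a simple virtual spring on each fader: push the knob back
        # toward CENTER with a force proportional to its displacement.
        forces = [max(-1.0, min(1.0, STIFFNESS * (CENTER - p)))
                  for p in positions]
        dev.write((",".join(f"{f:.3f}" for f in forces) + "\n").encode())
```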

Workshops using the repository have been taught at a series of international conferences.

Fig. 9.2
figure 2

FireFader is a force-feedback device with two motorized faders. It uses open-source hardware and is based on the Arduino platform, so it can easily be reconfigured for a wide variety of applications

Table 9.1 Some of the virtual objects implemented by Synth-A-Modeler

1.4 Laptop Orchestra of Louisiana

Since its inception, the so-called laptop orchestra has become known as an ensemble of musicians performing using laptops. Precisely what qualifies as a laptop orchestra is perhaps a matter of debate, but historically such ensembles have tended to be configured similarly to the original Princeton Laptop Orchestra (PLOrk). As described by Dan Trueman in 2007, PLOrk then comprised fifteen performance stations, each consisting of a laptop, a six-channel hemispherical loudspeaker, a multichannel sound interface, a multichannel audio power amplifier, and various additional commercial and custom-made music controllers [51, 52].

The Laptop Orchestra of Louisiana (shown in Fig. 9.3) was created in 2011 and originally consisted of five performance stations. Since then, it has been expanded to include ten performance stations and a server. Organizationally, the ensemble aims to follow in the footsteps of PLOrk and the Stanford Laptop Orchestra (SLOrk) by leveraging the integrated classroom concept, which encourages students to naturally and concurrently learn about music performance, music composition, programming, and design [56]. The Laptop Orchestra of Louisiana further serves the local community by performing repertoire written by both local students and faculty [50].

As opposed to composing for traditional ensembles, whose instrumentation is usually clearly defined, composing for laptop orchestra is generally a very open-ended activity. Some authors even consider composing for laptop orchestra to be an ill-defined problem [19]. An informative swath of repertoire now exists for laptop orchestras, and further ideas may be drawn from the history of experimental music. Due to this open-ended nature, treating the process of composing for laptop orchestra as a design activity can be fruitful; specifically, early prototyping and iteration can help provide insight [19]. This kind of thinking is also helpful when designing virtual instruments for haptic interaction. The authors are pursuing this endeavor not only by prototyping, iterating, and refining interaction designs into music compositions, but also by expanding and honing the content available in the Open-Source Haptics for Artists repository [6, 7, 9, 11].

Fig. 9.3
figure 3

Laptop Orchestra of Louisiana performing in the Digital Media Center Theater at Louisiana State University

In 2013, students at Louisiana State University built a FireFader for each performance station. A laser-cut enclosure was also designed (see Fig. 9.2) to provide performers with a place to rest their hands. Students and faculty then began composing music for the Laptop Orchestra of Louisiana with FireFaders. This chapter reports on some ideas for composing this kind of music, as informed by the outcomes of these works. The following specific approaches are suggested: providing performers with precise, physically intuitive, and reconfigurable controls; using traditional controls alongside force-feedback controls as appropriate; and designing timbres that sound uncannily familiar but are nonetheless novel.

2 Enabling Precise and Physically Intuitive Control of Sound (“Quartet for Strings”)

Compared with other electronic controls for musical instruments, such as buttons, knobs, sliders, switches, and touchscreens, force-feedback devices have the ability to provide performers with precise, physically intuitive, and programmable control. To achieve this, instruments need to be carefully designed so that they both feel good and sound good. It is helpful to carefully match the mechanical impedance of the instruments to the device and the performers, and it is recommended to apply the principle of acoustic viability.

Demonstrating these characteristics, Quartet for Strings by Stephen David Beck is a quartet written for four virtual vibrating strings. Each of these strings is played by a single performer using a FireFader as depicted in Fig. 9.4. To match the structure of a traditional string quartet, the instruments are similarly scaled to allow different performers to play different pitch ranges. This results in four different virtual instrument scales: first violin, second violin, viola, and cello.

Fig. 9.4
figure 4

Quartet for Strings is for a quartet of FireFaders and laptops, each of which enables a performer to play a virtual vibrating string

2.1 Instrument Design

2.1.1 Acoustic Viability

Acoustic viability is a digital design principle that recognizes the importance of integrating nuance and expressive control into digital instruments, using traditional acoustic instruments as inspiration [4, 5]. Traditional acoustic musical instruments have been refined over long periods, often spanning performers’ lifetimes, whole centuries, or even longer. Consequently, traditional instruments tend to exhibit complex mechanics for providing performers with nuanced, precise, expressive, and perhaps even intimate control of sound [4].

However, these nuanced relationships are often lacking in simple signal processing-based or even physics-based synthesizer designs, because significant effort is required during synthesizer design in order to afford nuance and expressive control. Therefore, for a digital instrument to be acoustically viable, it has been suggested that the synthesizer designer should implement cross-relationships between parameters such as amplitude, pitch, and spectral content [4, 5]. For example, designers can consider how changes in amplitude could affect the spectral centroid and vice versa [4].

With physics-based modeling, such cross-relationships tend to be clearly evident whenever strong nonlinearities are present in a model. For example, if a lightly damped material exhibits a stiffening spring characteristic, then a pitch modulation effect emerges that creates exactly these kinds of cross-relationships, as illustrated by the sketch below. This kind of effect can be observed in many real chordophones, membranophones, and idiophones [20].
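
To make the role of the stiffening nonlinearity concrete, the following minimal sketch (in Python) simulates a single mass attached to a spring whose restoring force contains a cubic term. The parameter values are illustrative assumptions and are not taken from the Synth-A-Modeler models used in the piece; the point is simply that plucking the mass harder raises its effective stiffness and therefore its pitch, which is the kind of amplitude/pitch cross-relationship described above.

```python
import numpy as np

def pluck(amplitude, k1=1000.0, k3=5e7, m=0.001, r=0.02, fs=44100, dur=1.0):
    """Simulate a mass on a stiffening (cubic) spring, released from rest.

    Illustrative parameters only: k1 is the linear stiffness (N/m), k3 the
    cubic stiffening coefficient, m the mass (kg), r a light damping term.
    """
    x, v = amplitude, 0.0
    dt = 1.0 / fs
    out = np.empty(int(fs * dur))
    for n in range(out.size):
        force = -k1 * x - k3 * x**3 - r * v   # stiffening spring + damping
        v += (force / m) * dt                 # semi-implicit Euler step
        x += v * dt
        out[n] = x
    return out

soft = pluck(0.001)   # gentle pluck: nearly linear, lower pitch
hard = pluck(0.010)   # hard pluck: stiffening dominates, so the pitch starts
                      # higher and falls as the vibration decays
```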

Accordingly, for Quartet for Strings, it was decided to create a plucked string instrument that exhibits tension modulation by interspersing masses with stiffeninglink objects, as shown in Fig. 9.5 [8, 20]. As with related force-feedback instruments, the right-hand side FireFader knob can be used to pluck the string (see Fig. 9.5, right). However, it was also desired to control the pitch of the string using the FireFader. This was achieved by making the string very loose or “slack” and then using the left-hand side FireFader knob to simultaneously touch all of the string masses. For more information on how the stiffeninglink objects are parameterized, the reader is referred to a prior publication [8]. A demonstration video illustrates how this instrument leverages the principle of acoustic viability to realize physically intuitive and expressive control.Footnote 3

Fig. 9.5
figure 5

String model GooeyStringPitchModBass in Synth-A-Modeler consists of forty masses, interconnected by stiffeninglink objects and terminated by ground objects (see Table 9.1). The fader knob on the right-hand side is used to pluck one of the masses. The fader knob on the left-hand side is used to depress all of the masses simultaneously, which gradually increases the pitch

2.1.2 Impedance Matching

Impedance matching is a technique in which the impedances of two interacting objects are arranged to be similar to each other, allowing optimal energy exchange between them; when the impedances differ greatly, most of the energy arriving at the interface is reflected rather than transmitted. As explained in Sect. 2.2, in the musician–instrument interaction, impedance matching ensures effective playability and tight coupling.

In the model GooeyStringPitchModBass, the mass of the virtual model (i.e., the string) needs to be approximately matched to the combined mass of a hand holding a fader knob. This is achieved by setting the mass of each virtual mass object to 1 g. Since the string comprises 40 masses, its total mass is 40 g, which is comparable to the combined mass of a hand holding a fader knob.

2.2 Performance Techniques

Two special performance techniques further exploit the precise and physically intuitive control afforded by the designed instruments.

2.2.1 Pizzicato with Exaggerated Pitch Modulation

First, a performer can fully depress the string and then quickly release it. Then the force feedback rapidly moves the left-hand side fader knob back to a resting position. The sound of this technique is reminiscent of a Bartók pizzicato, except that the pitch descends considerably and rapidly during the attack. In Quartet for Strings, this can be heard after the first introduction of the cello instrument.

It should be noted that this technique can only be used expressively because of the virtual nature of the string’s implementation. The authors are not aware of any real strings that exhibit such a strong stiffening characteristic, do not break easily, and could be played reliably without gradually detuning the pitch to which the string settles upon release.

2.2.2 Force-Feedback Jeté

A second special technique emerges when a performer lightly depresses the left-hand side knob to make light contact with the virtual string. The model responds with force feedback that pushes the knob in the opposite direction, against the performer’s finger. When the pressure the performer exerts and the response the model synthesizes are balanced in a particular proportion, the fader and instrument become locked together in a controlled oscillation, which can be precisely shaped through the physically intuitive connection with the performer. This technique is used extensively near the end of the piece. In the score, it is indicated using the marking jeté, a nod to the violin technique of the same name.

2.3 Compositional Structure

Quartet for Strings is composed as a modular piece with three-line staves representing relative pitch elements (see Fig. 9.6). While precision of time and pitch is not critical to its performance, the piece was conceived as a composed, rather than an improvised, work. It balances control over gesture and density with aleatoric arrangements of the parts.

Because the score invites performers with less extensive performance experience to play as expressively as possible, the authors believe that it is highly effective in the context of a laptop orchestra. The score provides expressive markings to encourage the performers to fully leverage the acoustically viable quality of the instruments. At the same time, it allows for some imprecision in timing and pitch, freeing the performers from having to attend precisely to strict performance requirements.

A studio video recording of Quartet for Strings is available for viewing at the project Web site, which demonstrates how the force feedback facilitates precise and physically intuitive control.Footnote 4

Fig. 9.6
figure 6

Excerpt from Quartet for Strings

3 Traditional Controls Can Be Used Alongside Force-Feedback Controls (“Of Grating Impermanence”)

Different kinds of controls provide different affordances. In the context of laptop orchestra, where a variety of controls are available (such as trackpads, computer keyboards, MIDI keyboards, drum pads, or tablets [51]), traditional controls can appropriately be used alongside force-feedback controls. For example, to help manage mental workload [41], buttons or keys can be used to change modes while force-feedback controls enable continuous manipulation of sound.

This approach is applied in Of grating impermanence by Andrew Pfalz. For this composition, each of the four performers plays a virtual harp with twenty strings (see Fig. 9.7), which can be strummed using a FireFader knob. As with Quartet for Strings, the performance of subtle gestures is facilitated by the force feedback coming from the device: the musical gestures are intuitive and comfortable, and feel natural to execute on the instruments.

3.1 Instrument Design

The harp model incorporates both continuous control (via the faders) and discrete control (via the laptop keyboard). Due to this combination, performers can focus on dexterously making continuous musical gestures with the FireFader while easily stepping through harp tunings using simple button presses. Specifically, the model shown in Fig. 9.7 is controlled as follows (a simplified sketch of the strumming logic follows the list):

  • The first FireFader knob enables performers to strum across twenty evenly spaced strings, each of which provides force feedback.

  • The second FireFader knob does not provide force feedback; instead, it enables rapid and precise control of the timbre of the strings. As the performer moves this knob from one extreme to the other, the timbre of the strings goes from dark and short, like a palm-muted guitar, to bright and resonant, like guitar strings plucked near their terminations.

  • The right and left arrow keys of the laptop keyboard enable the performer to step forward or backward, respectively, through preprogrammed tunings for the twenty virtual strings. Consequently, the performers do not need to continuously consider the precise tuning of the strings.
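
The following sketch (in Python) illustrates one simplified way of thinking about the strumming control described above; it is not the Synth-A-Modeler implementation used in the piece, and the numeric values are assumptions chosen for illustration. Twenty string positions are spaced evenly along the first fader's travel; whenever the knob crosses one of them, that string is plucked, and a short detent-like restoring force around the nearest string gives the knob a plucked-through feel. The second fader is read as a normalized timbre (damping) control.

```python
import numpy as np

N_STRINGS = 20
# Evenly spaced virtual string positions along the fader travel (0..1).
STRING_POS = np.linspace(0.05, 0.95, N_STRINGS)
PLUCK_STIFFNESS = 3.0   # illustrative force-feedback stiffness per string
PLUCK_WIDTH = 0.01      # how far the knob engages a string before it releases

def strum_step(prev_pos, pos, timbre_knob):
    """Process one control-rate step of the strumming interaction.

    Returns (plucked_indices, feedback_force, damping), where damping is a
    normalized timbre parameter taken from the second (non-force) fader.
    """
    lo, hi = min(prev_pos, pos), max(prev_pos, pos)
    # Any string whose position was crossed during this step gets plucked.
    plucked = [i for i, s in enumerate(STRING_POS) if lo <= s < hi]

    # Force feedback: the nearest string pushes back on the knob while the
    # knob is within PLUCK_WIDTH of it, then lets go (a simple detent).
    nearest = STRING_POS[np.argmin(np.abs(STRING_POS - pos))]
    offset = pos - nearest
    force = -PLUCK_STIFFNESS * offset if abs(offset) < PLUCK_WIDTH else 0.0

    # Second knob: 0.0 = dark and short (heavily damped), 1.0 = bright.
    damping = 1.0 - timbre_knob
    return plucked, force, damping

# Example: the knob sweeps from 0.30 to 0.42 in one step, plucking the
# strings lying in between, while the timbre knob sits near "bright".
print(strum_step(0.30, 0.42, timbre_knob=0.9))
```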

Fig. 9.7
figure 7

For Of grating impermanence, the harp model PluckHarp20 includes twenty strings that can be plucked using a single FireFader knob. Each of these strings is created by connecting a termination to a waveguide to a junction to a touch link to a second waveguide to a second termination (for more details, see Table 9.1)

3.2 Performance Techniques

3.2.1 Simultaneously Changing the Chord and Strumming

With training, the performers gravitate toward a particular performance technique, especially in sections of the composition with numerous chord changes. In these sections, the performers learn to use the following procedure: (1) wait for notes to decay, (2) use the arrow key to advance the harp’s tuning to the next chord, (3) immediately strum the virtual strings using the FireFader, and (4) repeat. The ergonomics of this performance technique are illustrated in Fig. 9.8, which shows how each performer’s right hand is operating a FireFader, while the left hand is operating the arrow keys (shown boxed in yellow in Fig. 9.8).

Visual feedback is further employed to help the performers stay on track. The index of each chord is displayed on the laptop screen in a large font, so that performers can error check their progress in advancing through the score.

Fig. 9.8
figure 8

For Of grating impermanence, the performers use their right hands to pluck a harp of virtual strings and their left hands to press the arrow keys on the laptop keyboard (see the yellow rectangles above). The right arrow advances to the next chord for the harp, and the left arrow goes back to the previous chord

3.2.2 Accelerating Strums

Preprogramming the note changes for banks of twenty plucked strings also enables a specialized strumming technique. Since each performer is passing the fader knob over so many strings, it is possible for the performer to noticeably accelerate or decelerate during a single strumming gesture. This technique aids in building tension during the first section of the composition. The authors would like to note that, although no formal tests have been conducted, they have the impression that the force feedback is crucial for this performance technique, as it makes it possible to not only hear but also feel each of the individual strings.

3.2.3 Continuous Control of Timbre for Strumming

The second knob on each FireFader enables the performers to occasionally but immediately alter the timbre of the strings as indicated in the score. Since this technique is used sparingly, it has a stark influence upon the overall sound whenever it is employed; it is a powerful control that makes the instrument seem almost more lifelike. An additional distortion effect further influences the timbre of the strings, and this distortion is enabled and disabled via the arrow keys so as to match the printed score.

3.3 Compositional Structure

Of grating impermanence is performed from a fixed score. The composition comprises several sections that demonstrate various performance techniques of the instrument. The score shows the notes that are heard, but each performer need only keep track of where he or she is in the score rather than actually selecting notes as on a traditional instrument. In this way, the job of the performer is similar to that of a member of a bell choir: following along in the score and playing notes at the appropriate times.

The beginning and ending sections of the composition are texturally dense and somewhat freer. The gestures and timings are indicated, but the precise rhythms are not notated. The interior sections are metered and fully notated. Stylistically, these sections range from monophony to interlocking textures to fast unison passages.

A studio video recording is available for viewing at the project Web site, which illustrates how these performance techniques are enabled by combining traditional controls and force-feedback controls.Footnote 5

4 Finding Timbres that Sound Uncannily Familiar but Are Nonetheless Novel (“Guest Dimensions”)

When composing electroacoustic music, it can be useful to create new timbres, which can give listeners new listening experiences. Conversely, timbres that sound familiar can beneficially provide “something to hold on to” for less experienced listeners [34], particularly when pitch and rhythm are not employed traditionally. In the present chapter, it is therefore suggested that finding timbres that sound uncannily familiar but are nonetheless novel can help bridge these two extremes [13, 18].

Guest Dimensions by Michael Blandino is a quartet that explores this concept, extending it by making analyzed timbres tangible using haptic technology. Each of the four performers uses a FireFader to pluck one of two virtual resonator models (see Fig. 9.9), whose original parameters were determined so as to match the timbre of prerecorded sound samples.

4.1 Instrument Design

4.1.1 Calibrating the Timbre of Virtual Models to Sound Samples

Two virtual resonator physical models were calibrated through modal decomposition of sound files of a struck granite block and of a gayageum, which is a Korean plucked string instrument [27, 30, 53]. This provided a large parameter set to use for starting the instrument design process.

4.1.2 Scaling Model Parameters to Discover Novel Timbres

Then, for each part and section of the composition, multiple model parameters were scaled, including the original estimated fundamental frequency, the original estimated decay times, the reference mass values, the pluck interaction stiffness and damping, and the virtual excitation location. It was discovered that, even with the granite block, which did not have a harmonic tone, melodies could nonetheless be realized by scaling the modal frequencies over a range of a few octaves, as sketched below. The same approach was used to enable melodies to be played with the gayageum model.
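
A minimal sketch of this workflow is shown below. It assumes that modal parameters can be approximated by picking the largest peaks of a recording's magnitude spectrum and that each mode is resynthesized as an exponentially decaying sinusoid; the calibration used for Guest Dimensions relied on the modal decomposition methods cited above [27, 30, 53], so the peak-picking approach, the decay values, and the file name here are illustrative placeholders only.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def estimate_modes(path, n_modes=12):
    """Roughly estimate modal frequencies and relative amplitudes from a
    recording by picking the largest peaks of its magnitude spectrum."""
    fs, x = wavfile.read(path)
    x = x.astype(float)
    if x.ndim > 1:
        x = x.mean(axis=1)                     # mix down to mono
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    peaks, _ = find_peaks(spectrum, distance=50)
    strongest = peaks[np.argsort(spectrum[peaks])[-n_modes:]]
    return freqs[strongest], spectrum[strongest] / spectrum.max()

def render(mode_freqs, mode_amps, decays, freq_scale=1.0, fs=44100, dur=2.0):
    """Resynthesize the modes as decaying sinusoids, with all modal
    frequencies scaled by freq_scale (e.g., to transpose for melodies)."""
    t = np.arange(int(fs * dur)) / fs
    out = np.zeros_like(t)
    for f, a, tau in zip(mode_freqs, mode_amps, decays):
        out += a * np.exp(-t / tau) * np.sin(2 * np.pi * f * freq_scale * t)
    return out / np.max(np.abs(out))

# Hypothetical usage: calibrate to a struck granite block recording, then
# lengthen the decays and scale the modal frequencies to play a melody.
freqs, amps = estimate_modes("granite_block.wav")   # placeholder file name
decays = np.full(len(freqs), 0.8)                   # illustrative decays (s)
note = render(freqs, amps, decays, freq_scale=2 ** (7 / 12))  # up a fifth
```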

Fig. 9.9
figure 9

For Guest Dimensions, the general modal synthesis model incorporates a resonators object that is plucked using a single FireFader knob (see Table 9.1)

Although performance techniques affected the timbre, the timbre could be adjusted more strongly via the model parameters. For example, to increase overall timbral interest and sustain, the decay times for the struck granite block sound were lengthened significantly, enhancing the resonance of the model. Further adjustment of the virtual excitation location and scaling of the virtual dimensions allowed additional accentuation of shimmering and certain initial transient qualities. Similarly, the gayageum model’s decay times were slightly extended, and its virtual excitation position was tuned for the desired effects.

This exploration of uncannily familiar yet novel timbres is evident when listening to the video recording of Guest Dimensions on the project Web site.Footnote 6 The reader should keep in mind that the range of somehow familiar timbres realized during the performance stems from the two originally calibrated models of a struck granite block and a plucked gayageum.

4.1.3 Visual Display of the Force-Feedback Interaction

The FireFaders are not marked to indicate where the center points of the sliders are, which correspond to where the resonators are located in virtual space. Since Guest Dimensions calls for specific rhythms to be played, it was necessary to create a very simple visual display enabling the performers to see what they were doing. The display showed the position of the fader knob and the position of the virtual resonator that the fader knob was plucking. The authors have the impression that this display may have made it easier for the performers to play more precisely in time. Overall, the need for implementing visual displays for some music compositions is underscored by the discussion in Sect. 9.1.1: generally speaking, the implementation of additional feedback modalities has the potential to enable more precise control.

4.2 Performance Techniques

Two plucking performance techniques in Guest Dimensions are particularly notable. Both are facilitated by the programmable nature of the force feedback, which enables the virtual model to be impedance matched differently depending on which performance technique is being employed. For example, the tremolo technique is enhanced by a decreased virtual plectrum stiffness, while the legato technique is enhanced by a moderately increased virtual plectrum stiffness.

4.2.1 Tremolo

In the first section of the composition, the stiffness of the pluck link (see Fig. 9.9 and Table 9.1) in the model is set relatively low. This haptic quality enables the performers to pluck back and forth across the virtual resonators object particularly rapidly, obtaining a tremolo effect. Faster plucking results in a louder sound, while slower plucking results in a quieter sound. Following the indications in the score of Guest Dimensions, the performers use the tremolo technique to create a range of dynamics.

4.2.2 Legato

In the sections not involving tremolo, the performers mostly pluck more vigorously in a style that could be called legato, playing various interrelated note sequences. Instead of giving the performers manual control over changing the notes (as in Of grating impermanence), it was decided that it would be more practical to automate the selection of all of the notes. Accordingly, the following approach was used to trigger note updates: right before one of the models is plucked, that is, right as the fader knob approaches the center point for the plectrum, the next fundamental frequency is read out of a table and used to rapidly scale the fundamental frequency of the model (see the sketch below). Careful adjustment of the threshold point is needed to avoid pitch changes during the resonance of prior attacks or changes after new attacks. Performers develop an intuition for avoiding false threshold detections by plucking confidently. One advantage of this approach is that performers do not need to manually advance the notes; however, a performer without adequate practice may occasionally advance one note too many, and in this case the performer will require a moment of tacet to recover.
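
The note-advance logic can be sketched as follows. This is an illustrative reconstruction of the approach just described rather than the actual patch used in the piece: the threshold values, the pitch table, and the class name are assumptions. A note is advanced only when the knob crosses a threshold on its way toward the plectrum's center point while the detector is armed, and the detector re-arms only after the knob has retreated sufficiently far, providing the hysteresis that guards against false triggers during the resonance of prior attacks.

```python
# Illustrative sketch of the automated note advance with hysteresis.
# THRESHOLD and RESET are placeholder values (the plectrum's center point is
# assumed to sit at 0.5); the actual thresholds in Guest Dimensions were
# tuned by hand.

THRESHOLD = 0.45   # crossing this point toward the center triggers an update
RESET = 0.35       # the knob must retreat below this point to re-arm

class NoteAdvancer:
    def __init__(self, pitch_table):
        self.pitch_table = pitch_table
        self.index = 0
        self.armed = True

    def update(self, knob_pos):
        """Return the new fundamental frequency when a note change fires,
        otherwise None. Called at control rate with the fader position."""
        if self.armed and knob_pos >= THRESHOLD:
            # The knob is approaching the plectrum: load the next pitch
            # just before the pluck so the attack sounds at the new note.
            self.armed = False
            pitch = self.pitch_table[self.index % len(self.pitch_table)]
            self.index += 1
            return pitch
        if not self.armed and knob_pos <= RESET:
            self.armed = True   # re-arm only after a confident retreat
        return None

# Hypothetical usage with a short pitch table in hertz.
advancer = NoteAdvancer([220.0, 247.5, 261.6, 293.7])
for pos in [0.20, 0.30, 0.46, 0.60, 0.50, 0.30, 0.47]:
    f0 = advancer.update(pos)
    if f0 is not None:
        print("retune model to", f0, "Hz")
```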

4.3 Compositional Structure

As with Of grating impermanence, Guest Dimensions is performed from a fixed score. Performers play in precise time according to the pre-written score, sometimes in homorhythm. Each part in each section uses one of the two models, but the adjustments of the models are unique to the sections of each part. Melodic themes in counterpoint are performed with the gayageum model and are accompanied by the decorative chimes of the granite block model. Extended percussive sections feature the granite block model in strict meter, save for a brief passage in which the performers are free to separately overlap in interpretive gestures.

5 Conclusions

A case study was presented demonstrating some ways that force-feedback DMIs can be integrated into laptop orchestra practice. The contributing composers realized a variety of compositional structures, but more commonalities were found in the successful instrument design approaches that they applied. Accordingly, the authors suggest that composers working in this field consider the following: (1) providing performers with precise, physically intuitive, and reconfigurable controls; (2) using traditional controls alongside force-feedback controls as appropriate; and (3) designing timbres that sound uncannily familiar but are nonetheless novel. These approaches enabled performance techniques that more closely resemble traditional music performance techniques, which are less commonly observed in laptop orchestra practice.