Force-Feedback Instruments for the Laptop Orchestra of Louisiana
Digital musical instruments providing force feedback were designed and employed in a case study with the Laptop Orchestra of Louisiana. The advantages of force feedback are illuminated through the creation of a series of musical compositions. Based on these and a small number of other prior music compositions, the following compositional approaches are recommended: providing performers with precise, physically intuitive, and reconfigurable controls; using traditional controls alongside force-feedback controls as appropriate; and designing timbres that sound uncannily familiar but are nonetheless novel. Video-recorded performances illustrate these approaches, which are discussed by the composers.
Applications of force feedback for designing musical instruments have been studied since as early as 1978 at ACROE [14, 17, 21, 36]; these works are described in detail in Chap. 8, which also reports on recent advancements. They provide a crucial reference for understanding the role that haptic technology can play in music. The wider computer music community has demonstrated a sustained interest in incorporating force-feedback technology into musical works and projects, as evidenced by a series of projects during recent decades.
Gillespie et al. have created high-quality custom force-feedback devices and used them for simulating the action of a piano key [24, 26]. Verplank and colleagues, and Oboe et al., have initiated separate efforts in repurposing old hard drives into force-feedback devices for music [43, 55]. More recently, the work by Verplank and colleagues has been extended via a collaboration with Bak and Gauthier. Several human–computer interface researchers have experimented with using motorized faders for rendering force feedback, including for audio applications [1, 23, 54]. The implementation of a force-feedback bowed string has also been studied in detail using various force-feedback devices [21, 37, 42, 49].
More recently, Kontogeorgakopoulos et al. have studied how to realize digital audio effects with physics-based models, for the purpose of creating force-feedback musical instruments [32, 33]. Also, Hayes has endowed digital musical instruments (DMIs) with force feedback using the NovInt Falcon. Most recently, Battey et al. have studied how to realize generative music systems using force-feedback controllers.
9.1.1 Multisensory Feedback for Musical Instruments
As described in Chap. 2, when a performer plays a traditional musical instrument, he or she typically receives auditory, visual, and haptic feedback from the instrument. By integrating information from these feedback modalities together [15, 39], the performer can more precisely control the effect of the mechanical excitation that he or she provides to the instrument (see Fig. 9.1).
Haptic feedback can provide information separately from the auditory and visual modalities, as depicted in Fig. 9.1; for example, a performer may be busy looking at a score and want to be able to feel the instrument to find the specific buttons or keys to press.
Haptic information can be delivered directly to locally relevant parts of the human body.
Digital interactions can potentially be made more intuitive (helping prevent sensory overload) by providing feedback that resembles familiar interactions in the real world.
Haptic devices are highly reconfigurable, so the feel of a haptic musical instrument can be customized extensively depending on which mode it is in.
Based on what is reported in Chap. 5 for traditional instruments, haptic feedback, when applied carefully, can provide further benefits such as enhanced user satisfaction, enhanced comfort/aesthetics, and/or a channel for sending private communications.
The human reaction time can be shorter for haptic feedback than for any other feedback modality.
Accordingly, due to the decreased phase lag in the reaction time, feedback control theory predicts that musicians could potentially play digital musical interfaces more accurately at faster speeds when provided with appropriately designed haptic feedback.
9.1.2 Additional Force-Feedback Device Designs from the Haptics Community
Outside the realm of computer music, a wide variety of (historically, often very expensive) haptic devices have been created and researched. Many of these have been used for scientific visualization and/or applications in telerobotic surgery or surgical training [12, 16, 29, 35, 38]. Their expense prevents these devices from trickling down to large numbers of practicing musicians, but they remain useful for research in haptics.
For instructional purposes, several universities have made simpler, less expensive haptic force-feedback devices. For example, the "Haptic Paddle" series comprises single degree-of-freedom devices based upon a cable connection to an off-the-shelf DC motor. However, such designs tend to be problematic because the supply of surplus high-performance DC motors is unreliable. In contrast, the iTouch device at the University of Michigan instead contains a voice coil motor, which is hand wound by students. However, making a large number of devices is time intensive, and the part specifications are not currently available in an open-source hardware format.
9.1.3 Open-Source Technology for the Design of Haptic Musical Instruments
Force-feedback technologies tend to be rather complex. Consequently, small-scale projects have been hampered as the technological necessities have required so much attention that little time remained for aesthetic concerns. Furthermore, practical knowledge needed for prototyping haptic musical instruments has not been widely available, which has made it even more challenging for composers to access the technology.
The FireFader is an extensible and open-source force-feedback device design based on two motorized faders (see Fig. 9.2). Typically, the faders are feedback-controlled by a laptop. The faders’ positions are sent to a host computer via a low latency USB connection, and in turn force-feedback signals are rapidly sent back to the faders. Drivers are provided for controlling the FireFader from Max, Pure Data, and Faust. Because the design is based on the Arduino framework, it can easily be repurposed into other designs.
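As a concrete illustration, the position-in/force-out exchange just described can be sketched as a virtual spring that pulls the knob back toward a rest position. The spring law, parameter values, and function name below are illustrative assumptions, not code from the FireFader distribution.

```python
def spring_force(position, rest=0.5, stiffness=2.0, max_force=1.0):
    """Virtual spring pulling the fader knob back toward `rest`.

    `position` is a normalized fader reading in [0, 1]; the returned
    force is clipped to the motor's drive range [-max_force, +max_force].
    """
    force = stiffness * (rest - position)
    return max(-max_force, min(max_force, force))

# One iteration of the haptic loop: on real hardware, `position` would
# arrive over the USB link and `force` would be sent back to the fader.
force = spring_force(0.9)   # knob above rest, so the force is negative
```

On the actual device, this exchange runs at a high, fixed rate; in general, the faster a haptic loop runs, the stiffer the virtual objects that can be rendered stably.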
The Haptic Signal Processing (HSP) objects from 2010 are a series of abstractions in Max that enable rapid prototyping of physics-based sound synthesis models, with an emphasis on pedagogy. The most important abstractions in HSP implement elementary physical objects, such as virtual masses and the links that connect them.2 Notably, physics-based models in HSP can be freely intermixed with other Max objects, which is useful for studying how physics-based models and traditional signal-based models can be combined. Vibrotactile haptics can also be experimented with in HSP simply by routing audio signals to the object that drives the haptic device.
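The kind of physics-based model that HSP's abstractions encapsulate can be sketched in a few lines: a point mass tied to ground by a linear spring, integrated one audio sample at a time. The code below is a language-neutral sketch with illustrative parameter values, not HSP itself.

```python
import math

def simulate_mass_spring(m=0.001, k=100.0, x0=0.01, steps=4410, fs=44100.0):
    """Symplectic-Euler integration of a 1 g mass tied to ground by a
    linear spring, in the spirit of HSP's mass and link abstractions."""
    dt = 1.0 / fs
    x, v = x0, 0.0
    out = []
    for _ in range(steps):
        f = -k * x            # spring link pulls the mass toward ground
        v += (f / m) * dt     # update the mass's velocity
        x += v * dt           # then its position
        out.append(x)         # the position doubles as an audio signal
    return out

signal = simulate_mass_spring()
expected_hz = math.sqrt(100.0 / 0.001) / (2.0 * math.pi)  # about 50 Hz
```

Because the model produces an ordinary audio-rate signal, it can be mixed freely with signal-based processing, which is precisely the property that the HSP abstractions exploit within Max.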
Synth-A-Modeler [9, 11] is another tool for creating physics-based models. Table 9.1 summarizes the Synth-A-Modeler objects referred to in the rest of the chapter. Compared with HSP, the models created with Synth-A-Modeler are more efficient and can be compiled into a wider variety of target architectures using Faust . However, HSP provides a gentler introduction to haptic technology.
Some of the virtual objects implemented by Synth-A-Modeler
9.1.4 Laptop Orchestra of Louisiana
Since its inception, the so-called laptop orchestra has become known as an ensemble of musicians performing using laptops. Precisely what qualifies as a laptop orchestra is perhaps a matter of debate, but historically such ensembles seem to be configured similarly to the original Princeton Laptop Orchestra (PLOrk). As described by Dan Trueman in 2007, PLOrk then comprised fifteen performance stations, each consisting of a laptop, a six-channel hemispherical loudspeaker, a multichannel sound interface, a multichannel audio power amplifier, and various additional commercial and custom-made music controllers [51, 52].
The Laptop Orchestra of Louisiana (shown in Fig. 9.3) was created in 2011 and originally consisted of five performance stations. Since then, it has been expanded to include ten performance stations and a server. Organizationally, the ensemble aims to follow in the footsteps of PLOrk and the Stanford Laptop Orchestra (SLOrk) by leveraging the integrated classroom concept, which encourages students to naturally and concurrently learn about music performance, music composition, programming, and design. The Laptop Orchestra of Louisiana further serves the local community by performing repertoire written by both local students and faculty.
In 2013, students at Louisiana State University built a FireFader for each performance station. A laser-cut enclosure design was also created (see Fig. 9.2) to provide performers with a place to rest their hands. Then students and faculty started composing music for the Laptop Orchestra of Louisiana with FireFaders. This chapter reports on some ideas for composing this kind of music, as informed by the outcomes of these works. The following specific approaches are suggested: providing performers with precise, physically intuitive, and reconfigurable controls; using traditional controls alongside force-feedback controls as appropriate; and designing timbres that sound uncannily familiar but are nonetheless novel.
9.2 Enabling Precise and Physically Intuitive Control of Sound (“Quartet for Strings”)
Compared with other electronic controls for musical instruments, such as buttons, knobs, sliders, switches, and touchscreens, force-feedback devices have the ability to provide performers with precise, physically intuitive, and programmable control. To achieve this, instruments need to be carefully designed so that they both feel good and sound good. It is helpful to carefully match the mechanical impedance of the instruments to the device and performers, and it is recommended to apply the principle of acoustic viability.
9.2.1 Instrument Design
9.2.1.1 Acoustic Viability
Acoustic viability is a digital design principle that recognizes the importance of integrating nuance and expressive control into digital instruments, using traditional acoustic instruments as inspiration [4, 5]. Traditional acoustic musical instruments have been refined over long periods, often spanning performers’ lifetimes, whole centuries, or even longer. Consequently, traditional instruments tend to exhibit complex mechanics for providing performers with nuanced, precise, expressive, and perhaps even intimate control of sound.
However, these nuanced relationships are often lacking in simple signal-processing-based and even physics-based synthesizer designs, because significant effort is required during synthesizer design in order to afford nuance and expressive control. Therefore, for a digital instrument to be acoustically viable, it has been suggested that the synthesizer designer should implement cross-relationships between parameters such as amplitude, pitch, and spectral content [4, 5]. For example, designers can consider how changes in amplitude could affect the spectral centroid and vice versa.
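One minimal way to implement such a cross-relationship is to let the spectral rolloff depend on amplitude, so that louder notes are also brighter. The mapping below is a toy assumption for illustration, not the mapping used in the instruments described in this chapter.

```python
def acoustically_viable_partials(amplitude, n_partials=8):
    """Toy amplitude-to-brightness coupling: partial k is rolled off
    less steeply as amplitude grows, so the spectral centroid rises
    with amplitude (one possible cross-relationship, made up here)."""
    rolloff = 2.0 - amplitude  # gentler rolloff when loud (illustrative)
    ks = range(1, n_partials + 1)
    gains = [amplitude / (k ** rolloff) for k in ks]
    # Spectral centroid expressed in units of harmonic number.
    centroid = sum(k * g for k, g in zip(ks, gains)) / sum(gains)
    return gains, centroid

_, quiet_centroid = acoustically_viable_partials(0.2)
_, loud_centroid = acoustically_viable_partials(0.9)
# The centroid rises with amplitude, so louder notes sound brighter.
```

In a real synthesizer design, the strength of such a coupling would of course be tuned by ear rather than chosen by formula.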
With physics-based modeling, such cross-relationships will tend to be clearly evident if strong nonlinearities are present in a model. For example, if a lightly damped material exhibits a stiffening spring characteristic, then the pitch modulation effect will tend to result in these kinds of cross-relationships. This kind of effect can be observed in many real chordophones, membranophones, and idiophones.
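The stiffening-spring effect can be verified numerically: integrating a mass on a Duffing-like spring shows the oscillation frequency rising with amplitude, which is the pitch modulation just described. All parameter values below are illustrative.

```python
def zero_crossings(sig):
    """Count sign changes, a crude proxy for oscillation frequency."""
    return sum(1 for a, b in zip(sig, sig[1:]) if a * b < 0)

def stiffening_oscillation(x0, m=0.001, k=100.0, beta=5e4,
                           steps=44100, fs=44100.0):
    """Mass on a stiffening spring: the restoring force grows faster
    than linearly with displacement, so larger amplitudes oscillate
    at a higher pitch (illustrative parameters)."""
    dt = 1.0 / fs
    x, v = x0, 0.0
    out = []
    for _ in range(steps):
        f = -k * x * (1.0 + beta * x * x)  # cubic stiffening term
        v += (f / m) * dt
        x += v * dt
        out.append(x)
    return out

soft = zero_crossings(stiffening_oscillation(0.005))
hard = zero_crossings(stiffening_oscillation(0.02))
# The larger-amplitude oscillation crosses zero more often over the
# same duration: its pitch is higher, the cross-relationship above.
```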
9.2.1.2 Impedance Matching
Impedance matching is a technique in which the impedances of two interacting objects are arranged to be similar to each other. This allows optimal energy exchange between them. As explained in Sect. 2.2, in the musician–instrument interaction, impedance matching ensures effective playability and tight coupling.
In the model GooeyStringPitchModBass, the mass of the virtual model (e.g., the string) needs to be approximately matched to the combined mass of a hand holding a fader knob. This is achieved by assigning each virtual mass 1 g. Since the string comprises 40 masses, its total mass is 40 g, which is comparable to the combined mass of a hand holding a fader knob.
9.2.2 Performance Techniques
Two special performance techniques further exploit the precise and physically intuitive control afforded by the designed instruments.
9.2.2.1 Pizzicato with Exaggerated Pitch Modulation
First, a performer can fully depress the string and then quickly release it. Then the force feedback rapidly moves the left-hand side fader knob back to a resting position. The sound of this technique is reminiscent of a Bartók pizzicato, except that the pitch descends considerably and rapidly during the attack. In Quartet for Strings, this can be heard after the first introduction of the cello instrument.
It should be noted that this technique can only be used expressively because of the virtual nature of the string’s implementation. The authors are not aware of any real strings that exhibit such a strong stiffening characteristic, do not break easily, and can be performed reliably without gradually detuning away from the pitch that the string tends toward upon release.
9.2.2.2 Force-Feedback Jeté
A second special technique emerges when a performer lightly depresses the left-hand side knob to lightly make contact with the virtual string. The model responds accordingly with force feedback to push the knob in the opposite direction (against the performer’s finger). When the pressure the performer exerts and the response the model synthesizes are balanced in a particular proportion, the fader and instrument become locked together in a controlled oscillation. This oscillation can be precisely controlled through the physically intuitive connection with the performer. This technique is used extensively near the end of the piece. On the score, this technique is indicated using the marking jeté, giving a nod to the violin technique with the same name.
9.2.3 Compositional Structure
Quartet for Strings is composed as a modular piece with three-line staves representing relative pitch elements (see Fig. 9.6). While precision of time and pitch is not critical to its performance, the piece was conceived as a composed rather than an improvised work. It balances control over gesture and density with aleatoric arrangements of the parts.
The authors believe that the score is highly effective in the context of a laptop orchestra because it invites performers with less extensive performance experience to play as expressively as possible. The score provides expressive markings that encourage the performers to fully leverage the acoustically viable quality of the instruments. At the same time, it allows for some imprecision in timing and pitch, freeing the performers from attending precisely to strict performance requirements.
9.3 Traditional Controls Can Be Used Alongside Force-Feedback Controls (“Of Grating Impermanence”)
Different kinds of controls provide different affordances. In the context of laptop orchestra, where a variety of controls are available (such as trackpads, computer keyboards, MIDI keyboards, drum pads, or tablets), traditional controls can be used appropriately alongside force-feedback controls. For example, to help manage mental workload, buttons or keys can be used to change modes while force-feedback controls enable continuous manipulation of sound.
This approach is applied in Of grating impermanence by Andrew Pfalz. For this composition, each of the four performers plays a virtual harp with twenty strings (see Fig. 9.7), which can be strummed using a FireFader knob. As with Quartet for Strings, the performance of subtle gestures is facilitated by the force feedback coming from the device. The musical gestures are intuitive, comfortable, and feel natural to execute on the instruments.
9.3.1 Instrument Design
The first FireFader knob enables performers to strum across twenty evenly spaced strings, each of which provides force feedback.
The second FireFader knob does not provide force feedback—instead, it enables rapid and precise control of the timbre of the strings. As the performer moves this knob from one extreme to another, the timbre of the strings goes from being dark and short, like a palm-muted guitar, to bright and resonant, like guitar strings plucked near their terminations.
The right and left arrow keys of the laptop keyboards enable the performer to step forward or backward, respectively, through preprogrammed tunings for each of their twenty virtual strings. Consequently, the performers do not need to be continuously considering the precise tuning of the strings.
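The arrow-key behavior amounts to stepping an index through a bank of preprogrammed chords, clamped at both ends. The sketch below illustrates this logic; the class name and chord content are invented, and only three strings are shown instead of twenty.

```python
class TuningBank:
    """Step through preprogrammed tunings for the virtual strings
    with forward/backward keys (a sketch of the logic, not the
    composition's actual implementation)."""

    def __init__(self, chords):
        self.chords = chords   # each chord: fundamental frequencies in Hz
        self.index = 0

    def forward(self):
        # Clamp at the last chord rather than wrapping around.
        self.index = min(self.index + 1, len(self.chords) - 1)
        return self.chords[self.index]

    def backward(self):
        # Clamp at the first chord.
        self.index = max(self.index - 1, 0)
        return self.chords[self.index]

# Three strings per chord shown for brevity (the harp has twenty).
bank = TuningBank([[220.0, 277.2, 329.6],
                   [196.0, 246.9, 293.7],
                   [174.6, 220.0, 261.6]])
```

Because the strings are retuned as a bank, the performer only chooses when to advance, never which notes to play, exactly the division of labor described above.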
9.3.2 Performance Techniques
9.3.2.1 Simultaneously Changing the Chord and Strumming
With training, the performers gravitate toward a particular performance technique, especially in sections of the composition with numerous chord changes. In these sections, the performers learn to use the following procedure: (1) wait for notes to decay, (2) use the arrow key to advance the harp’s tuning to the next chord, (3) immediately strum the virtual strings using the FireFader, and (4) repeat. The ergonomics of this performance technique are illustrated in Fig. 9.8, which shows how each performer’s right hand is operating a FireFader, while the left hand is operating the arrow keys (shown boxed in yellow in Fig. 9.8).
9.3.2.2 Accelerating Strums
Preprogramming the note changes for banks of twenty plucked strings also enables a specialized strumming technique. Since each performer is passing the fader knob over so many strings, it is possible for the performer to noticeably accelerate or decelerate during a single strumming gesture. This technique aids in building tension during the first section of the composition. The authors would like to note that, although no formal tests have been conducted, they have the impression that the force feedback is crucial for this performance technique, as it makes it possible to not only hear but also feel each of the individual strings.
9.3.2.3 Continuous Control of Timbre for Strumming
The second knob on each FireFader enables the performers to occasionally but immediately alter the timbre of the strings as indicated in the score. Since this technique is used sparingly, it has a stark influence upon the overall sound; it is a powerful control that makes the instrument seem almost more lifelike. An additional distortion effect further influences the timbre of the strings, and this distortion is enabled and disabled via the arrow keys so as to match the printed score.
9.3.3 Compositional Structure
Of grating impermanence is performed from a fixed score. The composition comprises several sections that demonstrate various performance techniques of the instrument. The score shows the notes that are heard, but each performer needs only to keep track of his or her place in the score, not to actually select notes as on a traditional instrument. In this way, the job of the performer is similar to that of a member of a bell choir: following along in the score and playing notes at the appropriate times.
The beginning and ending sections of the composition are texturally dense and somewhat freer. The gestures and timings are indicated, but the precise rhythms are not notated. The interior sections are metered and fully notated. Stylistically, these sections range from monophony to interlocking textures to fast unison passages.
A studio video recording is available for viewing at the project Web site, which illustrates how these performance techniques are enabled by combining traditional controls and force-feedback controls.5
9.4 Finding Timbres that Sound Uncannily Familiar but Are Nonetheless Novel (“Guest Dimensions”)
When composing electroacoustic music, it can be useful to compose new timbres that offer listeners new listening experiences. In contrast, timbres that sound familiar can beneficially provide “something to hold on to” for less experienced listeners, particularly when pitch and rhythm are not employed traditionally. In the present chapter, it is therefore suggested that finding timbres that sound uncannily familiar but are nonetheless novel can help bridge these two extremes [13, 18].
Guest Dimensions by Michael Blandino is a quartet that explores this concept, extending it by making analyzed timbres tangible using haptic technology. For example, each of the four performers uses a FireFader to pluck one of two virtual resonator models (see Fig. 9.9), whose original parameters are determined to match the timbre of prerecorded percussion sound samples.
9.4.1 Instrument Design
9.4.1.1 Calibrating the Timbre of Virtual Models to Sound Samples
Two virtual resonator physical models were calibrated through modal decomposition of sound files of a struck granite block and of a gayageum, which is a Korean plucked string instrument [27, 30, 53]. This provided a large parameter set to use for starting the instrument design process.
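The first step of such a calibration is identifying the prominent mode frequencies in a recorded sample. The brute-force single-mode estimator below illustrates the idea on a synthetic signal; real modal decomposition additionally fits amplitudes and decay rates for many modes [27, 30, 53], and the function name and test signal here are invented for illustration.

```python
import cmath, math

def dominant_mode_hz(signal, fs):
    """Return the frequency of the strongest DFT bin: a crude first
    step toward modal decomposition (brute-force O(n^2) DFT)."""
    n = len(signal)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        s = sum(signal[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        if abs(s) > best_mag:
            best_bin, best_mag = k, abs(s)
    return best_bin * fs / n

# Synthetic check: a 125 Hz sinusoid sampled at 1 kHz.
fs = 1000.0
clip = [math.sin(2 * math.pi * 125.0 * i / fs) for i in range(256)]
estimate = dominant_mode_hz(clip, fs)
```

Applied to a recording of a struck object, peaks found this way seed the frequencies of the virtual resonator's modes, which are then refined along with their decay times.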
9.4.1.2 Scaling Model Parameters to Discover Novel Timbres
Although performance techniques affected the timbre, the timbre could be more strongly adjusted via the model parameters. For example, to increase overall timbral interest and to increase sustain of the resonances, the decay times for the struck granite block sound were lengthened significantly, enhancing the resonance of the model. Further adjustment of the virtual excitation location and scaling of the virtual dimensions allowed for additional accentuation of shimmering and certain initial transient qualities. Similarly, the gayageum model’s decay time was slightly extended, and its virtual excitation position was tuned for desired effects.
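The effect of lengthening the decay times can be heard in a minimal modal resonator, a sum of exponentially decaying sinusoids. The mode frequencies and T60 values below are invented for illustration and are not the calibrated granite-block parameters.

```python
import math

def modal_tone(freqs_hz, t60s_s, dur_s=0.5, fs=8000):
    """A minimal modal resonator: a sum of exponentially decaying
    sinusoids, one per mode."""
    n = int(dur_s * fs)
    out = [0.0] * n
    for f, t60 in zip(freqs_hz, t60s_s):
        rate = math.log(1000.0) / t60   # amplitude falls 60 dB in t60 s
        for i in range(n):
            t = i / fs
            out[i] += math.exp(-rate * t) * math.sin(2 * math.pi * f * t)
    return out

# The same two (invented) modes, before and after lengthening the
# decay times tenfold, as was done to enhance the model's resonance.
short = modal_tone([523.0, 1290.0], [0.05, 0.03])
stretched = modal_tone([523.0, 1290.0], [0.5, 0.3])
```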
This exploration of uncannily familiar yet novel timbres is evident when listening to the video recording of Guest Dimensions on the project Web site.6 The reader should keep in mind that the range of somewhat familiar timbres realized during the performance stems from the two originally calibrated models of a struck granite block and a plucked gayageum.
9.4.1.3 Visual Display of the Force-Feedback Interaction
The FireFaders are not marked to indicate the center points of the sliders, which correspond to where the resonators are located in virtual space. Since Guest Dimensions calls for specific rhythms to be played, it was necessary to create a very simple visual display enabling the performers to see what they were doing. The display showed the position of the fader knob and the position of the virtual resonator that the fader knob was plucking. The authors have the impression that this display may have made it easier for the performers to play precisely in time. Overall, the need for implementing visual displays for some music compositions is underscored by the discussion in Sect. 9.1.1: generally speaking, implementing additional feedback modalities has the potential to enable more precise control.
9.4.2 Performance Techniques
Two plucking performance techniques in Guest Dimensions are particularly notable. Of particular note is that these performance techniques are facilitated by the programmable nature of the force feedback. This enables the virtual model to be differently impedance matched when different performance techniques are being employed. For example, the tremolo performance technique is enhanced through a decreased virtual plectrum stiffness, while the legato performance technique is enhanced through a moderately increased virtual plectrum stiffness.
In the first section of the composition, the stiffness of the pluck link (see Fig. 9.9 and Table 9.1) in the model is set to be relatively low. This haptic quality enables the performers to particularly rapidly pluck back and forth across the virtual resonators object, obtaining a tremolo effect. Especially rapid plucking results in a louder sound, while slower plucking results in a quieter sound. According to the indications in the score of Guest Dimensions, the performers use the tremolo technique to create a range of dynamics.
In the sections not involving tremolo, the performers mostly pluck more vigorously in a style that could be called legato, playing various interrelated note sequences. Instead of providing the performers with manual control over changing the notes (as with Of grating impermanence), it was decided that it would be more practical to automate the selection of all of the notes. Accordingly, the following approach was used to trigger note updates: right before one of the models is plucked, that is, as the fader knob approaches the center point for the plectrum, the next fundamental frequency is read out of a table and used to rapidly scale the fundamental frequency of the model. Careful adjustment of the threshold point is needed to avoid pitch changes during the resonance of prior attacks or changes after new attacks. Performers develop an intuition for avoiding false threshold detections by plucking confidently. An advantage of this approach is that performers do not need to manually advance the notes; however, a performer without adequate practice may occasionally advance one note too many, in which case he or she requires a moment of tacet to recover.
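The triggering logic described above can be sketched as a threshold detector with re-arming, so that a single approach to the plectrum's center point advances the pitch table exactly once. The function name, positions, and pitch values below are illustrative, not the composition's actual code.

```python
def advance_notes(positions, pluck_center, threshold, pitch_table):
    """Advance to the next fundamental each time the fader knob
    enters the threshold band around the plectrum's center point.

    `positions` is a sequence of fader readings; the returned list
    gives the active fundamental frequency at each reading.
    """
    pitches = []
    index = 0
    armed = True
    current = pitch_table[0]
    for p in positions:
        near = abs(p - pluck_center) < threshold
        if near and armed:
            current = pitch_table[min(index, len(pitch_table) - 1)]
            index += 1
            armed = False          # don't retrigger while still near
        elif not near:
            armed = True           # re-arm once the knob moves away
        pitches.append(current)
    return pitches

# The knob sweeps across the center (0.5) twice, advancing twice.
trace = advance_notes([0.0, 0.45, 0.9, 0.5, 0.0],
                      pluck_center=0.5, threshold=0.1,
                      pitch_table=[100.0, 200.0, 300.0])
```

The re-arm step is what rewards confident plucking: the knob must fully leave the threshold band before the next note can be triggered, so hesitant motion near the center cannot advance the table repeatedly.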
9.4.3 Compositional Structure
As with Of grating impermanence, Guest Dimensions is performed from a fixed score. Performers play in precise time according to a pre-written score, sometimes in homorhythm. Each part for each section utilizes one of the two models, but adjustments of the models are unique to the sections of each part. Melodic themes in counterpoint are performed with the gayageum, which are accompanied by the decorative chimes of the granite block model. Extended percussive sections feature the granite block model in strict meter, save for a brief passage in which the performers are free to separately overlap in interpretive gestures.
A case study was presented demonstrating some ways that force-feedback DMIs can be integrated into laptop orchestra practice. The contributing composers realized a variety of compositional structures, but more commonalities were found in the successful instrument design approaches that were applied. Accordingly, the authors suggest that composers working in this field consider the following: (1) providing performers with precise, physically intuitive, and reconfigurable controls, (2) using traditional controls alongside force-feedback controls as appropriate, and (3) designing timbres that sound uncannily familiar but are nonetheless novel. These approaches enabled music performance techniques that more closely resemble traditional performance techniques, which are less commonly observed in laptop orchestra practice.
https://github.com/eberdahl/Open-Source-Haptics-For-Artists (last accessed on August 16, 2017).
The functionality of Max is extended by abstractions, which are custom-defined objects that encapsulate program code.
https://cct.lsu.edu/eberdahl/V/DemoOfASlackString.mov (last accessed on August 16, 2017).
https://www.youtube.com/watch?v=l-29Xete1KM (last accessed on August 16, 2017).
https://www.youtube.com/watch?v=NcxO1ChLcr0 (last accessed on August 16, 2017).
https://www.youtube.com/watch?v=SrlZ_RUXybc (last accessed on August 16, 2017).
- 1.Andersen, T., Huber, R., Kretz, A., Fjeld, M.: Feel the beat: direct manipulation of sound during playback. In: First IEEE International Workshop on Horizontal Interactive Human-Computer Systems, pp. 123–126. Adelaide, Australia (2006)Google Scholar
- 2.Bak, J., Verplank, W., Gauthier, D.: Motors, music and motion. In: Proceedings of the 9th International Conference on Tangible, Embedded, and Embodied Interaction (ACM TEI), pp. 367–374. New York, NY, USA (2015)Google Scholar
- 3.Battey, B., Giannoukakis, M., Picinali, L.: Haptic control of multistate generative music systems. In: Proceedings of the International Computer Music Conference (ICMC) Denton, TX, USA (2015)Google Scholar
- 4.Beck, S.: Making Music with Csound, Chap. Acoustic Viability: A Design Aesthetic for Synthetic Musical Instruments. MIT Press, Cambridge, MA, USA (2000)Google Scholar
- 5.Berdahl, E., Blessing, M., Beck, S.D., Blandino, M.: More masses for the masses. In: Proceedings of the Audio Mostly 2016, pp. 105–110. Norrköping, Sweden (2016)Google Scholar
- 7.Berdahl, E., Kontogeorgakopoulos, A., Overholt, D.: HSP v2: Haptic signal processing with extensions for physical modeling. In: Nordahl R., Serafin S., Fontana F., Brewster S. (eds) Haptic and Audio Interaction Design (HAID). Lecture Notes in Computer Science, 6306, 61–62. Springer, Berlin, Heidelberg (2010)Google Scholar
- 8.Berdahl, E., Pfalz, A., Beck, S.D.: Very slack strings: a physical model and its use in the composition quartet for strings. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 9–10. Brisbane, Australia (2016)Google Scholar
- 9.Berdahl, E., Pfalz, A., Blandino, M.: Hybrid virtual modeling for multisensory interaction design. In: Proceedings of the Audio Mostly 2016, pp. 215–221. Norrköping, Sweden (2016)Google Scholar
- 11.Berdahl, E., Smith, III, J.: An introduction to the Synth-A-Modeler compiler: Modular and open-source sound synthesis using physical models. In: Proceedings of the Linux Audio Conference Stanford, CA, USA (2012)Google Scholar
- 13.Cadoz, C.: The physical model as metaphor for musical creation: PICO. TERA, a piece entirely generated by physical model. In: Proceedings of the International Computer Music Conference (ICMC), pp. 305–312. Gothenburg, Sweden (2002)Google Scholar
- 15. Cadoz, C., Luciani, A., Florens, J.L., Castagné, N.: Artistic creation and computer interactive multisensory simulation force feedback gesture transducers. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 235–246. Montreal, Canada (2003)
- 16. Campion, G.: The pantograph MK-II: a haptic instrument. In: The Synthesis of Three Dimensional Haptic Textures: Geometry, Control, and Psychophysics, pp. 45–58. Springer (2005)
- 17. Castagné, N., Cadoz, C.: Creating music by means of ‘physical thinking’: the musician oriented Genesis environment. In: Proceedings of the 5th International Conference on Digital Audio Effects (DAFx), pp. 169–174. Hamburg, Germany (2002)
- 18. Chafe, C.: Case studies of physical models in music composition. In: Proceedings of the 18th International Congress on Acoustics. Kyoto, Japan (2004)
- 19. Dahl, L.: Wicked problems and design considerations in composing for laptop orchestra. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME). University of Michigan, Ann Arbor, MI, USA (2012)
- 21. Florens, J.L., Cadoz, C., Luciani, A.: A real-time workstation for physical model of multi-sensorial and gesturally controlled instrument. In: Proceedings of the International Computer Music Conference (ICMC), pp. 518–526. Ann Arbor, MI, USA (1998)
- 23. Gabriel, R., Sandsjo, J., Shahrokni, A., Fjeld, M.: BounceSlider: actuated sliders for music performance and composition. In: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction (ACM TEI), pp. 127–130. Bonn, Germany (2008)
- 24. Gillespie, B.: The touchback keyboard. In: Proceedings of the International Computer Music Conference (ICMC), pp. 77–80. San Jose, CA (1992)
- 25. Gillespie, B.: Haptic interface for hands-on instruction in system dynamics and embedded control. In: Proceedings of the 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pp. 410–415 (2003)
- 28. Hayes, L.: Performing articulation and expression through a haptic interface. In: Proceedings of the International Computer Music Conference (ICMC), pp. 400–403. Ljubljana, Slovenia (2012)
- 29. Hayward, V., Gregorio, P., Astley, O., Greenish, S., Doyon, M., Lessard, L., McDougall, J., Sinclair, I., Boelen, S., Chen, X., et al.: Freedom-7: a high fidelity seven axis haptic device with application to surgical training. In: Experimental Robotics V, pp. 443–456. Springer (1998)
- 30. Hou, S.: Review of modal synthesis techniques and a new approach. Shock and Vibration Bulletin 40(4), 25–39 (1969)
- 31. Immersion: The value of haptics. Technical report, Immersion Corporation, San Jose, CA (2010)
- 32. Kontogeorgakopoulos, A., Kouroupetroglou, G.: Simple cases of low cost force-feedback haptic interaction with haptic digital audio effects. In: Proceedings of the 9th International Gesture Workshop. Athens, Greece (2011)
- 33. Kontogeorgakopoulos, A., Kouroupetroglou, G.: Low cost force-feedback haptic interaction with haptic digital audio effects. In: Efthimiou, E., Kouroupetroglou, G., Fotinea, S.E. (eds.) Lecture Notes in Artificial Intelligence: Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, vol. 7206, pp. 48–57. Springer (2012)
- 36. Leonard, J., Cadoz, C.: Physical modelling concepts for a collection of multisensory virtual musical instruments. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME). Baton Rouge, LA, USA (2015)
- 37. Luciani, A., Florens, J.L., Couroussé, D., Cadoz, C.: Ergotic sounds: a new way to improve playability, believability and presence of digital musical instruments. In: Proceedings of the 4th International Conference on Enactive Interfaces, pp. 373–376. Grenoble, France (2007)
- 38. Massie, T., Salisbury, K.: The PHANToM haptic interface: a device for probing virtual objects. In: Proceedings of the ASME Dynamic Systems and Control Division (1994)
- 40. Miranda, E., Kirk, R., Wanderley, M.: New Digital Musical Instruments. A-R Editions, Middleton, WI (2006)
- 41. Moray, N.: Mental workload since 1979. International Review of Ergonomics 2, 123–150 (1988)
- 42. Nichols, C.: The vBow: development of a virtual violin bow haptic human-computer interface. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 1–4. National University of Singapore (2002)
- 43. Oboe, R., De Poli, G.: Multi-instrument virtual keyboard: the MIKEY project. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 137–142. Dublin, Ireland (2002)
- 45. O’Modhrain, S.: Playing by Feel: Incorporating Haptic Feedback into Computer-Based Musical Instruments. Ph.D. thesis, Stanford University, Stanford, CA, USA (2000)
- 46. Orlarey, Y., Fober, D., Letz, S.: FAUST: an efficient functional approach to DSP programming. In: New Computational Paradigms for Computer Music. Editions Delatour, Sampzon, France (2009)
- 47. Postman, L., Egan, J.: Experimental Psychology: An Introduction. Harper, New York (1949)
- 48. Rodriguez, J., Shahrokni, A., Schrittenloher, M., Fjeld, M.: One-dimensional force feedback slider: digital platform. In: IEEE VR 2007 Workshop on Mixed Reality User Interfaces: Specification, Authoring, Adaptation, pp. 47–51 (2007)
- 49. Sinclair, S., Florens, J.L., Wanderley, M.: A haptic simulator for gestural interaction with the bowed string. In: Proceedings of the 10th French Congress on Acoustics. Lyon, France (2010)
- 52. Trueman, D., DuBois, R.L., Bahn, C.: Discovering expressive realtime parameters: the performing laptop artist as effect designer. In: Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFx), vol. 32. Limerick, Ireland (2001)
- 53. Van Den Doel, K., Kry, P.G., Pai, D.K.: FoleyAutomatic: physically-based sound effects for interactive simulation and animation. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 537–544. ACM (2001)
- 54. Verplank, B., Georg, F.: Can haptics make new music? Fader and Plank demos. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 539–540. Oslo, Norway (2011)
- 55. Verplank, B., Gurevich, M., Mathews, M.: The Plank: designing a simple haptic controller. In: Proceedings of the Conference on New Interfaces for Musical Expression (NIME), pp. 1–4. Dublin, Ireland (2002)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.