
2003: OpenSound Control: State of the Art 2003

Part of the book series: Current Research in Systematic Musicology ((CRSM,volume 3))

Abstract

OpenSound Control (“OSC”) is a protocol for communication among computers, sound synthesizers, and other multimedia devices, optimized for modern networking technology. OSC is widely used in the field of computer-based new interfaces for musical expression: for wide-area and local-area networked distributed music systems, for inter-process communication, and even within a single application.


Notes

  1. http://www.cnmat.berkeley.edu/OSC

  2. http://web.tiscali.it/mupuxeddu/csound

  3. http://www.native-instruments.com

  4. http://www.swig.org

  5. https://github.com/kaoskorobase/PyOSC

  6. http://fastlabinc.com/Siren/

References

  • Agon, C., Assayag, G., Laurson, M., & Rueda, C. (1999). Computer assisted composition at IRCAM: PatchWork & OpenMusic. Computer Music Journal.

  • Bargar, R., Choi, I., Das, S., & Goudeseune, C. (1994). Model-based interactive sound for an immersive virtual environment. In Proceedings of the International Computer Music Conference (pp. 471–474). Aarhus, Denmark.

  • Berry, M. (2002). Grainwave 3. http://web.archive.org/web/20130201102609, http://www.7cities.net/users/mikeb/GRAINW.HTM

  • Brandt, E., & Dannenberg, R. B. (1999). Time in distributed real-time systems. In Proceedings of the 1999 International Computer Music Conference (pp. 523–526). San Francisco, CA.

  • Campion, E., Momeni, A., & Murota, C. (2002). Persistent vision: Interactive computer music with dance. http://www.edmundcampion.com/project_persistentvision

  • Chaudhary, A., Freed, A., & Wright, M. (1999). An open architecture for real-time audio processing software. In Audio Engineering Society 107th Convention, preprint #5031. New York: Audio Engineering Society.

  • Chun, B. (2002). flosc: Flash OpenSound Control. https://github.com/benchun/flosc

  • Dechelle, F., et al. (1999). jMax: An environment for real-time musical applications. Computer Music Journal, 23(3), 50–58.

  • Durand, H., & Brown, B. (2002). A distributed audio interface using CORBA and OSC. In Symposium on Sensing and Input for Media-Centric Systems (SIMS) (pp. 44–48). Santa Barbara, CA.

  • Fléty, E. (2002). AtoMIC Pro: A multiple sensor acquisition device. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 96–101). Dublin, Ireland.

  • Friedl, J. (1997). Regular Expressions: Powerful Techniques for Perl and Other Tools. Sebastopol, CA: O’Reilly & Associates.

  • Garnett, G., Jonnalagadda, M., Elezovic, I., Johnson, T., & Small, K. (2001). Technological advances for conducting a virtual ensemble. In Proceedings of the International Computer Music Conference. Habana, Cuba.

  • Garnett, G., Choi, K., Johnson, T., & Subramanian, V. (2002). VirtualScore: Exploring music in an immersive virtual environment. In Symposium on Sensing and Input for Media-Centric Systems (SIMS) (pp. 19–23). Santa Barbara, CA.

  • Goudeseune, C., Garnett, G., & Johnson, T. (2001). Resonant processing of instrumental sound controlled by spatial position. In Proceedings of the International Conference on New Interfaces for Musical Expression. Seattle, WA.

  • Hankins, T., Merrill, D., & Robert, J. (2002). Circular Optical Object Locator. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 163–164). Dublin, Ireland.

  • Hansen, M., & Rubin, B. (2002). Listening post: Giving voice to online communication. In International Conference on Auditory Display. Kyoto, Japan.

  • La Kitchen Hardware. (2002). Kroonde: 16-sensor wireless UDP interface.

  • Huntington, J. (2008). In praise of custom, IP-based entertainment control, part 1.

  • Impett, J., & Bongers, B. (2001). Hypermusic and the sighting of sound—a nomadic studio report. In Proceedings of the International Computer Music Conference (pp. 459–462). Habana, Cuba: ICMA.

  • Jehan, T., & Schoner, B. (2001). An audio-driven perceptually meaningful timbre synthesizer. In Proceedings of the International Computer Music Conference (pp. 381–388). Habana, Cuba.

  • Kling, G., & Schlegel, A. (2002). OSCar and OSC: Implementation and use of distributed multimedia in the media arts. In Symposium on Sensing and Input for Media-Centric Systems (SIMS) (pp. 55–57). Santa Barbara, CA.

  • Madden, T., Smith, R., Wright, M., & Wessel, D. (2001). Preparation for interactive live computer performance in collaboration with a symphony orchestra. In Proceedings of the International Computer Music Conference. Habana, Cuba: ICMA.

  • McCartney, J. (2000). A new, flexible framework for audio and image synthesis. In Proceedings of the International Computer Music Conference (pp. 258–261). Berlin: ICMA.

  • McMillen, K., Wessel, D., & Wright, M. (1994). The ZIPI Music Parameter Description Language. Computer Music Journal, 18(4), 52–73.

  • Mills, D. (1992). Network Time Protocol (Version 3) specification, implementation and analysis. RFC 1305. http://www.faqs.org/rfcs/rfc1305.html

  • Mills, D. (1996). Simple Network Time Protocol (SNTP) Version 4 for IPv4, IPv6 and OSI. RFC 2030. http://www.faqs.org/rfcs/rfc2030.html

  • Overholt, D. (2001). The MATRIX: A novel controller for musical expression. In Proceedings of the International Conference on New Interfaces for Musical Expression. Seattle, WA: ACM SIGCHI.

  • Overholt, D. (2002). Musical mapping and synthesis for the MATRIX interface. In Symposium on Sensing and Input for Media-Centric Systems (SIMS) (pp. 7–10). Santa Barbara, CA.

  • Pope, S., & Engberg, A. (2002). Distributed control and computation in the HPDM and DSCP projects. In Symposium on Sensing and Input for Media-Centric Systems (SIMS) (pp. 38–43). Santa Barbara, CA.

  • Puckette, M. (1991). Combining event and signal processing in the Max graphical programming environment. Computer Music Journal, 15(3), 68–77.

  • Puckette, M. (1996). Pure Data. In Proceedings of the International Computer Music Conference (pp. 269–272). Hong Kong: ICMA.

  • Puckette, M. (2002). Max at seventeen. Computer Music Journal, 26(4), 31–43.

  • Ramakrishnan, C. (2003). Java OSC. http://www.illposed.com/software/javaosc.html

  • Wei, S., Visell, Y., & MacIntyre, B. T. (2003). Media choreography system. Technical report, Graphics, Visualization and Usability Center, Georgia Institute of Technology, Atlanta, GA.

  • Wikipedia. (2014). Zeta Instrument Processor Interface.

  • Wright, M. (1998). Implementation and performance issues with OpenSound Control. In Proceedings of the International Computer Music Conference (pp. 224–227). Ann Arbor, MI.

  • Wright, M. (2002). OpenSound Control Specification. http://opensoundcontrol.org/spec-1_0

  • Wright, M., Freed, A., Lee, A., Madden, T., & Momeni, A. (2001). Managing complexity with explicit mapping of gestures to sound control with OSC. In Proceedings of the International Computer Music Conference (pp. 314–317). Habana, Cuba.

  • Young, J. (2001). Using the web for live interactive music. In Proceedings of the International Computer Music Conference (pp. 302–305). Habana, Cuba.

  • Zicarelli, D. (1998). An extensible real-time signal processing environment for MAX. In Proceedings of the International Computer Music Conference (pp. 463–466). Ann Arbor, MI.


Acknowledgements

Amar Chaudhary, John ffitch, Guenter Geiger, Camille Goudeseune, Peter Kassakian, Stefan Kersten, James McCartney, Marcelo Wanderley, David Wessel.

Author information

Corresponding author: Matthew Wright.


Appendices

Open Sound Control: Some Context and Reflections on Thirteen Years’ Advances

Matthew Wright and Adrian Freed

OSC is a widely used encoding in NIME projects, and this paper (the fourth on OSC) provides practitioners with a solid introduction to what OSC is and how they might use it. Although today we distinguish the terms “encoding” and “protocol” and tend to be more specific when talking about “client” and “server,” OSC essentially has not changed since this paper. Receiving a message, modeled here as “directing” a “kind of message” and its “arguments” to “entities” within the receiver, can now be thought of as parallel assignment or writing to a key/value store such as a “record,” “dictionary,” “associative array,” or “property list,” but the underlying interoperable machinery remains substantially identical. Today’s wealth of OSC APIs and libraries for most active programming languages (including the 38 “programming language libraries” on opensoundcontrol.org) allows most users to ignore the details of the encoding.
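The key/value-store reading of message receipt is easy to make concrete. A minimal Python sketch follows; the names `state` and `receive` are illustrative, not from any OSC library:

```python
# Sketch of the receive model described above: each incoming OSC message
# is treated as an assignment into a key/value store keyed by the
# message's address. Names here are illustrative, not from any OSC library.

state = {}  # the receiver's "dictionary" of addressable entities

def receive(address, *args):
    """Model receipt of one OSC message as a key/value write."""
    state[address] = args

# A gestural controller might send:
receive("/synth/1/freq", 440.0)
receive("/synth/1/amp", 0.5)

print(state["/synth/1/freq"])  # (440.0,)
```

The point is only that a receiver's address space behaves like an associative array; real receivers additionally pattern-match addresses and schedule bundle contents.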

This was the first OSC paper to describe type tags, one of the few new features since the original 1997 ICMC paper. Credit belongs to James McCartney, who needed to support situations where the sender did not know in advance what argument types the receiver expected for each message, or where type polymorphism (same message name, different argument types) was desired. Just as CNMAT had unilaterally defined and implemented OSC, McCartney unilaterally defined and implemented OSC type tags (originally optional, hence the funny comma character), which CNMAT later adopted and which are now universal.
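Type tags make each message self-describing. A sketch of the OSC 1.0 binary layout, handling only int, float, and string arguments (the function names are ours, not from any library):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 encoding."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def encode_message(address: str, *args) -> bytes:
    """Encode an OSC message with a type tag string. The leading ',' is the
    historical marker of the originally optional tags mentioned above.
    Sketch only: supports int ('i'), float ('f'), and str ('s') arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

msg = encode_message("/synth/freq", 440)
# msg == b'/synth/freq\x00,i\x00\x00\x00\x00\x01\xb8'
```

Because the tag string travels with the message, a receiver can accept `/synth/freq 440` and `/synth/freq 440.0` and dispatch on the tags, which is exactly the polymorphism McCartney wanted.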

Today OSC-encoded packets are embedded in many kinds of data streams and transported among devices over a variety of wired and wireless protocols, including SLIP over USB or “TTL” serial; TCP via WebSockets, UNIX sockets, or Windows sockets; and, most often, UDP packets on IP-based LANs and WANs.
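Of those transports, UDP is the simplest to demonstrate. A self-contained loopback sketch using only the Python standard library; the message is an argument-free `/test`, and the port is chosen by the OS:

```python
import socket

def osc_string(s: str) -> bytes:
    """Null-terminated, 4-byte-aligned OSC string."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

# Argument-free OSC message: address plus the bare "," type tag string.
packet = osc_string("/test") + osc_string(",")

# Loopback demonstration of the most common transport, UDP.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
recv.settimeout(5)
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(packet, recv.getsockname())
data, _ = recv.recvfrom(1024)
print(data)  # b'/test\x00\x00\x00,\x00\x00\x00'
```

SLIP framing over serial and TCP streams differ only in how packet boundaries are marked; the OSC bytes themselves are identical.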

OSC use preceded that of the popular XML and JSON encodings, which both have simpler representations for hierarchical data than OSC’s anonymous recursive subbundles (which are rarely implemented, tested, or used). OSC now supports bundles as message arguments.
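The recursive-subbundle structure mentioned above is simple to show: a bundle is the string `#bundle`, an eight-byte time tag, and size-prefixed elements, each of which may itself be a bundle. A sketch (helper names are ours):

```python
import struct

def osc_string(s: str) -> bytes:
    """Null-terminated, 4-byte-aligned OSC string."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

IMMEDIATELY = struct.pack(">Q", 1)  # NTP-style time tag meaning "now"

def bundle(*elements: bytes) -> bytes:
    """Encode an OSC bundle: '#bundle', an 8-byte time tag, then each
    element (message or subbundle) prefixed with its int32 size."""
    out = osc_string("#bundle") + IMMEDIATELY
    for e in elements:
        out += struct.pack(">i", len(e)) + e
    return out

msg = osc_string("/synth/freq") + osc_string(",")
nested = bundle(bundle(msg))  # a subbundle as a bundle element
print(nested[:8])  # b'#bundle\x00'
```

The size prefix is what makes the anonymous recursion possible, and also why few implementations bother to walk more than one level deep.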

This paper continued our tradition of talking about a query system that didn’t really exist; today there are several incompatible query systems, none widely implemented. The paper’s optimism that “obviously” all implementations would eventually support every feature of OSC has given way to accepting that there will always be incomplete implementations.

OSC has traveled into many different development communities, which use it in innovative ways not envisaged by its inventors. Although in 2003 we believed we were aware of almost all OSC use, the full history and extent of its cultural uptake has yet to be carefully chronicled. Although the existence of the OSC Kit may have helped adoption of OSC, to our knowledge nobody actually used it fully as designed. Some implementations used portions of it (e.g., pattern matching), but most reimplemented OSC until other libraries such as oscpack and liblo appeared. On the other hand, sendOSC and dumpOSC are still indispensable: they confirm whether valid messages are sent and what they contain (or the specific problem if invalid), and they help troubleshoot networks, firewalls, address space mismatches, faulty arguments, etc.
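The heart of a dumpOSC-style tool is small enough to sketch: read the address pattern and the type tag string, respecting four-byte alignment, and print them. This is a sketch only; the real tool also decodes arguments and bundles:

```python
def dump(packet: bytes) -> str:
    """dumpOSC-style peek at a message: report its address pattern and
    type tags (argument and bundle decoding omitted)."""
    def read_string(buf: bytes, i: int):
        end = buf.index(b"\x00", i)       # OSC strings are null-terminated
        s = buf[i:end].decode()
        i = end + 1
        return s, i + (-i % 4)            # skip padding to 4-byte boundary

    address, i = read_string(packet, 0)
    typetags, _ = read_string(packet, i)
    return f"{address} {typetags}"

# An int (440) and a float (440.0) after the ",if" tag string:
packet = b"/synth/freq\x00,if\x00" + b"\x00\x00\x01\xb8" + b"C\xdc\x00\x00"
print(dump(packet))  # /synth/freq ,if
```

Even this much is enough to catch the most common field problems: misaligned strings, a missing comma, or an address that doesn't match the receiver's space.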

OSC has also traveled to many creative communities outside of computer music. Within the domains of virtual and augmented reality (Unity, Unreal Engine), staged dramatic works with complex media design (Processing, OpenFrameworks, D3, Millumin, Madmapper, TouchDesigner), modeling and fabrication (Rhino Grasshopper with gHowl), and physical computing and Internet of Things applications (software: ROS, oscuino, Maxuino; hardware: Particle SparkCore, ESP8266), many practitioners are adapting OSC to their needs for communicating among several processes, platforms, or locations.

We offer this broad taxonomy of current use patterns:

  1. Client/server: e.g., communication between computers optimized for user interaction (e.g., Apple Macintosh) and machines optimized for computing performance (e.g., SGI O2); Meyer/LCS spatial audio and show control.

  2. Inter-process communication: e.g., between Max and synthesis “servers” on the same machine; SuperCollider; OpenSound World...

  3. Inter-media synchronization and communication: e.g., between Unity as a real-time visual virtual reality and Max as a synthesizer for responsive environmental sound.

  4. Synchronization and automation; netjamming.

  5. Transcoding and wrapping (as described in Sect. 5.4): e.g., TUIO for multitouch, DMX lighting control, many wrappers in CNMAT’s MMJ Depot, and o.io.

  6. Native encoding for input and output devices (a MIDI alternative): e.g., Monome, OSC for Arduino, many phone and tablet apps. (We note that all of the commercial hardware projects listed in Sect. 2.6 are now gone, but many others have taken their place.)

  7. Extension language (odot, GDIF, SpatDIF).

  8. Dynamic programming hooks (OSW).
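The transcoding-and-wrapping pattern is often just a small translation function. A sketch that wraps raw MIDI channel messages in OSC-style addresses; the address scheme here is invented for illustration, not taken from TUIO or any shipping wrapper:

```python
# Illustrative only: mapping a raw 3-byte MIDI channel message to an
# OSC-style (address, arguments) pair, the "transcoding and wrapping"
# use pattern above. The /midi/... address scheme is our invention.

def midi_to_osc(status: int, data1: int, data2: int):
    """Wrap a MIDI channel message as an (address, args) pair."""
    kind = status & 0xF0
    channel = (status & 0x0F) + 1            # MIDI channels are 1-based
    if kind == 0x90 and data2 > 0:           # note-on with velocity
        return (f"/midi/{channel}/noteon", [data1, data2])
    if kind == 0x80 or (kind == 0x90 and data2 == 0):
        return (f"/midi/{channel}/noteoff", [data1])
    return (f"/midi/{channel}/raw", [status, data1, data2])

print(midi_to_osc(0x90, 60, 100))  # ('/midi/1/noteon', [60, 100])
```

The win over raw MIDI is that the wrapped form is self-describing and routable by address, so downstream patches need no knowledge of MIDI status bytes.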

Regarding education: in 2003 an institution teaching OSC was noteworthy; today one would expect OSC in any computer music or interactive digital arts program, and especially in any hands-on NIME course.

Expert Commentary: OpenSound Control: State of the Art 2003

Roger B. Dannenberg

OpenSound Control (OSC) has become a standard building block not only for music systems but also for any number of interactive art installations, virtual reality systems, and other systems needing distributed control and communication. OSC follows a long history of protocols: MIDI for music; ZIPI, intended to extend and replace MIDI; and CORBA and DCE, middleware for distributed computing. A series of protocols for the Web followed, resulting in HTTP, SOAP, and REST, and certainly more will come. With all these distributed systems protocols, it is surprising that OSC has gained so much traction. Why OSC?

John Huntington (Huntington 2008) describes some “common attributes of successful standards,” including: “Successful standards are pulled into existence by the market; they are not pushed. They fill a clear commercial demand in the market, especially one driven by users.” One could argue that OSC greatly simplified existing standards and better met the requirements of interactive music programs. However, even ZIPI was said to have an “unusual addressing scheme which required substantial increase in complexity” (Wikipedia 2014), and OSC’s addressing scheme is even more complex. When OSC was announced, many complained that its pattern-matching features and URL-like addresses would be too slow and difficult to implement. On the other hand, OSC arguably was, and still is, too simple: it was modified to include datatype information, and it is still considered to have problems with queries, discovery, timestamps, and the use of different transports. Whatever the reasons, OSC has been wildly successful, especially as an open technology with academic origins.
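The pattern-matching addressing complained about above is indeed extra work for implementers, though less than feared: OSC 1.0 wildcards translate mechanically to regular expressions. A sketch of one way to do it; handling of `!` negation and `-` ranges inside brackets is omitted:

```python
import re

def osc_pattern_to_regex(pattern: str) -> re.Pattern:
    """Translate an OSC 1.0 address pattern into a Python regex.
    Sketch only: '!' negation and '-' ranges inside [] are not handled."""
    out, i = "", 0
    while i < len(pattern):
        c = pattern[i]
        if c == "?":
            out += "[^/]"                 # any single char except '/'
        elif c == "*":
            out += "[^/]*"                # any run within one address part
        elif c == "[":
            j = pattern.index("]", i)
            out += "[" + pattern[i + 1:j] + "]"
            i = j
        elif c == "{":
            j = pattern.index("}", i)
            alts = pattern[i + 1:j].split(",")
            out += "(?:" + "|".join(map(re.escape, alts)) + ")"
            i = j
        else:
            out += re.escape(c)
        i += 1
    return re.compile(out + r"\Z")

rx = osc_pattern_to_regex("/synth/{freq,amp}/?")
print(bool(rx.match("/synth/freq/1")))   # True
print(bool(rx.match("/synth/gate/1")))   # False
```

A receiver applies the compiled pattern to every address in its space, so one message can fan out to many entities; that fan-out, not the matching itself, is the real design cost.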

One explanation for OSC’s popularity could be that it lowers the barriers to interfacing with Max/MSP and Pd (Puckette 2002), two very popular visual programming systems used by musicians and artists. Without OSC, one can extend Max/MSP or Pd by writing “external” objects in C, but this requires rather detailed knowledge of internal program structures and interfaces. Once OSC “objects” were created for Max/MSP and Pd, one could extend these programs through OSC. For example, one can now build a sensor that sends data through very simple network packets. Using OSC with Max/MSP and Pd solves both the connection problem (just use Ethernet or Wi-Fi) and the interface problem (forming and sending simple network packets is simpler than developing “externals”). The interoperability, modularity, and distributed processing support resulting from a network-based protocol are all added attractions, but could it be that the desire to connect to Max and Pd drove widespread adoption of OSC?

What is next? When OSC was announced, the fastest personal computers were comparable to today’s smartphones. Now that even low-cost embedded processors used for sensors are quite capable of running sophisticated software, one can imagine much more powerful protocols being deployed as “standards” in the worlds of interactive art and music. But going back to systems like CORBA seems unlikely to catch on. A lesson we can take from OSC and the Web is that, while highly optimized binary protocols may be attractive for performance, the flexibility of symbolic addresses and late or dynamic binding is more important to most users. I can imagine the next generation of interprocess control being built upon bi-directional network connections where every node runs an active server to offer named services (as opposed to the IP addresses and port numbers most often used with OSC), peer-to-peer connections, automatic routing, clock synchronization (Brandt and Dannenberg 1999), web interfaces for performance monitoring and testing, and many other services. Clients might construct URL-like requests that specify a destination by name (“audio-mixer”) to be routed automatically to the node providing that resource. Of course, this would represent a big step up in complexity from OSC, but it might make things simpler for end users.

Whatever happens in the future, OSC has established itself as a versatile standard for inter-process communication, enabling countless systems to be constructed in a modular and flexible way. This paper, “OpenSound Control: State of the Art 2003,” gives an excellent overview of OSC, the designer’s intentions, and how applications use OSC. More than 10 years later, the article is still an excellent way to become acquainted with OpenSound Control.


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Wright, M., Freed, A., Momeni, A. (2017). 2003: OpenSound Control: State of the Art 2003. In: Jensenius, A., Lyons, M. (eds) A NIME Reader. Current Research in Systematic Musicology, vol 3. Springer, Cham. https://doi.org/10.1007/978-3-319-47214-0_9


  • Print ISBN: 978-3-319-47213-3

  • Online ISBN: 978-3-319-47214-0
