A Model for the Use of Different Sound Description Layers Within a Multimedia Environment

Part of the IFIP Series on Computer Graphics book series (IFIP SER.COMP.)

Abstract

The number of applications on graphics workstations that use sound to enhance human-computer interaction is increasing. Handling sound in addition to text and graphics appears to be the next step towards a multimedia environment.

This paper outlines a first approach to an audio content architecture consisting of several description layers, each handling a different level of abstraction. To this end, a comparison between image rendering and sound synthesis is drawn. A short overview of digital sound synthesis techniques and tone color (timbre) models then illustrates both hardware requirements and psychophysical difficulties. Finally, the audio layer model is presented: it contains three layers for parametric and symbolic audio description, one layer for digital audio, and one presentation layer.
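The layering described above can be sketched as a simple data structure. This is only an illustrative sketch: the abstract states the number and roles of the layers, but the layer names and the `refine` operation used here are assumptions, not the paper's terminology.

```python
from dataclasses import dataclass

# Illustrative layer names, ordered from most abstract to most concrete.
# The paper specifies only: three parametric/symbolic layers, one digital
# audio layer, and one presentation layer.
LAYERS = [
    "presentation",     # presentation layer: controls final audio output
    "symbolic",         # symbolic description (e.g. note/event level)
    "parametric-high",  # abstract synthesis parameters
    "parametric-low",   # concrete synthesis parameters
    "digital-audio",    # sampled waveform data
]

@dataclass
class AudioObject:
    """A sound described at one abstraction layer of the model."""
    layer: str
    content: object

def refine(obj: AudioObject) -> AudioObject:
    """Move a description one layer closer to digital audio (a stub:
    a real system would translate the content between representations)."""
    i = LAYERS.index(obj.layer)
    if i == len(LAYERS) - 1:
        return obj  # already at the digital audio layer
    return AudioObject(layer=LAYERS[i + 1], content=obj.content)
```

Such a structure would let an application hand a sound to the system at whatever abstraction level it has available, leaving lower layers to refine it toward playable digital audio.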

Keywords

Digital Audio · Scientific Visualization · Presentation Layer · Multimedia Environment · Audio Description


Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • C. Blum

