
The Realtime Method Based on Audio Scenegraph for 3D Sound Rendering

  • Conference paper
Advances in Multimedia Information Processing - PCM 2005 (PCM 2005)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 3767)


Abstract

Recent studies have shown that combining auditory and visual cues enhances the sense of immersion in virtual reality and interactive entertainment applications. However, realtime 3D audiovisual rendering carries a high computational cost. To reduce the realtime computation, this paper proposes a framework for optimized 3D sound rendering built around an Audio Scenegraph: a reduced representation of the 3D scene that carries the parameters needed to compute early sound reflections. In a pre-computation phase, the framework performs both graphic reduction and sound-source reduction for an environment containing a complex 3D scene, sound sources, and a listener. The complex scene is reduced to the set of facets significant for sound rendering, and the result is encoded as an Audio Scenegraph. The graph is then passed to the sound engine, which clusters the sound sources to reduce the realtime cost of computing sound propagation. Sound-source reduction requires estimating early reflection times, applying a perceptual culling test, and clustering the sounds that can reach the facets of each subspace according to those estimates. In the realtime phase, given the listener's position, direction, and subspace index, sounds inside the listener's subspace are rendered with the image method, while sounds outside it are rendered by assigning the clustered sounds to audio buffers. Because most per-sound calculations are performed offline, the realtime cost remains stable even as the number of sounds grows: rendering time was nearly constant regardless of scene complexity, even with hundreds of sound sources. As future work, the perceptual acceptability of the grouping algorithm should be evaluated through user tests.
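The image method the abstract relies on for early reflections mirrors each sound source across the reflecting surfaces and treats every mirror image as a virtual source, so an early reflection's arrival delay is just the image-to-listener distance divided by the speed of sound. The sketch below illustrates this for first-order reflections in an axis-aligned "shoebox" room; the room model and function names are illustrative assumptions for exposition, not the paper's actual data structures.

```python
# Minimal sketch of the image (mirror-source) method for early reflections,
# in the spirit of Allen & Berkley's image model. Assumes an axis-aligned
# rectangular room with one corner at the origin (an assumption for this
# sketch; the paper handles general facets via its Audio Scenegraph).
import math

SPEED_OF_SOUND = 343.0  # metres per second, air at ~20 degrees C

def first_order_images(source, room_size):
    """Mirror the source across each of the 6 walls; returns 6 image sources."""
    images = []
    for axis in range(3):
        for wall in (0.0, room_size[axis]):
            img = list(source)
            img[axis] = 2.0 * wall - img[axis]  # reflect across the wall plane
            images.append(tuple(img))
    return images

def early_reflection_delays(source, listener, room_size):
    """Arrival delays (seconds) of the direct path and each first-order image."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    paths = [source] + first_order_images(source, room_size)
    return [dist(p, listener) / SPEED_OF_SOUND for p in paths]

# Direct path plus 6 first-order reflections for one source/listener pair.
delays = early_reflection_delays((1.0, 1.0, 1.0), (3.0, 2.0, 1.5), (5.0, 4.0, 3.0))
```

Estimating such delays offline, per facet, is what lets the paper's pre-computation phase decide which sources can be perceptually culled or clustered before any realtime work begins.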




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yi, JS., Seong, SJ., Nam, YH. (2005). The Realtime Method Based on Audio Scenegraph for 3D Sound Rendering. In: Ho, YS., Kim, H.J. (eds) Advances in Multimedia Information Processing - PCM 2005. PCM 2005. Lecture Notes in Computer Science, vol 3767. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11581772_63


  • DOI: https://doi.org/10.1007/11581772_63

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-30027-4

  • Online ISBN: 978-3-540-32130-9

  • eBook Packages: Computer Science (R0)
