Experiencing Audio and Music in a Fully Immersive Environment

  • Xavier Amatriain
  • Jorge Castellanos
  • Tobias Höllerer
  • JoAnn Kuchera-Morin
  • Stephen T. Pope
  • Graham Wakefield
  • Will Wolcott
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4969)


The UCSB Allosphere is a three-story-high spherical instrument in which virtual environments and performances can be experienced in full immersion. The space is currently being equipped with high-resolution active stereo projectors, a 3D sound system with several hundred speakers, and tracking and interaction mechanisms.

The Allosphere is simultaneously multimodal, multimedia, multi-user, immersive, and interactive. This novel and unique instrument will be used for research into scientific visualization/auralization and data exploration, and as a research environment for behavioral and cognitive scientists. It will also serve as a research and performance space for artists exploring new forms of art. In particular, the Allosphere has been carefully designed to support immersive music and aural applications.

In this paper, we give an overview of the instrument, focusing on the audio subsystem. We explain the rationale behind some of the design decisions and describe the techniques employed in making the Allosphere a truly general-purpose immersive audiovisual lab and stage. Finally, we present initial results and our experiences in developing and using the Allosphere in several prototype projects.
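To make the spatial-audio techniques behind such a spherical speaker array concrete, the sketch below shows a standard first-order Ambisonic (B-format) encode of a mono source at a given direction, followed by a naive projection decode onto a ring of loudspeakers. This is an illustrative textbook formulation, not the Allosphere's actual rendering pipeline; the function names and the basic (non-optimized) decoder are assumptions for the example.

```python
import math

def encode_bformat(s, az, el=0.0):
    """First-order Ambisonic (B-format) encode of mono sample s
    at azimuth az and elevation el, in radians (W, X, Y, Z)."""
    return (
        s / math.sqrt(2.0),               # W: omnidirectional component
        s * math.cos(az) * math.cos(el),  # X: front-back
        s * math.sin(az) * math.cos(el),  # Y: left-right
        s * math.sin(el),                 # Z: up-down
    )

def decode_basic(bfmt, speaker_dirs):
    """Naive projection decode: each speaker's gain is the B-format
    signal projected onto that speaker's direction."""
    w, x, y, z = bfmt
    gains = []
    for az, el in speaker_dirs:
        gains.append(
            w / math.sqrt(2.0)
            + x * math.cos(az) * math.cos(el)
            + y * math.sin(az) * math.cos(el)
            + z * math.sin(el)
        )
    # Normalize by the number of speakers to keep overall level bounded
    return [g / len(speaker_dirs) for g in gains]
```

For example, with eight speakers in a horizontal ring, a virtual source placed at the azimuth of one speaker yields the largest gain at that speaker, with gains falling off smoothly around the ring; a real periphonic system would use higher orders and a matched decoder for the full sphere.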


Keywords: Virtual Environment · Sound Source · Immersive Environment · Virtual Source · Spatial Aliasing





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

All authors are affiliated with UC Santa Barbara.
