
Personal and Ubiquitous Computing, Volume 16, Issue 8, pp 987–999

Auditory display design for exploration in mobile audio-augmented reality

  • Yolanda Vazquez-Alvarez
  • Ian Oakley
  • Stephen A. Brewster
Original Article

Abstract

In this paper, we compare four different auditory displays in a mobile audio-augmented reality environment (a sound garden). The displays varied in their use of non-speech audio (Earcons) as auditory landmarks and in their use of 3D audio spatialization, and the goal was to test the user experience of discovery in a purely exploratory environment containing multiple simultaneous sound sources. We present quantitative and qualitative results from an initial user study conducted in the Municipal Gardens of Funchal, Madeira. Results show that spatial audio combined with Earcons allowed users to explore multiple simultaneous sources and had the added benefit of increasing the level of immersion in the experience. In addition, spatial audio encouraged a more exploratory and playful response to the environment. An analysis of the participants’ logged data suggested that the level of immersion can be related to increased instances of stopping and scanning the environment, which can be quantified in terms of walking speed and head movement.
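The stop-and-scan behaviour mentioned above can, in principle, be derived from standard position and heading logs. The following is a minimal sketch, not the authors’ actual analysis code: it assumes each logged sample holds a timestamp in seconds, planar x/y coordinates in metres, and a compass heading in degrees, and flags intervals where walking speed is near zero while head movement stays high. The thresholds are illustrative placeholders, not values reported in the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Sample:
    t: float        # timestamp in seconds
    x: float        # easting in metres
    y: float        # northing in metres
    heading: float  # compass heading in degrees, 0-360

def angle_diff(a: float, b: float) -> float:
    """Smallest signed difference between two headings, in degrees."""
    return (b - a + 180.0) % 360.0 - 180.0

def stop_and_scan(samples, speed_max=0.3, turn_min=20.0):
    """Flag intervals where the user is nearly stationary but turning
    their head: speed below speed_max (m/s) and head-turn rate above
    turn_min (deg/s). Both thresholds are hypothetical."""
    episodes = []
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        if dt <= 0:
            continue  # skip duplicate or out-of-order timestamps
        speed = math.hypot(cur.x - prev.x, cur.y - prev.y) / dt
        turn_rate = abs(angle_diff(prev.heading, cur.heading)) / dt
        if speed < speed_max and turn_rate > turn_min:
            episodes.append((prev.t, cur.t, speed, turn_rate))
    return episodes

# Example: standing still while sweeping the head at ~30 deg/s.
log = [Sample(t, 10.0, 5.0, (t * 30.0) % 360.0) for t in range(5)]
for start, end, v, w in stop_and_scan(log):
    print(f"{start:.0f}-{end:.0f}s: speed={v:.2f} m/s, turn={w:.1f} deg/s")
```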

Keywords

Sound garden · Spatial audio · Auditory displays · Eyes-free interaction · Mobile audio-augmented reality · Exploratory environments

Notes

Acknowledgments

This work was supported by the Ken Browning Travelling Scholarship (University of Glasgow, UK), Nokia and EPSRC research grant EP/F023405 “Gaime”. We would like to express our gratitude to the members of the Madeira-ITI group at Madeira University who participated in this research project.


Copyright information

© Springer-Verlag London Limited 2011

Authors and Affiliations

  • Yolanda Vazquez-Alvarez (1)
  • Ian Oakley (2)
  • Stephen A. Brewster (1)
  1. Glasgow Interactive Systems Group, School of Computing Science, University of Glasgow, Glasgow, UK
  2. Madeira-ITI, University of Madeira, Funchal, Portugal
