Personal and Ubiquitous Computing, Volume 12, Issue 7, pp 513–525

ONTRACK: Dynamically adapting music playback to support navigation

  • Matt Jones
  • Steve Jones
  • Gareth Bradley
  • Nigel Warren
  • David Bainbridge
  • Geoff Holmes
Original Article


Abstract

Listening to music on personal, digital devices whilst mobile is an enjoyable, everyday activity. We explore a scheme for exploiting this practice to immerse listeners in navigation cues. Our prototype, ONTRACK, continuously adapts audio, modifying the spatial balance and volume to lead listeners to their target destination. First we report on an initial lab-based evaluation that demonstrated the approach's efficacy: users were able to complete tasks within a reasonable time and their subjective feedback was positive. Encouraged by these results we constructed a handheld prototype. Here, we discuss this implementation and the results of field trials. These indicate that even with a low-fidelity realisation of the concept, users can quite effectively navigate complicated routes.
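The adaptation the abstract describes, steering the listener by shifting the music's stereo balance and volume according to how far they face off-bearing, can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the paper's actual implementation: the function name, the sine/cosine mapping, and the parameter choices are all hypothetical.

```python
import math

def adapt_playback(user_heading_deg, bearing_to_target_deg):
    """Illustrative mapping from off-target angle to stereo pan and
    volume. All specific choices here are assumptions for the sake of
    the sketch, not ONTRACK's published design."""
    # Signed angular error in (-180, 180]: negative means the target
    # lies to the listener's left, positive to the right.
    error = (bearing_to_target_deg - user_heading_deg + 180) % 360 - 180
    # Pan the music toward the target: -1 = full left, +1 = full right.
    pan = math.sin(math.radians(error))
    # Attenuate volume as the listener turns away from the target, so
    # that walking on-bearing restores full volume.
    volume = max(0.0, math.cos(math.radians(error)))
    return pan, volume
```

Under this mapping, a listener facing the target hears centred, full-volume music; a target directly to the right pans the audio hard right and drops the volume, inviting a corrective turn.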





Copyright information

© Springer-Verlag London Limited 2007

Authors and Affiliations

  • Matt Jones (1)
  • Steve Jones (2)
  • Gareth Bradley (2)
  • Nigel Warren (2)
  • David Bainbridge (2)
  • Geoff Holmes (2)

  1. Future Interaction Technology Lab, Department of Computer Science, University of Wales, Swansea, UK
  2. Department of Computer Science, University of Waikato, Hamilton, New Zealand
