Lemma 4: Haptic Input + Auditory Display = Musical Instrument?

Paul Vickers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4129)


In this paper we look at some of the design issues that affect the success of multimodal displays combining acoustic and haptic modalities. First, issues affecting successful sonification design are explored and suggestions are made about how the language of electroacoustic music can assist. Next, haptic interaction is introduced in light of this discussion, with particular focus on the roles of gesture and mimesis. Finally, observations are made about issues that arise when the haptic and acoustic modalities are combined in the interface. The paper examines examples in which auditory and haptic interaction have been successfully combined beyond the strict confines of the human-computer application interface (musical instruments in particular) and discusses lessons that may be drawn from these domains and applied to the world of multimodal human-computer interaction. The argument is made that combined haptic-auditory interaction schemes can be thought of as musical instruments, and some of the possible ramifications of this view are raised.







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Paul Vickers — School of Computing, Engineering, and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
