Leaping across Modalities: Speed Regulation Messages in Audio and Tactile Domains

  • Kai Tuuri
  • Tuomas Eerola
  • Antti Pirhonen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6306)

Abstract

This study examines three design bases for speed regulation messages by testing their ability to function across modalities. Two of the design bases utilise a method originally intended for sound design, while the third uses a method meant for tactile feedback. According to the experimental results, all three designs communicate the intended meanings similarly in the audio and tactile domains. It was also found that the melodic (frequency changes) and rhythmic (segmentation) features of the stimuli function differently for each type of message.

Keywords

audio, tactile, crossmodal interactions, crossmodal design



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Kai Tuuri (1)
  • Tuomas Eerola (2)
  • Antti Pirhonen (1)
  1. Department of Computer Science and Information Systems, University of Jyväskylä, Finland
  2. Department of Music, University of Jyväskylä, Finland
