Multimodal Excitatory Interfaces with Automatic Content Classification

Chapter
Part of the Human-Computer Interaction Series (HCIS) book series

Abstract

We describe a non-visual interface for displaying data on mobile devices, based on active exploration: the device is shaken, revealing the contents rattling around inside it. The display combines sample-based contact sonification with event-playback vibrotactile feedback, producing a rich and compelling illusion much like balls rattling inside a box. Motion is sensed with accelerometers, directly linking the user's movements to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. We apply this interaction style to the display of an SMS inbox, using language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of an inbox's contents without compromising privacy or interrupting the user.
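
The chapter does not include source code; the following minimal Python sketch shows one way the excitatory "balls in a box" loop described above could be structured. All names and constants here (CLASS_PROFILES, the restitution coefficient, the shake pattern) are illustrative assumptions, not the authors' implementation: sensed acceleration excites simulated balls, and each wall impact yields an event that a real system would render as a contact-sound sample plus a vibrotactile pulse.

```python
# Illustrative sketch only: a 1-D ball-in-a-box model driven by
# accelerometer input. Impact events stand in for sample playback
# and vibrotactile pulses; all names and values are hypothetical.
from dataclasses import dataclass

BOX_HALF_WIDTH = 1.0   # box extent in model units
RESTITUTION = 0.6      # fraction of speed retained at each wall impact
DT = 0.01              # simulation step (s), ~100 Hz accelerometer rate

# Hypothetical mapping from message class (the classifier's output) to
# impact timbre and ball mass, so different message types sound and
# feel different when shaken.
CLASS_PROFILES = {
    "personal": {"timbre": "wood",  "mass": 1.0},
    "work":     {"timbre": "metal", "mass": 2.0},
}

@dataclass
class Ball:
    position: float
    velocity: float
    timbre: str
    mass: float

def step(balls, device_accel):
    """Advance the simulation one step under the sensed acceleration.

    Returns (timbre, intensity) impact events; a real system would
    render each as a contact-sound sample plus a vibrotactile pulse.
    """
    events = []
    for b in balls:
        # Device acceleration acts as an inertial force in the box frame.
        b.velocity += -device_accel * DT
        b.position += b.velocity * DT
        if abs(b.position) > BOX_HALF_WIDTH:           # wall impact
            intensity = b.mass * abs(b.velocity)       # momentum -> loudness
            events.append((b.timbre, intensity))
            b.position = max(-BOX_HALF_WIDTH, min(BOX_HALF_WIDTH, b.position))
            b.velocity = -b.velocity * RESTITUTION
    return events

# One ball per inbox message, with dynamics set by its classified type.
inbox = ["personal", "work", "personal"]
balls = [Ball(0.0, 0.0, **CLASS_PROFILES[c]) for c in inbox]

# A sustained "shake" (alternating acceleration bursts) produces impacts.
shake = ([8.0] * 25 + [-8.0] * 25) * 4
for accel in shake:
    for timbre, intensity in step(balls, accel):
        print(f"impact: {timbre} sample, intensity {intensity:.2f}")
```

Mapping impact intensity to sample amplitude and pulse strength is what gives the momentum-dependent, physically plausible feel the abstract describes.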
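On the classification side, the chapter builds on adaptive language models of the PPM (prediction by partial matching) family. As a simplified illustration only, the sketch below substitutes a much cruder character-bigram model per class: each class's model is trained on labelled example messages, and a new SMS is assigned to whichever class gives it the highest log-probability. The toy training data and the 64-symbol smoothing alphabet are assumptions for the example.

```python
# Simplified stand-in for PPM-style classification: per-class
# character-bigram models with add-one smoothing.
import math
from collections import defaultdict

def train_bigram_model(texts):
    """Count character bigrams and per-character totals for one class."""
    counts = defaultdict(int)
    totals = defaultdict(int)
    for text in texts:
        padded = "^" + text.lower()        # "^" marks message start
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return counts, totals

def log_prob(model, text):
    """Smoothed log-probability of a message under one class model."""
    counts, totals = model
    padded = "^" + text.lower()
    score = 0.0
    for a, b in zip(padded, padded[1:]):
        # Add-one smoothing over a nominal 64-symbol alphabet.
        score += math.log((counts.get((a, b), 0) + 1) /
                          (totals.get(a, 0) + 64))
    return score

# Toy labelled corpus standing in for real training messages.
models = {
    "personal": train_bigram_model(["see you tonight x", "love you, call me"]),
    "work":     train_bigram_model(["meeting moved to 3pm", "send the report"]),
}

def classify(sms):
    """Pick the class whose model scores the message highest."""
    return max(models, key=lambda c: log_prob(models[c], sms))

print(classify("running late, meeting at 4"))   # likely "work"
```

The winning class would then select the ball's profile in the simulation above, so shaking the device conveys what kinds of messages are waiting without displaying their contents.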

Keywords

Vibrotactile · Audio · Language model · Mobile


Copyright information

© Springer-Verlag London 2010

Authors and Affiliations

University of Glasgow, Glasgow, UK
