Chapter

The Engineering of Mixed Reality Systems

Human-Computer Interaction Series, pp. 233–250

Multimodal Excitatory Interfaces with Automatic Content Classification

  • John Williamson, University of Glasgow
  • Roderick Murray-Smith, University of Glasgow

Abstract

We describe a non-visual interface for displaying data on mobile devices, built around active exploration: the device is shaken, revealing the contents rattling around inside. The display combines sample-based contact sonification with event-playback vibrotactile feedback, producing a rich and compelling illusion of balls rattling inside a box. Motion is sensed with accelerometers, directly linking the user's movements to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. We apply this interaction style to the display of an SMS inbox, using language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the inbox's contents without compromising privacy or interrupting the user.
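The core mechanism, a tightly closed loop from accelerometer input to impact-triggered audio and vibration, can be sketched as a simple physical simulation: balls bounce inside a virtual box that moves with the device, and each wall impact fires a sonification/vibrotactile event. The Python sketch below is illustrative only; the ball count, box size, restitution, and the Ball/step/on_impact names are assumptions, not the chapter's implementation.

```python
# Minimal sketch of the "contents rattling inside" display: simulated balls
# in a 1-D box, excited by accelerometer readings. Wall impacts trigger
# feedback events whose intensity follows the impact speed.
# (Hypothetical names and parameters; not the chapter's actual code.)

from dataclasses import dataclass

@dataclass
class Ball:
    pos: float      # position inside the box (m)
    vel: float      # velocity (m/s)
    timbre: str     # contact sample chosen by the message classifier

BOX = 0.05          # half-width of the virtual box (m)
RESTITUTION = 0.6   # fraction of velocity retained on impact
DT = 0.01           # simulation step (s), ~one accelerometer sample

def on_impact(ball: Ball, speed: float) -> None:
    # Stand-in for the real display: play `ball.timbre` contact sample and
    # a vibrotactile pulse, both scaled by impact speed.
    print(f"impact: {ball.timbre} sample, intensity={min(speed, 1.0):.2f}")

def step(balls: list[Ball], accel: float) -> None:
    """Advance the simulation by one accelerometer sample.

    `accel` is device acceleration along the box axis; in the device frame
    a pseudo-force -accel acts on the balls, so shaking injects energy and
    the contents rattle in sympathy with the user's motion.
    """
    for b in balls:
        b.vel += -accel * DT
        b.pos += b.vel * DT
        if abs(b.pos) > BOX:                  # wall collision
            b.pos = BOX if b.pos > 0 else -BOX
            speed = abs(b.vel)
            b.vel = -b.vel * RESTITUTION
            on_impact(b, speed)

# One ball per SMS message; timbre is set by the classifier (see below).
inbox = [Ball(0.0, 0.0, "wood"), Ball(0.01, 0.0, "glass")]
for accel in [0.0, 8.0, 8.0, -8.0, -8.0, 0.0] * 20:   # a synthetic shake
    step(inbox, accel)
```

Because feedback is generated only when the user excites the system, the display stays silent at rest, which is what makes it reactive rather than disruptive.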
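The classification step can be sketched in a similarly hedged way. One common realization of per-message language-model classification is to score each message under a character n-gram model trained per category and pick the best-scoring class, which then selects the timbre and dynamics of that message's ball. The categories, training snippets, and Laplace-smoothed bigram model below are illustrative assumptions, not the chapter's actual models or classes.

```python
# Hedged sketch: per-class character bigram language models score each
# message; the winning class selects the ball's timbre/dynamics.
# (Categories and training data are invented for illustration.)

import math
from collections import defaultdict

class BigramModel:
    """Laplace-smoothed character bigram language model."""
    def __init__(self, texts: list[str]):
        self.counts = defaultdict(int)    # bigram counts
        self.context = defaultdict(int)   # preceding-character counts
        self.vocab = set()
        for t in texts:
            t = "^" + t.lower()           # "^" marks message start
            for a, b in zip(t, t[1:]):
                self.counts[(a, b)] += 1
                self.context[a] += 1
                self.vocab.update((a, b))

    def log_prob(self, text: str) -> float:
        t = "^" + text.lower()
        v = len(self.vocab) + 1           # +1 for unseen characters
        return sum(
            math.log((self.counts[(a, b)] + 1) / (self.context[a] + v))
            for a, b in zip(t, t[1:])
        )

# Hypothetical message categories with tiny training sets.
training = {
    "personal": ["see you tonight x", "love you", "call me when you get home"],
    "business": ["meeting moved to 3pm", "please send the report", "invoice attached"],
}
models = {cls: BigramModel(texts) for cls, texts in training.items()}

def classify(message: str) -> str:
    # Length-normalised log-likelihood so short and long messages compare fairly.
    return max(models, key=lambda c: models[c].log_prob(message) / max(len(message), 1))

print(classify("running late, see you at 8"))   # -> e.g. "personal"
```

Mapping each class to a distinct contact sample (e.g. wood vs. glass) is what lets a shake convey a semantic overview of the inbox without displaying any message text.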

Keywords

Vibrotactile · Audio · Language model · Mobile