Journal on Multimodal User Interfaces, Volume 1, Issue 1, pp 1–5

How really effective are multimodal hints in enhancing visual target spotting? Some evidence from a usability study

  • Suzanne Kieffer
  • Noëlle Carbonell


The main aim of the work presented here is to contribute to computer science advances in the multimodal usability area, inasmuch as it addresses one of the major issues in the generation of effective oral system messages: how to design messages that effectively help users locate specific graphical objects in information visualisations. An experimental study was carried out to determine whether oral messages conveying coarse information on the locations of graphical objects on the current display facilitate target detection sufficiently to make it worthwhile to integrate such messages into GUIs. The spatial layout of the display was varied in order to test the influence of the visual presentation structure on the contribution of these messages to facilitating visual search on crowded displays. Finally, three levels of task difficulty were defined, based mainly on the visual complexity of the target and the number of distractors in the scene. The findings suggest that spatial information messages significantly improve participants’ visual search performance; that they are better suited to radial structures than to matrix, random and elliptic structures; and that they are particularly useful for performing difficult visual search tasks.
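The kind of coarse spatial hint studied here can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' implementation: it divides the display into a 3×3 grid of named regions and renders a target's coordinates as a short spoken-style message of the sort the study evaluated.

```python
def coarse_location_hint(x, y, width, height):
    """Map a target's pixel coordinates to a coarse verbal hint
    by dividing the display into a 3x3 grid of named regions.

    This is an illustrative sketch; the actual message wording and
    screen partitioning used in the study may differ.
    """
    cols = ["left", "centre", "right"]
    rows = ["top", "middle", "bottom"]
    # Clamp to index 2 so x == width (or y == height) stays in range.
    col = cols[min(int(3 * x / width), 2)]
    row = rows[min(int(3 * y / height), 2)]
    if row == "middle" and col == "centre":
        region = "the centre of the screen"
    elif row == "middle":
        region = f"the middle {col} of the screen"
    elif col == "centre":
        region = f"the {row} centre of the screen"
    else:
        region = f"the {row} {col} of the screen"
    return f"The target is in {region}."

print(coarse_location_hint(100, 80, 1024, 768))
# -> The target is in the top left of the screen.
```

A message of this kind could then be passed to a text-to-speech engine; the point of the study is whether such coarse verbal cues measurably shorten visual search on crowded displays.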


Keywords: Visual search · Multimodal system messages · Speech and graphics · Usability study · Experimental evaluation · Visual target spotting




Copyright information

© OpenInterface Association 2007

Authors and Affiliations

  • Suzanne Kieffer (1)
  • Noëlle Carbonell (1)

  1. LORIA, Campus Scientifique, Vandoeuvre-lès-Nancy, France
