
Integrating Support for Usability Evaluation into High Level Interaction Descriptions with NiMMiT

  • Karin Coninx
  • Erwin Cuppens
  • Joan De Boeck
  • Chris Raymaekers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4323)

Abstract

Nowadays, the claim that a human-computer interface is user friendly must be supported by a formal usability experiment. Due to their inherent complexity, this is particularly true for multimodal interfaces. For such rich user interfaces there is little support for automated testing and observation, so preparing a formal evaluation typically requires considerable time adapting the application code itself. Based on NiMMiT, a high-level notation for describing and automatically executing multimodal interaction techniques, this paper proposes an easy way for the interaction designer to collect and log data related to a user experiment. Inserting 'probes' and 'filters' into NiMMiT interaction diagrams is considerably more efficient than editing the code of the interaction technique itself. We clarify our approach by showing how it was applied during a concrete user experiment.
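To make the probe/filter idea concrete, the following is a minimal sketch of the concept, not the authors' actual NiMMiT API: a hypothetical Probe attached to a point in an interaction diagram emits timestamped measurement events, and a hypothetical Filter decides which of those events reach the experiment log, so the interaction technique's own code stays untouched. All class and method names here (Probe, Filter, fire, accept) are illustrative assumptions.

```python
# Hypothetical illustration of probes and filters for logging user-experiment data.
# Not the authors' NiMMiT implementation; names and structure are assumptions.
import time


class Probe:
    """Emits a timestamped event whenever the diagram point it observes fires."""

    def __init__(self, name, sink):
        self.name = name    # e.g. the NiMMiT state or task being observed
        self.sink = sink    # a filter (or chain of filters) receiving the events

    def fire(self, **data):
        self.sink.accept({"probe": self.name, "time": time.time(), **data})


class Filter:
    """Forwards only the events selected by a predicate to the logger."""

    def __init__(self, predicate, logger):
        self.predicate = predicate
        self.logger = logger

    def accept(self, event):
        if self.predicate(event):
            self.logger(event)


# Example: log only selection events produced by the dominant hand.
log = Filter(lambda e: e.get("hand") == "dominant", print)
selection_probe = Probe("object_selected", log)
selection_probe.fire(hand="dominant", target="sphere_3")
```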

Keywords

User experiment, usability evaluation, output port, dominant hand, interaction technique



Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Karin Coninx¹
  • Erwin Cuppens¹
  • Joan De Boeck¹
  • Chris Raymaekers¹

  1. Hasselt University, Expertise Centre for Digital Media (EDM), and transnationale Universiteit Limburg, Wetenschapspark 2, B-3590 Diepenbeek, Belgium
