Using Annotated Task Models for Accessibility Evaluation

  • Ivo Malý
  • Jiří Bittner
  • Pavel Slavík
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7382)


Evaluating application accessibility is a challenging task that requires intensive testing with potential application users. An alternative to user tests is model-based testing using simulations. Simulations provide important feedback about application accessibility, particularly when it is hard to involve the target users in the tests, which is often the case for users with disabilities. In this paper we propose a methodology for quickly and easily providing the data necessary for such simulations. In particular, we show how to annotate task models using application walkthrough logs, i.e., data obtained by recording application usage. We create annotated task models which, together with user models, are suitable for simulating application usage by virtual users with various disabilities. We present tools for recording and processing application walkthrough logs and tools for interactive task model annotation. Finally, we provide examples of task model annotation on three scenarios involving the Second Life metaverse.


Keywords: Task Models · Accessibility Evaluation · User Centred Design and User Involvement





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Ivo Malý¹
  • Jiří Bittner¹
  • Pavel Slavík¹

  1. Department of Computer Graphics and Interaction, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic
