Enriching Task Models with Usability and User Experience Evaluation Data

  • Regina Bernhaupt
  • Philippe Palanque
  • Dimitri Drouet
  • Celia Martinie
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11262)

Abstract

Evaluation results focusing on usability and user experience are often difficult to take into account during an iterative design process. This is because evaluation exploits concrete artefacts (a prototype or system), while design and development are based on more abstract descriptions such as task models or software models. As the concrete data cannot be represented at this abstract level, evaluation results are simply discarded. This paper addresses the discrepancy between the abstract view of task models and the concrete data produced in evaluations, first by describing the requirements for a task modelling notation: (a) representation of data for each individual participant, (b) representation of aggregated data for one evaluation as well as (c) for several evaluations, and (d) visualization of the multi-dimensional data gathered at runtime from both the evaluation and the interactive system. Second, it shows how these requirements were integrated into a task modelling tool. Possible usages of the tool are demonstrated using an example from an experimental evaluation.
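To make requirements (a)-(c) concrete, the following is a minimal Python sketch of how per-participant evaluation data could be attached to task-model nodes and aggregated within one evaluation and across several. It is an illustration only, not the notation or tool described in the paper; all names (TaskNode, Evaluation, Measurement) are hypothetical.

```python
# Hypothetical sketch: enriching task-model nodes with evaluation data.
# Not the paper's tool; names and structure are assumptions.

from dataclasses import dataclass, field
from statistics import mean
from typing import List


@dataclass
class Measurement:
    """Requirement (a): one data point for one individual participant."""
    participant_id: str
    metric: str          # e.g. "execution_time_s", "SUS", "AttrakDiff"
    value: float


@dataclass
class Evaluation:
    """All measurements collected in a single evaluation session."""
    name: str
    measurements: List[Measurement] = field(default_factory=list)

    def aggregate(self, metric: str) -> float:
        """Requirement (b): aggregated data for one evaluation."""
        return mean(m.value for m in self.measurements if m.metric == metric)


@dataclass
class TaskNode:
    """A task-model node carrying the evaluations performed on it."""
    name: str
    children: List["TaskNode"] = field(default_factory=list)
    evaluations: List[Evaluation] = field(default_factory=list)

    def aggregate_across(self, metric: str) -> float:
        """Requirement (c): aggregated data across several evaluations."""
        return mean(e.aggregate(metric) for e in self.evaluations)


# Usage: attach two evaluations to the task "select channel" and compare.
task = TaskNode("select channel")
for name, times in [("eval-1", [4.2, 5.1]), ("eval-2", [3.0, 3.4])]:
    ev = Evaluation(name)
    ev.measurements = [
        Measurement(f"P{i}", "execution_time_s", t) for i, t in enumerate(times)
    ]
    task.evaluations.append(ev)

print(task.aggregate_across("execution_time_s"))  # mean of per-evaluation means
```

Keeping the evaluation data on the task-model node itself, rather than in a separate report, is what allows results from several iterations to be compared at the level of abstraction used during design.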

Keywords

Task models · User study · Usability · User experience · Evaluation · Formal description

Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  • Regina Bernhaupt (1, 3)
  • Philippe Palanque (1, 2)
  • Dimitri Drouet (3)
  • Celia Martinie (2)
  1. Department of Industrial Design, Eindhoven University of Technology, Eindhoven, The Netherlands
  2. IRIT, ICS, Toulouse, France
  3. ruwido, Neumarkt, Austria
