
UXAmI Observer: An Automated User Experience Evaluation Tool for Ambient Intelligence Environments

  • Stavroula Ntoa
  • George Margetis
  • Margherita Antona
  • Constantine Stephanidis
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 868)

Abstract

Ambient Intelligence constitutes a new human-centered technological paradigm, in which environments anticipate and satisfy the needs of their inhabitants. In this context, evaluation becomes of paramount importance. This paper presents UXAmI Observer, an automated user experience evaluation tool for Ambient Intelligence environments that takes advantage of their inherent infrastructure to automatically acquire measurements during user testing experiments. The tool provides powerful data visualizations for the experiment as a whole, for each evaluated system and application, and for each individual participant; it also synchronizes the collected data with video recordings and supports manual data input by the evaluators themselves.
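
The abstract summarizes the tool at a high level only; as a rough illustration of the kind of automated measurement acquisition it describes, the minimal Python sketch below logs timestamped interaction events from instrumented AmI applications and derives two common user testing metrics, task completion time and task success rate. All names in it (InteractionEvent, ExperimentLog, task_times, success_rate) are hypothetical and are not part of UXAmI Observer's actual API.

```python
from dataclasses import dataclass, field
from time import time


@dataclass
class InteractionEvent:
    """One timestamped event captured from an instrumented AmI application."""
    participant: str
    application: str
    task: str
    kind: str  # e.g. "task_start", "task_end", "error"
    timestamp: float = field(default_factory=time)


class ExperimentLog:
    """Collects events and derives simple per-task user testing metrics."""

    def __init__(self) -> None:
        self.events: list[InteractionEvent] = []

    def record(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def task_times(self, task: str) -> list[float]:
        """Seconds elapsed between task_start and task_end, per participant."""
        starts: dict[str, float] = {}
        ends: dict[str, float] = {}
        for e in self.events:
            if e.task != task:
                continue
            if e.kind == "task_start":
                starts[e.participant] = e.timestamp
            elif e.kind == "task_end":
                ends[e.participant] = e.timestamp
        return [ends[p] - starts[p] for p in starts if p in ends]

    def success_rate(self, task: str, participants: int) -> float:
        """Fraction of participants who completed the task."""
        completed = {e.participant for e in self.events
                     if e.task == task and e.kind == "task_end"}
        return len(completed) / participants


# Minimal usage: one participant completes a task, another does not.
log = ExperimentLog()
log.record(InteractionEvent("P1", "smart_tv", "find_movie", "task_start", 0.0))
log.record(InteractionEvent("P1", "smart_tv", "find_movie", "task_end", 42.5))
log.record(InteractionEvent("P2", "smart_tv", "find_movie", "task_start", 0.0))
print(log.task_times("find_movie"))                    # [42.5]
print(log.success_rate("find_movie", participants=2))  # 0.5
```

In a real AmI deployment the events would arrive from the environment's own infrastructure (sensors, middleware, application hooks) rather than from manual record() calls, which is precisely the instrumentation burden such a tool aims to lift from evaluators.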

Keywords

User experience evaluation · Automated evaluation tool · Ambient intelligence · User testing

Acknowledgment

This work is supported by the FORTH-ICS internal RTD Programme “Ambient Intelligence Environments”.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Stavroula Ntoa (1)
  • George Margetis (1)
  • Margherita Antona (1)
  • Constantine Stephanidis (1, 2)

  1. Institute of Computer Science (ICS), Foundation for Research and Technology Hellas (FORTH), Heraklion, Greece
  2. Computer Science Department, University of Crete, Heraklion, Greece
