A Structured Expert Evaluation Method for the Evaluation of Children’s Computer Games

  • Ester Baauw
  • Mathilde M. Bekker
  • Wolmet Barendregt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3585)

Abstract

Inspection-based evaluation methods predict usability problems and can therefore be applied to evaluate products without involving users. This paper describes a new method, SEEM, inspired by Norman’s theory-of-action model [18] and Malone’s concepts of fun [15], for predicting usability and fun problems in children’s computer games, together with a study to assess SEEM’s quality. The experts in the study predicted about 76% of the problems found in a user test, which makes the validity of SEEM quite promising. Furthermore, the participating experts were able to apply the inspection questions in an appropriate manner. Based on this first study, ideas for improving the method are presented.
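For readers unfamiliar with how such percentages are usually reported in assessments of usability evaluation methods, the sketch below restates the 76% figure as the standard thoroughness measure discussed in the cited UEM-assessment literature (e.g. Hartson et al. [12], Sears [20]). Whether SEEM’s figure was computed in exactly this way is an assumption based on that literature, not a statement from the abstract itself.

\[
\text{thoroughness} \;=\; \frac{\lvert P_{\text{predicted}} \cap P_{\text{user test}} \rvert}{\lvert P_{\text{user test}} \rvert} \;\approx\; 0.76,
\]

where \(P_{\text{predicted}}\) is the set of problems the experts predicted with SEEM and \(P_{\text{user test}}\) is the set of problems observed in the user test.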

References

  1. Milo and the Magical Stones (Max en de toverstenen). MediaMix Benelux (2002)
  2. Robbie Rabbit, Group 3: Fun in the Clouds (Robbie Konijn, Groep 3: Pret in de Wolken). Mindscape (2003)
  3. Barendregt, W., Bekker, M.M.: Towards a Framework for Design Guidelines for Young Children’s Computer Games. In: Rauterberg, M. (ed.) ICEC 2004. LNCS, vol. 3166, pp. 365–376. Springer, Heidelberg (2004)
  4. Barendregt, W., Bekker, M.M., Bouwhuis, D., Baauw, E.: Predicting effectiveness of children participants in user testing based on personality characteristics. Submitted to Behaviour & Information Technology (unpublished manuscript)
  5. Chattratichart, J., Brodie, J.: Applying User Testing Data to UEM Performance Metrics. In: CHI 2004 Extended Abstracts (Late Breaking Results), Vienna, Austria, pp. 1119–1122 (2004)
  6. Cockton, G., Lavery, D., Woolrych, A.: Inspection-based evaluations. In: Jacko, J., Sears, A. (eds.) The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. Lawrence Erlbaum Associates, Mahwah (2003)
  7. Cockton, G., Woolrych, A., Hall, L., Hindmarch, M.: Changing Analysts’ Tunes: The Surprising Impact of a New Instrument for Usability Inspection Method Assessment. In: Palanque, P., Johnson, P., O’Neill, E. (eds.) People and Computers: Designing for Society (Proceedings of HCI 2003), pp. 145–162. Springer, Heidelberg (2003)
  8. Cockton, G., Woolrych, A.: Understanding Inspection Methods: Lessons from an Assessment of Heuristic Evaluation. In: Blandford, A., Vanderdonckt, J., Gray, P.D. (eds.) People and Computers XV: Interaction without Frontiers, pp. 171–192. Springer, Heidelberg (2001)
  9. Desurvire, H., Caplan, M., Toth, J.A.: Using heuristics to evaluate the playability of games. In: CHI 2004 Extended Abstracts, Vienna, Austria, pp. 1509–1512 (2004)
  10. Federoff, M.A.: Heuristics and usability guidelines for the creation and evaluation of fun in video games. MSc thesis, Department of Telecommunications, Indiana University (2002)
  11. Gray, W.D., Salzman, M.C.: Damaged merchandise? A review of experiments that compare usability evaluation methods. Human-Computer Interaction 13(3), 203–261 (1998)
  12. Hartson, H.R., Andre, T.S., Williges, R.C.: Criteria for evaluating usability evaluation methods. International Journal of Human-Computer Interaction (Special Issue on Empirical Evaluation of Information Visualisations) 13(4), 373–410 (2001)
  13. Kanis, H., Arisz, H.J.: How many participants: A simple means for concurrent monitoring. In: Proceedings of the IEA 2000/HFES 2000 Congress, pp. 637–640 (2000)
  14. Lavery, D., Cockton, G., Atkinson, M.P.: Comparison of Evaluation Methods Using Structured Usability Problem Reports. Behaviour and Information Technology 16(4), 246–266 (1997)
  15. Malone, T.W.: What makes things fun to learn? A study of intrinsically motivating computer games. Technical Report CIS-7, Xerox PARC, Palo Alto (1980)
  16. van Nes, F.: On the validity of design guidelines and the role of standardisation. In: Nicolle, C., Abascal, J. (eds.) Inclusive Design Guidelines for HCI, pp. 61–70. Taylor & Francis, London (2001)
  17. Nielsen, J., Mack, R.L. (eds.): Usability Inspection Methods. John Wiley & Sons, New York (1994)
  18. Norman, D.A.: The Design of Everyday Things. MIT Press, London (1998)
  19. Pagulayan, R.J., Keeker, K., Wixon, D., Romero, R., Fuller, T.: User-centered design in games. In: Jacko, J., Sears, A. (eds.) The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, pp. 883–906. Lawrence Erlbaum Associates, Mahwah (2003)
  20. Sears, A.: Heuristic Walkthroughs: Finding the problems without the noise. International Journal of Human-Computer Interaction 9(3), 213–234 (1997)
  21. Zapf, D., Maier, G.W., Irmer, C.: Error Detection, Task Characteristics, and Some Consequences for Software Design. Applied Psychology: An International Review 43, 499–520 (1994)

Copyright information

© IFIP International Federation for Information Processing 2005

Authors and Affiliations

  • Ester Baauw (1)
  • Mathilde M. Bekker (1)
  • Wolmet Barendregt (1)
  1. Department of Industrial Design, TU Eindhoven, Eindhoven, The Netherlands