Personal and Ubiquitous Computing, Volume 17, Issue 8, pp 1605–1620

Design requirements, student perception indicators and validation metrics for intelligent exploratory learning environments

  • Manolis Mavrikis
  • Sergio Gutierrez-Santos
  • Eirini Geraniou
  • Richard Noss
Original Article

Abstract

The new forms of interaction afforded by innovative technology and open-ended environments provide promising opportunities for exploratory learning. Exploratory environments, however, require appropriate support to lead to meaningful learning outcomes. This paper focuses on the design and validation of intelligent exploratory environments. Its goal is twofold: to establish requirements that guide the operationalisation of pedagogical strategies into computer-based support, and to provide a methodology for validating the resulting system. As designers, we need to understand what kind of interaction is conducive to learning and aligned with the theoretical principles behind exploratory learning. We summarise this in the form of three requirements: rare interruption of interaction, co-location of feedback, and support towards specific goals. In addition, intelligent systems require considerable resources and time to develop. To facilitate their evaluation, we define three indicators of students' perception of the intelligent system (helpfulness, repetitiveness and comprehension) and three metrics (relevance, coverage and scope) that allow the identification of design or implementation problems at various phases of development. The paper provides a case study with a mathematical microworld that demonstrates how the three requirements are taken into account in the design of the user-facing components of the system and outlines the methodology for formative validation of the intelligent support.

Keywords

Intelligent microworlds · Feedback interruption · Co-located feedback · Validation metrics · Child interaction · Exploratory learning

Acknowledgments

The authors would like to acknowledge the other members of the MiGen project, which was supported by ESRC/TLRP Grant RES-139-25-0381 (see http://www.migen.org).

Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  • Manolis Mavrikis (1)
  • Sergio Gutierrez-Santos (2)
  • Eirini Geraniou (1)
  • Richard Noss (1)

  1. London Knowledge Lab, Institute of Education, University of London, London, UK
  2. London Knowledge Lab, Computer Science and Information Systems, Birkbeck, University of London, London, UK