
Design requirements, student perception indicators and validation metrics for intelligent exploratory learning environments

  • Original Article
  • Personal and Ubiquitous Computing

Abstract

The new forms of interaction afforded by innovative technology and open-ended environments provide promising opportunities for exploratory learning. Exploratory environments, however, require appropriate support if they are to lead to meaningful learning outcomes. This paper focuses on the design and validation of intelligent exploratory environments. Its goal is twofold: to establish requirements that guide the operationalisation of pedagogical strategies into computer-based support, and to provide a methodology for validating the resulting system. As designers, we need to understand what kind of interaction is conducive to learning and aligned with the theoretical principles behind exploratory learning. We summarise this in the form of three requirements: rare interruption of interaction, co-location of feedback, and support towards specific goals. In addition, because intelligent systems require substantial resources and time to build, we define three indicators of students' perception of the intelligent system (helpfulness, repetitiveness and comprehension) and three metrics (relevance, coverage and scope) that allow design or implementation problems to be identified at various phases of development. The paper provides a case study with a mathematical microworld that demonstrates how the three requirements are taken into account in the design of the user-facing components of the system, and outlines the methodology for formative validation of the intelligent support.
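To make the formative-validation idea concrete, the sketch below shows one way the three metrics could be tallied from expert-annotated interaction logs. This is a minimal illustration under assumed definitions: the Situation record, its fields, and the counting logic are invented here for exposition and are not taken from the paper or the MiGen codebase.

    import java.util.List;

    /**
     * Hypothetical annotation of one classroom situation, combining system
     * logs with expert judgement. All names are illustrative.
     */
    record Situation(boolean needsSupport,     // expert: student needed support here
                     boolean inDesignScope,    // situation anticipated by the design
                     boolean systemResponded,  // system generated an intervention
                     boolean judgedRelevant) { // expert: the intervention was appropriate
    }

    public class ValidationMetrics {

        /** Relevance: proportion of generated interventions judged appropriate. */
        static double relevance(List<Situation> log) {
            long responded = log.stream().filter(Situation::systemResponded).count();
            long relevant = log.stream()
                               .filter(s -> s.systemResponded() && s.judgedRelevant())
                               .count();
            return responded == 0 ? 0.0 : (double) relevant / responded;
        }

        /** Coverage: proportion of support-worthy situations that received a response. */
        static double coverage(List<Situation> log) {
            long needed = log.stream().filter(Situation::needsSupport).count();
            long covered = log.stream()
                              .filter(s -> s.needsSupport() && s.systemResponded())
                              .count();
            return needed == 0 ? 0.0 : (double) covered / needed;
        }

        /** Scope: proportion of support-worthy situations the design anticipated. */
        static double scope(List<Situation> log) {
            long needed = log.stream().filter(Situation::needsSupport).count();
            long inScope = log.stream()
                              .filter(s -> s.needsSupport() && s.inDesignScope())
                              .count();
            return needed == 0 ? 0.0 : (double) inScope / needed;
        }

        public static void main(String[] args) {
            List<Situation> log = List.of(
                new Situation(true, true, true, true),    // needed, responded, relevant
                new Situation(true, true, false, false),  // needed but missed
                new Situation(false, true, true, false)); // spurious intervention
            System.out.printf("relevance=%.2f coverage=%.2f scope=%.2f%n",
                              relevance(log), coverage(log), scope(log));
        }
    }

Under such an encoding, low relevance points at the reasoning that triggers interventions, low coverage at gaps in situation detection, and low scope at gaps in the design itself, which is what lets the metrics localise problems to different phases of development.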


Notes

  1. The inspiration for similar messages comes from previous work implementing tutorials for Alice, a 3D programming environment for introductory computing [17]. In the context of the MiGen project, we developed a Java Feedback Toolkit (JFT) that allows the generation of such messages in a general way and can therefore be used in other Java-based environments. See http://www.migen.org for details.
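The JFT's actual API is not described in this note. As a loose sketch of the co-location requirement such a toolkit serves, the fragment below renders a feedback message in a popup anchored beside the interface component it concerns, rather than in a detached console. The class and the showFeedback helper are hypothetical and use only standard Swing calls.

    import javax.swing.*;
    import java.awt.*;

    /** Illustrative only: feedback co-located with the component it refers to. */
    public class CoLocatedFeedbackDemo {

        /** Show a message in a lightweight popup adjacent to the target component. */
        static void showFeedback(JComponent target, String message) {
            JToolTip tip = new JToolTip();
            tip.setTipText(message);
            Point p = target.getLocationOnScreen();
            PopupFactory.getSharedInstance()
                        .getPopup(target, tip, p.x + target.getWidth(), p.y)
                        .show();
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Co-located feedback sketch");
                JButton pattern = new JButton("Pattern component");
                frame.add(pattern);
                frame.setSize(300, 120);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
                showFeedback(pattern, "Does your rule still work if the pattern grows?");
            });
        }
    }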

References

  1. Aleven V (2002) An effective metacognitive strategy: learning by doing and explaining with a computer-based cognitive tutor. Cogn Sci 26(2):147–179

  2. Aleven V, McLaren B, Roll I, Koedinger K (2006) Toward meta-cognitive tutoring: a model of help seeking with a cognitive tutor. Int J Artif Intell Educ 16(2):101–128

  3. Arroyo I, Ferguson K, Johns J, Dragon T, Meheranian H, Fisher D, Barto A, Mahadevan S, Woolf BP (2007) Repairing disengagement with non-invasive interventions. In: Proceedings of the 2007 conference on artificial intelligence in education, IOS Press, Amsterdam, pp 195–202

  4. Baker RS, Corbett AT, Koedinger KR, Wagner AZ (2004) Off-task behavior in the cognitive tutor classroom: when students “game the system”. In: CHI ’04: Proceedings of the SIGCHI conference on human factors in computing systems, ACM, New York, pp 383–390

  5. Bark I, Folstad A, Gulliksen J (2005) Use and usefulness of HCI methods: results from an exploratory study among Nordic HCI practitioners. In: International conference on HCI

  6. Bransford JD, Franks JJ, Vye NJ, Sherwood RD (1989) New approaches to instruction: because wisdom can’t be told. In: Vosniadou S, Ortony A (eds) Similarity and analogical reasoning. Cambridge University Press, New York

  7. Bunt A, Conati C (2003) Probabilistic student modelling to improve exploratory behaviour. User Model User-Adapt Interact 13:269–309

  8. Chevallard Y (1991) La transposition didactique: du savoir savant au savoir enseigné. La Pensée sauvage (1st edn, 1985), Grenoble

  9. Dempsey JV, Wager SU (1988) A taxonomy for the timing of feedback in computer-based instruction. Educ Technol 28(10):20–25

  10. diSessa AA, Cobb P (2004) Ontological innovation and the role of theory in design experiments. J Learn Sci 13(1):77–103

  11. Do-Lenh S, Jermann P, Cuendet S, Zufferey G, Dillenbourg P (2010) Task performance vs. learning outcomes: a study of a tangible user interface in the classroom. In: Wolpers M, Kirschner P, Scheffel M, Lindstaedt S, Dimitrova V (eds) Sustaining TEL: from innovation to learning and practice. Lecture notes in computer science, vol 6383. Springer, Berlin, pp 78–92

  12. Gutierrez-Santos S, Mavrikis M, Magoulas G (2010) Layered development and evaluation for intelligent support in exploratory environments: the case of microworlds. In: Aleven V, Kay J, Mostow J (eds) Intelligent tutoring systems. Lecture notes in computer science, vol 6094. Springer, Berlin, pp 105–114

  13. Healy L, Hoelzl R, Hoyles C, Noss R (1994) Messing up. Micromath 10:14–17

  14. Hoyles C, Sutherland R (1989) Logo mathematics in the classroom. Routledge, London

  15. de Jong T, van Joolingen WR (1998) Scientific discovery learning with computer simulations of conceptual domains. Rev Educ Res 68:179–201

  16. van Joolingen WR, Zacharia ZC (2009) Developments in inquiry learning. In: Balacheff N, Ludvigsen S, de Jong T, Lazonder A, Barnes S (eds) Technology-enhanced learning, chap 2, pp 21–37

  17. Kelleher C, Pausch R (2005) Stencils-based tutorials: design and evaluation. In: CHI ’05: Proceedings of the SIGCHI conference on human factors in computing systems, ACM, New York, pp 541–550

  18. Kirschner P, Sweller J, Clark RE (2006) Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential and inquiry-based teaching. Educ Psychol 41(2):75–86

  19. Kynigos C (1992) Insights into pupils’ and teachers’ activities in pupil-controlled problem-solving situations. In: Information technology and mathematics problem solving: research in contexts of practice. NATO ASI series, vol 2. Springer, Berlin, pp 219–238

  20. Lesh R, Kelly AE (1996) A constructivist model for redesigning AI tutors in mathematics. In: Laborde JM (ed) Intelligent learning environments: the case of geometry. Springer, New York

  21. Mason J (2008) Being mathematical with and in front of learners: attention, awareness, and attitude as sources of differences between teacher educators, teachers and learners. In: Wood (ed) International handbook of mathematics teacher education, vol 4. Sense Publishers, Rotterdam

  22. Mavrikis M (2004) Improving the effectiveness of interactive open learning environments. In: 3rd Hellenic conference on artificial intelligence, companion volume, pp 260–269

  23. Mavrikis M, Gutierrez-Santos S (2010) Not all wizards are from Oz: iterative design of intelligent learning environments by communication capacity tapering. Comput Educ 54(3):641–651

  24. Mavrikis M, Noss R, Geraniou E, Hoyles C (2012) Sowing the seeds of algebraic generalisation: designing epistemic affordances for an intelligent microworld. J Comput Assist Learn (in press)

  25. Mayer RE (2004) Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. Am Psychol 59(1):14–19

  26. Noss R, Hoyles C (1996) Windows on mathematical meanings: learning cultures and computers. Kluwer, Dordrecht

  27. Noss R, Poulovassilis A, Geraniou E, Gutierrez-Santos S, Hoyles C, Kahn K, Magoulas GD, Mavrikis M (2012) The design of a system to support exploratory learning of algebraic generalisation. Comput Educ 59(1):63–81

  28. Paramythis A, Weibelzahl S, Masthoff J (2010) Layered evaluation of interactive adaptive systems: framework and formative methods. User Model User-Adapt Interact 20(5):383–453

  29. Pontual Falcão T, Price S (2010) Interfering and resolving: how tabletop interaction facilitates co-construction of argumentative knowledge. Int J Comput-Support Collab Learn 6(4):1–21

  30. Read J, Macfarlane S (2002) Endurability, engagement and expectations: measuring children’s fun. In: Interaction design and children, vol 2. Shaker Publishing, Aachen, pp 1–23

  31. Read JC (2008) Validating the fun toolkit: an instrument for measuring children’s opinions of technology. Cogn Technol Work 10(2):119–128

  32. Rounding K, Tee K, Wu X, Guo C, Tse E (2011) Evaluating interfaces with children. In: Child computer interaction: 2nd workshop on UI technologies and educational pedagogy, at ACM CHI 2011, conference on human factors in computing systems

  33. Scardamalia M, Bereiter C (1991) Higher levels of agency for children in knowledge building: a challenge for the design of new knowledge media. J Learn Sci 1(1):37–68

  34. Tse E, Schöning J, Huber J, Marentette T, Beckwith R, Rogers Y, Mühlhäuser M (2011) Child computer interaction: 2nd workshop on UI technologies and educational pedagogy, at ACM CHI 2011, conference on human factors in computing systems

  35. Wood H, Wood D (1999) Help seeking, learning and contingent tutoring. Comput Educ 33:153–169

  36. Zayas Pérez B, Cox R (2009) Teaching safety precautions in a laboratory DVE: the effects of information location and interactivity. Computación y Sistemas (special issue on ‘Innovative applications of artificial intelligence’) 13:96–110


Acknowledgments

The authors would like to thank the other members of the MiGen project, which was supported by ESRC/TLRP Grant RES-139-25-0381 (see http://www.migen.org).

Author information

Correspondence to Manolis Mavrikis.

Appendix

See Table 7.


About this article

Cite this article

Mavrikis, M., Gutierrez-Santos, S., Geraniou, E. et al. Design requirements, student perception indicators and validation metrics for intelligent exploratory learning environments. Pers Ubiquit Comput 17, 1605–1620 (2013). https://doi.org/10.1007/s00779-012-0524-3

