Fine-Grained Cognitive Assessment Based on Free-Form Input for Math Story Problems

  • Bastiaan Heeren
  • Johan Jeuring
  • Sergey Sosnovsky
  • Paul Drijvers
  • Peter Boon
  • Sietske Tacoma
  • Jesse Koops
  • Armin Weinberger
  • Brigitte Grugeon-Allys
  • Françoise Chenevotot-Quentin
  • Jorn van Wijk
  • Ferdinand van Walree
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11082)

Abstract

We describe an approach to using ICT to assess the mathematics achievement of pupils who work in learning environments for mathematics. In particular, we look at fine-grained cognitive assessment of free-form answers to math story problems, which requires determining the steps a pupil takes towards a solution, together with the high-level solution approach the pupil uses. We recognise steps and solution approaches in free-form answers and use this information to update a user model of mathematical competencies. We use this user model to determine for which competencies more evidence of mastery is needed, and to select the next problem to offer to a pupil. We describe the results of our fine-grained cognitive assessment on a large dataset for one problem, and report the results of two pilot studies in different European countries.
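The assessment loop the abstract describes — recognise steps in a free-form answer, update a competency model, then select the next problem where more evidence of mastery is needed — can be sketched roughly as follows. All names and the simple evidence-counting model are illustrative assumptions for exposition, not the Advise-Me project's actual implementation.

```python
# Illustrative sketch (assumed names, not the Advise-Me implementation):
# each recognised step in a pupil's free-form answer gives evidence for
# one competency; the next problem targets the competency for which the
# model has the least evidence so far.

def update_model(model, recognised_steps):
    """Record evidence for each competency demonstrated by a recognised step.

    model maps competency -> (observations, correct observations).
    """
    for step in recognised_steps:
        comp = step["competency"]
        total, mastered = model.get(comp, (0, 0))
        model[comp] = (total + 1, mastered + (1 if step["correct"] else 0))
    return model

def next_problem(model, problems):
    """Pick the problem covering the competency with the least evidence."""
    def least_evidence(problem):
        return min(model.get(c, (0, 0))[0] for c in problem["competencies"])
    return min(problems, key=least_evidence)

# One assessed answer yields two recognised steps:
model = {}
update_model(model, [
    {"competency": "translate-to-equation", "correct": True},
    {"competency": "solve-linear-equation", "correct": False},
])

# The unexplored competency drives the choice of the next problem:
problems = [
    {"id": "p1", "competencies": ["translate-to-equation"]},
    {"id": "p2", "competencies": ["proportional-reasoning"]},
]
print(next_problem(model, problems)["id"])  # "p2": no evidence yet
```

A production system would of course replace the evidence counter with a probabilistic learner model (e.g. a Bayesian network over competencies), but the selection principle — prefer problems that yield evidence where the model is most uncertain — is the same.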

Keywords

Math story problems · Step-based assessment · Free-form input · Solution strategies · User modelling

Notes

Acknowledgements

The Advise-Me project has received funding from the European Union’s ERASMUS+ Programme, Strategic Partnerships for school education for the development of innovation, under grant agreement number 2016-1-NL01-KA201-023022. For more information, visit http://advise-me.ou.nl.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Bastiaan Heeren (1)
  • Johan Jeuring (1, 2)
  • Sergey Sosnovsky (2)
  • Paul Drijvers (3)
  • Peter Boon (3)
  • Sietske Tacoma (3)
  • Jesse Koops (4)
  • Armin Weinberger (5)
  • Brigitte Grugeon-Allys (6)
  • Françoise Chenevotot-Quentin (6)
  • Jorn van Wijk (1)
  • Ferdinand van Walree (1)
  1. Faculty of Management, Science and Technology, Open University of the Netherlands, Heerlen, The Netherlands
  2. Department of Information and Computing Sciences, Universiteit Utrecht, Utrecht, The Netherlands
  3. Freudenthal Institute, Utrecht University, Utrecht, The Netherlands
  4. Cito, Arnhem, The Netherlands
  5. Department of Educational Technology, Saarland University, Saarbrücken, Germany
  6. Laboratoire de Didactique André Revuz, Université Paris Est Créteil, Paris, France