Wheel-Spinning: Students Who Fail to Master a Skill

  • Joseph E. Beck
  • Yue Gong
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7926)

Abstract

The concept of mastery learning is powerful: rather than a fixed number of practices, students continue to practice a skill until they have mastered it. However, an implicit assumption in this formulation is that students are capable of mastering the skill. Such an assumption is crucial in computer tutors, as their repertoire of teaching actions may not be as effective as commonly believed. What if a student lacks sufficient knowledge to solve problems involving the skill, and the computer tutor is not capable of providing sufficient instruction? This paper introduces the concept of “wheel-spinning;” that is, students who do not succeed in mastering a skill in a timely manner. We show that if a student does not master a skill in ASSISTments or the Cognitive Tutor quickly, the student is likely to struggle and will probably never master the skill. We discuss connections between such lack of learning and negative student behaviors such as gaming and disengagement, and discuss alterations to ITS design to overcome this issue.
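
To make the mastery-learning setup concrete, the sketch below (Python, not from the paper) shows a practice loop that stops once an assumed Bayesian Knowledge Tracing estimate crosses a mastery threshold, and flags the student as wheel-spinning if mastery is not reached within an illustrative cap on practice opportunities. The parameter values, the 0.95 threshold, and the 10-opportunity cap are assumptions for illustration only, not the paper's operational definition.

    # Minimal sketch of mastery learning with a wheel-spinning cutoff.
    # Assumptions (not from the paper): a BKT mastery criterion of
    # P(known) >= 0.95 and an illustrative cap of 10 practice opportunities.

    def bkt_update(p_known, correct, p_learn=0.1, p_slip=0.1, p_guess=0.2):
        """One BKT update of P(known) after observing a response."""
        if correct:
            evidence = p_known * (1 - p_slip)
            total = evidence + (1 - p_known) * p_guess
        else:
            evidence = p_known * p_slip
            total = evidence + (1 - p_known) * (1 - p_guess)
        posterior = evidence / total
        return posterior + (1 - posterior) * p_learn

    def practice_until_mastery(responses, threshold=0.95, max_opportunities=10):
        """Return ('mastered', n) or ('wheel-spinning', n) for a response sequence."""
        p_known = 0.3  # assumed prior probability of knowing the skill
        for n, correct in enumerate(responses[:max_opportunities], start=1):
            p_known = bkt_update(p_known, correct)
            if p_known >= threshold:
                return "mastered", n
        return "wheel-spinning", min(len(responses), max_opportunities)

    print(practice_until_mastery([True, True, True, True]))   # reaches mastery quickly
    print(practice_until_mastery([False, True, False] * 4))   # hits the cap: wheel-spinning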

Keywords

mastery learning · student modeling · wheel-spinning

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Joseph E. Beck, Worcester Polytechnic Institute, USA
  • Yue Gong, Worcester Polytechnic Institute, USA
