Does Time Matter in Learning? A Computer Simulation of Carroll’s Model of Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12214)


Abstract

This paper is an exploratory theoretical study of the role of time in learning. We present a computer simulation based on Carroll’s model of school learning. Our aim is to probe key theoretical questions in educational research: Can all students learn well, and if so, under what conditions? What role does time play in learning achievement? How does time interact with other instructional variables such as student aptitude, student perseverance, and quality of instruction? We regard learning as a causal system in which a small set of variables predicts and explains different levels of achievement. While the simulation is not a causal analysis in the strict sense, it lays some of the groundwork for a fuller causal approach. Our main result confirms the Carroll-Bloom hypothesis that time, as opportunity to learn, is a central variable in learning achievement and key to closing the achievement gap. We also demonstrate that time, as learner perseverance, accelerates achievement, especially for less prepared students. Perseverance becomes effective, however, only when the instructional environment surpasses a basic quality threshold. We conclude by considering some implications for designing alternative learning environments, particularly adaptive instructional systems.


Keywords: Time on task · Adaptive instructional systems · Mastery learning · Computer simulation
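The relationships the abstract describes follow Carroll’s original formulation: degree of learning is a function of the ratio of time actually spent to time needed, where time spent is bounded by opportunity to learn and by perseverance, and time needed depends on aptitude, quality of instruction, and the learner’s ability to understand instruction. The sketch below illustrates this structure in Python; the specific functional form for time needed (dividing by the two quality factors) is our illustrative assumption, not the paper’s calibration.

```python
def time_needed(aptitude, instruction_quality, ability_to_understand):
    """Time a learner needs to master a unit (illustrative functional form).

    Carroll treats aptitude as the time needed under ideal conditions;
    poor instruction, or poor ability to follow it, inflates that time.
    Both quality factors are scaled to (0, 1], where 1.0 is ideal.
    """
    return aptitude / (instruction_quality * ability_to_understand)


def degree_of_learning(opportunity, perseverance, needed):
    """Carroll's core ratio: time actually spent / time needed.

    Time spent is capped by the opportunity to learn (time allowed),
    by perseverance (time the learner is willing to spend), and by the
    time needed itself (time beyond mastery adds nothing).
    """
    time_spent = min(opportunity, perseverance, needed)
    return time_spent / needed


# Under ideal instruction, a learner with ample time reaches mastery:
needed = time_needed(aptitude=5.0, instruction_quality=1.0, ability_to_understand=1.0)
print(degree_of_learning(opportunity=10.0, perseverance=10.0, needed=needed))  # 1.0

# Halving instruction quality doubles the time needed (5 -> 10 units);
# with only 8 units of opportunity, the learner falls short of mastery:
needed = time_needed(aptitude=5.0, instruction_quality=0.5, ability_to_understand=1.0)
print(degree_of_learning(opportunity=8.0, perseverance=12.0, needed=needed))  # 0.8
```

The second example mirrors the paper’s quality-threshold finding: perseverance alone cannot compensate once poor instruction pushes time needed past the time available.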


References

  1. Carroll, J.B.: A model of school learning. Teach. Coll. Rec. 64, 723–733 (1963)
  2. Guskey, T.R.: Closing achievement gaps: revisiting Benjamin S. Bloom’s ‘Learning for Mastery’. J. Adv. Acad. 19(1), 8–31 (2007)
  3. Bloom, B.S.: Time and learning. Am. Psychol. 29(9), 682 (1974)
  4. Bloom, B.S.: Learning for mastery. Instruction and curriculum. Regional Education Laboratory for the Carolinas and Virginia, Topical Papers and Reprints, Number 1. Eval. Comment 1(2) (1968)
  5. Durán, J.M.: Computer Simulations in Science and Engineering: Concepts–Practices–Perspectives. TFC. Springer, Cham (2018)
  6. Carroll, J.B.: Computer applications in the investigation of models in educational research. In: Proceedings of a Harvard Symposium on Digital Computers and Their Applications, 3–6 April 1961 (1962)
  7. Plant, E.A., Ericsson, K.A., Hill, L., Asberg, K.: Why study time does not predict grade point average across college students: implications of deliberate practice for academic performance. Contemp. Educ. Psychol. 30(1), 96–116 (2005)
  8. Beer, J., Beer, J.: Classroom and home study times and grades while at college using a single-subject design. Psychol. Rep. 71(1), 233–234 (1992)
  9. Gortner Lahmers, A., Zulauf, C.R.: Factors associated with academic time use and academic performance of college students: a recursive approach. J. Coll. Stud. Dev. (2000)
  10. Masui, C., Broeckmans, J., Doumen, S., Groenen, A., Molenberghs, G.: Do diligent students perform better? Complex relations between student and course characteristics, study time, and academic performance in higher education. Stud. High. Educ. 39(4), 621–643 (2014)
  11. Doumen, S., Broeckmans, J., Masui, C.: The role of self-study time in freshmen’s achievement. Educ. Psychol. 34(3), 385–402 (2014)
  12. Schuman, H., Walsh, E., Olson, C., Etheridge, B.: Effort and reward: the assumption that college grades are affected by quantity of study. Soc. Forces 63(4), 945–966 (1985)
  13. Romero, M., Barbera, E.: Quality of e-learners’ time and learning performance beyond quantitative time-on-task. Int. Rev. Res. Open Distrib. Learn. 12(5), 125–137 (2011)
  14. Wagner, P., Schober, B., Spiel, C.: Time students spend working at home for school. Learn. Instr. 18(4), 309–320 (2008)
  15. Baker, R.S.J.: Modeling and understanding students’ off-task behavior in intelligent tutoring systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1059–1068 (2007)
  16. Rushkin, I., Chuang, I., Tingley, D.: Modelling and using response times in MOOCs (2017)
  17. Essa, A., Agnihotri, L.: Measuring procrastination and associated probabilities for student success. Unpublished manuscript (2018)
  18. Caple, C.: The Effects of Spaced Practice and Spaced Review on Recall and Retention Using Computer Assisted Instruction (1996)
  19. Rohrer, D., Pashler, H.: Increasing retention without increasing study time. Curr. Dir. Psychol. Sci. 16(4), 183–186 (2007)
  20. Kovanović, V., Gašević, D., Dawson, S., Joksimović, S., Baker, R.S., Hatala, M.: Penetrating the black box of time-on-task estimation. In: Proceedings of the Fifth International Conference on Learning Analytics and Knowledge, pp. 184–193 (2015)
  21. Karweit, N., Slavin, R.E.: Time-on-task: issues of timing, sampling, and definition. J. Educ. Psychol. 74(6), 844 (1982)
  22. Bloom, B.S.: The 2 sigma problem: the search for methods of group instruction as effective as one-to-one tutoring. Educ. Res. 13(6), 4–16 (1984)
  23. Slavin, R.E.: Mastery learning reconsidered. Rev. Educ. Res. 57(2), 175–213 (1987)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Simon Institute, Carnegie Mellon University, Pittsburgh, USA
  2. Apple Inc., Cupertino, USA