Time management skills are an essential component of college student success, especially in online classes. Through a randomized controlled trial of students in a for-credit online course at a public 4-year university, we test the efficacy of a scheduling intervention aimed at improving students’ time management. Results indicate the intervention had positive effects on initial achievement scores; students who were given the opportunity to schedule their lecture watching in advance scored about a third of a standard deviation better on the first quiz than students who were not given that opportunity. These effects are concentrated in students with the lowest self-reported time management skills. However, these effects diminish over time such that we see a marginally significant negative effect of treatment on the last week’s quiz grade and no difference in overall course scores. We examine the effect of the intervention on plausible mechanisms to explain the observed achievement effects. We find no evidence that the intervention affected cramming, procrastination, or the time at which students did work.
Due to an administrative error, two students who were assigned to the control condition were given the treatment scheduling surveys. We consider this a compliance issue and leave them in our ITT estimates.
To further alleviate concerns related to this sample reduction, we present results from ITT and TOT models without covariates using the fully randomized sample of 157 students in the first two columns of Appendix Table 7. Reassuringly, results are consistent with the reported results from the pre-course survey sample of 145 students.
While most of the students enrolled in the course were degree-seeking students at the university, some (fewer than ten) students were enrolled at another university. We do not have demographic information for these students, but we do have full data (including survey responses) for them, and they are included in our analyses.
Much of the data presented in this table come from institutional data provided to us by the school’s institutional research office. Sample sizes vary based on which data were available for which students. The second set of variables was asked on the pre-course survey for this specific class and is available for all students in our analytic sample.
We note that we could appropriately call our control group a placebo group, although we prefer the former term.
As noted earlier, we did not include student demographic characteristics in our analyses because we were missing these measures for about one quarter of our sample. Appendix Table 7 Columns 7–10 include estimates of the treatment effect from this reduced sample with and without controls. Those estimates are a bit larger, but remain qualitatively similar (same sign and significance).
In Appendix Table 7, we present results from all students who were randomized into the treatment and control groups, and results from our pre-course survey sample but without the 13 controls. Results from all models are similar in magnitude and significance to the main results we present here.
Half a percentage point on the course grade is the same amount of extra credit that we offered to students for answering the weekly surveys (both treatment and control students). While many things, including wanting to respond to the professor, liking filling out surveys, etc., could induce students to fill out the survey, the very high take-up rates indicate that an increase of 0.5 points is likely meaningful to students.
We describe how we define and operationalize procrastination and cramming in depth in Appendix 2.
There are a number of factors to take into consideration when creating these statistics. Although many students clicked play on the same video multiple times (the average is 1.29 times across all videos), we only used students’ first video-watching record in our analyses. Videos that were watched for the first time after the deadline were still included in the cramming and procrastination calculations; no more than 2% of video watches each week were after the deadline. Some students also do not have a recorded watching time for every video. This could be because they watched it on a friend’s computer or accessed it some other way. All students in the class received credit for watching all of the lectures (meaning they completed the accompanying quiz), so we assume these missing video-watching times are noise. For the purposes of calculating procrastination and cramming, we define a video as skipped if a student has no record of clicking on the video on the course management platform. Across all students and weeks, about 9% of videos were skipped. For students who skipped at least one video in a week, the cramming and procrastination variables reflect the standard deviation and mean of video-watching times for fewer than five videos. Students who watched one or no videos in a given week do not have cramming or procrastination scores for that week. Students only have overall course cramming and procrastination scores if they have corresponding scores for each of the five weeks. We acknowledge the possibility of measurement error in these video-watching measures, but we believe such error is likely small given the high percentage of students whom we observe watching each lecture.
Allen, E., Seaman, J., Poulin, R., & Straut, T. (2016). Online report card: Tracking online education in the United States. Babson Survey Research Group and Quahog Research Group, LLC.
Angelino, L., Williams, F., & Natvig, D. (2007). Strategies to engage online students and reduce attrition rates. The Journal of Educators Online, 4, 1–14.
Ariely, D., & Wertenbroch, K. (2002). Procrastination, deadlines, and performance: Self-control by precommitment. Psychological Science, 13, 219–224.
Arum, R., & Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago, IL: University of Chicago Press.
Ashraf, N., Karlan, D., & Yin, W. (2006). Tying Odysseus to the mast: Evidence from a commitment savings product in the Philippines. Quarterly Journal of Economics, 121, 635–672.
Babcock, P., & Marks, M. (2011). The falling time cost of college: Evidence from half a century of time use data. The Review of Economics and Statistics, 93, 468–478.
Baker, R., Evans, B., & Dee, T. (2016). A randomized experiment testing the efficacy of a scheduling nudge in a Massive Open Online Course (MOOC). AERA Open, 2, 1–18.
Beattie, G., Laliberté, J. P., Michaud-Leclerc, C., & Oreopoulos, P. (2017). What sets college thrivers and divers apart? A contrast in study habits, attitudes, and mental health. National Bureau of Economic Research Working Paper No. 23588.
Bjorklund, S. A., Parente, J. M., & Sathianathan, D. (2004). Effects of faculty interaction and feedback on gains in student skills. Journal of Engineering Education, 93(2), 153–160.
Black, A. E., & Deci, E. L. (2000). The effects of instructors’ autonomy support and students’ autonomous motivation on learning organic chemistry: A self-determination theory perspective. Science Education, 84(6), 740–756.
Bowen, W. G., Chingos, M. M., Lack, K. A., & Nygren, T. I. (2014). Interactive learning online at public universities: Evidence from a six-campus randomized trial. Journal of Policy Analysis & Management, 33, 94–111.
Bowers, J., & Kumar, P. (2015). Students’ perceptions of teaching and social presence: A comparative analysis of face-to-face and online learning environments. International Journal of Web-Based Learning and Teaching Technologies (IJWLTT), 10(1), 27–44.
Britton, B. K., & Tesser, A. (1991). Effects of time-management practices on college grades. Journal of Educational Psychology, 83, 405–410.
Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies and academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education, 27, 1–13.
Carrell, S. E., Maghakian, T., & West, J. E. (2011). A’s from Zzzz’s? The causal effect of school start time on the academic achievement of adolescents. American Economic Journal: Economic Policy, 3, 62–81.
Cochran, J. D., Campbell, S. M., Baker, H. M., & Leeds, E. M. (2014). The role of student characteristics in predicting retention in online courses. Research in Higher Education, 55, 27–48.
Deming, D. J., Goldin, C., Katz, L. F., & Yuchtman, N. (2015). Can online learning bend the higher education cost curve? American Economic Review, 105, 496–501.
Elvers, G. C., Polzella, D. J., & Graetz, K. (2003). Procrastination in online courses: Performance and attitudinal differences. Teaching of Psychology, 30, 159–162.
Evans, B., Baker, R., & Dee, T. (2016). Persistence patterns in Massive Open Online Courses (MOOCs). Journal of Higher Education, 87, 206–242.
Figlio, D., Rush, M., & Yin, L. (2013). Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. Journal of Labor Economics, 31, 763–784.
Frederick, S., Loewenstein, G., & O’Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40(2), 351–401.
Giné, X., Karlan, D., & Zinman, J. (2010). Put your money where your butt is: A commitment contract for smoking cessation. American Economic Journal: Applied Economics, 2, 213–225.
Goldstein, D., Hahn, C. S., Hasher, L., Wiprzycka, U. J., & Zelazo, P. D. (2007). Time of day, intellectual performance, and behavioral problems in morning versus evening type adolescents: Is there a synchrony effect? Personality and Individual Differences, 42, 431–440.
Griffith, A. L. (2010). Persistence of women and minorities in STEM field majors: Is it the school that matters? Economics of Education Review, 29, 911–922.
Guàrdia, L., Maina, M., & Sangrà, A. (2013). MOOC design principles: A pedagogical approach from the learner’s perspective. eLearning Papers, 33, 1–6.
Hart, C. (2012). Factors associated with student persistence in an online program of study: A review of the literature. Journal of Interactive Online Learning, 11, 19–42.
Hartwig, M. K., & Dunlosky, J. (2012). Study strategies of college students: Are self-testing and scheduling related to achievement? Psychonomic Bulletin & Review, 19, 126–134.
Heiman, T. (2008). The effects of e-mail messages in a distance learning university on perceived academic and social support, academic satisfaction, and coping. Quarterly Review of Distance Education, 9(3), 237.
Kang, M., & Im, T. (2013). Factors of learner–instructor interaction which predict perceived learning outcomes in online learning environment. Journal of Computer Assisted Learning, 29(3), 292–301.
Kaur, S., Kremer, M., & Mullainathan, S. (2013). Self-control at work. Duke University working paper.
Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2016). Recommending self-regulated learning strategies does not improve performance in a MOOC. Learning @ Scale Work in Progress.
Koch, A. K., & Nafziger, J. (2017). Motivational goal bracketing: An experiment. IZA Discussion Paper No. 10955. Institute for the Study of Labor (IZA).
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121.
Lack, K. A. (2013). Current status of research on online learning in postsecondary education. Ithaka S+R.
Leeds, E. M., Campbell, S. M., Baker, H., Ali, R., & Brawley, D. (2013). The impact of student retention strategies: An empirical study. International Journal of Management in Education, 7, 22–43.
Macan, T. H., Shahani, C., Dipboye, R. L., & Phillips, A. P. (1990). College students’ time management: Correlations with academic performance and stress. Journal of Educational Psychology, 82, 760–768.
Michinov, N., Brunot, S., Le Bohec, O., Juhel, J., & Delaval, M. (2011). Procrastination, participation, and performance in online learning environments. Computers & Education, 56, 243–252.
Miltiadou, M., & Savenye, W. C. (2003). Applying social cognitive constructs of motivation to enhance student success in online distance education. AACE Journal, 11(1), 78–95.
Misra, R., & McKean, M. (2000). College students’ academic stress and its relation to their anxiety, time management, and leisure satisfaction. American Journal of Health Studies, 16, 41–51.
Moody, J. (2004). Distance education: Why are the attrition rates so high? Quarterly Review of Distance Education, 5, 205–210.
Mullen, G. E., & Tallent-Runnels, M. K. (2006). Student outcomes and perceptions of instructors’ demands and support in online and traditional classrooms. The Internet and Higher Education, 9(4), 257–266.
National Center for Education Statistics. (NCES). (2015). Digest of Education Statistics, Table 311.15.
Nawrot, I., & Doucet, A. (2014). Building engagement for MOOC students: Introducing support for time management on online learning platforms. In Proceedings of the 23rd International Conference on World Wide Web (pp. 1077–1082). New York, NY: ACM.
Patterson, R. W. (2014). Can behavioral tools improve online student outcomes? Experimental evidence from a Massive Open Online Course. Working paper.
Perna, L. W., Ruby, A., Boruch, R. F., Wang, N., Scull, J., Ahmad, S., et al. (2014). Moving through MOOCs: Understanding the progression of users in Massive Open Online Courses. Educational Researcher, 43, 421–432.
Pintrich, P. R. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ).
Rask, K., & Tiefenthaler, J. (2008). The role of grade selectivity in explaining the gender imbalance in undergraduate economics. Economics of Education Review, 27, 676–687.
Roper, A. R. (2007). How students develop online learning skills. Educause Quarterly, 30, 62–65.
Rostaminezhad, M. A., Mozayani, N., Norozi, D., & Iziy, M. (2013). Factors related to e-learner dropout: Case study of IUST elearning center. Procedia, 83, 522–527.
Rovai, A. P. (2003). In search of higher persistence rates in distance education online programs. The Internet and Higher Education, 6, 1–16.
Schudde, L., & Scott-Clayton, J. (2016). Pell Grants as performance-based scholarships? An examination of satisfactory academic progress requirements in the nation’s largest need-based aid program. Research in Higher Education, 57, 943–967.
Schwartz, B., & Ward, A. (2004). Doing better but feeling worse: The paradox of choice. In P. A. Linley & S. Joseph (Eds.), Positive Psychology in Practice (pp. 86–104). Hoboken, NJ: Wiley.
Song, L., Singleton, E. S., Hill, J. R., & Koh, M. H. (2004). Improving online learning: Student perceptions of useful and challenging characteristics. Internet and Higher Education, 7, 59–70.
Stratton, L. S., O’Toole, D. M., & Wetzel, J. N. (2008). A multinomial logit model of college stopout and dropout behavior. Economics of Education Review, 27, 319–331.
Trueman, M., & Hartley, J. (1996). A comparison between the time-management skills and academic performance of mature and traditional-entry university students. Higher Education, 32, 199–215.
Tuckman, B. W. (2005). Relations of academic procrastination, rationalizations, and performance in a web course with deadlines. Psychological Reports, 96(3, Suppl.), 1015–1021.
Van Den Hurk, M. (2006). The relation between self-regulated strategies and individual study time, prepared participation and achievement in a problem-based curriculum. Active Learning in Higher Education, 7, 155–169.
What Works Clearinghouse. (2017). Standards handbook (Version 4.0). Washington, DC: Institute of Education Sciences.
Xu, D., & Jaggars, S. S. (2013). The impact of online learning on students’ course outcomes: Evidence from a large community and technical college system. Economics of Education Review, 37, 46–57.
Zhan, Z., & Mei, H. (2013). Academic self-concept and social presence in face-to-face and online learning: Perceptions and effects on students’ learning achievement and satisfaction across environments. Computers & Education, 69, 131–138.
The authors are grateful for feedback and advice from the Investigating Virtual Learning Environments and Digital Learning Lab groups at UCI’s School of Education, particularly Di Xu, Fernando Rodriguez, and Mark Warschauer; seminar participants at AEFP; and the instructor with whom we partnered to implement this intervention. This work was supported by grant number 1535300 from the National Science Foundation.
Measuring Procrastination and Cramming
We examine two potential mechanisms that could explain how the scheduling intervention affects students’ performance: procrastination and cramming. We define procrastination as how far the average watch time for the five lecture videos in a given week is from the Friday midnight deadline; a larger negative number indicates more time before the deadline (and thus less procrastination). Cramming is defined as the standard deviation of the watch times for the five course videos within a week; a smaller number indicates more cramming. Both variables are measured in days. The overall course cramming and procrastination scores for each student are the averages of that student’s weekly scores across all five weeks (see Footnote 10). Appendix Table 9 provides summary statistics of students’ video-watching habits in terms of their cramming and procrastination.
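The two weekly measures can be sketched as follows (a minimal illustration, assuming each video’s first-watch time is expressed in days relative to the Friday midnight deadline; the function name and input format are hypothetical, not taken from the study’s actual code):

```python
from statistics import mean, pstdev

def weekly_measures(watch_times_days):
    """Compute procrastination and cramming for one week's lecture videos.

    watch_times_days: first-watch time for each of the week's videos, in
    days relative to the Friday midnight deadline (negative = before the
    deadline); None marks a skipped video with no watch record.
    Returns (procrastination, cramming), or (None, None) for students who
    watched one or no videos that week, mirroring the paper's rule.
    """
    watched = [t for t in watch_times_days if t is not None]
    if len(watched) <= 1:
        return None, None
    procrastination = mean(watched)  # closer to 0 (the deadline) = more procrastination
    cramming = pstdev(watched)       # smaller spread = more cramming
    return procrastination, cramming
```

For example, a student who watched three videos 3, 2, and 1 days before the deadline would get a procrastination score of −2.0 days and a cramming score of about 0.82 days.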
We first examined the relationship between course outcomes and our measures of procrastination and cramming to test whether these time management measures predict academic outcomes, as has been found in other studies. Each variable was individually included in a series of linear regression models, with the same student-level covariates used in the main regressions, predicting each week’s quiz score and the final course score. For the weekly quizzes, we used the student’s cramming and procrastination estimates for that given week. For the final course score, we used the average cramming and procrastination across all five weeks. We subsequently combined cramming and procrastination into a single model to account for the shared variance between the two predictors.
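The combined specification amounts to an ordinary least squares regression of the outcome on both measures plus covariates. The sketch below illustrates this on synthetic data (the simulated variables, effect sizes, and the single `gpa` covariate are illustrative stand-ins, not the study’s actual data or covariate set):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150

# Synthetic student-week data: procrastination is the mean watch time in
# days relative to the deadline (negative = earlier); cramming is the
# within-week SD of watch times; gpa stands in for the covariate set.
procrastination = rng.normal(-2.0, 1.0, n)
cramming = np.abs(rng.normal(1.0, 0.5, n))
gpa = rng.normal(3.0, 0.4, n)

# Simulated outcome: scores fall as watching shifts toward the deadline
# (a true coefficient of -3 on procrastination), plus noise.
quiz = 75.0 - 3.0 * procrastination + 2.0 * gpa + rng.normal(0.0, 2.0, n)

# Combined OLS model with an intercept:
#   quiz ~ procrastination + cramming + gpa
X = np.column_stack([np.ones(n), procrastination, cramming, gpa])
beta, *_ = np.linalg.lstsq(X, quiz, rcond=None)
# beta[1] recovers the (simulated) procrastination coefficient.
```

Dropping the `cramming` column from `X` gives the separate specification; comparing the two coefficients shows how much of each measure’s association is shared variance.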
The left panel of Appendix Table 10 shows that, in the separate model specification, students who procrastinated more tended to have lower quiz and final course scores. This negative trend is consistent across all five weeks, and it is statistically significant for the third week and the overall course score. On average across students and weeks, watching the weekly videos a day closer to the Friday deadline is associated with over a fifth of a standard deviation lower final course score. We believe the effect appears strongest in the final course score because it reflects the sum total of students’ online problem set scores, participation in lecture-video quizzes, weekly quiz scores, and final exam score. If students are procrastinating on lecture watching, they are also likely to procrastinate on the completion of assignments, further reducing their final course score. Furthermore, students who spaced their lecture-video watching out over the entire week instead of watching multiple videos in quick succession tended to have better first-week quiz and final course scores; the coefficients on cramming are negative in most weeks. These results for both cramming and procrastination hold for the final course score even in the combined model, which includes cramming and procrastination simultaneously as independent variables. The Appendix Table 10 results suggest students who do not cram or procrastinate have higher scores, on average.
Baker, R., Evans, B., Li, Q. et al. Does Inducing Students to Schedule Lecture Watching in Online Classes Improve Their Academic Performance? An Experimental Analysis of a Time Management Intervention. Res High Educ 60, 521–552 (2019). https://doi.org/10.1007/s11162-018-9521-3