Gap between actual and expected time allocation to academic activities and its impact on undergraduate academic performance

  • Felly Chiteng Kot
Original Paper

Abstract

This study uses survey data and administrative records collected over a three-year period to examine the gap between the amount of time students invested in academic activities and the amount they were expected to invest. The sample includes 2,232 first-year and final-year undergraduate students at an elite research university in Kazakhstan. The study measured the time allocation gap as the degree to which the total time invested in academic activities (class attendance and out-of-class study time combined) fell short of the amount expected given the student’s credit load. On average, undergraduate students (first and fourth year) allocated 35% less time to academic activities than expected under ECTS standards, or 28% less than expected under Carnegie standards. Using a quasi-experimental research design (propensity score matching), the study found that the time allocation gap had a negative impact on undergraduate academic performance.
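As a rough illustration of the gap measure described above, the sketch below computes the shortfall between actual and expected weekly workload for a hypothetical student. The credit-to-hours conversion factors and the example credit loads are common conventions assumed for illustration, not figures taken from the study itself, and the causal estimate in the abstract was obtained with propensity score matching, which is not reproduced here.

```python
# Minimal sketch of a time-allocation-gap measure (assumptions flagged below).
# Carnegie convention: ~3 hours of total work per credit per week (1 in class, 2 outside).
# ECTS convention: 25-30 hours per credit per semester; ~1.7 hours per credit per week
# is assumed here for a semester of roughly 15-17 weeks.
HOURS_PER_CARNEGIE_CREDIT_PER_WEEK = 3.0   # assumption, not the paper's own figure
HOURS_PER_ECTS_CREDIT_PER_WEEK = 1.7       # assumption, not the paper's own figure


def expected_weekly_hours(credits: float, hours_per_credit: float) -> float:
    """Expected weekly workload (class plus out-of-class study) for a given credit load."""
    return credits * hours_per_credit


def time_allocation_gap(actual_hours: float, credits: float, hours_per_credit: float) -> float:
    """Share of the expected workload that the student did not invest.

    Returns 0.0 when the student meets or exceeds the expectation,
    and e.g. 0.35 when actual time falls 35% short of the expected time.
    """
    expected = expected_weekly_hours(credits, hours_per_credit)
    shortfall = max(expected - actual_hours, 0.0)
    return shortfall / expected if expected > 0 else 0.0


# Hypothetical student: 10 hours of class attendance and 22 hours of study per week,
# enrolled for 15 Carnegie-style credits or 30 ECTS credits (both loads are assumptions).
actual = 10 + 22
gap_carnegie = time_allocation_gap(actual, 15, HOURS_PER_CARNEGIE_CREDIT_PER_WEEK)
gap_ects = time_allocation_gap(actual, 30, HOURS_PER_ECTS_CREDIT_PER_WEEK)
print(f"Gap under Carnegie-style expectation: {gap_carnegie:.0%}")  # about 29%
print(f"Gap under ECTS-style expectation: {gap_ects:.0%}")          # about 37%
```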

Keywords

Time allocation · Academic performance · Propensity score matching · Quasi-experimental design · Carnegie credits · ECTS credits


Copyright information

© EAIR - The European Higher Education Society 2018

Authors and Affiliations

  1. Nazarbayev University, Astana, Kazakhstan
