Innovative Higher Education, Volume 31, Issue 4, pp 227–236

Assessing Assessment: The Effects of Two Exam Formats on Course Achievement and Evaluation



This research examines the effect of two testing strategies on academic achievement and summative course evaluations in an introductory statistics course. In 2001, 63 students were tested under an hourly midterm format, and in 2002, 68 students were tested under a bi-weekly exam format. Apart from the exam format, the two offerings were identical: lectures and labs had the same content, structure, and pace, and both groups took the same cumulative final exam. Regression analyses show that students in the bi-weekly format outperformed students in the hourly midterm format. On average, students who took bi-weekly exams scored about 10 percentage points higher (one letter grade) on exams during the semester and about 15 percentage points higher on the cumulative final exam than their peers who took hourly midterms. The benefits of the bi-weekly format were significantly greater for female students than for male students. Finally, students in the bi-weekly format were less likely to drop the class and evaluated it far more favorably.
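For readers who want to see the analytic setup concretely, the sketch below illustrates the kind of regression comparison the abstract describes: exam score regressed on an exam-format indicator with a format-by-gender interaction. It is a minimal sketch with simulated data, not the authors' code or data; the variable names (biweekly, female, final) and effect sizes used in the simulation are illustrative assumptions only.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 131  # 63 hourly-midterm students (2001) + 68 bi-weekly students (2002)
df = pd.DataFrame({
    "biweekly": np.repeat([0, 1], [63, 68]),  # 1 = bi-weekly exam format
    "female": rng.integers(0, 2, n),          # 1 = female (simulated)
})
# Simulated final-exam scores: roughly a 15-point format effect,
# somewhat larger for female students (assumed values for illustration).
df["final"] = (70 + 15 * df["biweekly"]
               + 5 * df["biweekly"] * df["female"]
               + rng.normal(0, 8, n))

# OLS with a format-by-gender interaction; the interaction coefficient
# tests whether the benefit of the bi-weekly format differs by gender.
model = smf.ols("final ~ biweekly * female", data=df).fit()
print(model.summary())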

Key words

assessment, academic achievement, exam formats



Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  1. Department of Education, Montana State University, Bozeman, USA
  2. Department of Sociology, Montana State University, Bozeman, USA
