Journal of Behavioral Education, Volume 15, Issue 4, pp 256–273

Differential Daily Writing Contingencies and Performance on Major Multiple-Choice Exams

  • Briana Hautau
  • Haley C. Turner
  • Erin Carroll
  • Kathryn Jaspers
  • Megan Parker
  • Katy Krohn
  • Robert L. Williams
Original Paper

Abstract

On 4 of 7 days in each unit of an undergraduate human development course, students responded in writing to specific questions related to instructor notes previously made available to them. The study compared the effects of three writing contingencies on the quality of student writing and on performance on major multiple-choice exams in the course. The three contingencies were (1) receiving credit for all writing products in each unit, (2) receiving credit for one randomly selected writing product in each unit, and (3) receiving no credit for any writing product. On all dimensions of exam performance, writing for daily credit produced higher scores than writing for random credit or for no credit. The daily-credit contingency also produced the highest writing ratings across all units, followed by the random-credit contingency, with the no-credit contingency yielding the lowest ratings. Across all three contingencies, writing ratings were highly correlated with performance on the multiple-choice exams.

Keywords

Writing contingencies · Multiple-choice exams · College instruction


Copyright information

© Springer Science+Business Media, Inc. 2006

Authors and Affiliations

  • Briana Hautau (1)
  • Haley C. Turner (1)
  • Erin Carroll (1)
  • Kathryn Jaspers (1)
  • Megan Parker (1)
  • Katy Krohn (1)
  • Robert L. Williams (1, 2)

  1. The University of Tennessee, Tennessee, USA
  2. Department of Educational Psychology and Counseling, The University of Tennessee, Knoxville, TN 37996-3452
