
Research in Science Education, Volume 29, Issue 3, pp 353–363

An alternative method of answering and scoring multiple choice tests

  • Charles Taylor
  • Paul L. Gardner

Abstract

A simple modification to the method of answering and scoring multiple choice tests allows students to indicate their estimates of the probability that each option is correct, without affecting the validity of the assessment. A study was conducted using a test that investigated common misconceptions in mechanics. For assessment purposes, this method gives results very similar to those obtained by students who answer in the traditional manner. Year 12 Physics students (N=85) were randomly allocated to two treatment groups: one received a standard-format multiple choice test; the other received a format allowing students to select more than one response per question and to distribute their marks among the chosen options. An analysis of the students' uncertainties is used to argue not only that students can appeal to different conceptions in different contexts, but also that they can hold conflicting conceptions with respect to a single context.
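The abstract describes letting a student split an item's marks across the options they consider plausible, rather than forcing a single choice. The paper's exact scoring rule is not reproduced here; as a purely illustrative sketch, assume the simplest linear rule, where the student's score on an item is the fraction of its marks placed on the correct option:

```python
def probability_format_score(allocation, correct_option, total_marks=1.0):
    """Score one multiple choice item under a hypothetical linear
    probability-format rule: the student earns whatever fraction of the
    item's marks they allocated to the correct option.

    allocation: dict mapping option label -> marks assigned to it.
    The allocations must sum to total_marks (checked with a tolerance).
    """
    assigned = sum(allocation.values())
    if abs(assigned - total_marks) > 1e-9:
        raise ValueError(f"marks must sum to {total_marks}, got {assigned}")
    # Options the student did not mark implicitly receive zero.
    return allocation.get(correct_option, 0.0)

# A fully confident student scores as in the traditional format:
confident = probability_format_score({"B": 1.0}, "B")       # 1.0
# An uncertain student can hedge between two conceptions:
hedged = probability_format_score({"B": 0.6, "C": 0.4}, "B")  # 0.6
```

Note that a linear rule like this is not a "proper" scoring rule in the decision-theoretic sense (a student maximising expected marks should put everything on their single most likely option); proper alternatives such as quadratic or logarithmic scoring are discussed in the probability-testing literature.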

Keywords

Correct answer; differential item functioning; multiple choice; subjective probability; probability format



Copyright information

© Australian Science Research Association 1999

Authors and Affiliations

  1. Taylors College, Huntingdale, Australia
  2. Monash University, Australia
