Journal of Science Education and Technology, Volume 10, Issue 2, pp 115–126

Gender Differences in Introductory Atmospheric and Oceanic Science Exams: Multiple Choice Versus Constructed Response Questions

  • Andrew J. Weaver
  • Helen Raptis

Abstract

An analysis of 295 male and 194 female examinations from introductory atmospheric and oceanic science courses is conducted to determine whether gender differences exist in performance on the multiple choice versus constructed response sections of the exams. On final exams, the difference between the mean performance of males and females on constructed response sections relative to multiple choice sections averages 5%, even in years when the females performed better or worse than the males on both sections, and is significant at the 0.1% level. Gender differences on time-limited midterm exams are not significant. It is further shown that final exam performance is not significantly related to whether the exam opens with a multiple choice or a constructed response set of questions. While our analysis cannot distinguish the possibility that multiple choice questions favor male students from the competing hypothesis that constructed response questions favor female students, a review of the existing literature suggests that a combination of both is possible. Nevertheless, from the analysis of our examination results we conclude that an exam covering introductory atmospheric or oceanic science curricula that comprises 60% multiple choice and 40% constructed response questions would not be skewed to favor either gender.
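To make the comparison above concrete, here is a minimal sketch of the kind of significance test described: comparing each student's constructed response minus multiple choice score gap across genders. The data, variable names, and the choice of a Welch two-sample t-test are illustrative assumptions, not the authors' actual analysis.

```python
# Minimal sketch (hypothetical data; not the authors' actual analysis) of the
# comparison described in the abstract: does the mean gap between
# constructed-response (CR) and multiple-choice (MC) scores differ by gender?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical percentage scores for 295 male and 194 female exams,
# matching the sample sizes reported in the abstract.
male_mc = rng.normal(70, 10, 295)
male_cr = rng.normal(68, 10, 295)
female_mc = rng.normal(68, 10, 194)
female_cr = rng.normal(71, 10, 194)

# Per-student CR-minus-MC gap, then a Welch two-sample t-test on the gaps.
male_gap = male_cr - male_mc
female_gap = female_cr - female_mc

t, p = stats.ttest_ind(female_gap, male_gap, equal_var=False)
print(f"mean gap difference: {female_gap.mean() - male_gap.mean():.1f}%")
print(f"t = {t:.2f}, p = {p:.4f}")  # "significant at the 0.1% level" means p < 0.001
```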

Keywords: Gender differences · Science education · Earth science education · Multiple choice · Constructed response



Copyright information

© Plenum Publishing Corporation 2001

Authors and Affiliations

  • Andrew J. Weaver (1)
  • Helen Raptis (2)
  1. School of Earth and Ocean Sciences, University of Victoria, Victoria, Canada
  2. Department of Curriculum and Instruction, University of Victoria, Victoria, Canada
