Format Effects of Empirically Derived Multiple-Choice Versus Free-Response Instruments When Assessing Graphing Abilities

Article

Abstract

Prior graphing research has demonstrated that clinical interviews and free-response instruments produce results that differ markedly from those of multiple-choice instruments, indicating potential validity problems when multiple-choice instruments are used to assess graphing skills (Berg & Smith in Science Education, 78(6), 527–554, 1994). Extending this inquiry, we studied whether empirically derived, participant-generated graphs used as choices on a multiple-choice graphing instrument produced results that corresponded to participants’ responses on free-response instruments. The 5–8 choices on the multiple-choice instrument came from graphs drawn by 770 participants in prior graphing research (Berg, 1989; Berg & Phillips in Journal of Research in Science Teaching, 31(4), 323–344, 1994; Berg & Smith in Science Education, 78(6), 527–554, 1994). Statistical analysis of the 736 7th–12th grade participants indicates that the empirically derived multiple-choice format still produced significantly more “picture-of-the-event” responses than did the free-response format on all three graphing questions. For two of the questions, participants who drew graphs on the free-response instruments produced significantly more correct responses than those who answered multiple-choice items. In addition, participants with “low classroom performance” were more strongly and negatively affected by the multiple-choice format than participants with “medium” or “high classroom performance.” These results suggest that, in some cases, prior findings on the prevalence of “picture-of-the-event” responses and on graphing treatment effects may be spurious, a product of the multiple-choice item format rather than a valid measure of graphing abilities. We also examined how including a picture of the scenario on the instrument, versus only a written description, affected responses, and whether asking participants to add marker points to their constructed or chosen graph would overcome the short-circuited thinking that multiple-choice items seem to produce.
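
For readers interested in how a format effect of this kind might be tested, the following is a minimal illustrative sketch in Python, assuming a chi-square test of independence on a 2 × 2 table of response counts by instrument format. The counts and the choice of test are assumptions for illustration only, not data or procedures from the study; the article's actual statistical analyses are described in the full text.

  # Illustrative sketch only: hypothetical counts, not data from this study.
  # One way to ask whether "picture-of-the-event" responses depend on
  # instrument format is a chi-square test of independence on a 2 x 2 table.
  from scipy.stats import chi2_contingency

  # Rows: multiple-choice format, free-response format
  # Columns: picture-of-the-event responses, all other responses
  counts = [
      [120, 248],  # multiple-choice (hypothetical counts)
      [60, 308],   # free-response (hypothetical counts)
  ]

  chi2, p, dof, _expected = chi2_contingency(counts)
  print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")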

Keywords

Assessing, Construction, Graphing, Graphs, Interpretation, Validity

References

  1. Adams, D. D. & Shrum, J. W. (1990). The effects of microcomputer-based laboratory exercises on the acquisition of line graph construction and interpretation skills by high school biology students. Journal of Research in Science Teaching, 27(8), 777–787.
  2. Ackerman, T. A. & Smith, P. L. (1988). A comparison of the information provided by essay, multiple-choice, and free-response writing tests. Applied Psychological Measurement, 12(2), 117–128.
  3. Aikenhead, G. (1988). An analysis of four ways of assessing student beliefs about STS topics. Journal of Research in Science Teaching, 25(8), 607–629.
  4. American Educational Research Association (2014). Standards for educational and psychological testing. Washington, DC: AERA.
  5. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1999). Standards for educational and psychological testing. Washington, DC: AERA.
  6. Ates, S. & Stevens, J. T. (2003). Teaching line graphs to tenth grade students having different cognitive developmental levels by using two different instructional modules. Research in Science & Technological Education, 21(1), 55–66.
  7. Barclay, W. (1986). Graphing misconceptions and possible remedies using microcomputer-based labs. Paper presented at the 7th National Educational Computing Conference, University of San Diego, CA.
  8. Beichner, R. J. (1990). The effect of simultaneous motion presentation and graph generation in a kinematics lab. Journal of Research in Science Teaching, 27(8), 803–815.
  9. Berg, C. (1989). An investigation of the relationship between logical thinking structures and the ability to construct and interpret line graphs (Unpublished doctoral dissertation). Iowa City, IA: The University of Iowa.
  10. Berg, C. & Phillips, D. (1994). An investigation of the relationship between logical thinking structures and the ability to construct and interpret line graphs. Journal of Research in Science Teaching, 31(4), 323–344.
  11. Berg, C. & Smith, P. (1994). Assessing students’ abilities to construct and interpret line graphs: Disparities between multiple-choice and free-response graphs. Science Education, 78(6), 527–554.
  12. Boote, S. K. (2014). Assessing and understanding line graph interpretations using a scoring rubric of organized cited factors. Journal of Science Teacher Education, 25(3), 333–354.
  13. Brasell, H. M. (1987). Effect of real-time laboratory graphing on learning graphic representations of distance and velocity. Journal of Research in Science Teaching, 24, 385–395.
  14. Brasell, H. M. (1990). Graphs, graphing, graphers. In M. B. Rowe (Ed.), What research says to the science teacher (Vol. 6). Washington, DC: The National Science Teachers Association.
  15. Clariana, R. B. (2003). The effectiveness of constructed-response and multiple-choice study tasks in computer-aided learning. Journal of Educational Computing Research, 28(4), 395–406.
  16. Clement, J. (1986). The concept of variation and misconception in Cartesian graphing. Paper presented at the meeting of the American Educational Research Association, San Francisco, CA.
  17. Common Core State Standards Initiative (2010). Common Core State Standards for Mathematics (CCSSM). Washington, DC: National Governors Association Center for Best Practices and the Council of Chief State School Officers.
  18. Culbertson, H. M. & Powers, R. D. (1959). A study of graph comprehension difficulties. Educational Technology Research and Development, 7(3), 97–110.
  19. DeMars, C. E. (1998). Gender differences in mathematics and science on a high school proficiency exam: The role of response format. Applied Measurement in Education, 11(3), 279–299.
  20. Dori, Y. J. & Sasson, I. (2008). Chemical understanding and graphing skills in an honors case-based computerized chemistry laboratory environment: The value of bidirectional visual and textual representations. Journal of Research in Science Teaching, 45(2), 219–250.
  21. Friedler, Y., Nachmias, R. & Linn, M. C. (1990). Learning scientific reasoning skills in microcomputer-based laboratories. Journal of Research in Science Teaching, 27(2), 173–192.
  22. Friel, S. N., Curcio, F. R. & Bright, G. W. (2001). Making sense of graphs: Critical factors influencing comprehension and instructional implications. Journal for Research in Mathematics Education, 32(2), 124–158.
  23. Glazer, N. (2011). Challenges with graph interpretation: A review of the literature. Studies in Science Education, 47(2), 183–210.
  24. Howe, K. R. (1985). Two dogmas of educational research. Educational Researcher, 14(8), 10–18.
  25. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
  26. Keller, S. K. (2008). Levels of line graph question interpretation with intermediate elementary students of varying scientific and mathematical knowledge and ability: A think aloud study. (Unpublished doctoral dissertation). Retrieved from ProQuest LLC. (UMI Microform 3340991).
  27. Kerslake, D. (1977). The understanding of graphs. Mathematics in School, 6(2), 22–25.
  28. Kimball, M. (1967). Understanding the nature of science: A comparison of scientists and science teachers. Journal of Research in Science Teaching, 5, 110–120.
  29. Kwon, C., Kim, Y. & Woo, T. (2015). Digital–physical reality game mapping of physical space with fantasy in context-based learning games. Games and Culture. doi:10.1177/1555412014568789.
  30. Lapp, D. A. & Cyrus, V. F. (2000). Using data-collection devices to enhance students’ understanding. Mathematics Teacher, 93(6), 504–510.
  31. Leinhardt, G., Zaslavsky, O. & Stein, M. K. (1990). Functions, graphs, and graphing: Tasks, learning, and teaching. Review of Educational Research, 60(1), 1–64.
  32. Linn, M. C., Layman, J. & Nachmias, R. (1987). Cognitive consequences of microcomputer-based laboratories: Graphing skills development. Contemporary Educational Psychology, 12, 244–253.
  33. Lissitz, R. W., Hou, X. & Slater, S. C. (2012). The contribution of constructed response items to large scale assessment: Measuring and understanding their impact. Journal of Applied Testing Technology, 13(3), 1–50.
  34. Macdonald-Ross, M. (1977). How numbers are shown. AV Communication Review, 25(4), 359–409.
  35. McDermott, L., Rosenquist, M., Popp, B. & van Zee, E. (1983). Student difficulties in connecting graphs, concepts and physical phenomena. Paper presented at the meeting of the American Educational Research Association, Montreal, Canada.
  36. McKenzie, D. L. & Padilla, M. J. (1984). Effects of laboratory activities and written simulations on the acquisition of graphing skills by eighth grade students. Paper presented at the meeting of the National Association for Research in Science Teaching, New Orleans, LA.
  37. McKenzie, D. L. & Padilla, M. J. (1986). The construction and validation of the test of graphing in science (TOGS). Journal of Research in Science Teaching, 23(7), 571–579.
  38. Mokros, J. R. (1986). The impact of MBL on children’s use of symbol systems. Paper presented at the meeting of the American Educational Research Association, San Francisco, CA.
  39. Mokros, J. R. & Tinker, R. F. (1987). The impact of microcomputer-based science labs on children’s ability to interpret graphs. Journal of Research in Science Teaching, 24, 369–383.
  40. Munby, H. (1982). The place of teachers’ beliefs in research on teacher thinking and decision making, and an alternative methodology. Instructional Science, 11, 201–225.
  41. National Research Council (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: The National Academies Press.
  42. NGSS Lead States (2013). Next generation science standards: For states, by states. Washington, DC: The National Academies Press.
  43. Pellegrino, J. W., Chudowsky, N. & Glaser, R. (Eds.) (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
  44. Ploetzner, R., Lippitsch, S., Galmbacher, M., Heuer, D. & Scherrer, S. (2009). Students’ difficulties in learning from dynamic visualisations and how they may be overcome. Computers in Human Behavior, 25(1), 56–65.
  45. Rodriguez, M. C. (2003). Construct equivalence of multiple-choice and constructed-response items: A random effects synthesis of correlations. Journal of Educational Measurement, 40(2), 163–184.
  46. Rowland, P. & Stuessy, C. L. (1988). Matching mode of CAI to cognitive style: An exploratory study. Journal of Computers in Mathematics and Science Teaching, 7(4), 36–40, 55.
  47. Sasson, I. & Dori, Y. J. (2012). Transfer skills and their case-based assessment. In B. J. Fraser, K. Tobin & C. J. McRobbie (Eds.), Second international handbook of science education (pp. 691–709). Netherlands: Springer.
  48. Savinainen, A. & Viiri, J. (2008). The force concept inventory as a measure of students’ conceptual coherence. International Journal of Science and Mathematics Education, 6(4), 719–740.
  49. Schultz, K., Clement, J. & Mokros, J. (1986). Adolescent graphing skills: A descriptive analysis. Paper presented at the meeting of the American Educational Research Association, San Francisco, CA.
  50. Shaw, E. L., Padilla, M. J. & McKenzie, D. L. (1983, April). An examination of the graphing abilities of students in grades seven through twelve. Paper presented at the meeting of the National Association for Research in Science Teaching, Dallas, TX.
  51. Svec, M. T. (1995). Effect of microcomputer-based laboratory on graphing interpretation skills and understanding of motion. Paper presented at the annual meeting of the National Association for Research in Science Teaching, San Francisco, CA.
  52. Tairab, H. H. & Khalaf Al-Naqbi, A. K. (2004). How secondary school science students interpret and construct scientific graphs. Journal of Biological Education, 38(2), 119–124.
  53. Ward, W. C., Frederiksen, N. & Carlson, S. B. (1980). Construct validity of free-response and machine-scorable forms of a test. Journal of Educational Measurement, 17(1), 11–28.
  54. Weintraub, S. (1967). Reading graphs, charts and diagrams. Reading Teacher, 20, 345–349.
  55. Wu, Y., Shah, J. J. & Davidson, J. K. (2003). Computer modeling of geometric variations in mechanical parts and assemblies. Journal of Computing and Information Science in Engineering, 3(1), 54–63.

Copyright information

© Ministry of Science and Technology, Taiwan 2015

Authors and Affiliations

  1. University of Wisconsin-Milwaukee, Milwaukee, USA
  2. University of North Florida, Jacksonville, USA