Using Item Response Theory to Conduct a Distracter Analysis on Conceptual Inventory of Natural Selection

  • Bryce Thomas Battisti
  • Nikki Hanegan
  • Richard Sudweeks
  • Rex Cates
Article

Abstract

Concept inventories are often used to assess current student understanding, even though conceptual change models remain problematic. Given the controversies surrounding conceptual change models and the practical realities of student assessment, it is important that concept inventories be evaluated with a variety of theoretical models to improve their quality. This study used a modified item response theory model to determine nonmajor university biology students' levels of understanding of natural selection (n = 1,192). Using the Conceptual Inventory of Natural Selection, we report how we applied Bock's modified nominal item response theory model and conducted a distracter analysis of the test items. We found that this model can define students' levels of understanding and identify problematic distracters.
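
For readers less familiar with the psychometric model named above, the following is the standard form of Bock's (1972) nominal categories model; the study applies a modified version, so this equation is offered only as the baseline formulation, not the authors' exact specification. For an item j with m_j response options (the keyed answer and its distracters), the probability that an examinee at ability level theta chooses option k is

P(X_j = k \mid \theta) = \frac{\exp(a_{jk}\theta + c_{jk})}{\sum_{h=1}^{m_j} \exp(a_{jh}\theta + c_{jh})},

where a_{jk} and c_{jk} are the slope and intercept parameters for option k of item j. Plotting these option probabilities across theta is what makes a distracter analysis possible: it shows which distracters attract examinees at low, middle, or high ability levels and which fail to discriminate.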

Key words

biology; concept inventories; item response theory

References

  1. Aleixandre, M. P. J. (1994). Teaching evolution and natural selection: A look at textbooks and teachers. Journal of Research in Science Teaching, 31, 519–535.
  2. Alters, B. J., & Nelson, C. E. (2002). Perspective: Teaching evolution in higher education. Evolution, 56, 1891–1901.
  3. Anderson, D. L., Fisher, K. M., & Norman, G. J. (2002). Development and evaluation of the Conceptual Inventory of Natural Selection. Journal of Research in Science Teaching, 39, 952–978.
  4. Author. (2002). Biology principles & applications: A syllabus. San Francisco: McGraw-Hill Primis Custom.
  5. Author. (2008). Disconnections between teacher expectations and student confidence in bioethics. Science & Education, 17(8–9), 921–940.
  6. Bishop, B. A., & Anderson, C. W. (1990). Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching, 27, 415–427.
  7. Bock, R. D. (1972). Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika, 37, 29–51.
  8. Bock, R. D. (1997). The nominal categories model. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory (pp. 23–49). New York: Springer.
  9. Brumby, M. N. (1984). Misconceptions about the concept of natural selection by medical biology students. Science Education, 68, 493–503.
  10. Cummins, C. L., & Demastes, S. S. (1994). Evolution: Biological education's under-researched unifying theme. Journal of Research in Science Teaching, 31, 445–448.
  11. de Ayala, R. J. (2009). The theory and practice of item response theory. New York: Guilford.
  12. Demastes, S. S., Good, R. G., & Peebles, P. (1996). Patterns of conceptual change in evolution. Journal of Research in Science Teaching, 33, 407–431.
  13. Dillon, J. (2008). Discussion, debate and dialog: Changing minds about conceptual change research in science education. Cultural Studies of Science Education, 3(2), 397–416.
  14. Dobzhansky, T. (1973). Nothing in biology makes sense except in the light of evolution. The American Biology Teacher, 35(3), 125–129.
  15. Driver, R., & Oldham, V. (1986). A constructivist approach to curriculum development in science. Studies in Science Education, 13, 105–122.
  16. du Toit, M. (Ed.) (2003). IRT from SSI: BILOG-MG, MULTILOG, PARSCALE, and TESTFACT. Lincolnwood, IL: Scientific Software International.
  17. Ebel, R. L. (1951). Writing the test item. In E. F. Lindquist (Ed.), Educational measurement (pp. 185–249). Washington, DC: American Council on Education.
  18. Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
  19. Ferrari, M., & Chi, M. T. H. (1998). The nature of naive explanations of natural selection. International Journal of Science Education, 20, 1231–1256.
  20. Greene, E. D., Jr. (1990). The logic of university students' misunderstanding of natural selection. Journal of Research in Science Teaching, 27, 875–885.
  21. Haladyna, T. M. (1992). Context-dependent item sets. Educational Measurement: Issues and Practice, 11(1), 21–25.
  22. Haladyna, T. M. (1994). Developing and validating multiple-choice test items. Hillsdale, NJ: Lawrence Erlbaum Associates.
  23. Hatton, J., & Plouffe, P. B. (Eds.) (1997). Science and its ways of knowing. Upper Saddle River, NJ: Prentice-Hall.
  24. Hewson, P. (2008). Conceptions over time: Are language and the here-and-now up to the task? Cultural Studies of Science Education, 3(2), 263–276.
  25. Jensen, M. S., & Finley, F. N. (1995). Teaching evolution using historical arguments in a conceptual change strategy. Science Education, 79, 147–166.
  26. Johnson, B., & Christensen, L. (2000). Educational research: Quantitative and qualitative approaches. Boston: Allyn & Bacon.
  27. Linn, R. L., & Gronlund, N. E. (2000). Measurement and assessment in teaching (8th ed.). Upper Saddle River, NJ: Merrill Prentice-Hall.
  28. Livingston, S. A. (2006). Item analysis. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 421–441). Mahwah, NJ: Erlbaum.
  29. McKinley, R. L. (1989). An introduction to item response theory. Measurement and Evaluation in Counseling and Development, 22, 37–57.
  30. Mercer, N. (2008). Changing our minds: A commentary on ‘Conceptual change: A discussion of theoretical, methodological and practical challenges for science education’. Cultural Studies of Science Education, 3(2), 351–362.
  31. Miller, K. R. (1999). Finding Darwin's God: A scientist's search for common ground between God and evolution. New York: HarperCollins.
  32. Moore, R., Mitchell, G., Bally, R., Inglis, M., Day, J., & Jacobs, D. (2002). Undergraduates' understanding of evolution: Ascriptions of agency as a problem for student learning. Journal of Biological Education, 36, 65–71.
  33. National Research Council. (2000). Inquiry and the national science education standards: A guide for teaching and learning. Washington, DC: National Academy Press.
  34. Novak, J. D., Mintzes, J. J., & Wandersee, J. H. (2000). Epilogue: On ways of assessing science understanding. In J. J. Mintzes (Ed.), Assessing science understanding (pp. 355–374). New York: Academic Press.
  35. Oosterhof, A. (1994). Classroom applications of educational measurement (2nd ed.). Columbus, OH: Merrill.
  36. Palmer, D. H. (1999). Exploring the link between students' scientific and nonscientific conceptions. Science Education, 83, 639–653.
  37. Passmore, C., & Stewart, J. (2002). A modeling approach to teaching evolutionary biology in high schools. Journal of Research in Science Teaching, 39, 185–204.
  38. Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a scientific conception: Toward a theory of conceptual change. Science Education, 66, 211–227.
  39. Sadler, P. M. (1998). Psychometric models of student conceptions in science: Reconciling qualitative studies and distracter-driven assessment instruments. Journal of Research in Science Teaching, 35, 265–296.
  40. Sadler, P. M. (2000). The relevance of multiple-choice tests in assessing science understanding. In J. J. Mintzes (Ed.), Assessing science understanding (pp. 249–278). New York: Academic Press.
  41. Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores (Psychometric Monograph No. 17). Iowa City, IA: Psychometric Society.
  42. Scharmann, L. C., & Harty, H. (1986). Shaping the non-major general biology course. The American Biology Teacher, 48, 166–169.
  43. Settlage, J. J. (1994). Conceptions of natural selection: A snapshot of the sense-making process. Journal of Research in Science Teaching, 31, 449–457.
  44. Soderburg, P. (2003). An examination of problem-based teaching and learning in population genetics and evolution using EVOLVE, a computer simulation. International Journal of Science Education, 25, 35–55.
  45. Tamir, P. (1971). An alternative approach to the construction of multiple choice test items. Journal of Biological Education, 5, 305–307.
  46. Thissen, D. (1991). MULTILOG user's guide: Multiple, categorical item analysis and test scoring using item response theory (Version 6.0) [Computer program]. Chicago, IL: Scientific Software.
  47. Thissen, D., & Steinberg, L. (1984). A response model for multiple choice items. Psychometrika, 49, 501–519.
  48. Thissen, D., & Steinberg, L. (1997). A response model for multiple-choice items. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of modern item response theory (pp. 51–65). New York: Springer.
  49. Thissen, D., Steinberg, L., & Fitzpatrick, A. R. (1989). Multiple-choice models: The distracters are also part of the item. Journal of Educational Measurement, 26, 161–176.
  50. Treagust, D. F., & Duit, R. (2008). Conceptual change: A discussion of theoretical, methodological and practical challenges for science education. Cultural Studies of Science Education, 3(2), 297–328.
  51. Wesman, A. G. (1971). Writing the test item. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 81–129). Washington, DC: American Council on Education.
  52. Yen, W. M., & Fitzpatrick, A. R. (2006). Item response theory. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 111–153). Westport, CT: American Council on Education.

Copyright information

© National Science Council, Taiwan 2009

Authors and Affiliations

  • Bryce Thomas Battisti (1)
  • Nikki Hanegan (2)
  • Richard Sudweeks (3)
  • Rex Cates (2)
  1. Department of Integrated Sciences, Asian University for Women, Chittagong, Bangladesh
  2. Department of Biology, College of Life Sciences, Brigham Young University, Provo, USA
  3. Department of Instructional Psychology and Technology, McKay School of Education, Brigham Young University, Provo, USA
