Mathematics Education Research Journal, Volume 23, Issue 4, pp 397–416

Developing fair tests for mathematics curriculum comparison studies: the role of content analyses

  • Óscar Chávez
  • Ira Papick
  • Daniel J. Ross
  • Douglas A. Grouws
Original Article

Abstract

This article describes the development of assessment instruments for a three-year longitudinal comparative study that evaluated American high school students’ mathematics learning under two distinct approaches to content organization: a curriculum built around a sequence of three full-year courses (Algebra 1, Geometry, and Algebra 2) and a sequence of integrated mathematics courses in which algebra and geometry content, together with functions, data analysis, and discrete mathematics, is taught each year. The study was conducted in six school districts in five states and involved over 4,000 students from schools that offered both curricular approaches, but to different groups of students. To develop assessment instruments that were not biased towards either of the two curriculum programs (Fair Tests), an iterative process of content analyses, identification of common topics, internal and external reviews, pilot tests, and revisions was followed, resulting in five tests used across the three years of the study. Results indicate that these tests have solid discrimination properties and adequately address mathematics content common to both secondary curriculum programs. The corresponding scoring rubrics are highly reliable, with interrater reliability above 94% for all tests. Mathematics education researchers involved in curriculum comparison studies need to conduct content analyses of the curriculum materials under study in order to identify salient relationships between curriculum programs and student outcomes.
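
For context on the reliability figure above, the following is a minimal sketch of how interrater reliability can be quantified for rubric-scored responses: simple percent agreement (the form of statistic the 94% figure suggests), with Cohen’s kappa shown as a common chance-corrected complement. The scores and function names are illustrative assumptions, not the study’s actual data or scoring code.

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of items on which two raters assigned the same rubric score."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # p_e: agreement expected by chance, from each rater's marginal score rates
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores (0-4) assigned by two raters to ten responses
scores_a = [4, 3, 3, 2, 4, 0, 1, 3, 2, 4]
scores_b = [4, 3, 2, 2, 4, 0, 1, 3, 2, 4]
print(f"percent agreement = {percent_agreement(scores_a, scores_b):.0%}")  # 90%
print(f"Cohen's kappa     = {cohens_kappa(scores_a, scores_b):.2f}")       # 0.87
```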

Keywords

Secondary mathematics · Curriculum comparison studies · Assessment · Measurement

Copyright information

© Mathematics Education Research Group of Australasia, Inc. 2011

Authors and Affiliations

  • Óscar Chávez (1)
  • Ira Papick (2, 4)
  • Daniel J. Ross (3)
  • Douglas A. Grouws (1)

  1. University of Missouri, Columbia, USA
  2. Rocheport, USA
  3. 206 Sutton Science Center, Maryville College, Maryville, USA
  4. University of Nebraska, Lincoln, USA
