Designing Reading Comprehension Assessments for Reading Interventions: How a Theoretically Motivated Assessment Can Serve as an Outcome Measure
When designing a reading intervention, researchers and educators face a number of challenges related to the focus, intensity, and duration of the intervention. In this paper, we argue that there is another fundamental challenge: the nature of the reading outcome measures used to evaluate the intervention. Many interventions fail to demonstrate significant improvements on standardized measures of reading comprehension. Although a number of factors may explain this pattern, an important one to consider is misalignment between the nature of the outcome assessment and the targets of the intervention. In this study, we present data on three theoretically driven summative reading assessments that were developed in consultation with a research and evaluation team conducting an intervention study. The reading intervention, Reading Apprenticeship, involved training teachers to use disciplinary literacy strategies in three domains: literature, history, and science. Factor analyses and other psychometric analyses of data from over 12,000 high school students revealed that the assessments had adequate reliability, moderate correlations with state reading test scores and measures of background knowledge, a large general reading factor, and some preliminary evidence for separate, smaller factors specific to each form. In this paper, we describe the empirical work that motivated the assessments, the aims of the intervention, and the process used to develop the new assessments. Implications for intervention and assessment are discussed.
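To make the factor structure described above concrete, the following is a minimal Python sketch on simulated data: items on three hypothetical forms (literature, history, science) load on one general reading factor plus a smaller form-specific factor each, and the eigenvalues of the inter-item correlation matrix then show one dominant component with a few smaller secondary ones. The item counts, loadings, and noise levels are illustrative assumptions, not the study's actual parameters, and the eigenvalue check and Cronbach's alpha shown here are simple heuristics, not the psychometric analyses the study itself used.

```python
import numpy as np

rng = np.random.default_rng(7)

n_students = 12000        # sample size comparable in scale to the study's (assumption)
items_per_form = 20       # hypothetical item count per form
forms = ["literature", "history", "science"]

# Latent abilities: one general reading factor plus one smaller
# form-specific factor per disciplinary form (a bifactor-like structure).
general = rng.normal(size=(n_students, 1))
specific = {f: rng.normal(size=(n_students, 1)) for f in forms}

# Hypothetical loadings: the general factor dominates, specifics are modest.
g_loading, s_loading, noise_sd = 0.7, 0.3, 0.6

scores = {}
for f in forms:
    noise = rng.normal(scale=noise_sd, size=(n_students, items_per_form))
    scores[f] = g_loading * general + s_loading * specific[f] + noise

X = np.hstack([scores[f] for f in forms])

# Eigenvalues of the inter-item correlation matrix: a single large
# eigenvalue indicates a dominant general factor, while the smaller
# secondary eigenvalues reflect the form-specific factors.
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print("Top 5 eigenvalues:", np.round(eigvals[:5], 2))

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability of a set of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

for f in forms:
    print(f"alpha ({f}): {cronbach_alpha(scores[f]):.2f}")
```

Under these assumed loadings, the first eigenvalue is far larger than the next few, mirroring the pattern of a large general reading factor with smaller, form-specific factors; per-form alphas come out in the conventionally adequate range.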
Keywords: Reading comprehension · Assessment · Outcome measures · Intervention · Disciplinary literacy
The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through grant R305F100005 to Educational Testing Service as part of the Reading for Understanding Research Initiative, in partnership with WestEd, IMPAQ International, and Empirical Education, Inc. The opinions expressed are those of the authors and do not represent the views of the Institute, the U.S. Department of Education, or Educational Testing Service. We would also like to thank Cynthia Greenleaf and Ruth Schoenbach of WestEd, and Cheri Fancsali and the team at IMPAQ, for their partnership and support with school sample recruitment in this study; our Cognitively Based Assessment of, for, and as Learning (CBAL™) Initiative partners; the NAEP team for providing access to and use of released items; Kelly Bruce for technical support; Jennifer Lentini and Kim Fryer for editorial assistance; and Paul Deane, Jim Carlson, Shelby Haberman, Matthias von Davier, and the anonymous reviewers for their thoughtful reviews and helpful comments.