
Investigating the Diagnostic Consistency and Incremental Validity Evidence of Curriculum-based Measurements of Oral Reading Rate and Comprehension


Abstract

Rate and comprehension are two components of broad reading ability, which also encompasses phonemic awareness, decoding, and vocabulary. The purpose of this study was to examine the unique contribution of three curriculum-based measurement (CBM) comprehension assessments relative to a CBM oral reading rate assessment through diagnostic consistency and incremental validity analyses. An extant sample of fall screening data from a national assessment system was used for this investigation. The CBM-comprehension assessments measured oral recall, synthesis of main ideas, and free-response question answering. Scores from the CBM-comprehension measures showed weak criterion-related validity with a measure of broad reading. In addition, diagnostic consistency analyses revealed poor overlap between CBM-comprehension and CBM-oral reading rate score classifications based on at-risk benchmarks. Incremental validity evidence replicated previous findings, demonstrating that the CBM-comprehension measures explain unique variation in broad reading scores even after controlling for rate and accuracy. Implications for the use of CBM-R and CBM-comprehension measures in screening are addressed.
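To make the two analyses named above concrete, the following is a minimal sketch (not the authors' code) of an incremental validity check and a diagnostic consistency check. It uses synthetic data; the column names, simulated relationships, and at-risk cut scores are illustrative assumptions, not the study's measures or published benchmarks.

```python
# Illustrative sketch of incremental validity (hierarchical regression, R^2 change)
# and diagnostic consistency (agreement of at-risk classifications).
# All data are synthetic; cut scores are arbitrary placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 500
rate = rng.normal(100, 30, n)                 # words read correctly per minute
accuracy = rng.normal(95, 4, n)               # percent accuracy
comprehension = 0.4 * rate + rng.normal(0, 25, n)
broad = 0.5 * rate + 0.2 * accuracy + 0.3 * comprehension + rng.normal(0, 20, n)
df = pd.DataFrame({"rate": rate, "accuracy": accuracy,
                   "comprehension": comprehension, "broad": broad})

# Incremental validity: does comprehension add variance beyond rate and accuracy?
base = sm.OLS(df["broad"], sm.add_constant(df[["rate", "accuracy"]])).fit()
full = sm.OLS(df["broad"], sm.add_constant(df[["rate", "accuracy", "comprehension"]])).fit()
print(f"R^2 change when adding comprehension: {full.rsquared - base.rsquared:.3f}")

# Diagnostic consistency: do the two measures flag the same students as at risk?
at_risk_rate = (df["rate"] < 80).astype(int)          # placeholder benchmark
at_risk_comp = (df["comprehension"] < 30).astype(int) # placeholder benchmark
print(f"Percent agreement: {(at_risk_rate == at_risk_comp).mean():.2%}")
print(f"Cohen's kappa:     {cohen_kappa_score(at_risk_rate, at_risk_comp):.2f}")
```

In this framing, a nontrivial R^2 change suggests the comprehension measure carries information beyond rate and accuracy, while low classification agreement indicates the two screeners would flag different students as at risk.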

Keywords

Reading comprehension; CBM; Screening assessment; Assessment to intervention

Notes

Funding

Preparation of this manuscript was supported in part by grants from the Office of Special Education Programs, US Department of Education (H327S150004; R305A120086) and the National Institute of Mental Health, US Department of Health and Human Services (5T32MH010026).

Compliance with Ethical Standards

Ethical Approval

All procedures performed in this study involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed Consent

Informed consent was obtained from all individual participants included in the study.

Conflict of Interest

Calvary Diggs has no conflict of interest. Dr. Theodore Christ has received royalties from FastBridge Learning.


Copyright information

© California Association of School Psychologists 2018

Authors and Affiliations

  1. University of Minnesota, Minneapolis, USA
