
Different Approaches to Equating Oral Reading Fluency Passages

The Fluency Construct

Abstract

Using curriculum-based measures (CBM) to identify and monitor students’ oral reading fluency (ORF) is challenging, with student performance subject to numerous sources of variability. One source of variability that is beyond teachers’ or students’ control stems from differences in text difficulty across CBM probes at any given grade level. These differences are referred to collectively as form effects on students’ ORF. This chapter examines the research on form effects and the solutions that have been proposed in the research literature for reducing or removing form effects from CBM assessments, referred to collectively as equating methods. To illustrate the differences across methods, the chapter examines four equating methods using data from a sample of 1867 students in grades 6–8 who were evaluated on subtests of the Texas Middle School Fluency Assessment (TMSFA). These methods focus either on the equating of raw scores or on the estimation of true fluency scores through the modeling of test forms. The raw score methods include linear and equipercentile equating, while the true score methods include linear and nonlinear equating using latent variables (LVs). The results are discussed in terms of their implications for developers and users of CBM assessments.
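As a rough illustration of the two raw-score methods the abstract names, the sketch below equates scores across two ORF passages in Python. This is not the chapter's actual procedure or data: the words-correct-per-minute (WCPM) scores are simulated, and the function names are hypothetical. Linear equating matches means and standard deviations across forms; equipercentile equating matches percentile ranks.

```python
import random
import statistics

random.seed(1)

# Simulated WCPM scores on two ORF passages: form Y is systematically
# harder than form X, producing a "form effect" of roughly 12 WCPM.
form_x = [random.gauss(140, 25) for _ in range(500)]
form_y = [s - 12 + random.gauss(0, 5) for s in form_x]

def linear_equate(x, scores_x, scores_y):
    """Map a form-X score to the form-Y scale by matching means and SDs."""
    mx, sx = statistics.mean(scores_x), statistics.stdev(scores_x)
    my, sy = statistics.mean(scores_y), statistics.stdev(scores_y)
    return my + (sy / sx) * (x - mx)

def equipercentile_equate(x, scores_x, scores_y):
    """Map a form-X score to the form-Y score with the same percentile rank."""
    sx, sy = sorted(scores_x), sorted(scores_y)
    rank = sum(s <= x for s in sx) / len(sx)   # percentile rank on form X
    idx = min(int(rank * len(sy)), len(sy) - 1)
    return sy[idx]                              # same percentile on form Y

raw = 150.0
print(f"linear:         {linear_equate(raw, form_x, form_y):.1f}")
print(f"equipercentile: {equipercentile_equate(raw, form_x, form_y):.1f}")
```

Both methods pull the form-X score down toward the harder form-Y scale; they diverge most when the two score distributions differ in shape, which is where equipercentile equating earns its extra complexity.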


Notes

  1. The appropriate model for the data depends on tests of invariance, which are typically carried out with a χ2 test of model fit: the −2 log likelihood of a more constrained nested model is compared with that of a less constrained model, and the difference is evaluated against a χ2 distribution with degrees of freedom equal to the difference in the number of estimated parameters.



Correspondence to Kristi L. Santi, PhD.


Copyright information

© 2016 Springer Science+Business Media, LLC


Cite this chapter

Santi, K., Barr, C., Khalaf, S., Francis, D. (2016). Different Approaches to Equating Oral Reading Fluency Passages. In: Cummings, K., Petscher, Y. (eds) The Fluency Construct. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-2803-3_9
