
Foundations of Fluency-Based Assessments in Behavioral and Psychometric Paradigms

The Fluency Construct

Abstract

The chapter reviews past and near-term applications and innovations in the assessment of rate-based measures such as fluency. Historical and future developments are discussed within the context of the idiographic behavioral and nomothetic psychometric paradigms of assessment. These paradigms are described and contrasted with reference to classical test theory (CTT), generalizability theory (GT), and item response theory (IRT). The interpretation and use argument (IUA) is used to frame the contemporary view of unified validity. These theoretical models are combined with an applied perspective to contextualize and encourage future developments in the measurement of fluency.



Author information

Correspondence to Theodore J. Christ.


Copyright information

© 2016 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Christ, T., Van Norman, E., Nelson, P. (2016). Foundations of Fluency-Based Assessments in Behavioral and Psychometric Paradigms. In: Cummings, K., Petscher, Y. (eds) The Fluency Construct. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-2803-3_6
