Psychometric Considerations when Evaluating Response to Intervention

Chapter in Handbook of Response to Intervention

Abstract

As a part of eligibility determination, response-to-intervention (RTI) models use both the level and the rate of skill acquisition to evaluate student response to core instruction and supplemental interventions (Case, Speece, and Molloy, 2003; Fuchs, 2003; Fuchs and Fuchs, 1998; Fuchs, Mock, Morgan, and Young, 2003). The level of student performance in targeted domains is compared with benchmark expectations and local peer performance (i.e., local norms). A substantial discrepancy in level is often an indication that an instructional change or intervention is necessary. The rate of student performance is likewise compared with standard expectations and local peer performance. Persistent discrepancies in both level and rate indicate that more intensive services are necessary, which might include those associated with special education (NASDSE, 2005).
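
The dual-discrepancy logic summarized above can be made concrete with a short sketch. The code below is illustrative only and is not drawn from the chapter: it assumes weekly curriculum-based measurement (CBM) scores (e.g., words read correctly per minute), and the benchmark level, peer growth rate, and 0.5 cut ratios are hypothetical placeholders.

```python
from statistics import median

def dual_discrepancy(scores, weeks, benchmark_level, peer_slope,
                     level_ratio=0.5, slope_ratio=0.5):
    """Flag a dual discrepancy in level and rate of skill acquisition.

    scores           weekly CBM scores (e.g., words read correctly per minute)
    weeks            week numbers on which the scores were collected
    benchmark_level  expected level from benchmarks or local norms
    peer_slope       expected weekly growth rate of local peers
    The 0.5 cut ratios are hypothetical, for illustration only.
    """
    # Level: the median of the observed scores, compared with the benchmark.
    level = median(scores)

    # Rate: ordinary least-squares slope of scores regressed on weeks.
    n = len(scores)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    slope = (
        sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
        / sum((w - mean_w) ** 2 for w in weeks)
    )

    # Dual discrepancy: both level and growth fall well below expectations.
    return (level < level_ratio * benchmark_level
            and slope < slope_ratio * peer_slope)

# Eight weeks of hypothetical oral reading fluency scores.
flag = dual_discrepancy(
    scores=[22, 24, 23, 25, 26, 25, 27, 28],
    weeks=list(range(1, 9)),
    benchmark_level=60.0,  # hypothetical grade-level benchmark (words/min)
    peer_slope=2.0,        # hypothetical peer growth rate (words/week)
)
print(flag)  # True: both level and rate are discrepant
```

In practice, short progress-monitoring windows yield slope estimates with substantial standard error (see Christ, 2006, in the references below), so a flag of this kind is a prompt for further assessment rather than a determination by itself.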

References

  • AERA, APA, & NCME. (1999). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.

  • Ardoin, S. P., & Christ, T. J. (in press). Evaluating curriculum-based measurement slope estimates using data from tri-annual universal screenings. School Psychology Review.

  • Ardoin, S. P., Suldo, S. M., Witt, J. C., Aldrich, S., & McDonald, E. (2005). Accuracy of readability estimates' predictions of CBM performance. School Psychology Quarterly, 20, 1–22.

  • Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97.

  • Brennan, R. L. (2003). Generalizability theory. Journal of Educational Measurement, 40, 105–107.

  • Case, L. P., Speece, D. L., & Molloy, D. E. (2003). The validity of a response-to-instruction paradigm to identify reading disabilities: A longitudinal analysis of individual differences and contextual factors. School Psychology Review, 32, 557–582.

  • Chafouleas, S. M., Christ, T. J., Riley-Tillman, T. C., Briesch, A. M., & Chanese, J. A. M. (in press). Generalizability and dependability of daily behavior report cards to measure social behavior of preschoolers. School Psychology Review.

  • Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimates of standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128–133.

  • Christ, T. J., & Schanding, T. (2007). Practice effects on curriculum-based measures of computational skills: Influences on skill versus performance analysis. School Psychology Review, 36, 147–158.

  • Christ, T. J., & Silberglitt, B. (in press). Curriculum-based measurement of oral reading fluency: The standard error of measurement. School Psychology Review.

  • Christ, T. J., & Vining, O. (2006). Curriculum-based measurement procedures to develop multiple-skill mathematics computation probes: Evaluation of random and stratified stimulus-set arrangements. School Psychology Review, 35, 387–400.

  • Cone, J. D. (1981). Psychometric considerations. In M. Hersen & J. Bellack (Eds.), Behavioral Assessment: A Practical Handbook (2nd ed., pp. 38–68). New York: Pergamon Press.

  • Cone, J. D. (1986). Idiographic, nomothetic, and related perspectives in behavioral assessment. In R. O. Nelson & S. C. Hayes (Eds.), Conceptual Foundations of Behavioral Assessment (pp. 111–128). New York: Guilford Press.

  • Cone, J. D. (1987). Psychometric considerations and multiple models of behavioral assessment. In M. Hersen & J. Bellack (Eds.), Behavioral Assessment: A Practical Handbook (2nd ed., pp. 42–66). New York: Pergamon Press.

  • Crocker, L., & Algina, J. (1986). Introduction to Classical and Modern Test Theory. Orlando, FL: Harcourt Brace.

  • Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. (1972). The Dependability of Behavioral Measurements: Theory of Generalizability for Scores and Profiles. New York: Wiley.

  • Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507–524.

  • Fuchs, L. S. (1995). Incorporating curriculum-based measurement into the eligibility decision-making process: A focus on treatment validity and student growth. Paper presented at the National Academy of Sciences Workshop on Alternatives to IQ Testing, Washington, DC.

  • Fuchs, L. S. (2003). Assessing intervention responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice, 18, 172–186.

  • Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488–500.

  • Fuchs, L. S., & Deno, S. L. (1992). Effects of curriculum within curriculum-based measurement. Exceptional Children, 58, 232–242.

  • Fuchs, L. S., & Deno, S. L. (1994). Must instructionally useful performance assessment be based in the curriculum? Exceptional Children, 61, 15–24.

  • Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research & Practice, 13, 204–219.

  • Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48.

  • Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.

  • Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157–171.

  • Hambleton, R. K., & Jones, R. W. (1993). Comparison of classical test theory and item response theory and their applications to test development. Instructional Topics in Educational Measurement (ITEMS), National Council on Measurement in Education. Retrieved September 2004, from http://ncme.org/pubs/items.cfm.

  • Heartland AEA 11. (2000). Program Manual for Special Education. Johnston, IA: Heartland AEA.

  • Hintze, J. M. (2005). Psychometrics of direct observation. School Psychology Review, 34, 507–519.

  • Hintze, J. M., & Christ, T. J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33, 204–217.

  • Hintze, J. M., Daly, E. J., III, & Shapiro, E. S. (1998). An investigation of the effects of passage difficulty level on outcomes of oral reading fluency progress monitoring. School Psychology Review, 27, 433.

  • Hintze, J. M., Owen, S. V., Shapiro, E. S., & Daly, E. J. (2000). Generalizability of oral reading fluency measures: Application of G theory to curriculum-based measurement. School Psychology Quarterly, 15, 52–68.

  • Hintze, J. M., & Pelle Petitte, H. A. (2001). The generalizability of CBM oral reading fluency measures across general and special education. Journal of Psychoeducational Assessment, 19, 158–170.

  • Hintze, J. M., & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34, 372–386.

  • Howell, K. W., Kurns, S., & Antil, L. (2002). Best practices in curriculum-based evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology IV (pp. 753–772). Bethesda, MD: National Association of School Psychologists.

  • Johnston, J. M., & Pennypacker, H. S. (1993). Strategies and Tactics of Behavioral Research (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-Based Measurement: Assessing Special Children (pp. 18–78). New York: Guilford Press.

  • NASDSE. (2005). Response to Intervention: Policy Considerations and Implementation. Alexandria, VA: Author.

  • Poncy, B. C., Skinner, C. H., & Axtell, P. K. (2005). An investigation of the reliability and standard error of measurement of words read correctly per minute. Journal of Psychoeducational Assessment, 23, 326–338.

  • Sattler, J. M. (2001). Assessment of Children: Cognitive Applications (4th ed.). San Diego, CA: Sattler.

  • Shinn, M. R. (Ed.). (1989). Curriculum-Based Measurement: Assessing Special Children. New York: Guilford Press.

  • Shinn, M. R., Gleason, M. M., & Tindal, G. (1989). Varying the difficulty of testing materials: Implications for curriculum-based measurement. Journal of Special Education, 23, 223–233.

  • Spearman, C. (1904). The proof and measurement of association between two things. American Journal of Psychology, 15, 72–101.

  • Spearman, C. (1907). Demonstration of formulae for true measurement of correlation. American Journal of Psychology, 18, 161–169.

  • Spearman, C. (1913). Correlations of sums and differences. British Journal of Psychology, 5, 417–426.

  • Stage, S. A., & Jacobsen, M. D. (2001). Predicting student success on a state-mandated performance-based assessment using oral reading fluency. School Psychology Review, 30, 407–419.

  • Ysseldyke, J. E., Algozzine, B., & Thurlow, M. L. (2000). Critical Issues in Special Education: Issues in Assessment (3rd ed.). Boston: Houghton Mifflin Company.

Copyright information

© 2007 Springer

About this chapter

Cite this chapter

Christ, T.J., Hintze, J.M. (2007). Psychometric Considerations when Evaluating Response to Intervention. In: Jimerson, S.R., Burns, M.K., VanDerHeyden, A.M. (eds) Handbook of Response to Intervention. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-49053-3_7
