Abstract
As part of eligibility determination, response-to-intervention (RTI) models use both the level and the rate of skill acquisition to evaluate student response to core instruction and supplemental interventions (Case, Speece, & Molloy, 2003; Fuchs, 2003; Fuchs & Fuchs, 1998; Fuchs, Mock, Morgan, & Young, 2003). The level of student performance in targeted domains is compared with benchmark expectations and local peer performances (i.e., local norms). A substantial discrepancy in level is often an indication that an instructional change or intervention is necessary. The rate of student performance is likewise compared with standard expectations and local peer performances. Persistent and ongoing discrepancies in both level and rate indicate that more intensive services are necessary, which might include those associated with special education (NASDE, 2005).
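The "dual discrepancy" decision rule described above can be sketched in a few lines of code. This is a hypothetical illustration only: the threshold, the comparison against local norms, and the example values are assumptions chosen for demonstration, not criteria taken from the chapter.

```python
def dual_discrepancy(level, slope, norm_level, norm_slope, criterion=0.5):
    """Return True when BOTH level and growth rate are discrepant.

    A score counts as 'discrepant' here when it falls below `criterion`
    (e.g., 50%) of the corresponding local-norm value -- an arbitrary
    cut point used only to illustrate the dual-discrepancy logic.
    """
    level_discrepant = level < criterion * norm_level
    rate_discrepant = slope < criterion * norm_slope
    return level_discrepant and rate_discrepant


# Illustrative values: peers read 100 words correct per minute (wcpm)
# and gain 1.5 wcpm per week of instruction.
print(dual_discrepancy(level=40, slope=0.5, norm_level=100, norm_slope=1.5))  # True
print(dual_discrepancy(level=80, slope=0.5, norm_level=100, norm_slope=1.5))  # False
```

Requiring both discrepancies, rather than either alone, reflects the chapter's point that a low level with typical growth, or typical level with slow growth, calls for different decisions than a persistent deficit in both.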
References
AERA, APA, & NCME. (1999). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
Ardoin, S. P., & Christ, T. J. (in press). Evaluating curriculum-based measurement slope estimates using data from tri-annual universal screenings. School Psychology Review.
Ardoin, S. P., Suldo, S. M., Witt, J. C., Aldrich, S., & McDonald, E. (2005). Accuracy of readability estimates predictions of CBM performance. School Psychology Quarterly, 20, 1–22.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97.
Brennan, R. L. (2003). Generalizability theory. Journal of Educational Measurement, 40, 105–107.
Case, L. P., Speece, D. L., & Molloy, D. E. (2003). The validity of a response-to-instruction paradigm to identify reading disabilities: a longitudinal analysis of individual differences and contextual factors. School Psychology Review, 32, 557–582.
Chafouleas, S. M., Christ, T. J., Riley-Tillman, T. C., Briesch, A. M., & Chanese, J. A. M. (in press). Generalizability and dependability of daily behavior report cards to measure social behavior of preschoolers. School Psychology Review.
Christ, T. J. (2006). Short term estimates of growth using curriculum-based measurement of oral reading fluency: Estimates of standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128–133.
Christ, T. J. & Schanding, T. (2007). Practice effects on curriculum-based measures of computational skills: Influences on skill versus performance analysis. School Psychology Review, 147–158.
Christ, T. J. & Silberglitt, B. (in press). Curriculum-based measurement of oral reading fluency: the standard error of measurement. School Psychology Review.
Christ, T. J. & Vining, O. (2006). Curriculum-based measurement procedures to develop multiple-skill mathematics computation probes: Evaluation of random and stratified stimulus-set arrangements. School Psychology Review, 35, 387–400.
Cone, J. D. (1981). Psychometric considerations. In M. Hersen & J. Bellack (Eds.), Behavioral Assessment: A Practical Handbook (2nd ed., pp. 38–68). New York: Pergamon Press.
Cone, J. D. (1986). Ideographic, nomothetic, and related perspectives in behavioral assessment. In R. O. Nelson & S. C. Hayes (Eds.), Conceptual Foundations of Behavioral Assessment (pp. 111–128). New York: Guilford Press.
Cone, J. D. (1987). Psychometric considerations and multiple models of behavioral assessment. In M. Hersen & J. Bellack (Eds.), Behavioral Assessment: A Practical Handbook (2nd ed., pp. 42–66). New York: Pergamon Press.
Crocker, L. & Algina, J. (1986). Introduction to Classical and Modern Test Theory. Orlando, FL: Harcourt Brace.
Cronbach, L. J., Nanda, H., & Rajaratnam, N. (1972). The Dependability of Behavioral Measures. New York: Wiley.
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507–524.
Fuchs, L. S. (1995). Incorporating curriculum-based measurement into the eligibility decision-making process: a focus on treatment validity and student growth. Paper presented at the National Academy of Sciences Workshop on Alternatives to IQ Testing, Washington, DC.
Fuchs, L. S. (2003). Assessing intervention responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice, 18, 172–186.
Fuchs, L. S. & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488–500.
Fuchs, L. S. & Deno, S. L. (1992). Effects of curriculum within curriculum-based measurement. Exceptional Children, 58, 232–242.
Fuchs, L. S. & Deno, S. L. (1994). Must instructionally useful performance assessment be based in the curriculum? Exceptional Children, 61, 15–24.
Fuchs, L. S. & Fuchs, D. (1998). Treatment validity: a unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research & Practice, 13, 204–219.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: how much growth can we expect. School Psychology Review, 22, 27–48.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: a theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.
Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157–171.
Hambleton, R. K. & Jones, R. W. (1993). Comparison of classical test theory and item response theory and their applications to test development. National Council on Measurement in Education: Instructional Topics in Educational Measurement (ITEMS). Retrieved September 2004, from http://ncme.org/pubs/items.cfm.
Heartland AEA 11. (2000). Program Manual for Special Education. Johnston, IA: Heartland AEA.
Hintze, J. M. (2006). Psychometrics of direct observation. School Psychology Review, 34, 507–519.
Hintze, J. M. & Christ, T. J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33, 204–217.
Hintze, J. M., Daly III, E. J., & Shapiro, E. S. (1998). An investigation of the effects of passage difficulty level on outcomes of oral reading fluency progress monitoring. School Psychology Review, 27, 433.
Hintze, J. M., Owen, S. V., Shapiro, E. S., & Daly, E. J. (2000). Generalizability of oral reading fluency measures: application of G theory to curriculum-based measurement. School Psychology Quarterly, 15, 52–68.
Hintze, J. M. & Pelle Petitte, H. A. (2001). The generalizability of CBM oral reading fluency measures across general and special education. Journal of Psychoeducational Assessment, 19, 158–170.
Hintze, J. M. & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34, 372–386.
Howell, K. W., Kurns, S., & Antil, L. (2002). Best practices in curriculum-based evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology. (pp. 753–772). Bethesda, MD: National Association of School Psychologists.
Johnston, J. M. & Pennypacker, H. S. (1993). Strategies and Tactics of Behavioral Research (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-Based Measurement: Assessing Special Children (pp. 18–78). New York: Guilford Press.
NASDE. (2005). Response to Intervention: Policy Considerations and Implementation. Alexandria, VA: Author.
Poncy, B. C., Skinner, C. H., & Axtell, P. K. (2005). An investigation of the reliability and standard error of measurement of words read correctly per minute. Journal of Psychoeducational Assessment, 23, 326–338.
Sattler, J. M. (2001). Assessment of Children: Cognitive Applications (4th ed.). San Diego, CA: Sattler.
Shinn, M. R. (Ed.). (1989). Curriculum-Based Measurement: Assessing Special Children. New York: Guilford Press.
Shinn, M. R., Gleason, M. M., & Tindal, G. (1989). Varying the difficulty of testing materials: implications for curriculum-based measurement. Journal of Special Education, 23, 223–233.
Spearman, C. (1904). The proof and measurement of association between two things. American Journal of Psychology, 15, 72–101.
Spearman, C. (1907). Demonstration of formulae for true measurement of correlation. American Journal of Psychology, 18, 161–169.
Spearman, C. (1913). Correlations of sums and differences. British Journal of Psychology, 5, 417–426.
Stage, S. A. & Jacobsen, M. D. (2001). Predicting student success on a state-mandated performance-based assessment using oral reading fluency. School Psychology Review, 30, 407–419.
Ysseldyke, J. E., Algozzine, B., & Thurlow, M. L. (2000). Critical Issues in Special Education: Issues in assessment (3rd ed.). Boston: Houghton Mifflin Company.
© 2007 Springer
Cite this chapter
Christ, T.J., Hintze, J.M. (2007). Psychometric Considerations when Evaluating Response to Intervention. In: Jimerson, S.R., Burns, M.K., VanDerHeyden, A.M. (eds) Handbook of Response to Intervention. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-49053-3_7
DOI: https://doi.org/10.1007/978-0-387-49053-3_7
Publisher Name: Springer, Boston, MA
Print ISBN: 978-0-387-49052-6
Online ISBN: 978-0-387-49053-3
eBook Packages: Behavioral Science; Behavioral Science and Psychology (R0)