
Assessing Students’ Skills Using a Nontraditional Approach

  • Christine E. Neddenriep
  • Brian C. Poncy
  • Christopher H. Skinner
Chapter

Abstract

Currently, within the field of school psychology, a shift in service delivery models is occurring. Whereas school psychology had been dominated by a refer-test-report (and place) delivery model (Reschly & Ysseldyke, 2002), recent legislation has facilitated a change in service delivery to include a response to intervention (RtI) model (Brown-Chidsey & Steege, 2005). Practicing within this service delivery model both allows and requires school psychologists to expand their range of skills and the services they offer (Oakland & Cunningham, 1999), specifically by increasing their use of nontraditional assessment measures. This need to increase school psychologists’ competencies in nontraditional assessment within a problem-solving, outcome-driven model provides the context for this chapter.

Keywords

Reading fluency; Progress monitoring; Oral reading fluency; Nonsense word fluency; Phoneme segmentation fluency

References

  1. Bergan, J. R., & Kratochwill, T. R. (1990). Behavioral consultation and therapy. New York: Plenum.
  2. Brown-Chidsey, R., & Steege, M. W. (2005). Response to intervention. New York: Guilford.
  3. Christ, T. J. (2006). Short term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128–133.
  4. Christ, T. J., & Silberglitt, B. (2007). Estimates of the standard error of measurement for curriculum-based measures of oral reading fluency. School Psychology Review, 36, 130–146.
  5. Colón, E. P., & Kranzler, J. H. (2006). Effect of instructions on curriculum-based measurement of reading. Journal of Psychoeducational Assessment, 24, 318–328.
  6. Daly, E. J., Chafouleas, S., & Skinner, C. H. (2005). Interventions for reading problems. New York: Guilford.
  7. Deno, S. L. (1989). Curriculum-based measurement and special education services: A fundamental and direct relationship. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 1–17). New York: Guilford.
  8. Deno, S. L. (2002). Problem solving as “best practice.” In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 37–56). Bethesda, MD: National Association of School Psychologists.
  9. Deno, S. L. (2005). Problem solving assessment. In R. Brown-Chidsey (Ed.), Assessment for intervention (pp. 10–40). New York: Guilford.
  10. Deno, S. L., Marston, D., & Mirkin, P. (1982). Valid measurement procedures for continuous evaluation of written expression. Exceptional Children, 48, 368–371.
  11. Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA: Council for Exceptional Children.
  12. Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36–45.
  13. Derr-Minneci, T. F., & Shapiro, E. S. (1992). Validating curriculum-based measurement in reading from a behavioral perspective. School Psychology Quarterly, 7, 2–16.
  14. Fagan, T. K., & Wise, P. S. (2000). School psychology: Past, present, and future (2nd ed.). Bethesda, MD: National Association of School Psychologists.
  15. Forness, S. R., Kavale, K. A., Blum, I. M., & Lloyd, J. W. (1997). Mega-analysis of meta-analyses: What works in special education and related services. Teaching Exceptional Children, 29(6), 4–9.
  16. Fuchs, L. S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 153–181). New York: Guilford.
  17. Fuchs, L. S., & Deno, S. L. (1992). Effects of curriculum within curriculum-based measurement. Exceptional Children, 58, 232–243.
  18. Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199–208.
  19. Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45–58.
  20. Fuchs, L. S., Fuchs, D., & Hamlett, C. L. (1989). Effects of instrumental use of curriculum-based measurement to enhance instructional programs. Remedial and Special Education, 10(2), 43–52.
  21. Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48.
  22. Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9(2), 20–28.
  23. Goldstein, H., Arkell, C., Ashcroft, S. C., Hurley, O. L., & Lilly, S. M. (1975). Schools. In N. Hobbs (Ed.), Issues in the classification of children. San Francisco: Jossey-Bass.
  24. Good, R. H., III, & Kaminski, R. A. (Eds.). (2002). Dynamic Indicators of Basic Early Literacy Skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement.
  25. Hintze, J. M., & Christ, T. J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33, 204–217.
  26. Hintze, J. M., Owen, S. V., Shapiro, E. S., & Daly, E. J. (2000). Generalizability of oral reading fluency measures: Application of G theory to curriculum-based measurement. School Psychology Quarterly, 15, 52–68.
  27. Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and decision making (3rd ed.). Belmont, CA: Wadsworth.
  28. Kampwirth, T. J. (2006). Collaborative consultation in the schools: Effective practices for students with learning and behavior problems (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
  29. Kavale, K. (1990). The effectiveness of special education. In T. B. Gutkin & C. R. Reynolds (Eds.), The handbook of school psychology (2nd ed., pp. 868–898). New York: John Wiley.
  30. Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18–78). New York: Guilford.
  31. Neddenriep, C. E., Hale, A. D., Skinner, C. H., Hawkins, R. O., & Winn, B. (2007). A preliminary investigation of the concurrent validity of reading comprehension rate: A direct, dynamic measure of reading comprehension. Psychology in the Schools, 44, 373–388.
  32. Oakland, T., & Cunningham, J. (1999). The futures of school psychology: Conceptual models for its development and examples of their applications. In C. R. Reynolds & T. B. Gutkin (Eds.), The handbook of school psychology (3rd ed., pp. 34–53). New York: Wiley.
  33. Poncy, B. C., Skinner, C. H., & Axtell, P. K. (2005). An investigation of the reliability and standard error of measurement of words read correctly per minute. Journal of Psychoeducational Assessment, 23, 326–338.
  34. Powell-Smith, K. A., & Bradley-Klug, K. L. (2001). Another look at the “C” in CBM: Does it really matter if curriculum-based measurement reading probes are “curriculum-based”? Psychology in the Schools, 38, 299–312.
  35. Reschly, D. J., & Ysseldyke, J. E. (2002). Paradigm shift: The past is not the future. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 3–36). Bethesda, MD: National Association of School Psychologists.
  36. Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: Guilford.
  37. Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: Guilford.
  38. Shinn, M. R. (2002). Best practices in using curriculum-based measurement in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 671–698). Bethesda, MD: National Association of School Psychologists.
  39. Shinn, M. R., & Bamonto, S. (1998). Advanced applications of curriculum-based measurement: “Big ideas” and avoiding confusion. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 1–31). New York: Guilford.
  40. Shinn, M. R., Good, R. H., III, & Parker, C. (1999). Noncategorical special education services with students with severe achievement deficits. In D. J. Reschly, W. D. Tilly III, & J. P. Grimes (Eds.), Special education in transition: Functional assessment and noncategorical programming (pp. 81–105). Longmont, CO: Sopris West.
  41. Skinner, C. H. (1998). Preventing academic skills deficits. In T. S. Watson & F. Gresham (Eds.), Handbook of child behavior therapy: Ecological considerations in assessment, treatment, and evaluation (pp. 61–83). New York: Plenum.
  42. Skinner, C. H., Belfiore, P. B., & Watson, T. S. (1995/2002). Assessing the relative effects of interventions in students with mild disabilities: Assessing instructional time. Journal of Psychoeducational Assessment, 20, 345–356. (Reprinted from Assessment in Rehabilitation and Exceptionality, 2, 207–220, 1995)
  43. Skinner, C. H., Hurst, K. L., Teeple, D. F., & Meadows, S. O. (2002). Increasing on-task behavior during mathematics independent seat-work in students with emotional disorders by interspersing additional brief problems. Psychology in the Schools, 39, 647–659.
  44. Skinner, C. H., Neddenriep, C. E., Bradley-Klug, K. L., & Ziemann, J. M. (2002). Advances in curriculum-based measurement: Alternative rate measures for assessing reading skills in pre- and advanced readers. Behavior Analyst Today, 3, 270–281.
  45. Tilly, W. D. (2002). Best practices in school psychology as a problem-solving enterprise. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 21–36). Bethesda, MD: National Association of School Psychologists.
  46. Witt, J. C., Daly, E. J., III, & Noell, G. H. (2000). Functional assessments: A step-by-step guide to solving academic and behavior problems. Longmont, CO: Sopris West.
  47. Ysseldyke, J. E., Algozzine, B., Shinn, M. R., & McGue, M. (1982). Similarities and differences between underachievers and students labeled learning-disabled. The Journal of Special Education, 16, 73–85.
  48. Ysseldyke, J. E., & Marston, D. (1999). Origins of categorical special education services in schools. In D. J. Reschly, W. D. Tilly III, & J. P. Grimes (Eds.), Special education in transition: Functional assessment and noncategorical programming (pp. 1–18). Longmont, CO: Sopris West.

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Christine E. Neddenriep (1)
  • Brian C. Poncy
  • Christopher H. Skinner

  1. The University of Wisconsin-Whitewater, Whitewater, USA
