Testing Adaptations: Research to Guide Practice

  • Leah Dembitzer
  • Ryan J. Kettler
Chapter

Abstract

Changes in theory, practice, and the research base necessitate a reconceptualization of the traditional dichotomy between testing accommodations and test modifications. Individual need for testing adaptations should be based on functional impairment or competency in access skills, considered in conjunction with the defined construct of a test. Focusing on access skills, target constructs, and whether either was changed is a more informative way of categorizing adaptations than considering whether test content was changed. At one end of the continuum are appropriate adaptations, defined as any changes to standard administration that do not change the construct; at the other end are inappropriate adaptations, defined as any changes to standard administration that do impact the construct. Much of the research on testing adaptations has assumed changes are appropriate only if the differential boost criterion can be met. Given the centrality of clearly defining the construct, other forms of validity evidence may be more appropriate as a first step in evaluating testing adaptations. Practitioners and researchers can use the following three-step process to make evidence-based decisions about adaptations that will lead to reliable scores from which valid inferences can be made: (a) considering the access skills, (b) finding available adaptations, and (c) analyzing the target construct.
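The differential boost criterion mentioned above can be stated quantitatively: an adaptation shows differential boost when it raises scores for students who need it (e.g., students with disabilities) more than for peers who do not. The sketch below is a minimal, hypothetical illustration of that comparison using made-up score lists; the function name and inputs are assumptions for illustration, not a procedure from the chapter.

```python
from statistics import mean

def differential_boost(needs_std, needs_adapted, peers_std, peers_adapted):
    """Difference between the score boost for students who need the
    adaptation and the boost for comparison peers who do not.
    A positive value is consistent with differential boost."""
    boost_needs = mean(needs_adapted) - mean(needs_std)
    boost_peers = mean(peers_adapted) - mean(peers_std)
    return boost_needs - boost_peers

# Made-up example: the adaptation helps the target group by 8 points
# on average but peers by only 1 point, so the differential is 7.
db = differential_boost(
    needs_std=[60, 62, 58],      # target group, standard administration
    needs_adapted=[68, 70, 66],  # target group, adapted administration
    peers_std=[80, 82, 78],      # comparison group, standard
    peers_adapted=[81, 83, 79],  # comparison group, adapted
)
print(db)  # → 7.0
```

In practice this comparison is made with an interaction test in an experimental design rather than raw mean differences, but the quantity being tested is the one computed here.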

Keywords

Accessibility · Assessment · Differential boost · Reliability · Testing accommodations · Testing adaptations · Testing modifications · Validity


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Leah Dembitzer (1)
  • Ryan J. Kettler (2)
  1. Center for Health Education, Medicine, and Dentistry, Lakewood, USA
  2. Rutgers, The State University of New Jersey, Piscataway, USA
