Accessibility Theory: Guiding the Science and Practice of Test Item Design with the Test-Taker in Mind

  • Peter A. Beddow
  • Alexander Kurz
  • Jennifer R. Frey
Chapter

Abstract

Test accessibility is defined as the extent to which a test and its constituent item set permit the test-taker to demonstrate his or her knowledge of the target construct (Beddow, Elliott, & Kettler, 2009). The principles of accessibility theory (Beddow, 2010) suggest that the measurement of achievement involves a multiplicity of interactions between test-taker characteristics and features of the test itself. Beddow argued that achievement test results are valid to the degree that the test event controls these interactions and yields scores that support inferences about the amount of the target construct the test-taker possesses. Test score inferences are typically based on the assumption that the test event was optimally accessible; the validity of an achievement test result therefore depends both on the precision of the test score and on the accuracy of subsequent inferences about the test-taker’s knowledge of the tested content after accounting for the influence of any access barriers. In essence, the accessibility of a test event is proportional to the validity of its results.
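
One way to make the closing proportionality claim concrete is to write it in the style of a classical test theory decomposition. The following one-line formalization is an illustrative sketch added here, not an equation from the chapter; the barrier term E_{access} is our own label:

X = T + E_{\mathrm{access}} + E_{\mathrm{random}}

Here X is the observed test score, T is the test-taker's true standing on the target construct, E_{random} is ordinary (unsystematic) measurement error, and E_{access} is systematic error introduced by access barriers, for example, unnecessary reading demands in a mathematics item. On this sketch, inferences from X to T are defensible only to the extent that E_{access} is near zero; item modifications that remove access barriers shrink E_{access} without altering T, which is the sense in which accessibility conditions validity.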

References

  1. Anderson, L. W. (2002). Curricular alignment: A re-examination. Theory into Practice, 41(4), 255–260.
  2. Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4, 829–839.
  3. Beddow, P. A. (2010). Beyond universal design: Accessibility theory to advance testing for all students. In M. Russell (Ed.), Assessing students in the margins: Challenges, strategies, and techniques (1st ed., pp. 383–407). New York: Information Age Publishing.
  4. Beddow, P. A., Elliott, S. N., & Kettler, R. J. (2009). TAMI accessibility rating matrix (ARM). Nashville, TN: Vanderbilt University.
  5. Beddow, P. A., Elliott, S. N., & Kettler, R. J. (2010). Test accessibility and modification inventory (TAMI) technical supplement. Nashville, TN: Vanderbilt University.
  6. Beddow, P. A., Kettler, R. J., & Elliott, S. N. (2008). Test accessibility and modification inventory (TAMI). Nashville, TN: Vanderbilt University.
  7. Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8, 293–332.
  8. Chandler, P., & Sweller, J. (1996). Cognitive load while learning to use a computer program. Applied Cognitive Psychology, 10, 151–170.
  9. Clark, R. C., Nguyen, F., & Sweller, J. (2006). Efficiency in learning: Evidence-based guidelines to manage cognitive load. San Francisco: Jossey-Bass.
  10. Elliott, S. N., Kettler, R. J., Beddow, P. A., Kurz, A., Compton, E., McGrath, D., et al. (2010). Effects of using modified items to test students with persistent academic difficulties. Exceptional Children, 76, 475–495.
  11. Elliott, S. N., Kurz, A., Beddow, P., & Frey, J. (2009, February). Cognitive load theory: Instruction-based research with applications for designing tests. Paper presented at the National Association of School Psychologists annual convention, Boston, MA.
  12. Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15, 309–333.
  13. Johnstone, C. J., Bottsford-Miller, N. A., & Thompson, S. J. (2006). Using the think aloud method (cognitive labs) to evaluate test design for students with disabilities and English language learners (Technical Report 44). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
  14. Kettler, R. J., Elliott, S. N., & Beddow, P. A. (2009). Modifying achievement test items: A theory-guided and data-based approach for better measurement of what students with disabilities know. Peabody Journal of Education, 84, 529–551.
  15. Kettler, R. J., Rodriguez, M. R., Bolt, D. M., Elliott, S. N., Beddow, P. A., & Kurz, A. (in press). Modified multiple-choice items for alternate assessments: Reliability, difficulty, and differential boost. Applied Measurement in Education.
  16. Ketterlin-Geller, L. R. (2008). Testing students with special needs: A model for understanding the interaction between assessment and student characteristics in a universally designed environment. Educational Measurement: Issues and Practice, 27, 3–16.
  17. Kurz, A., & Elliott, S. N. (2011). Overcoming barriers to access for students with disabilities: Testing accommodations and beyond. In M. Russell (Ed.), Assessing students in the margins: Challenges, strategies, and techniques. Charlotte, NC: Information Age Publishing.
  18. Mace, R. L. (1991). Definitions: Accessible, adaptable, and universal design (Fact sheet). Raleigh, NC: Center for Universal Design, North Carolina State University.
  19. Mace, R. (1997). The principles of universal design (2nd ed.). Raleigh, NC: Center for Universal Design, College of Design. Retrieved May 20, 2010, from http://www.design.ncsu.edu/cud/pubs_p/docs/poster.pdf
  20. Mace, R. L., Hardie, G. J., & Place, J. P. (1996). Accessible environments: Toward universal design. Retrieved May 20, 2010, from http://www.design.ncsu.edu/cud/pubs_p/docs/ACC%20Environments.pdf
  21. Mayer, R. E., Bove, W., Bryman, A., Mars, R., & Tapangco, L. (1995). When less is more: Meaningful learning from visual and verbal summaries of science textbook lessons. Journal of Educational Psychology, 88, 54–73.
  22. Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38, 43–52.
  23. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
  24. Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning: The role of modality and contiguity. Journal of Educational Psychology, 91, 348–368.
  25. Mousavi, S. Y., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87, 319–334.
  26. National Center for Education Statistics. (2011). The Nation’s Report Card: Science 2009 (NCES 2011–451). Washington, DC: Institute of Education Sciences, U.S. Department of Education.
  27. Plass, J. L., Moreno, R., & Brunken, R. (Eds.). (2010). Cognitive load theory. New York: Cambridge University Press.
  28. Porter, A. C. (2006). Curriculum assessment. In J. L. Green, G. Camilli, & P. B. Elmore (Eds.), Handbook of complementary methods in education research (pp. 141–159). Mahwah, NJ: Lawrence Erlbaum.
  29. Roach, A. T., Beddow, P. A., Kurz, A., Kettler, R. J., & Elliott, S. N. (2010). Incorporating student input in developing alternate assessments based on modified academic achievement standards. Exceptional Children, 77, 61–80.
  30. Rodriguez, M. C. (1997, August). The art & science of item-writing: A meta-analysis of multiple-choice item format effects. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
  31. Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24, 3–13.
  32. Rose, D. H., & Meyer, A. (2002). Teaching every student in the digital age: Universal design for learning. Alexandria, VA: Association for Supervision and Curriculum Development.
  33. Sweller, J. (2010). Cognitive load theory: Recent theoretical advances. In J. L. Plass, R. Moreno, & R. Brunken (Eds.), Cognitive load theory (pp. 29–47). New York: Cambridge University Press.
  34. Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12, 185–233.
  35. Thompson, S. J., Johnstone, C. J., Anderson, M. E., & Miller, N. A. (2005). Considerations for the development and review of universally designed assessments (Technical Report 42). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
  36. Thompson, S. J., Johnstone, C. J., & Thurlow, M. L. (2002). Universal design applied to large-scale assessments (Synthesis Report 44). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
  37. Torcasio, S., & Sweller, J. (2010). The use of illustrations when learning to read: A cognitive load theory approach. Applied Cognitive Psychology, 24(5), 659–672.
  38. Webb, N. L. (2002, April). An analysis of the alignment between mathematics standards and assessments for three states. Paper presented at the American Educational Research Association annual meeting, New Orleans, LA.
  39. Wright, N. (2009). Towards a better readability measure – The Bog index. Retrieved June 5, 2010, from http://www.clearest.co.uk/files/TowardsABetterReadabilityMeasure.pdf

Copyright information

© Springer New York 2011

Authors and Affiliations

  • Peter A. Beddow (1)
  • Alexander Kurz (1)
  • Jennifer R. Frey (2)

  1. Department of Special Education, Peabody College of Vanderbilt University, Nashville, TN, USA
  2. Peabody College of Vanderbilt University, Nashville, TN, USA