Making Assessment Relevant to Students, Teachers, and Schools

A chapter in Innovative Assessment for the 21st Century

Abstract

This chapter develops a model for an assessment and accountability system that reverses the prevailing design, in which accountability is the foundation and sanctions for failing to meet expected standards are the primary motivator, driving students, teachers, and schools to devise ways of avoiding those sanctions. The model developed here instead equips students, teachers, and schools with the tools needed to succeed, as measured by student achievement and student growth on multiple measures, so that accountability is not punitive but instead verifies that those tools are being used appropriately.

Notes

  1. The ten elements are: (1) statewide unique student identifier, (2) student-level enrollment data, (3) student-level test data, (4) information on untested students, (5) statewide unique teacher identifier with teacher/student match, (6) student-level course/transcript data, (7) student-level ACT/SAT/Advanced Placement test data, (8) student-level graduation/dropout data, (9) capacity to match P-12 and post-secondary data, and (10) data audit system (Data Quality Campaign, 2009). (A schematic sketch of these elements appears after these notes.)

  2. Wilson and Bertenthal (2006) define a learning progression (or progress map) as “a continuum that describes in broad strokes a possible path for the development of... understanding over the course of... education. It can also be used for tracking and reporting students’ progress...” (p. 78). Doignon and Falmagne (1999) described the development of knowledge spaces as a somewhat similar approach, positing prerequisite relationships among different subsets of a domain of knowledge as a portion of the knowledge space. (A small illustrative sketch of such prerequisite structures appears after these notes.)

  3. For example, in many classrooms all student work is graded in such a way that a student who ultimately meets the instructional goals at the end of a unit still receives a low unit grade because he or she struggled with early work on that content. Such a student should be identified as having met the expectations based on final performance, regardless of early performance. (A brief sketch of this grading principle appears after these notes.)

  4. This does not move feedback-looped tasks out of the formative and summative classroom assessment arenas. This simply acknowledges the need to include feedback-looped tasks in the secure assessments as well.
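The following is a minimal sketch of how the ten Data Quality Campaign (2009) elements listed in Note 1 might be organized as record types in a longitudinal data system. All class names, field names, and the toy audit check are hypothetical illustrations, not the Data Quality Campaign's specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StudentRecord:
    state_student_id: str                                    # (1) statewide unique student identifier
    enrollments: List[str] = field(default_factory=list)     # (2) student-level enrollment data
    test_scores: List[float] = field(default_factory=list)   # (3) student-level test data
    untested_reason: Optional[str] = None                    # (4) information on untested students
    courses: List[str] = field(default_factory=list)         # (6) course/transcript data
    college_tests: List[str] = field(default_factory=list)   # (7) ACT/SAT/AP test data
    graduation_status: Optional[str] = None                  # (8) graduation/dropout data
    postsecondary_id: Optional[str] = None                   # (9) P-12 to post-secondary match

@dataclass
class TeacherStudentLink:
    state_teacher_id: str    # (5) statewide unique teacher identifier, matched to a student
    state_student_id: str
    school_year: str

def audit_duplicates(records: List[StudentRecord]) -> List[str]:
    """(10) A toy stand-in for a data audit system: flag duplicate student IDs."""
    seen, dupes = set(), []
    for r in records:
        if r.state_student_id in seen:
            dupes.append(r.state_student_id)
        seen.add(r.state_student_id)
    return dupes
```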
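As a companion to Note 2, here is a small sketch in the spirit of Doignon and Falmagne (1999): given prerequisite relationships among items in a domain, it enumerates the feasible knowledge states (the subsets of items a learner could plausibly have mastered). The items and prerequisite relation are invented for illustration and are not drawn from their book.

```python
from itertools import combinations

items = ["count", "add", "subtract", "multiply"]   # hypothetical domain of items
prereqs = {"add": {"count"}, "subtract": {"add"}, "multiply": {"add"}}

def feasible(state):
    """A state is feasible if every item's prerequisites are also in the state."""
    return all(prereqs.get(item, set()) <= state for item in state)

states = [set(c) for r in range(len(items) + 1)
          for c in combinations(items, r) if feasible(set(c))]
print(states)   # includes set() and {"count", "add"}, but never {"add"} alone
```

Note that {"add"} by itself is excluded because its prerequisite, "count", is missing; this is the sense in which prerequisite relationships carve feasible paths through a domain of knowledge.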
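Finally, a brief sketch of the grading principle in Note 3: judge mastery from the most recent evidence on each instructional goal rather than by averaging early and late attempts. The goal names, scores, and cut score are invented for illustration.

```python
def meets_expectations(attempts, cut=0.8):
    """attempts: (goal, score) pairs in chronological order."""
    latest = {}
    for goal, score in attempts:
        latest[goal] = score           # a later attempt supersedes earlier ones
    return all(score >= cut for score in latest.values())

work = [("fractions", 0.40), ("fractions", 0.85), ("decimals", 0.90)]
print(meets_expectations(work))        # True: final performance meets the goals
```

Averaging the same record would yield about 0.72 and a failing unit grade, even though the student's final performance meets every goal.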

References

  • ACT, Inc. (2006). Ready for college and ready for work: Same or different? Iowa City: Author. Available at http://www.act.org/research/policymakers/reports

  • ACT, Inc. (2009a). Formative item pools. Retrieved December 1, 2009, from http://www.act.org/qualitycore/itempools.html.

  • ACT, Inc. (2009b). The condition of college readiness 2009. Iowa City: Author. http://www.act.org/research/policymakers/reports

  • Albus, D. A., & Thurlow, M. L. (2007). English language learners with disabilities in state English language proficiency assessments: A review of state accommodation policies (Synthesis Report 66). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

  • Allen, J., & Sconing, J. (2005). Using ACT assessment ® scores to set benchmarks for college readiness. ACT Research Report 2005-3. Iowa City: Author. Retrieved December 1, 2009, from http://www.act.org/research/researchers/reports/pdf/ACT_RR2005-3.pdf

  • Bandeira de Mello, V., Blankenship, C., & McLaughlin, D. H. (2009). Mapping state proficiency standards onto NAEP scales: 2005–2007 (NCES 2010-456). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, US Department of Education.

  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–73.

  • Bloom, B. S., Hastings, T., & Madaus, G. (1971). Handbook on formative and summative evaluation of student learning. New York: McGraw-Hill Book Company.

  • CTB/McGraw-Hill. (2009). LAS links benchmark assessments. Retrieved December 1, 2009, from http://www.ctb.com/products/product_accessory.jsp?Current_Page=1&FOLDER%3C%3Efolder_id=1408474395292399&ASSORTMENT%3C%3East_id=1408474395213825&CONTENT%3C%3Ecnt_id=10134198673323014

  • Council of Chief State School Officers & National Governors Association. (2009). The future of student assessment. Program of National Conference on Student Assessment, Los Angeles. Retrieved December 5, 2009, from http://www.ccsso.org/content/pdfs/NCSA%202009%20Final%20Web%20Program.pdf

  • Council of Chief State School Officers, & National Governors Association. (2009). Common core state standards available for comment. Retrieved November 28, 2009, from http://www.nga.org/portal/site/nga/menuitem.6c9a8a9ebc6ae07eee28aca9501010a0/?vgnextoid=6d50c21106ec3210VgnVCM1000005e00100aRCRD&vgnextchannel=6d4c8aaa2ebbff00VgnVCM1000001a01010aRCRD

  • Cronin, J., Dahlin, M., Adkins, D., & Kingsbury, G. G. (2007). The proficiency illusion. Washington, DC: Thomas B. Fordham Institute.

  • Data Quality Campaign. (2009). 10 essential elements of a state longitudinal data system. Retrieved December 5, 2009, from http://www.dataqualitycampaign.org/survey/elements

  • Data Recognition Corp. (2009). Online testing. Retrieved December 1, 2009, from http://www.datarecognitioncorp.com/PageContent.aspx?Ref=es2OnlineTesting

  • Dean, V. J., Burns, M. K., Grialou, T., & Varro, P. J. (2006). Comparison of ecological validity of learning disabilities diagnostic models. Psychology in the Schools, 43(2), 157–168.

  • Doignon, J.-P., & Falmagne, J.-C. (1999). Knowledge spaces. New York: Springer.

  • Duschl, R. A., Schweingruber, H. A., & Shouse, A. W. (Eds.). (2007). Taking science to school: Learning and teaching science in grades K-8. Washington, DC: National Academies Press.

  • Educational Testing Service. (2009). Item banks. Retrieved December 1, 2009 from http://www.ets.org/portal/site/ets/menuitem.1488512ecfd5b8849a77b13bc3921509/?vgnextoid=f55aaf5e44df4010VgnVCM10000022f95190RCRD&vgnextchannel=c1f1253b164f4010VgnVCM10000022f95190RCRD

  • Fixsen, D. L., Blase, K. A., Horner, R., & Sugai, G. (2009). Scaling up evidence-based practices in education. Scaling Up Brief #1. Chapel Hill: The University of North Carolina, FPG, SISEP.

  • Kennedy, C. A., & Wilson, M. (2007). Using progress variables to map intellectual development. In R. W. Lissitz (Ed.), Assessing and modeling cognitive development in schools: Intellectual growth and standard setting. Maple Grove, MN: JAM Press.

  • Lane, S., & Stone, C. A. (2006). Performance assessment. In R. L. Brennan (Ed.), Educational measurement (pp. 387–431). Westport, CT: American Council on Education, Praeger.

  • Lazarus, S. S., Thurlow, M. L., Lail, K. E., Eisenbraun, K. D., & Kato, K. (2006). 2005 state policies on assessment participation and accommodations for students with disabilities (Synthesis Report 64). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://education.umn.edu/NCEO/OnlinePubs/Synthesis64/

  • LeaPS. (2009). Proceedings of the learning progressions in science conference, June 24–26: Iowa City, IA. Retrieved November 28, 2009, from http://www.education.uiowa.edu/projects/leaps/proceedings/Default.aspx

  • Lissitz, R. W. (Ed.). (2005). Value added models in education: Theory and applications. Maple Grove, MN: JAM Press.

  • Lissitz, R. W. (Ed.). (2006). Longitudinal and value added models of student performance. Maple Grove, MN: JAM Press.

  • Martineau, J. A. (2006). Distorting value added: The use of longitudinal, vertically scaled student achievement data for growth-based value-added accountability. Journal of Educational and Behavioral Statistics, 31(1), 35–62.

  • Martineau, J. A., & Betebenner, D. W. (2006). A hybrid value table/transition table model for measuring student progress. Paper presented at the 36th annual national conference on large-scale assessment of the Council of Chief State School Officers (CCSSO), San Francisco.

  • Martineau, J. A., Paek, P., Keene, J., & Hirsch, T. (2007). Integrated, comprehensive alignment as a foundation for measuring student progress. Educational Measurement: Issues & Practice, 26(1), 28–35.

  • McCaffrey, D. F., Lockwood, J. R., Mariano, L. T., & Setodji, C. (2005). Challenges for value-added assessment of teacher effects. In R. Lissitz (Ed.), Value added models in education: Theory and applications (pp. 111–141). Maple Grove, MN: JAM Press.

  • Millman, J. (Ed.). (1997). Grading teachers, grading schools: Is student achievement a valid evaluation measure? Thousand Oaks, CA: Corwin Press.

  • Mintrop, H., & Sunderman, G. L. (2009). Predictable failure of federal sanctions-driven accountability for school improvement – and why we may retain it anyway. Educational Researcher, 38(5), 353–364.

  • National Center for Education Statistics. (2007). Mapping 2005 state proficiency standards onto the NAEP scales (NCES 2007-482). US Department of Education. Washington, DC: Author.

  • Northwest Evaluation Association. (2009). NWEA’s measures of academic progress is selected as a state approved formative assessment in Colorado. Retrieved December 1, 2009, from http://www.nwea.org/about-nwea/news-and-events/nweas-measures-academic-progress-selected-state-approved-formative-assess

  • Pearson Educational Measurement. (2009). PASeries formative assessments from Pearson Education reviewed by National Center on Student Progress Monitoring. Retrieved December 1, 2009, from http://www.pearsoned.com/pr_2007/022007.htm

  • Popham, W. J. (2000). Testing! Testing! What every parent should know about school tests. Needham Heights, MA: Allyn & Bacon.

  • Popham, W. J. (2008). Transformative assessment. Alexandria, VA: ASCD.

  • Porter, A. C., Polikoff, M. S., & Smithson, J. (2009). Is there a de facto national intended curriculum? Evidence from state content standards. Educational Evaluation and Policy Analysis, 31(3), 238–268.

  • Questar Assessment, Inc. (2009). Touchstone Applied Science Associates, Inc. and Rally! Education announce partnership to develop Testpower, a web-based instructional assessment product line. Retrieved December 1, 2009, from http://www.questarai.com/AboutUs/News/PressReleases/Pages/pr051706_tasa_and_rally_education_announce_partnership_to_develop_testpower.aspx

  • Redfield, D., Roeber, E., Stiggins, R., & Philip, F. (2008). Building balanced assessment systems to guide educational improvement. A background paper for the keynote panel presentation at the National Conference on Student Assessment of the Council of Chief State School Officers, June 15, 2008, Orlando, FL. Retrieved from http://www.ccsso.org/content/PDFs/OpeningSessionPaper-Final.pdf

  • Sanders, W. L., Saxton, A. M., & Horn, S. P. (1997). The Tennessee value-added assessment system: A quantitative, outcomes-based approach to educational assessment. In J. Millman (Ed.), Grading teachers, grading schools: Is student achievement a valid evaluation measure? (pp. 137–162). Thousand Oaks, CA: Corwin Press.

  • Sands, W. A., Waters, B. K., & McBride, J. R. (1997). Computerized adaptive testing: From inquiry to operation. Washington, DC: American Psychological Association.

  • Schmidt, W. H. (2002). The quest for a coherent school science curriculum: The need for an organizing principle. East Lansing: Education Policy Center at Michigan State University. Retrieved November 28, 2009, from http://ustimss.msu.edu/coherentscience.pdf

  • Schmidt, W. H., Houang, R. T., & McKnight, C. C. (2005). Value-added research: Right idea but wrong solution? In R. Lissitz (Ed.), Value added models in education: Theory and applications (pp. 145–164). Maple Grove, MN: JAM Press.

  • Schmidt, W. H., McKnight, C. C., & Raizen, S. A. (1997). A splintered vision: An investigation of US science and mathematics education. Dordrecht: Kluwer Academic Publishers.

  • Shermis, M. D. (2010, this volume). Automated essay scoring in a high stakes testing environment. In V. J. Shute & B. J. Becker (Eds.), Innovative assessment for the 21st century: Supporting educational needs. New York: Springer.

  • Shute, V. J. (2007). Tensions, trends, tools, and technologies: Time for an educational sea change. In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 139–187). New York: Lawrence Erlbaum Associates, Taylor & Francis Group.

  • Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189.

  • Sireci, S. G., & Zenisky, A. L. (2006). Innovative item formats in computer-based testing: In pursuit of improved construct representation. In S. M. Downing & T. Haladyna (Eds.), Handbook of test development. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

  • Stiggins, R. J. (2002). Assessment crisis: The absence of assessment FOR learning. Phi Delta Kappan, 83(10), 758–765.

  • Thompson, S., Thurlow, M., & Moore, M. (2003). Using computer-based tests with students with disabilities (Policy Directions No. 15). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://education.umn.edu/NCEO/OnlinePubs/Policy15.htm

  • Thum, Y. M. (2002). Measuring progress towards a goal: Estimating teacher productivity using a multivariate multilevel model for value-added analysis. Santa Monica, CA: Milken Family Foundation.

  • Thurlow, M. L., Elliott, J. L., & Ysseldyke, J. E. (2003). Testing students with disabilities: Practical strategies for complying with district and state requirements. Thousand Oaks, CA: Corwin Press, Inc.

  • US Department of Education. (2007). Standards and assessments peer review guidance: Information and examples for meeting requirements of the No Child Left Behind Act of 2001. Washington, DC: US Department of Education, Office of Elementary and Secondary Education. Retrieved December 1, 2009, from http://www.ed.gov/policy/elsec/guid/saaprguidance.pdf

  • Wilson, M. R., & Bertenthal, M. W. (Eds.). (2006). Systems for state science assessment. Committee on test design for K–12 science achievement. Washington, DC: The National Academies Press.

  • Wise, L. (2004). Vertically-articulated content standards. Retrieved June 5, 2006, from http://www.nciea.org/publications/RILS_LW04.pdf

Author information

Correspondence to Joseph A. Martineau.

Copyright information

© 2010 Springer Science+Business Media, LLC

Cite this chapter

Martineau, J.A., Dean, V.J. (2010). Making Assessment Relevant to Students, Teachers, and Schools. In: Shute, V., Becker, B. (eds) Innovative Assessment for the 21st Century. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-6530-1_9
