Controversies in Education, pp. 39–53

Part of the Policy Implications of Research in Education book series (PIRE, volume 3)

Testing Times: Data and Their (Mis-)Use in Schools

Abstract

The chapter starts with an overview of the widely documented ‘collateral damage’ that results from combining standardized school testing with high-stakes decision making. Such damage takes the form of curriculum narrowing (covering only what is tested), a narrowing of pedagogical strategies (teaching to the test), reduced attention to students who fall far below or far above the achievement standards tested, and teacher demotivation and increased anxiety. Since the achievement gains under regimes such as the No Child Left Behind Act in the US have been quite limited, the high-stakes testing strategy is increasingly being questioned. I then inspect the claim that standardized testing is valuable as a source of information on learning, provided the results are not tied to high-stakes decisions. I argue that this position is also problematic because of its (unintended) detrimental effects on students’ motivation and their epistemic beliefs. The chapter ends by identifying requirements for twenty-first-century assessment so that it is better aligned with twenty-first-century learning.

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

Faculty of Education and Social Work, University of Sydney, Sydney, Australia
