Are These Testing Times, or Is It a Time to Test? Considering the Place of Tests in Students’ Academic Development

  • Andrew J. Martin
Chapter
Part of the Policy Implications of Research in Education book series (PIRE, volume 3)

Abstract

Recent efforts to tie students’ test results to teacher- and school-level consequences and accountability have made for testing times in the education sector. Whilst recognizing numerous concerns with accountability and high-stakes testing, this chapter identifies potentially useful applications of testing. It argues that when designed for student-level feedback and intervention, testing is a vital basis for students’ educational development – but not a basis for teacher- and school-level consequences and accountability. The chapter then looks at promising directions in the use of tests to assess students’ academic development. It suggests that a growth-oriented approach to student testing and assessment redresses limitations associated with accountability and high-stakes assessment and can be a basis for effective educational practice.

Keywords

High-stakes national testing; High-stakes testing; Student development; Enhanced student learning

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. University of New South Wales, Sydney, Australia