The Australian Educational Researcher, Volume 40, Issue 3, pp 299–314

A preliminary analysis of teacher perceptions of the effects of NAPLAN on pedagogy and curriculum

  • Greg Thompson
  • Allen G. Harbaugh


This paper reports preliminary survey findings on Western Australian and South Australian teachers' perceptions of the impact of NAPLAN on curriculum and pedagogy in their classrooms and schools. The paper examines how teachers perceive the effects of NAPLAN on curriculum and pedagogy, and whether these perceptions are mediated by the teacher's gender, the socioeconomic status of the school, the State and the school system in which the teacher works. Teachers report that they are either choosing or being instructed to teach to the test, that this results in less time being spent on other curriculum areas, and that these effects contribute negatively to the class environment and to student engagement. This largely agrees with a body of international research suggesting that high-stakes literacy and numeracy tests often result in unintended consequences such as a narrowed curriculum focus, a return to teacher-centred instruction and a decrease in student motivation. Analysis suggests a relationship between participants' responses regarding the effect of NAPLAN on curriculum and the State in which the teacher taught, the socioeconomic status of the school and the school system in which they were employed (State, Catholic or Independent).


Keywords: NAPLAN · High-stakes testing · Teacher perceptions · Curriculum and pedagogy



This research was made possible by a grant from the Australian Research Council (ARC).



Copyright information

© The Australian Association for Research in Education, Inc. 2013

Authors and Affiliations

  1. School of Education, Murdoch University, Perth, Australia
