A critical analysis of problems with the LBOTE category on the NAPLaN test
- Cite this article as:
- Creagh, S. Aust. Educ. Res. (2014) 41: 1. doi:10.1007/s13384-013-0095-y
The National Assessment Program: Literacy and Numeracy (NAPLaN) is an annual literacy and numeracy test for all Australian students, and results from the test are disaggregated into a number of categories, including language background other than English (LBOTE). For this and other categories, results on each section of the test are aggregated into state, territory and national means and standard deviations, enabling comparison of performance. The NAPLaN data indicate that since the test began in 2008 there has been, at a national level, little difference between the results of LBOTE and non-LBOTE students across all domains of the test, although there is greater variation at state and territory level. These results defy the expectation that the LBOTE category would reflect the influence of English as a second language on test performance, suggesting instead that a second-language background is not associated with test performance. In this paper, I interrogate the variation within the LBOTE category, using data provided by the Queensland state education department and focusing on Year 9 students who participated in the 2010 test. Using multiple regression and focusing on variables specifically related to language background, I show that within the LBOTE category there is wide variation in performance, and that the aggregated LBOTE data are in fact hiding some of our most disadvantaged students. I suggest alternative ways in which language learners could be identified to better inform policy and pedagogical responses to student needs.
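The kind of analysis described above can be sketched in code. The snippet below is a minimal, hypothetical illustration only: the predictor names (years of English schooling, English spoken at home), the simulated scores, and the coefficient values are all invented for demonstration and do not come from the paper's Queensland dataset. It simply shows how an ordinary-least-squares multiple regression can relate a test score to language-background variables.

```python
import numpy as np

# Hypothetical sketch: regress a simulated NAPLaN domain score on
# invented language-background predictors. No real data is used.
rng = np.random.default_rng(0)
n = 500

# Invented predictors (for illustration only)
years_of_english = rng.uniform(0, 12, n)       # years of schooling in English
home_language_english = rng.integers(0, 2, n)  # 1 if English spoken at home

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), years_of_english, home_language_english])

# Simulated scores: assumed positive association with both predictors, plus noise
true_beta = np.array([450.0, 8.0, 25.0])
score = X @ true_beta + rng.normal(0.0, 20.0, n)

# Ordinary least squares fit
beta_hat, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta_hat)  # coefficient estimates, close to the simulated values
```

In a real analysis of this kind, large and significant coefficients on language-background variables within the LBOTE group would indicate exactly the within-category variation the paper argues the aggregated LBOTE/non-LBOTE comparison conceals.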