Introduction

Australia’s national standardised testing initiative, the National Assessment Program—Literacy and Numeracy (NAPLAN), was implemented to foster transparency and accountability within the Australian education system (ACARA 2015). The information obtained via NAPLAN has been claimed to benefit all stakeholders, including policy makers, schools, teachers, parents, and of course students. With NAPLAN testing now a standard feature of the Australian education landscape, do stakeholders believe that NAPLAN lives up to expectations? Considering that NAPLAN has been running since 2008, there have been very few investigations of stakeholder perceptions of the programme. In the United Kingdom and United States of America, where standardised testing has a longer history than in Australia, research shows negative unintended consequences associated with standardised testing, such as a narrowing of the curriculum, negative impacts upon stakeholder well-being, and general negative attitudes toward testing (Brockmeier et al. 2014; Smith 1991; Segool et al. 2013; Au 2011, 2013).

In the Australian context, the research to date is largely consistent with the studies from overseas. That is, concerns have been raised regarding a negative impact of NAPLAN upon curriculum and stakeholder well-being (Polesel et al. 2012; Dulfer et al. 2012; Klenowski and Wyatt-Smith 2012; Harris et al. 2013; Lingard et al. 2016). In a recently published study, we reported findings that were somewhat contrary to the prevailing view. Our results indicated only minimal impact of NAPLAN testing upon student, parent, and teacher well-being in a sample of relatively affluent (i.e. high ICSEA) Independent schools in Western Australia (Rogers et al. 2016). These results suggested that standardised testing in Australia is not necessarily associated with a negative impact upon well-being in all contexts, and that more research is required to understand when and how standardised testing might negatively affect well-being across different schools.

In the present study, we report more data from our study of Independent schools, specifically regarding parent and teacher perceptions of NAPLAN testing. As previously mentioned, few published studies have explored parent and teacher attitudes towards the testing in the Australian context. In the largest study to date, the Whitlam Institute surveyed over eight thousand educators (50% primary teachers, 30% secondary teachers, and 20% principals or assistant principals) from around the country via collaboration with the Australian and Independent Education Unions (Polesel et al. 2014; Dulfer et al. 2012). It was reported that most educators agreed that NAPLAN narrows the curriculum and teaching practices by taking focus away from other subjects (around 70%). Additionally, there was more agreement that the purpose of NAPLAN was to act as a school ranking tool and a method of policing school performance (around 70%) compared to assisting individual teachers and students (around 50%). A clear majority believed that poorer than expected NAPLAN results would have negative consequences for the school’s reputation (around 90%).

In another large survey of teachers from government (n = 472), Independent (n = 111), and Catholic (n = 184) schools across Western and South Australia, Thompson and Harbaugh (2013) found generally unfavourable attitudes towards NAPLAN testing. Results suggested that most teachers believed that NAPLAN testing was associated with a narrowing of the curriculum, whilst not providing an effective way to increase literacy and numeracy proficiency. However, while 67% of coded responses were negatively themed, the authors also noted that 21% were positively themed. The authors also reported that negative attitudes appeared more pronounced in government schools and schools located in lower socio-economic areas. Similarly, a large survey of over one thousand primary school principals produced findings to suggest that large variation existed across schools in how the NAPLAN results are communicated to teachers, parents, and students (APPA 2013). The same study also reported a stronger perceived curriculum impact of NAPLAN associated with lower SES schools (APPA 2013). In contrast, one small-scale survey of 84 teachers engaged in a professional development programme found generally favourable attitudes towards NAPLAN as providing useful information for individual teachers (Pierce and Chick 2011). However, Pierce and Chick (2011) also found that many of these teachers reported difficulties in interpreting and using the NAPLAN results effectively.

The only survey of parents reported in the literature was commissioned by the Whitlam Institute and carried out by Newspoll, with 568 parents responding (Wyn et al. 2014; Newspoll 2013). In contrast with surveys of teachers, results suggested that parents generally held more positive than negative attitudes towards NAPLAN testing. When asked whether they were in favour of or against the testing, 56% of parents reported being “in favour”, 34% “against”, and 10% “undecided”. When asked if they perceived their child’s NAPLAN results to be useful, 68% stated “useful”, 30% “not useful”, and 2% “undecided”. While a clear majority of parents held a relatively positive attitude towards the testing, a substantial proportion held a negative attitude. In recent years, Australian Senate Enquiries have been carried out to investigate the use and experience of NAPLAN testing across a broad range of stakeholders. Submissions were obtained from individuals as well as large organisations such as the Australian Education Union (Australian Senate Enquiry 2010, 2014). The Senate reports produced themes generally consistent with the studies summarised in this introduction. That is, the public appears open to the idea of a national testing programme, yet not entirely satisfied with its implementation. Furthermore, educators are typically more critical of the programme than parents. The over-arching theme within the research literature to date is that, since its inception in 2008, the proponents of the NAPLAN testing initiative have been unable to convince some stakeholders of its utility.

The present study contributes to the understanding of public experience and opinion of NAPLAN by reporting the perspectives of parents and teachers from Independent schools on NAPLAN regarding themes identified in the prior literature: (a) impact on pedagogy and stress reported by teachers (Dulfer et al. 2012; Polesel et al. 2014; Thompson and Harbaugh 2013); (b) transparency and accountability afforded by NAPLAN (Thompson 2013); (c) usefulness of NAPLAN for helping individual students (Wyn et al. 2014; Newspoll 2013); and (d) communication of NAPLAN results (Pierce and Chick 2011; APPA 2013). The aim is to provide a description of parent and teacher perspectives from a sample of Independent schools to add to the limited evidence base regarding how Australian parents and teachers perceive NAPLAN.

Method

Participants

The research was funded by the Association of Independent Schools of Western Australia (AISWA) and our sample was obtained solely from the Independent school sector in Western Australia. Eighteen AISWA member schools were contacted by the research team to request participation approximately 2 months prior to NAPLAN testing. Eleven school principals agreed to participate. Most of the schools declining participation were from rural areas. We acknowledge that our results are limited by a potential self-selection bias of schools that all hold student well-being as a high priority, and are likely not to be representative of schools generally, nor of all Independent schools. As may be expected, all participating schools were above the median level (1000) of socio-educational advantage as determined by the Index of Community Socio-educational Advantage (ICSEA) published on the My School website (www.myschool.edu.au), with values ranging from 1051 to 1182 (mean = 1148). Participants consisted of 347 parents (mean age = 42.70 years; 92.5% female) and 40 teachers (mean age = 37 years; 82.5% female) across Years 3 and 5 from the participating Independent schools. Two parents did not provide answers to all items and were excluded from the dataset, making the final sample 345 parents. While the sample of teachers is small compared with other large surveys of educators, it adequately represents the teachers across the year groups in the schools surveyed and permits a comparison between parent and teacher perspectives. Institutional ethics approval was obtained prior to commencement of the study.

Procedure

Consenting parents and teachers could elect to fill out a paper survey, be telephoned, or complete an online version of the survey at their convenience over a 6-week period after NAPLAN testing finished in 2015. A wide range of questions was asked of both parents and teachers regarding their attitudes towards, and perceptions of, NAPLAN testing. The specific questions are provided throughout the tables presented in the results section. The results section is separated into sub-sections that group related questions to provide a more orderly structure and presentation of the survey results. All questions were asked using the same six-point response scale: “not at all”, “slightly”, “somewhat”, “moderately”, “very much”, “extremely”. Prior research in this area has predominantly used agree–disagree type response scales (Thompson and Harbaugh 2013; Dulfer et al. 2012; Polesel et al. 2014). An agree–disagree scale can result in skew towards either general agreement or disagreement, with most responses falling into either the slightly agree/disagree or strongly agree/disagree categories. In practice, this means the response scale may effectively provide only a 2-point scale (i.e. slightly and strongly) that does not offer much differentiation between individual responses. We therefore decided to use the “not at all” to “extremely” scale to provide a more nuanced understanding of the extent of endorsement for the questions asked. At the end of the parent survey, participants were also asked to volunteer an open-ended text response. Specifically, they were asked to “please write anything you would like to say about NAPLAN testing in the space below”.

Analytical approach

This research presents quantitative descriptive statistics and correlations between survey items. Descriptive statistics summarise how participants responded to each question, and Spearman correlations are examined between selected items. The Spearman correlation is a non-parametric alternative to the more commonly used Pearson correlation; it is appropriate for skewed or ordinal data, ranges from − 1 to + 1, and is interpreted in the same way as a Pearson correlation. In this article we interpret the magnitude of correlations according to the commonly used guidelines suggested by Cohen (1988, 1992): .1 (weak), .3 (moderate), and .5 (strong). Participant responses to the single open-ended question were coded and sorted by first using key words in context, and then through a process of constant comparison, as described by Strauss and Corbin (1990).
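For readers unfamiliar with the procedure, the minimal sketch below illustrates how a Spearman correlation between two ordinal survey items can be computed using Python's scipy library; the item names and responses shown are hypothetical illustrations, not the study's data.

```python
# A sketch only: Spearman correlation between two ordinal survey items.
# The item names and responses are hypothetical, not the study's data.
from scipy.stats import spearmanr

# Responses coded 1 ("not at all") to 6 ("extremely") for two items
stress = [1, 2, 4, 6, 3, 2, 5, 1, 4, 3]
curriculum_narrowing = [2, 2, 5, 6, 3, 1, 4, 2, 5, 3]

rho, p_value = spearmanr(stress, curriculum_narrowing)
print(f"Spearman r = {rho:.2f}, p = {p_value:.3f}")

# Magnitude interpreted against Cohen's guidelines: .1 weak, .3 moderate, .5 strong
```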

Results

Teacher perceptions of impact upon pedagogy and practice

The first three questions presented in Table 1 asked about teachers’ perceptions of the impact NAPLAN was having upon their pedagogy (e.g. does NAPLAN narrow the focus of the curriculum?). It was found that 40–50% answered “not at all” or “slightly” to these questions, which suggests that a substantial proportion of the teachers perceived little impact of NAPLAN. However, a smaller but still notable proportion reported a significant impact, with 15–30% stating “very much” or “extremely”. While our sample is limited to teachers from Independent schools, the results are consistent with prior studies suggesting that teachers from Independent schools do not tend to experience as large an impact on their curriculum as teachers in government and/or lower SES contexts may (Thompson and Harbaugh 2013; APPA 2013).

Table 1 Teacher (n = 40) perceptions regarding the impact and use of NAPLAN

In previous work we presented evidence to suggest that the parents and teachers surveyed in the present study reported “a little bit” of emotional distress during NAPLAN testing (Rogers et al. 2016). Consistent with those previously reported results, as shown in Table 1, around 50% of the teachers responded “not at all” or “slightly” to a question asking how stressed NAPLAN makes them feel. Additionally, around 70% responded “not at all” or “slightly” to a question asking if NAPLAN makes them want to teach in another year level that does not have NAPLAN. Therefore, most teachers did not report much impact on their general well-being due to NAPLAN testing. However, it must also be noted that around 10% answered “very much” or “extremely” to these two questions.

The final few questions reported in Table 1 asked about some potential uses of NAPLAN. Results show that most teachers surveyed are sceptical that NAPLAN testing can ensure a consistent experience for all students, that NAPLAN can be used to identify strengths and weaknesses for individual students, and that NAPLAN provides useful feedback regarding their own teaching performance. Therefore, while the results from the present study suggest most teachers did not report any large impact on their curriculum or any great deal of stress associated with the testing, neither did they appear to perceive the testing as particularly useful.

When examining Spearman correlations between the question items presented in Table 1, it was revealed that teachers reporting more stress associated with the testing also tended to report a stronger desire to teach in a different year without NAPLAN (Spearman r = .62, p < .01). Both questions were also positively associated with the items that asked if NAPLAN narrows the focus of the curriculum (Spearman rs = .45 and .61, ps < .01, respectively), and the extent that NAPLAN is perceived to place too much emphasis on literacy and numeracy to the detriment of other areas (Spearman rs = .61 and .45, ps < .01, respectively). Therefore, a teacher reporting more impact upon their curriculum also tended to report more stress associated with NAPLAN. The relationship between stress and perceived curriculum impact suggests there may be greater stress associated with NAPLAN in other contexts where the impact of NAPLAN is felt more intensely. It has been suggested that curriculum impact is greater in non-Independent schools (APPA 2013; Thompson and Harbaugh 2013). No other correlations between items listed in Table 1 reached statistical significance.

Parent and teacher perceptions of transparency and accountability

From the beginning of implementation to the present day, NAPLAN testing has been claimed to serve the Australian public by providing increased transparency and accountability across primary and secondary schooling (Australian Senate Enquiry 2010, 2014). These kinds of perceptions have been investigated in the context of other testing regimes in other parts of the world (Brockmeier et al. 2014). However, a nuanced understanding of teacher and parent beliefs about how well NAPLAN achieves these over-arching transparency and accountability goals is lacking in the Australian context. In the present study, we directly asked teachers and parents if NAPLAN results are an indicator of how well schools are doing and how well teachers can teach (i.e. transparency). We also asked about the perceived extent to which NAPLAN increases the accountability of schools, teachers, and parents. Responses to these questions are presented in Tables 2 (parents) and 3 (teachers).

Table 2 Parent (n = 345) responses to survey items asking about the perceived transparency and accountability of NAPLAN

For parents, responses were evenly spread across the response scale. While about one-fifth reported a belief that NAPLAN fosters transparency and accountability “not at all”, another one-fifth reported “very much” or “extremely” (see Table 2). A strong positive association was found between all items. Compared with the parents, the teachers overall tended to report less endorsement of NAPLAN as fostering transparency and accountability (see Table 3). The teachers largely did not endorse NAPLAN as an indicator of teacher performance, with around 60% responding “not at all”. Both parents and teachers perceived NAPLAN as fostering accountability of teachers and schools more than fostering any accountability of parents. For teachers, correlations between all items were moderate to strong, except for the ‘accountability of parents’ item. Therefore, an overall transparency and accountability measure was created by averaging across the first four items (excluding the ‘accountability of parents’ item). This composite measure is discussed further in Section “Summary of composite variables”.

Table 3 Teacher (n = 40) responses to survey items asking about the perceived transparency and accountability of NAPLAN

Parent and teacher perceptions of usefulness, validity, and fairness of the testing

The Australian Senate NAPLAN enquiries reported that stakeholders generally perceive NAPLAN as a useful initiative for providing broad-scale comparisons across schools at the state and national levels (Australian Senate Enquiry 2010, 2014). However, the Senate enquiries also reported that stakeholder perceptions regarding the usefulness of NAPLAN for serving the needs of individual teachers and students were mixed. Mixed positive and negative parent and teacher perceptions are also evident in the research literature (Polesel et al. 2012). As shown in Tables 4 and 5 (see the first three items in each table), mixed positive/negative perceptions also existed within our sample of parents and teachers regarding the usefulness of NAPLAN for helping individual students. The inter-correlations between items reveal strong positive associations between the ‘usefulness’ items for both parents and teachers. We therefore created a composite ‘usefulness’ score from these items that is discussed later in the results section (Section “Summary of composite variables”).

Table 4 Parent (n = 345) responses to survey items asking about the perceived usefulness of NAPLAN
Table 5 Teacher (n = 40) responses to survey items asking about the perceived usefulness of NAPLAN

Concerns have also previously been raised about the validity and fairness of NAPLAN testing (Australian Senate Enquiry 2010, 2014; Polesel et al. 2012). We asked respondents about the extent to which NAPLAN measures all aspects of maths and reading (a validity-type question), and the extent to which it is a fair form of testing for children from all cultural backgrounds (a fairness-type question). Again, there were mixed responses to these items; however, participants expressed more doubt about these validity and fairness questions than about the questions targeting usefulness. For the validity question, approximately 35% of both parent and teacher groups responded “not at all”. A similar proportion of parents responded “not at all” for the fairness question, and approximately 60% of the teachers responded “not at all” to this question. This suggests that more work needs to be done by the test administrators to convince stakeholders of the validity and appropriateness of widespread use of the testing. Our data are, however, limited to single questions regarding validity and fairness, with a specific population of respondents (i.e. parents and teachers from Independent schools). Future research with a wider set of items and a broader sample would be useful. The validity and fairness questions were strongly (for parents) and moderately (for teachers) positively associated with one another. This suggests that a respondent perceiving the test to be more appropriate for widespread use also tended to perceive the test as encompassing most aspects of maths and reading. For parents, these items were also positively associated with the usefulness items, suggesting that parents perceiving the test as more valid and fair also tended to perceive it as more useful for comparing across/within students and helping individual students learn. Low to moderate positive associations among these variables were observed for teachers.

Parent and teacher perceptions of the clarity of communication of NAPLAN results

In this section the results from parents are from Year 5 parents (n = 198) only, since they had prior experience of the communication of NAPLAN results from when their child was in Year 3. In contrast, the surveyed Year 3 parents would not have had sufficient prior experience to answer the items with any confidence. Previous discussion about the communication of NAPLAN results has largely focused upon the presentation of results on the My School website (Australian Senate Enquiry 2010, 2014; Ragusa and Bousfield 2015). Pierce and Chick (2011) have reported findings that teachers may experience difficulties interpreting the results of their students as provided to them by the governing body of NAPLAN, the Australian Curriculum, Assessment and Reporting Authority (ACARA). In the present study, we focus on perceptions regarding how well the results of students are communicated to the students and parents by the teacher and the school report. Parent and teacher responses were similar, with around 50% responding “not at all” or “slightly” and around 15% responding “very much” or “extremely” to these items in both groups (see Tables 6, 7). The findings therefore reveal mixed perceptions situated within a prevailing negativity regarding the communication of results to students and parents (by the teacher and school report). Strong associations were found between the first four items for both parents and teachers. An overall communication appraisal score was created by averaging across these items and will be discussed further in Section “Summary of composite variables”.

Table 6 Year 5 Parent (n = 198) responses to survey items asking about how well NAPLAN results are communicated
Table 7 Teacher (n = 40) responses to survey items asking about how well NAPLAN results are communicated

A single question was also asked to gauge parent and teacher perceptions regarding how well results are communicated via the My School website. As can be seen in Tables 6 and 7, perceptions were very mixed, with relatively even responses across the response scale. Another question asked if respondents believed NAPLAN results were communicated in a timely manner, and not surprisingly the most frequent response from both parents and teachers was “not at all” (around 35% of parents and around 60% of teachers). One reason behind the upcoming switch to online NAPLAN assessment is to improve the turnaround time for reporting results, addressing criticisms regarding the long wait for results that has plagued the testing initiative since inception (Australian Senate Enquiry 2014).

Summary of composite variables

While prior sections have provided the frequency response data for individual survey items, in this section we examine composite variables. The composite variables represent an overall measure of the perception of the ‘transparency and accountability’ afforded by NAPLAN (see Section “Parent and teacher perceptions of transparency and accountability”, questions A1–A4), the ‘usefulness of NAPLAN for helping individual students’ (see Section “Parent and teacher perceptions of usefulness, validity, and fairness of the testing”, questions U1–U3), and the ‘clarity of the communication of NAPLAN results to parents and students by the teacher and school report’ (see Section “Parent and teacher perceptions of the clarity of communication of NAPLAN results”, questions C1–C4). To statistically justify the appropriateness of our composite variables we conducted confirmatory factor analysis (CFA) using a structural equation modelling (SEM) approach (Acock 2013; Rubio and Gillespie 1995); see Fig. 1. The parent models achieved adequate goodness-of-fit indices, but the teacher model did not, likely because of that group’s relatively small sample size. Across all groups the reliability of all factors was high (i.e. equal to or greater than .79). A strong positive association was found between perceptions of accountability/transparency and usefulness for individual students for parents (Year 5 parents r = .81, Year 3 parents r = .77), and a moderate association for teachers (r = .46). These data suggest that continued work to foster engagement with NAPLAN results will likely help to improve NAPLAN’s image as a testing initiative that promotes a more transparent and accountable primary and secondary Australian education system. For Year 5 parents, a positive association was found between perceptions of the clarity of communication of results and perceptions of transparency/accountability (r = .38), and usefulness of the testing (r = .44). These results suggest that increasing parents’ sense of the clarity of communication may improve their general attitudes towards the testing. The same associations were not observed for teachers.
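As an illustration of how composite scores of this kind are formed, the minimal sketch below averages hypothetical item columns into a composite and computes a Cronbach's alpha style reliability estimate. The column names (A1–A4) and data are placeholders, and alpha is used here only as a simple stand-in for the SEM-based reliability (ρ) reported in Fig. 1.

```python
# A sketch only: forming a composite score by averaging items and checking
# internal consistency with Cronbach's alpha. Column names (A1-A4) and data
# are hypothetical placeholders; the paper reports SEM-based reliability (rho),
# which is related to but not identical to alpha.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses on the 1-6 scale for four accountability items
df = pd.DataFrame({
    "A1": [2, 4, 5, 1, 3, 6, 2, 4],
    "A2": [3, 4, 5, 2, 3, 6, 1, 4],
    "A3": [2, 5, 4, 1, 2, 5, 2, 3],
    "A4": [3, 4, 6, 1, 3, 6, 2, 4],
})

# Composite = mean across the items for each respondent
df["accountability_composite"] = df[["A1", "A2", "A3", "A4"]].mean(axis=1)
print("alpha =", round(cronbach_alpha(df[["A1", "A2", "A3", "A4"]]), 2))
```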

Fig. 1

Confirmatory factor analysis results for: a Year 5 parents, b Teachers, and c Year 3 parents. Models were obtained using Maximum Likelihood estimation. Path coefficients represent standardised loadings. All coefficients are statistically significant with the exception of the .10 values between comm and both acc and use for the Teacher sample. Latent variables are as follows: acc = Transparency and accountability; use = Usefulness of NAPLAN; comm = Communication to students and parents by teacher and school report. Acock (2013) provides guidelines for acceptable model fit indices: CFI ≥ .95, SRMR ≤ .05, RMSEA ≤ .08. The Year 5 model conforms to these guidelines, the Year 3 model conforms to the guidelines with the exception of RMSEA, and the Teacher model does not provide adequate fit indices; however, this model is limited by a small sample size. Reliability scores for individual factors are denoted by the ρ symbol. In two instances with substantial modification indices we added correlated error terms to the models (i.e. items PA3 and PA4, and PC1 and PC3). We justify this inclusion because in both instances the items have a substantial amount of conceptual overlap, with associated high inter-correlation (Acock 2013; Rubio and Gillespie 1995)

While previous sections highlight the existence of mixed negative and positive perceptions, the average perception, as measured by the composite scores obtained by averaging across the items for each factor, is around “somewhat” for both parents and teachers; see Fig. 2. The only exception is the average teacher perception of transparency and accountability, which is closer to “slightly”. The transparency and accountability measure was the only one of the three that yielded a statistically significant difference between parent and teacher perceptions, t(384) = 2.65, p < .01, d = .48. As can be seen in Fig. 2, results suggest that parents typically report a stronger perception of transparency and accountability than teachers, although the mean level for parents is still low on the scale (i.e. “somewhat”).
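For readers wishing to reproduce this style of comparison, the sketch below shows an independent-samples t-test and a pooled-SD Cohen's d computed with Python's scipy; the data are hypothetical illustrations, not the study's data.

```python
# A sketch only: independent-samples t-test and Cohen's d comparing parent and
# teacher composite scores. The arrays below are hypothetical, not study data.
import numpy as np
from scipy.stats import ttest_ind

parents = np.array([3.2, 2.8, 4.0, 3.5, 2.9, 3.7, 3.1, 2.6])
teachers = np.array([2.1, 2.6, 3.0, 1.9, 2.4, 2.8])

t_stat, p_value = ttest_ind(parents, teachers)  # assumes equal variances

# Cohen's d with a pooled standard deviation
n1, n2 = len(parents), len(teachers)
pooled_sd = np.sqrt(((n1 - 1) * parents.var(ddof=1) +
                     (n2 - 1) * teachers.var(ddof=1)) / (n1 + n2 - 2))
d = (parents.mean() - teachers.mean()) / pooled_sd

print(f"t({n1 + n2 - 2}) = {t_stat:.2f}, p = {p_value:.3f}, d = {d:.2f}")
```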

Fig. 2

Mean scores for the ‘accountability’, ‘usefulness’, and ‘communication’ composite variables for Year 3 parents (n = 147), Year 5 parents (n = 198), and teachers (n = 40). Error bars represent 95% confidence limits

Open-ended responses from parents

As part of the survey, an optional open-ended response box was available for parents. One hundred and eighty-five (54%) of the parents provided a written response. The frequency of occurring themes is summarised in Table 8. These qualitative data complement the variation noted in the quantitative data. While a substantial proportion provided comments suggesting they could see the potential of NAPLAN as a tool to help individual students (19%), a similar proportion adamantly stated a belief that NAPLAN was not a good measure of a child’s ability (16%). It was also expressed that too much emphasis is placed upon NAPLAN by the general community (10%), and that the additional pressure placed upon stakeholders was not needed or helpful (17%). Therefore, similar to the quantitative findings, a mix of generally ‘positive’ and ‘negative’ attitudes is evident.

Table 8 Frequencies of different themes present within the open-ended comments provided by parents

Discussion and conclusions

Since its inception in 2008, NAPLAN has received ongoing criticism regarding negative impacts upon teaching and stakeholder well-being from scholars (Klenowski and Wyatt-Smith 2012; Harris et al. 2013; Polesel et al. 2012) and the media (Shine 2015). These criticisms have resulted in two separate Australian Senate enquiries that conceded there are several negative unintended consequences associated with the testing. At the same time, however, NAPLAN is perceived by the Australian government to be a valuable initiative that, with ongoing development and refinement, can aid in improving the primary and secondary school system (Australian Senate Enquiry 2010, 2014). To date, surveys of educators and parents have provided evidence to suggest that while some believe in the usefulness of the testing, there are just as many people who perceive the testing as counter-productive and a waste of time and resources (Thompson and Harbaugh 2013; Dulfer et al. 2012; Polesel et al. 2014; Newspoll 2013; Pierce and Chick 2011; Wyn et al. 2014; Ragusa and Bousfield 2015; APPA 2013).

The present study sought to add to the existing literature by reporting findings from a survey of teachers and parents from a sample of Independent schools in Western Australia. Results were consistent with prior studies, as responses to survey items were typically spread across the “not at all” to “extremely” response scale, highlighting the presence of mixed attitudes towards NAPLAN. Survey questions targeted four broad themes, which are discussed in turn below.

While not unanimous, results suggest that the teachers surveyed in the present study experience only a relatively minor impact of the testing upon their curriculum and stress levels. This finding is consistent with a prior publication reporting that the teachers from these schools generally experience only a minor increase in emotional distress associated with the testing (Rogers et al. 2016). Additionally, the findings of the present study are consistent with other research suggesting that there are people who do not perceive a great deal of impact on curriculum due to NAPLAN (APPA 2013; Thompson and Harbaugh 2013). Further research is required to fully understand the impact of NAPLAN on pedagogy, and what stress might be related to this. In particular, it would be useful to compare and contrast educators in different contexts, such as relative affluence and whether the school is Independent or government funded. In the present study, a moderate positive association was found between the perceived impact of NAPLAN on curriculum and stress. This provides tentative evidence to support arguments made by Hardy that curriculum changes perceived to be associated with NAPLAN can be stressful for individuals who might be opposed to standardised testing due to their own personal philosophical stance towards education policy and practice (Hardy 2014, 2015).

While results suggest that the impact on curriculum and well-being in the surveyed schools is generally low, perceptions regarding the usefulness of the testing were not as encouraging. Teachers generally reported that NAPLAN was not particularly useful for identifying strengths and weaknesses of individual students; nor was it seen to provide useful feedback on their own teaching. Other questions asking about the usefulness of NAPLAN for helping individual students and providing useful comparative data were, however, rated slightly more positively. Both teacher and parent responses to these questions were mixed, with some respondents quite negative and others very positive; however, on average, the perception was that NAPLAN is only “somewhat” useful for helping individual students. Participants were particularly critical of the lengthy time lapse (i.e. months) between sitting the test and notification of results. Teachers and parents perceiving the timing of notification of results as more adequate also tended to report NAPLAN as more useful for helping individual students. This suggests that improving this long-standing problem with test administration (i.e. timely release of results) will likely help to improve general attitudes regarding the usefulness of NAPLAN. This is a hoped-for outcome of changing to online administration of the testing (Australian Senate Enquiry 2014). It will be interesting for future research to revisit teacher and parent perceptions after the initiation of online testing and compare them with the findings of the present study and the few other similar pre-online studies of NAPLAN attitudes, to examine whether an increased appreciation of the usefulness of the testing is borne out.

The online administration and associated faster turnaround of results may also help to lift perceptions of the clarity of communication of results. In the present study, while the survey revealed mixed perceptions, the average perception of both parents and teachers regarding the clarity of communication of results to students and parents by the teacher and school report was only “somewhat”. Interviews with the school principals (to be fully reported and discussed in detail in a subsequent publication) revealed different approaches regarding the communication of results to students and parents. However, a common practice among all principals was a concerted effort to minimise the hype around the testing via a low-key approach to NAPLAN communication. On the one hand, a low-key approach may minimise hype; on the other, it may reduce perceptions of the usefulness of the testing. It may be helpful if schools were to debrief parents on how they used the NAPLAN results to improve the education of their children. In our study we found a positive association between parent perception of the clarity of communication of results and the perceived usefulness of the testing. This suggests that fostering a sense of clearer communication of results with parents may help to bolster attitudes towards the testing. Whether communicating more clearly about NAPLAN might increase the pressure and stress associated with the testing is an assumption that requires further research. We argue that a communication strategy that includes more precise information about how the results are used to help children learn would be useful.

The present research focused on the communication of NAPLAN results to students and parents by the teacher and school report. However, Pierce and Chick (2011) have noted that the use of NAPLAN results by teachers within their individual classrooms is dependent upon the quality of communication between the test administrator (i.e. ACARA) and schools. Where the primary responsibility for the communication of individual NAPLAN results to students and parents sits is an issue for debate (e.g. ACARA, schools, or teachers). ACARA provides individual student reports (see: http://www.nap.edu.au/results-and-reports/student-reports.html). Is this enough information to fully inform and/or engage students and parents to utilise the results? How can information be communicated and used within the student–teacher–parent triad in a way that maximises student gains in both knowledge and skills? How can NAPLAN results be further utilised by teachers to help individual students? Are results communicated adequately to teachers to enable them to get the most use from them? These kinds of questions require future consideration and study.

From the outset a major drive behind the NAPLAN initiative has been to foster an increased sense of transparency and accountability within the Australian primary and secondary education sector (ACARA 2015). In the present study, we directly asked teachers and parents their perceptions regarding the transparency and accountability afforded by NAPLAN. We found evidence for mixed negative and positive attitudes, with the average response “somewhat” for parents, and “slightly” for teachers. However, as noted, despite a fairly low average opinion of the ability of NAPLAN to meet transparency/accountability goals, approximately 10–20% of both parents and teachers responded “very much”.

In the present study, both parents and teachers who perceived NAPLAN as useful for helping individual students also tended to perceive NAPLAN as fostering more transparency and accountability. Therefore, a challenge for the administrators of NAPLAN is to improve perceptions regarding the usefulness of the testing. We have suggested that improving the turnaround time for the release of results, and the communication of results, should help to improve perceptions of usefulness. Furthermore, more research is required to provide evidence of the reliability and validity of the testing. As an example, Grasby et al. (2015) recently provided evidence of convergent validity between NAPLAN and other established reading and numeracy tests. As another example, providing some evidence of predictive validity, Year 9 NAPLAN scores have been found to be a significant predictor of final Year 12 results (Houng and Justman 2014). More research of this kind needs to be conducted, and communicated to the public, to provide a solid empirical evidence base that will help to bolster the perception of NAPLAN by both educators and parents.

The present research adds to the existing body of knowledge regarding educator and parent perceptions of NAPLAN testing. Consistent with prior work, we found evidence for an overall low average opinion of NAPLAN, yet at the same time it must be acknowledged that attitudes measured by both the survey items and the open-ended responses ranged from very negative to very positive. We have suggested some challenges for test administrators and schools moving forward to lift the perception of the NAPLAN initiative. The upcoming switch to online testing should afford a much faster turnaround of results, which will likely facilitate the use of the testing as a tool for helping individual students. This will hopefully improve the perception of the usefulness of the testing for both educators and parents. Online testing is, however, far from a magic bullet in this regard, and further development of the testing process and of the communication and use of results will need to occur over the coming years to effectively lift the profile of NAPLAN in the eyes of the public. While online testing should help improve the turnaround of results feedback, it potentially creates other issues, such as inequities that could arise via different opportunities for learning with computers across different school contexts. We acknowledge that the results presented in the present study are limited to a specific context (i.e. relatively affluent Independent schools in Western Australia with a strong focus on student well-being), and the attitudes uncovered may therefore not be generalisable to the wider teacher/parent community. However, as suggested by prior research (Thompson and Harbaugh 2013), the context we investigated is arguably where more positive attitudes toward NAPLAN would reside. Therefore, the fact that we found considerable room for improvement in attitudes in this specific population suggests there is likely just as much, if not more, room for improvement in attitudes towards NAPLAN in the wider population.