At a 2000 meeting of the UNICEF International Working Group on Education, a working paper posed a crucial question: What does quality mean in the context of education? Referring to the many definitions of educational quality and the multiple attempts to capture the concept’s complexity and multifaceted nature, the authors note that terms like efficiency, effectiveness, equity and quality have often been used synonymously. Under the heading ‘Quality Learners’, the first part of the paper includes the following statement:

School systems work with the children who come into them. The quality of children’s lives before beginning formal education greatly influences the kind of learners they can be. Many elements go into making a quality learner, including health, early childhood experiences and home support. (UNICEF, 2002, p. 5)

The paper emphasises whole-child approaches to education and goes on to discuss ‘Quality Learning Environments’, which include physical, psychosocial and service delivery elements. More than 20 years later, the conversation is still ongoing, and the five articles presented in this issue extend it while emphasising the concerns of learners.

1 Overview of EAEA 4/2021

In the first article, Getenet and Beswick report a study examining predictors of achievement in Australia’s National Assessment Program—Literacy and Numeracy (NAPLAN), an annual standardised national test for students in Years 3, 5, 7 and 9. As in many other countries, the Australian Government promotes the use of these test results as a public accountability mechanism to ensure transparency and public confidence in the country’s education standards. Previous studies had revealed widening gaps in NAPLAN performance associated with factors including gender, geographic location, parents’ educational background, and language background other than English (LBOTE). Focusing on children’s numeracy, the authors examined NAPLAN results in Queensland schools from 2014 to 2017, using a hierarchical multiple regression model to analyse performance predictors and their relative importance. One important finding was that prior numeracy test results accounted for more of the total variance in later scores than reported in previous studies. These findings suggest that children who are doing less well in Year 3 may need numeracy support to improve their performance in Year 5 and above, and that some children might also benefit from additional support prior to Year 3. The authors suggest that further research should explore other factors such as socio-economic status, as well as investigating the test’s appropriateness and validity as an indicator of Indigenous students’ mathematical capabilities.
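For readers less familiar with the technique, the sketch below illustrates the general logic of a hierarchical multiple regression: predictor blocks are entered in stages and the increment in explained variance is compared across models. The data and variable names (prior_numeracy, gender, lbote, remoteness) are simulated and hypothetical; this is not the authors’ dataset or code.

```python
# Illustrative hierarchical multiple regression on simulated data.
# Variable names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
data = pd.DataFrame({
    "prior_numeracy": rng.normal(500, 70, n),   # e.g. an earlier numeracy score
    "gender": rng.integers(0, 2, n),            # 0/1 indicator
    "lbote": rng.integers(0, 2, n),             # language background other than English
    "remoteness": rng.integers(0, 3, n),        # 0 = metro, 1 = regional, 2 = remote
})
data["later_numeracy"] = (
    0.7 * data["prior_numeracy"] - 5 * data["lbote"]
    - 8 * data["remoteness"] + rng.normal(0, 40, n)
)

# Block 1: background predictors only.
m1 = smf.ols("later_numeracy ~ gender + lbote + C(remoteness)", data=data).fit()
# Block 2: add prior achievement and inspect the increment in explained variance.
m2 = smf.ols(
    "later_numeracy ~ gender + lbote + C(remoteness) + prior_numeracy", data=data
).fit()
print(f"R2 block 1: {m1.rsquared:.3f}")
print(f"R2 block 2: {m2.rsquared:.3f} (delta = {m2.rsquared - m1.rsquared:.3f})")
```

The size of the R-squared increment when prior achievement is added is what indicates how much of the later variance it accounts for beyond the background variables.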

Like their peers, special needs students in Turkey are required to sit large-scale exams to qualify for entry to secondary and higher education institutions. Their scores are crucial for securing access to a quality education and for their future professional prospects. In the second article, Şenel examines the measurement invariance of the Central Examination for Secondary Education Institutions in relation to participants’ disability status, using four methods to detect differential item functioning (DIF) based on achievement data from 2018. As well as examining achievement data for the focal and reference groups, the study asked a number of measurement experts to complete an expert opinion form. In total, 16 of the 90 items showed DIF, and expert opinion confirmed that five of these items were biased in favour of students without visual impairments. Identifying and problematising possible sources of bias, including the conversion of visual materials into text-based formats, the author highlights implications for policy, practice and further research.
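As an illustration of what DIF detection involves, the sketch below implements one widely used procedure, the Mantel-Haenszel method, on simulated data. Whether this was among the four methods Şenel applied is not specified here; the code is purely illustrative.

```python
# Minimal Mantel-Haenszel DIF sketch on simulated data (illustrative only).
import numpy as np

def mantel_haenszel_dif(item_correct, group, total_score):
    """Return the MH common odds ratio and the ETS delta-scale statistic.

    item_correct: 0/1 responses to the studied item
    group: 0 = reference group, 1 = focal group
    total_score: matching criterion (e.g. total test score)
    """
    num, den = 0.0, 0.0
    for s in np.unique(total_score):
        idx = total_score == s
        a = np.sum((group[idx] == 0) & (item_correct[idx] == 1))  # reference, correct
        b = np.sum((group[idx] == 0) & (item_correct[idx] == 0))  # reference, incorrect
        c = np.sum((group[idx] == 1) & (item_correct[idx] == 1))  # focal, correct
        d = np.sum((group[idx] == 1) & (item_correct[idx] == 0))  # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    odds_ratio = num / den
    # Negative delta values suggest the item disadvantages the focal group;
    # |delta| >= 1.5 is a commonly used flagging threshold.
    delta = -2.35 * np.log(odds_ratio)
    return odds_ratio, delta

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)
total_score = rng.integers(0, 10, n)
# Simulate an item that is slightly harder for the focal group at equal ability.
p = np.clip(0.3 + 0.05 * total_score - 0.10 * group, 0, 1)
item = rng.binomial(1, p)
print(mantel_haenszel_dif(item, group, total_score))
```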

In the third article, Chen, Zhang and Li explore the implementation of formative assessment in mid-western China. Based on their analysis of interview data from teachers and deans at eight universities, they identify multiple issues that may hinder the use of formative assessment, including insufficient support for teachers, improper dissemination, ineffective training, limited instructor assessment literacy, large class sizes and student resistance. The authors conclude that the institutionalisation of formative assessment practices can be enhanced by improving training, increasing support from leaders, and empowering teachers to try out new ideas to support student learning.

In the fourth article, Lipnevich, Panadero, Gjicali and Fraile present the findings of their study of the differing assessment criteria used to assign course grades by university instructors in the US and Spain. Analysing a large sample of syllabi from universities in both countries, they found that US instructors relied equally on process and product criteria, while Spanish instructors used a higher proportion of product criteria. Self- and peer assessment to support students’ learning were rarely used in either country, and none of the syllabi incorporated criteria for assessing progress. The authors note implications for policy, practice and further research, as well as for theoretical accounts of curricular and assessment design in higher education.

While students in higher education institutions are frequently asked for their opinions about various aspects of their courses or their teachers’ performance, they are rarely asked how important these aspects are for their learning. In the fifth and final article, Cladera draws on previous research to argue that establishing this importance is essential context for course and teaching evaluations. The study details a methodology that lecturers can use to collect and analyse student feedback on potential improvements. Analysis of the survey data revealed that the teaching characteristics most in need of further attention were lecturer enthusiasm and the interest and intellectual challenge of the course, both of which were rated as highly important but low performing.
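The summary above hints at an importance-performance style of comparison, in which each teaching characteristic receives both an importance rating and a performance rating, and items rated above average in importance but below average in performance are flagged as priorities. The sketch below shows one way such data might be tabulated; the item names and ratings are invented for illustration and are not taken from Cladera’s study.

```python
# Illustrative importance-performance comparison on invented ratings.
import pandas as pd

ratings = pd.DataFrame(
    {
        "importance": [4.6, 4.4, 3.9, 4.1],
        "performance": [3.2, 3.4, 4.2, 4.0],
    },
    index=["lecturer enthusiasm", "interest and challenge", "organisation", "clarity"],
)

imp_mean = ratings["importance"].mean()
perf_mean = ratings["performance"].mean()
# Characteristics that matter most to students but are currently underperforming.
priorities = ratings[
    (ratings["importance"] >= imp_mean) & (ratings["performance"] < perf_mean)
]
print("Concentrate improvement efforts here:")
print(priorities)
```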

2 Some reflections

In the first and second articles, one important emergent theme is the fairness of achievement tests and exams and the need to address possible sources of bias that may disadvantage certain groups of children. The third and fourth articles address traditions of assessment and the challenges of introducing new, student-centred forms of assessment and feedback that may break with existing beliefs and practices, including those of students themselves. The final article contends that students are indeed capable of prioritising and assessing the elements essential to improving their learning.