Between-classes sorting within schools and test scores: an empirical analysis of Italian junior secondary schools

Abstract

This paper suggests that some Italian junior secondary schools are likely to practise sorting between classes, and proposes an indicator to measure this practice. The impact of “informal” sorting on the students’ achievement is evaluated through an appropriate Instrumental Variables (IV) approach. The results suggest that this practice harms the students’ results in Reading, as measured through standardised test scores. Heterogeneity of this effect is then explored, considering different school types as well as different student characteristics. Overall, practising sorting within schools helps to replicate existing inequality through unequal educational opportunities.



Notes

  1.

    The Italian educational system is currently organised into three sequential steps: primary schooling (grades 1–5, age 6–10), junior secondary schooling (grades 6–8, age 11–13) and high secondary schooling (grades 9–13, age 14–18).

  2.

    In the year 2011/2012, for junior secondary schools, the minimum/maximum numbers in a class were 18 and 27, respectively (Presidential Decree No. 81/2009).

  3.

    Since the choice of whether or not to sort the students and, more generally, how to manage the process of composing the classes falls to the head teacher, it would be interesting to understand whether and how sorting is affected by the head’s skills and practices; the work by Di Liberto et al. (2013) is promising in this respect.

  4.

    Nevertheless, for empirical purposes and with the aim of testing this key assumption, in Annex 3 we discuss the results obtained when assuming that sorting operates along ability rather than socio-economic status.

  5.

    There is a substantial body of recent empirical evidence measuring peer effects on educational and other social outcomes. See, for instance, Gaviria and Raphael (2011) on peer-group influences on an array of behaviours such as drug use, smoking, alcohol abuse and dropping out of school, and Vardardottir (2013), who reports on an empirical study in the Icelandic setting, where peer effects seem to be the main factor explaining differences between high- and low-ability classes. Peer effects are also related to the average level of ability in the classroom, as Kang (2007) demonstrated from an international perspective using TIMSS data. The review of the literature carried out by Sacerdote (2011) reveals that half of all existing studies on peer effects fail to find any correlation between average classmates’ or schoolmates’ characteristics and individual students’ performance.

  6.

    The choice of a synthetic indicator is preferable to using single (mono-dimensional) measures, because it is closer to the general consensus that the “social, cultural and economic status” concept is best represented jointly by education, occupational status and economic means (Hauser 1994; Schulz 2005). Of course, single indicators are useful for detecting which individual dimension is most influential on learning outcomes. Nevertheless, the various dimensions should be considered in conjunction to really understand how socio-economic background affects educational outcomes. For example, a student from a low-income family can still perform well at school if her parents are well educated and transmit positive values about engaging in education. Conversely, a student from a wealthy family can obtain lower scores than expected if there is no such chain of values transmitted by educated parents. In both cases, single indicators cannot capture this double-sided effect of the different aspects of background, so a composite measure is preferable. A strong methodological argument for using composite indicators to measure socio-economic status, especially in cross-country comparisons, is also made by Marks (2011).
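A minimal sketch of how such a composite indicator can be built from the three dimensions. This is purely illustrative: equal weights after z-standardisation are assumed (INVALSI's actual ESCS uses its own weighting scheme), and the data and scales are invented.

```python
import numpy as np

def composite_escs(education, occupation, wealth):
    """Combine three SES dimensions into one composite index by
    z-standardising each dimension and averaging them with equal
    weights (a simple stand-in for weighted schemes such as ESCS)."""
    dims = [np.asarray(d, dtype=float) for d in (education, occupation, wealth)]
    z = [(d - d.mean()) / d.std() for d in dims]
    return np.mean(z, axis=0)

# toy data for 5 students (invented scales)
edu = [8, 13, 13, 18, 18]    # years of parental education
occ = [20, 35, 50, 65, 80]   # occupational status score
wlt = [1, 2, 3, 4, 5]        # home-possessions scale
escs = composite_escs(edu, occ, wlt)
```

By construction the composite has mean zero, and a student who is low on every dimension ends up clearly below one who is low on income alone, which is exactly the conjunction of dimensions the note argues for.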

  7.

    Although we believe that clustering standard errors at the school level is the appropriate choice, we also tested how the results change when clustering at the class level, and we did not find any appreciable difference.
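Switching the clustering level only changes the grouping over which the "meat" of the sandwich variance is summed. A sketch of one-way cluster-robust (Liang-Zeger) standard errors on simulated data (in applied work one would normally rely on a package such as statsmodels rather than this hand-rolled version):

```python
import numpy as np

def ols_clustered(y, x, clusters):
    """OLS with one-way cluster-robust (Liang-Zeger) standard errors.
    Returns the coefficient vector and its clustered SEs."""
    X = np.column_stack([np.ones(len(y)), x])      # add intercept
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                               # residuals
    # "meat": sum over clusters of (X_g' u_g)(X_g' u_g)'
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        s = X[clusters == g].T @ u[clusters == g]
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv                   # sandwich variance
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(0)
school = np.repeat(np.arange(20), 30)              # 20 schools, 30 pupils each
classroom = np.repeat(np.arange(40), 15)           # 2 classes of 15 per school
x = rng.normal(size=600)
y = 1.0 + 0.5 * x + rng.normal(size=600)           # true slope 0.5
beta, se_school = ols_clustered(y, x, school)      # cluster at school level
_, se_class = ols_clustered(y, x, classroom)       # cluster at class level
```

Comparing `se_school` and `se_class` is the robustness exercise described in the note; the point estimates are identical by construction, only the standard errors move.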

  8.

    To the extent that a school changed its policy from one year to the next, the instrument would fail to capture the phenomenon of interest. For instance, identifying whether the head teacher changed could be an (indirect) indicator; unfortunately, we have no specific data to check for this. Nevertheless, it is very likely that the practice of sorting is the outcome of an arrangement agreed within the wider community of teachers, rather than a matter decided by an individual head teacher alone, and is therefore probably more persistent over time.

  9.

    For a methodological description of how IV functions, see Angrist and Pischke (2009).
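The mechanics can be sketched in a few lines: a toy two-stage least squares (2SLS) simulation with invented coefficients, where z plays the role of the lagged sorting indicator used as an instrument in the paper, and an unobserved confounder biases plain OLS.

```python
import numpy as np

def tsls(y, x, z):
    """Two-stage least squares with one endogenous regressor and one
    instrument: the first stage fits x on z, the second stage regresses
    y on the fitted values. Returns [intercept, slope]."""
    Z = np.column_stack([np.ones(len(z)), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]       # first stage
    Xh = np.column_stack([np.ones(len(y)), x_hat])
    return np.linalg.lstsq(Xh, y, rcond=None)[0]           # second stage

rng = np.random.default_rng(1)
n = 5_000
z = rng.normal(size=n)                        # instrument (e.g. sorting at t-1)
e = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + e + rng.normal(size=n)          # endogenous sorting at t
y = -0.5 * x + 2.0 * e + rng.normal(size=n)   # outcome; true effect is -0.5

X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]  # biased upward by the confounder
b_iv = tsls(y, x, z)[1]                          # close to the true -0.5
```

Because z affects y only through x, the IV estimate recovers the true negative effect while OLS is pushed in the wrong direction by the confounder.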

  10.

    Although high, and highly significant statistically, the value is slightly below 10, the rule-of-thumb threshold for the first-stage F statistic suggested by Staiger and Stock (1997) to fully ensure that the instrument is (empirically) reliable.
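The rule of thumb can be checked directly: with a single excluded instrument, the first-stage F statistic is simply the squared t statistic of the instrument. A minimal sketch on simulated data (all coefficients are invented for illustration):

```python
import numpy as np

def first_stage_f(x, z):
    """First-stage F statistic with one excluded instrument: regress the
    endogenous regressor x on the instrument z; with a single instrument,
    F equals the squared t statistic of z."""
    Z = np.column_stack([np.ones(len(z)), z])
    coef = np.linalg.lstsq(Z, x, rcond=None)[0]
    u = x - Z @ coef                              # first-stage residuals
    n, k = Z.shape
    sigma2 = (u @ u) / (n - k)                    # residual variance
    var_slope = sigma2 * np.linalg.inv(Z.T @ Z)[1, 1]
    return coef[1] ** 2 / var_slope

rng = np.random.default_rng(3)
z = rng.normal(size=1_000)
x_strong = 0.30 * z + rng.normal(size=1_000)  # clearly relevant instrument
x_weak = 0.02 * z + rng.normal(size=1_000)    # nearly irrelevant instrument
f_strong = first_stage_f(x_strong, z)         # comfortably above 10
f_weak = first_stage_f(x_weak, z)             # typically well below 10
```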

  11.

    Second-generation immigrant children are often placed in classes one or two grades below their age, to help them progress from an educational perspective (limiting the problems connected with learning Italian and/or compensating for the lower level of instruction they may have received in their country of origin).

  12.

    What is remarkable here is that, while we have no data about the achievement of disabled students (as they normally do not take the INVALSI test, or if they do, their results are not included in the data), we control for the proportion of disabled students in the class, to check whether there are any spillover effects—potentially related to different educational activities and strategies when the numbers are higher.

  13.

    We also checked whether the empirical analysis is sensitive to the exclusion of the group of schools for which ESCS_Var_Within is particularly (perhaps implausibly) high, specifically >75 %. The results are almost identical: the estimated impact of ESCS_Var_Within on the Reading score is 0.049 (instead of 0.050) and statistically significant, and the other coefficients of the educational production function are virtually unchanged. We also tested whether there is a mechanical relationship in the correlation between test scores, ESCS_Var_Within and the overall ESCS variance at school level. We calculated the latter variable (overallvar) and included it in the model; however, ESCS_Var_Within remains statistically significant with an unchanged coefficient.
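The "no mechanical relationship" check can be illustrated with a toy simulation (the variable name overallvar follows the note; all coefficient values and distributions are invented): if scores depend only on the within-school indicator, adding the overall school-level variance as a control should leave the coefficient of interest essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
# overall ESCS variance at school level, and a within-school (between-classes)
# component correlated with it by construction
overallvar = rng.uniform(5, 15, n)
escs_var_within = 0.3 * overallvar + rng.uniform(0, 8, n)
# scores depend on the within indicator only (true coefficient: 0.05)
score = 60 + 0.05 * escs_var_within + rng.normal(0, 3, n)

def coef_on_within(extra_controls):
    """OLS coefficient on escs_var_within, with optional extra controls."""
    X = np.column_stack([np.ones(n), escs_var_within] + extra_controls)
    return np.linalg.lstsq(X, score, rcond=None)[0][1]

b_alone = coef_on_within([])              # baseline specification
b_controlled = coef_on_within([overallvar])  # adding overallvar as control
# both estimates sit near 0.05: the correlation is not mechanical
```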

  14.

    The size of this effect has been calculated by considering the specific values of (the standard deviations of) Reading scores and ESCS_Var_Within for the students attending schools in Southern Italy (16.7 and 9.2, respectively).

  15.

    In this circumstance, the t values associated with the instruments in the first stage are very high, well above the threshold suggested by Staiger and Stock (1997).

  16.

    Agasisti (2011) showed that different competitive pressures in the geographical areas of Italy can partly account for differentials in achievement.

References

  1. Adnett N, Davies P (2005) Competition between or within schools? Re-assessing school choice. Educ Econ 13(1):109–121

  2. Agasisti T (2011) How competition affects schools’ performances: does specification matter? Econ Lett 110(3):259–261

  3. Agasisti T, Vittadini G (2012) Regional economic disparities as determinants of students’ achievement in Italy. Res Appl Econ 4(1):33–53

  4. Ammermueller A, Pischke JS (2009) Peer effects in European primary schools: evidence from the Progress in International Reading Literacy Study. J Labor Econ 27(3):315–348

  5. Angrist JD, Pischke JS (2009) Mostly harmless econometrics. Princeton University Press, Princeton

  6. Barbieri G, Rossetti C, Sestito P (2011) The determinants of teacher mobility: evidence using Italian teachers’ transfer applications. Econ Educ Rev 30(6):1430–1444

  7. Bertoni M, Brunello G, Rocco L (2013) When the cat is near, the mice won’t play: the effect of external examiners in Italian schools. J Pub Econ 104:65–77

  8. Borland MV, Howsen RM, Trawick MW (2006) Intra-school competition and student achievement. Appl Econ 38(14):1641–1647

  9. Bratti M, Checchi D, Filippin A (2007) Geographical differences in Italian students’ mathematical competencies: evidence from PISA 2003. Giornale degli Economisti e Annali di Economia 66(3):299–333

  10. Brunello G, Checchi D (2007) Does school tracking affect equality of opportunity? New international evidence. Econ Policy 22(52):781–861

  11. Caldas SJ, Bankston C (1997) Effect of school population socioeconomic status on individual academic achievement. J Educ Res 90(5):269–277

  12. Campodifiori E, Figura E, Papini M, Ricci R (2010) Un indicatore di status socio-economico-culturale degli allievi della quinta primaria in Italia (An indicator of the socio-economic background of Italian fifth graders). INVALSI Working Paper No. 02/2010, INVALSI, Rome

  13. Carrell SE, Sacerdote BI, West JE (2013) From natural variation to optimal policy? The importance of endogenous peer group formation. Econometrica 81(3):855–882

  14. Checchi D, Flabbi L (2007) Intergenerational mobility and schooling decisions in Germany and Italy: the impact of secondary school tracks. IZA Discussion Paper No. 2876

  15. Collins CA, Gan L (2013) Does sorting students improve scores? An analysis of class composition. NBER Working Paper No. 18848

  16. Condron DJ (2007) Stratification and educational sorting: explaining ascriptive inequalities in early childhood reading group placement. Soc Probl 54(1):139–160

  17. Di Liberto A, Schivardi F, Sulis G (2013) Managerial practices and students’ performance. FGA Working Paper No. 49 (07/2013)

  18. Duflo E, Dupas P, Kremer M (2011) Peer effects, teacher incentives, and the impact of tracking: evidence from a randomized evaluation in Kenya. Am Econ Rev 101(5):1739–1774

  19. Ferrer-Esteban G (2011) Beyond the traditional territorial divide in the Italian education system. Effects of system management factors on performance in lower secondary schools. FGA Working Paper No. 42 (12/2011)

  20. Gaviria A, Raphael S (2011) School-based peer effects and juvenile behaviour. Rev Econ Stat 83(2):257–268

  21. Gorard S, Cheng SC (2011) Pupil clustering in English secondary schools: one pattern or several? Int J Res Method Educ 34(3):327–339

  22. Hanushek EA, Wößmann L (2006) Does educational tracking affect performance and inequality? Differences-in-differences evidence across countries. Econ J 116(510):C63–C76

  23. Hauser RM (1994) Measuring socioeconomic status in studies of child development. Child Dev 65(6):1541–1545

  24. Haveman R, Wolfe B (1995) The determinants of children’s attainments: a review of methods and findings. J Econ Lit 33:1829–1878

  25. INVALSI—Istituto nazionale per la valutazione del sistema educativo di istruzione e di formazione (2013) Rilevazioni nazionali sugli apprendimenti 2012/13 (National standardised test scores 2012/13). INVALSI, Rome

  26. Kang C (2007) Academic interactions among classroom peers: a cross-country comparison using TIMSS. Appl Econ 39(12):1531–1544

  27. Marks GN (2011) Issues in the conceptualisation and measurement of socioeconomic background: do different measures generate different conclusions? Soc Indic Res 104(2):225–251

  28. Mocetti S (2008) Educational choices and the selection process before and after compulsory schooling. Bank of Italy Temi di Discussione No. 691, September 2008

  29. Mullis IVS, Martin MO, Foy P, Arora A (2012) TIMSS 2011 international results in mathematics. TIMSS & PIRLS International Study Center, Boston College, Chestnut Hill

  30. Oakes J (1986) Keeping track, part 1: the policy and practice of curriculum inequality. Phi Delta Kappan 68(1):12–17

  31. Oakes J (2005) Keeping track: how schools structure inequality, 2nd edn. Yale University Press, New Haven

  32. Oakes J (2008) Keeping track: structuring equality and inequality in an era of accountability. Teach Coll Rec 110(3):700–712

  33. Perry LB, McConney A (2010) Does the SES of the school matter? An examination of socioeconomic status and student achievement using PISA 2003. Teach Coll Rec 112(4):1137–1162

  34. Poletto S (1992) La formazione delle classi (The process of composing classes). In: Bartolini D (ed) L’uomo—la scuola (Man and school). Nuova Dimensione, Portogruaro

  35. Polidano C, Hanel B, Buddelmeyer H (2013) Explaining the socio-economic status school completion gap. Educ Econ 21(3):230–247

  36. Sacerdote B (2011) Peer effects in education: how might they work, how big are they and how much do we know thus far? Handb Econ Educ 3:249–277

  37. Schneider T (2003) School class composition and student development in cognitive and non-cognitive domains: longitudinal analyses of primary school students in Germany. In: Windzio M (ed) Integration and inequality in educational institutions. Springer, Dordrecht, pp 167–190

  38. Schulz W (2005) Measuring the socio-economic background of students and its effect on achievement in PISA 2000 and PISA 2003. Technical report

  39. Sirin SR (2005) Socioeconomic status and academic achievement: a meta-analytic review of research. Rev Educ Res 75(3):417–453

  40. Staiger D, Stock JH (1997) Instrumental variables regression with weak instruments. Econometrica 65(3):557–586

  41. Thapa A, Cohen J, Guffey S, Higgins-D’Alessandro A (2013) A review of school climate research. Rev Educ Res 83(3):357–385

  42. Vardardottir A (2013) Peer effects and academic achievement: a regression discontinuity analysis. Econ Educ Rev 36(1):108–121


Author information


Corresponding author

Correspondence to Tommaso Agasisti.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Appendices

Annex 1

See Table 11.

Table 11 First-stage regression in the IV approach (dependent variable: ESCS_Var_Within(t); instrument: ESCS_Var_Within(t−1))

Annex 2

See Tables 12, 13 and 14.

Table 12 Comparing the entire population of sixth graders with those for whom information about prior achievement (test score in grade 5) is available
Table 13 Comparing the results between preferred specification (restricted sample) and an analysis with all students
Table 14 Results when using PriorAchievement_Var_Within_k instead of ESCS_Var_Within_k

Annex 3

As discussed extensively in the paper, we believe in a sorting mechanism that is based on students’ SES more than on their ability; however, we regard the two mechanisms not as alternatives, but as likely to operate together.

In this annex, however, our aim is to show that our main results are unchanged when the alternative sorting force is considered. The rationale is to look at the test scores obtained at grade 5, assuming that they reflect the same kind of ability that primary school teachers value and judge; in other words, we assume that the students who obtained higher scores in the INVALSI tests at grade 5 are also those whom primary school teachers report as better students to the junior secondary school teachers responsible for composing the classes. In these circumstances, if a school decides to “sort by ability”, we should observe strong structural differentiation between classes in terms of prior achievement, with the best students grouped in the same classes and likewise the worst. To the extent that prior achievement is related to socio-economic status (SES), the resulting sorting is based on ability and SES alike.

Operationally, we calculated the within-school (between-classes) variance using Prior Achievement as the focal variable (the indicator is named PriorAchievement_Var_Within_k). As described in the paper, information about prior achievement is, unfortunately, unavailable for almost 50 % of the students; consequently, we opted for a selection procedure guaranteeing that PriorAchievement_Var_Within_k is calculated only for schools where the proportion of students with available information is high enough (the threshold was set at 75 %; in other words, only schools for which we have prior achievement for at least 75 % of the students were analysed). After this further data restriction, we have a sample of around 140,000 students for Reading scores (33 % of the original population) and 180,000 for Mathematics (42 %), in 2056 schools. At this stage, we first verify that the pair-wise correlation between the two variables, although not high in magnitude (around 0.19), is statistically significant at the 1 % conventional level. Figure 8 graphically illustrates the relationship between the two indicators; Table 15 cross-tabulates it.
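The construction of a within-school (between-classes) variance indicator of this kind can be sketched as follows. The toy data, the function name and the expression of the indicator as a percentage of the school's total variance are illustrative assumptions, not INVALSI's exact formula.

```python
import numpy as np

def between_class_share(classes):
    """Between-classes share of a school's total variance (in %):
    size-weighted variance of class means over the total variance of
    the sorting variable across all of the school's pupils."""
    all_vals = np.concatenate(classes)
    grand = all_vals.mean()
    n = len(all_vals)
    between = sum(len(c) * (c.mean() - grand) ** 2 for c in classes) / n
    return 100 * between / all_vals.var()

# perfectly sorted school: one low-SES class, one high-SES class
sorted_school = [np.array([-1.0, -0.9, -1.1]), np.array([1.0, 0.9, 1.1])]
# balanced school: each class mirrors the school's overall SES mix
mixed_school = [np.array([-1.0, 1.0, 0.0]), np.array([-0.9, 1.1, -0.1])]
```

Running `between_class_share` on the two toy schools gives a value near 100 % for the sorted school and near 0 % for the balanced one, which is the intuition behind interpreting a high indicator value as evidence of sorting.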

Fig. 8

ESCS_Var_Within and PriorAchievement_Var_Within: correlations

Table 15 Cross-tabulation of ESCS_Var_Within and PriorAchievement_Var_Within

Then, we estimated the following alternative specification of the EPF as a sensitivity test for our baseline model:

$$Y_{ijkt} = \alpha_{0} + \alpha_{1} X_{1ijkt} + \alpha_{2} X_{2jkt} + \alpha_{3} X_{3kt} + \left[ {\alpha_{p} X_{pkt} } \right] + \varepsilon_{ijkt}$$
(7)

where X_pkt is PriorAchievement_Var_Within_k as calculated at school level, and the other variables are as in (2). The objective is to check that the coefficient estimates are not too different from those obtained in the baseline specification and, most importantly, to see whether the results obtained for the variable of interest (PriorAchievement_Var_Within_k) are coherent with those reported for our preferred indicator. A caution must be expressed here: as we do not have adequate instruments for this variable, we estimated two different models with two different underlying assumptions: (i) one without any instrument, whose results are likely to be affected by endogeneity between sorting and student achievement, and (ii) one with ESCS_Var_Within(t−1) as instrument, where the reliability of the results critically relies on the assumption that the same underlying phenomena drive “sorting by ability” and “sorting by SES” (this is indeed our main assumption in this paper). The results are reported in Table 14. Two main facts must be noticed, and both are good news for the robustness of the results obtained through the empirical analyses of this paper. First, all the coefficients at the student, class and school levels are estimated as practically identical to the baseline ones reported in Table 4; the only notable exception is that prior achievement seems to have a slightly higher effect (around 0.52–0.53 SD instead of 0.49–0.50). Second, the effect of PriorAchievement_Var_Within_k is very similar in magnitude and sign to that estimated for ESCS_Var_Within_k, even though in this case it does not reach statistical significance (it does in the model without IV, but we place less trust in that specification).
In summary: the variable measuring “sorting by ability” is correlated with our preferred indicator of “sorting by SES”; when included in the empirical analyses, the two indicators are largely interchangeable and do not alter the main results, so it can be assumed that the underlying forces at work are similar (see Tables 14, 15; Fig. 8).


Cite this article

Agasisti, T., Falzetti, P. Between-classes sorting within schools and test scores: an empirical analysis of Italian junior secondary schools. Int Rev Econ 64, 1–45 (2017). https://doi.org/10.1007/s12232-016-0261-4


Keywords

  • Between-classes sorting
  • Instrumental Variables (IV)
  • Educational evaluation
  • Equality

JEL Classification

  • I24
  • I21
  • J24