Social Indicators Research, Volume 122, Issue 2, pp 607–634

Assessing Divergences in Mathematics and Reading Achievement in Italian Primary Schools: A Proposal of Adjusted Indicators of School Effectiveness

  • Isabella Sulis
  • Mariano Porcu


This research pursues four main objectives by identifying plausible factors influencing Italian fifth-grade pupils’ achievement in mathematics and reading: (1) to assess the relationships between pupils’ performances and their socio-cultural characteristics; (2) to propose value-added measures of the contribution that schools make to pupils’ achievement; (3) to advance a system of indicators for detecting schools with distinctive performances; (4) to summarize the main evidence at different geographical levels. Nationwide pupils’ scores on mathematics and reading tests are jointly summarized using Item Response Theory models. A multilevel bivariate regression model with heteroscedastic random terms at school level is adopted both to single out the factors that account for the greatest variability in pupils’ achievement and to jointly model the unobserved heterogeneity among geographical areas. A system of school value-added measures is proposed to support comparative assessments at national and sub-national levels.
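The modelling strategy summarized above can be sketched as a bivariate two-level model; the rendering below is schematic, not the authors’ exact specification, and the symbols and the form of the heteroscedasticity are assumptions:

```latex
% Pupil i in school j, outcomes m = 1 (mathematics), m = 2 (reading);
% theta denotes the IRT-scaled test score.
\[
\theta_{ij}^{(m)} = \beta_0^{(m)} + \mathbf{x}_{ij}'\boldsymbol{\beta}^{(m)}
                    + u_j^{(m)} + e_{ij}^{(m)}, \qquad m = 1, 2,
\]
\[
\begin{pmatrix} u_j^{(1)} \\ u_j^{(2)} \end{pmatrix}
\sim N\!\left(\mathbf{0}, \boldsymbol{\Omega}_{u,j}\right),
\qquad
e_{ij}^{(m)} \sim N\!\left(0, \sigma_{e(m)}^{2}\right),
\]
```

where \(\mathbf{x}_{ij}\) collects pupil socio-cultural covariates and the school-level covariance matrix \(\boldsymbol{\Omega}_{u,j}\) is allowed to depend on school characteristics (the heteroscedastic random terms). In this reading, the predicted school effects \(\hat{u}_j^{(m)}\) would serve as the adjusted value-added indicators used for comparisons.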


Keywords: School effectiveness · Multilevel IRT · INVALSI · Adjusted indicators · Value-added measures



Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Dipartimento di Scienze Sociali e delle Istituzioni, Università di Cagliari, Cagliari, Italy
