Journal of Productivity Analysis, Volume 25, Issue 3, pp 279–289

Aggregate Versus Disaggregate Data in Measuring School Quality

Abstract

This article develops a measure of efficiency for use with aggregated data. Unlike the most commonly used efficiency measures, our estimator adjusts for the heteroskedasticity created by aggregation. We compare our estimator to the estimators currently used to measure school efficiency, and the theoretical results are supported by a Monte Carlo experiment. The results show that for samples containing small schools (a sample average of about 100 students per school, but with several schools of about 30 or fewer students), the proposed aggregate data estimator performs better than the commonly used OLS estimator and only slightly worse than the multilevel estimator. Thus, when school officials are unable to gather multilevel or disaggregated data, the aggregate data estimator proposed here should be used. When disaggregated data are available, the standardized value-added estimator should be used to rank schools.
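The heteroskedasticity the abstract refers to can be illustrated with a generic sketch (this is not the article's estimator): when student-level scores are averaged within a school of $n_i$ students, the error variance of the school mean is $\sigma^2/n_i$, so school means from small schools are noisier than those from large schools. Ordinary OLS on the aggregated data remains unbiased but is inefficient; weighting each school by its enrollment (the inverse of its error variance, up to $\sigma^2$) restores efficiency. All variable names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 schools with unequal enrollments, including small schools (~30 students).
n_schools = 200
sizes = rng.integers(30, 300, size=n_schools)

beta0, beta1, sigma = 50.0, 2.0, 10.0   # true intercept, slope, student-level error sd
x = rng.normal(0.0, 1.0, size=n_schools)  # a school-level input (illustrative)

# Averaging n_i student-level errors gives a school-mean error variance of
# sigma^2 / n_i -- aggregation itself creates heteroskedasticity across schools.
y_bar = beta0 + beta1 * x + rng.normal(0.0, sigma / np.sqrt(sizes))

X = np.column_stack([np.ones(n_schools), x])

# OLS on the aggregated data: unbiased but inefficient under this heteroskedasticity.
ols = np.linalg.lstsq(X, y_bar, rcond=None)[0]

# WLS with weights n_i (proportional to the inverse of each mean's error variance).
w = sizes.astype(float)
XtWX = X.T @ (w[:, None] * X)
XtWy = X.T @ (w * y_bar)
wls = np.linalg.solve(XtWX, XtWy)

print("OLS estimates:", ols)
print("WLS estimates:", wls)
```

Across repeated simulations, both estimators center on the true coefficients, but the weighted estimator has smaller sampling variance, which is the sense in which an aggregation-aware estimator outperforms plain OLS on grouped data.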

Keywords

Data aggregation · Error components · School quality

JEL Classification

C23 · I21

Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  1. Department of Economics, Cleveland State University, Cleveland, USA
  2. Department of Agricultural Economics, Oklahoma State University, Stillwater, USA
