Journal of the Operational Research Society, Volume 54, Issue 7, pp 679–691

Measuring and managing educational performance

Review Paper

Abstract

Performance measures have come to play a central role in the management of the education sector. This paper identifies a number of desirable properties for educational performance measures, whose breach is likely to result in sub-optimal patterns of educational outcomes and resource management. Recent trends in the study of mathematics in schools give particular cause for concern. The paper examines several outstanding issues that require further attention if performance evaluation techniques are to provide reliable measures of school effectiveness.

Keywords

performance evaluation; education; data envelopment analysis; value added; school effectiveness; multilevel modelling
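The data envelopment analysis (DEA) approach named in the keywords can be illustrated with a minimal sketch of the standard input-oriented CCR (Charnes–Cooper–Rhodes) envelopment model, solved as a linear programme with SciPy. The four-school dataset below is hypothetical and for illustration only; it is not drawn from the paper.

```python
# Minimal input-oriented CCR DEA sketch. For the unit under evaluation we
# minimise the input-contraction factor theta subject to a convex-cone
# combination of peers producing at least its outputs from theta-scaled inputs.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, unit):
    """Efficiency score in (0, 1] for row `unit`.

    X: (n_units, m) input matrix; Y: (n_units, s) output matrix.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0                        # minimise theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    # Inputs:  sum_j lambda_j * x_ij - theta * x_i,unit <= 0
    A_ub[:m, 0] = -X[unit]
    A_ub[:m, 1:] = X.T
    # Outputs: -sum_j lambda_j * y_rj <= -y_r,unit
    A_ub[m:, 1:] = -Y.T
    b_ub[m:] = -Y[unit]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical schools: inputs = (spend per pupil, staff ratio),
# output = mean exam score.
X = np.array([[4.0, 1.0], [5.0, 1.2], [6.0, 1.5], [4.5, 0.9]])
Y = np.array([[50.0], [55.0], [54.0], [60.0]])
scores = [dea_ccr_efficiency(X, Y, k) for k in range(4)]
```

Here the fourth school dominates on output per unit of both inputs, so it scores 1.0 and forms the efficient frontier against which the others are scaled down.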


Copyright information

© Palgrave Macmillan Ltd 2003

Authors and Affiliations

University of York, York, UK
