
Scientometrics, Volume 115, Issue 1, pp 585–606

University rankings: What do they really show?

  • Jill Johnes

Abstract

University rankings as developed by the media are used by many stakeholders in higher education: students looking for university places; academics looking for university jobs; university managers who need to maintain standing in the competitive arena of student recruitment; and governments who want to know that public funds spent on universities are delivering a world class higher education system. Media rankings deliberately draw attention to the performance of each university relative to all others, and as such they are undeniably simple to use and interpret. But one danger is that they are potentially open to manipulation and gaming because many of the measures underlying the rankings are under the control of the institutions themselves. This paper examines media rankings (constructed from an amalgamation of variables representing performance across numerous dimensions) to reveal the problems with using a composite index to reflect overall performance. It ends with a proposal for an alternative methodology which leads to groupings rather than point estimates.
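The abstract's core objection — that a composite index amalgamates many performance dimensions into one number — can be made concrete with a small sketch. This is purely illustrative (the institutions, indicators, scores, and weights below are hypothetical, not from the paper): it shows how a media-style league table is computed as a weighted sum of indicator scores, and how merely changing the weights reorders the table.

```python
# Illustrative sketch, not the paper's data or method: a media-style
# composite index is a weighted sum of normalised indicator scores.
# All institution names, scores, and weights here are hypothetical.

def composite_rank(scores, weights):
    """Rank institutions by a weighted sum of their indicator scores."""
    index = {
        name: sum(w * s for w, s in zip(weights, vals))
        for name, vals in scores.items()
    }
    # Highest composite score first, as in a published league table.
    return sorted(index, key=index.get, reverse=True)

# Hypothetical scores on three dimensions (teaching, research,
# employability), each already normalised to a 0-100 scale.
scores = {
    "Uni A": (90, 60, 70),
    "Uni B": (70, 90, 65),
    "Uni C": (75, 75, 80),
}

# Two equally defensible weighting schemes, two different tables:
print(composite_rank(scores, (0.5, 0.3, 0.2)))  # teaching-weighted
print(composite_rank(scores, (0.2, 0.6, 0.2)))  # research-weighted
```

Under the teaching-heavy weights Uni A tops the table; under the research-heavy weights Uni B does, with no change in any institution's underlying performance. This sensitivity of point rankings to arbitrary weight choices is one motivation for the paper's proposal to report groupings instead of point estimates.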

Keywords

Higher education · Rankings · Performance · Principal components analysis

Notes

Acknowledgements

I am grateful to an anonymous referee, to Geraint Johnes and Swati Virmani, and to the participants at the following events for comments and suggestions: Efficiency in Education, Politecnico di Milano, 20th–21st October 2016; Valuing Higher Education: An appreciation of the work of Gareth Williams, Centre for Higher Education Studies, Institute of Education, University College London, 15th November 2016; the Fourth Lisbon Research Workshop on Economics, Statistics and Econometrics of Education, Lisbon, Portugal, 26th–27th January 2017; and the Meeting of the Economics of Education Association, Murcia, 29th–30th June 2017.


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2018

Authors and Affiliations

  1. Huddersfield Business School, University of Huddersfield, Huddersfield, UK
