University rankings: What do they really show?


Abstract

University rankings as developed by the media are used by many stakeholders in higher education: students looking for university places; academics looking for university jobs; university managers who need to maintain standing in the competitive arena of student recruitment; and governments who want to know that public funds spent on universities are delivering a world-class higher education system. Media rankings deliberately draw attention to the performance of each university relative to all others, and as such they are undeniably simple to use and interpret. One danger, however, is that they are open to manipulation and gaming, because many of the measures underlying the rankings are under the control of the institutions themselves. This paper examines media rankings (constructed from an amalgamation of variables representing performance across numerous dimensions) to reveal the problems with using a composite index to reflect overall performance. It ends with a proposal for an alternative methodology which leads to groupings rather than point estimates.
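
As a concrete illustration of the problem the abstract raises, here is a minimal, self-contained Python sketch (hypothetical data, illustrative weights, and a perturbation scheme of my own choosing; not the paper's method or data) showing how a composite index built from standardised indicators can reorder institutions under small changes to the weights. The width of each institution's rank range is the kind of instability that motivates reporting groupings rather than point estimates.

```python
# A minimal sketch (hypothetical data, illustrative weights and perturbation
# scheme; not the paper's method or data). It builds a composite index from
# standardised indicators and shows how point ranks shift when the weights
# are perturbed slightly.
import numpy as np

rng = np.random.default_rng(42)
n_inst, n_indicators = 10, 4

# Hypothetical raw performance of 10 institutions on 4 dimensions.
raw = rng.normal(loc=60.0, scale=10.0, size=(n_inst, n_indicators))

# Standardise each indicator to a z-score so units do not drive the index.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)

base_weights = np.array([0.4, 0.3, 0.2, 0.1])  # illustrative weights only


def ranks(weights: np.ndarray) -> np.ndarray:
    """Rank institutions (1 = best) by weighted composite score."""
    composite = z @ weights
    return composite.argsort()[::-1].argsort() + 1


# Perturb the weights many times and record the rank each institution
# attains; a wide rank range signals an unstable point estimate.
all_ranks = []
for _ in range(1000):
    w = base_weights + rng.normal(scale=0.05, size=n_indicators)
    w = np.clip(w, 0.01, None)
    all_ranks.append(ranks(w / w.sum()))
all_ranks = np.array(all_ranks)

for i in range(n_inst):
    print(f"Institution {i}: ranks {all_ranks[:, i].min()}-{all_ranks[:, i].max()}")
```

Institutions whose simulated rank ranges overlap substantially are effectively indistinguishable, which is the intuition behind reporting groupings rather than a strict ordering.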


Notes

  1. Note that there is some evidence that they might be instrumental in determining VC pay; see, for example, Allcock et al. (2017).

  2. One could argue that the tables of performance produced in the Financial Times in the UK were also forerunners to the media rankings we see today. These covered distinct aspects such as: achievement rates (Dixon 1976); labour market destinations (Dixon 1985); and completion rates (Dixon 1989).

  3. It should be noted that the principal-agent model may be overly simplistic for a complex organisation such as a university, which produces multiple outputs and may cross-subsidise across them.

  4. The difficulties for managers of dealing with multiple stakeholders who may have conflicting objectives are discussed in Weimer and Vining (1996).

  5. The results of the REF 2014 can be found here: http://www.ref.ac.uk/. Note that REF 2014 was preceded by various Research Assessment Exercises (RAEs) undertaken in 1986, 1989, 1992, 1996, 2001 and 2008.

  6. See https://education.gov.au/research-block-grants.

  7. See http://www.hefce.ac.uk/lt/howfund/.

  8. See https://www.gov.uk/government/speeches/teaching-at-the-heart-of-the-system.

  9. See http://www.hefce.ac.uk/lt/tef/.

  10. Source: https://www.hesa.ac.uk/data-and-analysis/performance-indicators accessed 17th July 2017.

  11. Source: http://www.thecompleteuniversityguide.co.uk/league-tables/methodology/ accessed 17th July 2017. Note that this particular university guide is chosen purely for illustrative purposes; conclusions from any analysis presented here can be generalised across all university guides.

  12. Note that the original data have been standardised to have mean zero; a higher value represents more favourable performance on every dimension.

  13. Each performance measure is usually standardised to produce a z-score before calculating an overall ranking. This ensures that the composite index is not affected by the units of measurement of the components underlying it (see the sketch after these notes).

  14. The interested reader can find more details on all the techniques elsewhere (Saltelli et al. 2005; Johnes 2015).

  15. This cross-subsidisation actually has more disadvantages than simply reducing diversity, one of which is a sub-optimal allocation of resources to university activities; see Muller (2017) for a discussion of the distortions created by rent-seeking behaviour in higher education.

  16. Note that two HEIs ranked 78th and 126th have been excluded because they have no observations on the research indicators.
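
As a minimal illustration of the z-score standardisation described in note 13 (hypothetical numbers and a hypothetical indicator, not data from the paper): because standardisation removes the units of measurement, re-expressing an indicator in different units leaves its z-scores, and hence the composite index, unchanged.

```python
# A minimal illustration (hypothetical numbers, not the paper's data) of the
# z-score standardisation in note 13: changing an indicator's units leaves
# its z-scores, and hence any composite built from them, unchanged.
import numpy as np

spend_gbp = np.array([1200.0, 950.0, 1430.0, 1100.0])  # spend per student, GBP
spend_pence = spend_gbp * 100                          # same data in pence

def zscore(x):
    """Standardise to mean zero and unit standard deviation."""
    return (x - x.mean()) / x.std()

# Both unit choices yield identical standardised values.
print(np.allclose(zscore(spend_gbp), zscore(spend_pence)))  # True
```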

References

  • Allcock, D., Johnes, J., & Virmani, S. (2017). Efficiency and VC pay: Exploring the value conundrum. In European workshop on efficiency and productivity analysis, London, 13th–15th June.

  • Bachan, R. (2015). Grade inflation in UK higher education. Studies in Higher Education. https://doi.org/10.1080/03075079.2015.1019450.

  • Barr, R. S., Durchholz, M. L., & Seiford, L. (2000). Peeling the DEA onion: Layering and rank-ordering DMUs using tiered DEA. Dallas, TX: Southern Methodist University Technical Report.

  • Bekhradnia, B. (2016). International university rankings: For good or ill? HEPI Report 89. Higher Education Policy Institute. http://www.hepi.ac.uk/wp-content/uploads/2016/12/Hepi_International-university-rankings-For-good-or-for-ill-REPORT-89-10_12_16_Screen.pdf.

  • Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1.

  • Bougnol, M.-L., & Dula, J. H. (2006). Validating DEA as a ranking tool: An application of DEA to assess performance in higher education. Annals of Operations Research, 145(1), 339–365.

  • Cattell, J. M. (1906). American men of science: A biographical dictionary. New York, NY: Science Press.

  • Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(4), 429–444.

  • Cherchye, L., Moesen, W., Rogge, N., & Van Puyenbroeck, T. (2007). An introduction to ‘benefit of the doubt’ composite indicators. Social Indicators Research, 82(1), 111–145.

  • De Fraja, G., & Valbonesi, P. (2012). The design of the university system. Journal of Public Economics, 96(3–4), 317–330.

  • Dill, D. D. (2009). Convergence and diversity: The role and influence of university ranking. In B. M. Kehm & B. Stensaker (Eds.), University rankings, diversity, and the new landscape of higher education (pp. 97–116). Rotterdam: Sense Publishers.

  • Dill, D. D., & Soo, M. (2005). Academic quality, league tables, and public policy: A cross-national analysis of university ranking systems. Higher Education, 49(4), 495–533.

  • Dixon, M. (1976). Careers: More means better. London: The Financial Times.

  • Dixon, M. (1985). Jobs column: What happened to universities’ graduates?. London: The Financial Times.

  • Dixon, M. (1989). Jobs column: Benefits, and risks, of trying for a degree. London: The Financial Times.

  • Ehrenberg, R. G. (2012). American higher education in transition. Journal of Economic Perspectives, 26(1), 193–216.

  • Harman, G. (2011). Competitors of rankings: New directions in quality assurance and accountability. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 35–54). Dordrecht: Springer.

  • Hazelkorn, E. (2015). How the geo-politics of rankings is shaping behaviour. Higher Education in Russia and Beyond, 2(4), 6–7.

  • HEFCE. (2008). Counting what is measured or measuring what counts? League tables and their impact on higher education institutions in England. Bristol: Higher Education Funding Council for England.

  • HEFCE. (2012). Collaborations, alliances and mergers in higher education: Consultation on lessons learned and guidance for institutions. London: HEFCE 2012/06 Higher Education Funding Council for England.

  • Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251.

  • Himanen, L., Auranen, O., Puuska, H.-M., & Nieminen, M. (2009). Influence of research funding and science policy on university research performance: A comparison of five countries. Science and Public Policy, 36(6), 419–430.

  • Hughes, R. M. (1925). A study of the graduate schools of America. Oxford, OH: Miami University Press.

  • Johnes, G. (1992). Performance indicators in higher education: A survey of recent work. Oxford Review of Economic Policy, 8(2), 19–34.

  • Johnes, J. (1996). Performance assessment in higher education in Britain. European Journal of Operational Research, 89, 18–33.

  • Johnes, G. (2004). Standards and grade inflation. In G. Johnes & J. Johnes (Eds.), International handbook on the economics of education (pp. 462–483). Cheltenham: Edward Elgar.

  • Johnes, J. (2015). Operational research in education. European Journal of Operational Research, 243(3), 683–696.

  • Johnes, J. (2016). Performance indicators and rankings in higher education. In R. Barnett, P. Temple, & P. Scott (Eds.), Valuing higher education: An appreciation of the work of Gareth Williams. London: UCL Institute of Education Press.

  • Johnes, G., & Soo, K. T. (2015). Grades across universities over time. The Manchester School. https://doi.org/10.1111/manc.12138.

  • Jump, P. (2014). Cut 50% of universities and bar undergraduates from Oxbridge. Times Higher Education. London: Times Supplements Ltd, 25th June 2014.

  • Kehm, B. M. (2014). Global university rankings—impacts and unintended side effects. European Journal of Education, 49(1), 102–112.

  • Kelchtermans, S., & Verboven, F. (2010). Program duplication in higher education is not necessarily bad. Journal of Public Economics, 94(5–6), 397–409.

  • Locke, W., Verbik, L., Richardson, J. T. E., & Roger, K. (2008). Counting what is measured or measuring what counts? League tables and their impact on higher education institutions in England. Bristol: Higher Education Funding Council for England.

  • Longden, B. (2011). Ranking indicators and weights. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 73–104). Dordrecht: Springer.

  • Marginson, S. (2014). University rankings and social science. European Journal of Education, 49(1), 45–59.

  • Marginson, S., & van der Wende, M. (2007). To rank or to be ranked: The impact of global rankings in higher education. Journal of Studies in International Education, 11(3–4), 306–329.

  • Morphew, C. C., & Swanson, C. (2011). On the efficacy of raising your university’s ranking. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 185–200). Dordrecht: Springer.

  • Muller, S. M. (2017). Academics as rent seekers: Distorted incentives in higher education, with reference to the South African case. International Journal of Educational Development, 52, 58–67.

  • Newman, M. (2008). Students urged to inflate national survey marks to improve job options. Times Higher Education. London: Times Supplements Ltd, 15th May 2008, p. 7.

  • Pollard, E., Williams, M., Williams, J., Bertram, C., & Buzzeo, J. (2013). How should we measure higher education? A fundamental review of the Performance Indicators, Part 2: The evidence report. Brighton: Institute for Employment Studies.

  • Popov, S. V., & Bernhardt, D. (2013). University competition, grading standards, and grade inflation. Economic Inquiry, 51(3), 1764–1778.

  • Rolfe, H. (2003). University strategy in an age of uncertainty: The effect of higher education funding on old and new universities. Higher Education Quarterly, 57(1), 24–47.

  • Saisana, M., d’Hombres, B., & Saltelli, A. (2011). Rickety numbers: Volatility of university rankings and policy implications. Research Policy, 40(1), 165–177.

  • Saltelli, A., Nardo, M., Tarantola, S., Giovannini, E., Hoffman, A., & Saisana, M. (2005). Handbook on constructing composite indicators: Methodology and user guide. Paris: OECD.

  • Shin, J. C., & Toutkoushian, R. K. (2011). The past, present, and future of university rankings. In J. C. Shin, R. K. Toutkoushian, & U. Teichler (Eds.), University rankings: Theoretical basis, methodology and impacts on global higher education (pp. 1–18). Dordrecht: Springer.

  • Stark, P. B., & Freishtat, R. (2014). An evaluation of course evaluations. ScienceOpen Research. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1.

  • Usher, A., & Medow, J. (2009). A global survey of university rankings and league tables. In B. M. Kehm & B. Stensaker (Eds.), University rankings, diversity, and the new landscape of higher education (pp. 3–18). Rotterdam: Sense Publishers.

  • Weimer, D. L., & Vining, A. R. (1996). Economics. In D. F. Kettl & H. B. Milward (Eds.), The state of public management (pp. 92–117). Baltimore, MD: Johns Hopkins University Press.

  • Yorke, M. (1997). A good league table guide? Quality Assurance in Education, 5(2), 61–72.

Acknowledgements

I am grateful for comments and suggestions to an anonymous referee, to Geraint Johnes and Swati Virmani, and to the participants at: Efficiency in Education, Politecnico di Milano 20th–21st October 2016; Valuing Higher Education: An appreciation of the work of Gareth Williams, Centre for Higher Education Studies, Institute of Education, University College London, 15th November 2016; the Fourth Lisbon Research Workshop on Economics, Statistics and Econometrics of Education, Lisbon, Portugal, 26th–27th January 2017; the Meeting of the Economics of Education Association, Murcia, 29th–30th June 2017.

Author information

Corresponding author

Correspondence to Jill Johnes.

About this article

Cite this article

Johnes, J. University rankings: What do they really show? Scientometrics 115, 585–606 (2018). https://doi.org/10.1007/s11192-018-2666-1
