Scientometrics, Volume 96, Issue 1, pp 337–364

What about excellence in teaching? A benevolent ranking of universities

Abstract

Existing university rankings apply fixed, exogenous weights based on a theoretical framework or on stakeholder or expert opinions. Fixed weights cannot satisfy all requirements of a ‘good ranking’ according to the Berlin Principles: as the strengths of universities differ, the weights in the ranking should differ as well. This paper proposes a fully nonparametric methodology to rank universities that is in line with the Berlin Principles. It assigns to each university the weights that maximize (minimize) the impact of the criteria on which the university performs relatively well (poorly). The method accounts for background characteristics of universities and evaluates which of these characteristics affect the ranking; in particular, it accounts for the level of tuition fees, an English-speaking environment, size, and research or teaching orientation. In general, medium-sized universities in English-speaking countries benefit from the benevolent ranking, whereas rankings with fixed weighting schemes reward large, research-oriented universities. Swiss and German universities in particular improve their position significantly in a more benevolent ranking.
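The endogenous weighting the abstract describes is closely related to the ‘benefit of the doubt’ composite-indicator model, a variant of data envelopment analysis: each university receives the weights that put its own performance in the best possible light, subject to no university scoring above one under those same weights. A minimal sketch of that weighting step (not the paper’s full conditional-efficiency method) using `scipy.optimize.linprog` and hypothetical criterion scores:

```python
import numpy as np
from scipy.optimize import linprog

def bod_scores(Y):
    """Benefit-of-the-doubt composite scores.

    For each unit i, choose nonnegative weights w that maximize its own
    composite score sum_j w[j] * Y[i, j], subject to every unit's score
    staying at or below 1 under those same weights.
    """
    n, m = Y.shape
    scores = np.empty(n)
    for i in range(n):
        # linprog minimizes, so negate the objective to maximize unit i's score.
        res = linprog(c=-Y[i], A_ub=Y, b_ub=np.ones(n),
                      bounds=[(0, None)] * m, method="highs")
        scores[i] = -res.fun
    return scores

# Three hypothetical universities scored on two criteria (teaching, research).
Y = np.array([[0.9, 0.3],
              [0.3, 0.9],
              [0.5, 0.5]])
print(bod_scores(Y))  # the first two units reach 1 under their own best weights
```

A teaching-strong university puts all its weight on teaching, a research-strong one on research, so each is judged on its own strengths; a university dominated on every criterion scores below one under any admissible weights.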

Keywords

University ranking · Endogenous weight selection · Conditional efficiency · Higher education

JEL Classification

C14 · C25 · I21

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2013

Authors and Affiliations

  1. Top Institute for Evidence Based Education Research, Maastricht University, Maastricht, The Netherlands
  2. Faculty of Business and Economics, Katholieke Universiteit Leuven (KULeuven), Leuven, Belgium
  3. Department of Economic Statistics, University of Economics, Prague, Prague 3, Czech Republic