Testing differences statistically with the Leiden ranking
The Leiden ranking 2011/2012 provides the Proportion top-10% publications (PPtop-10%) as a new indicator. This indicator allows for testing performance differences between two universities for statistical significance.
Keywords: Ranking · University · Test · Comparison · Expectation
On 1 December 2011, the Centre for Science and Technology Studies (CWTS) at Leiden University launched the Leiden ranking 2011/2012 at http://www.leidenranking.com/ranking.aspx. The Leiden ranking 2011/2012 measures the scientific performance of 500 major universities worldwide. The PPtop-10% is added as a new indicator of impact. This indicator corresponds with the excellence indicator (EI) recently introduced in the SCImago Institutions rankings (at http://www.scimagoir.com/pdf/sir_2011_world_report.pdf).
Whereas SCImago uses Scopus data, the Leiden ranking is based on the Web of Science data of Thomson Reuters. In addition to the “stability intervals” provided by CWTS, values for both PPtop-10% and EI can be tested statistically for significant differences from expectation. Furthermore, the statistical significance of performance differences between universities can be tested using the z-test for independent proportions (Bornmann et al. in press; Sheskin 2011, pp. 656f).
An Excel sheet can be downloaded from http://www.leydesdorff.net/leiden11/leiden11.xls into which the values for this indicator PPtop-10% can be fed in order to obtain a z value. The example in the download shows the results for Leiden University when compared with the University of Amsterdam (not statistically significantly different; p > 0.05), and for Leiden University when compared with the expectation (the value is statistically significant above the expectation; p < 0.001). The values in the sheet can be replaced with values in the ranking for any university or any set of two universities.
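The two tests performed in the Excel sheet can be sketched in a few lines of code. The following is a minimal illustration, not the CWTS implementation: the two-sample test uses the standard pooled z-test for independent proportions, and the test against expectation compares a university's PPtop-10% with the 10% expected by construction of the indicator. The function names and the example counts are ours, chosen only for illustration.

```python
from math import sqrt

def z_two_proportions(x1, n1, x2, n2):
    """Pooled z-test for two independent proportions:
    x1 of n1 papers in the top 10% for one university,
    x2 of n2 for the other."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def z_vs_expectation(x, n, p0=0.10):
    """z-test of an observed proportion against the expected
    value p0 = 0.10 (10% by definition of PPtop-10%)."""
    se = sqrt(p0 * (1 - p0) / n)
    return (x / n - p0) / se

# Hypothetical counts, for illustration only:
z = z_two_proportions(150, 1000, 100, 1000)
print(round(z, 2))                      # compare |z| with 1.96
print(round(z_vs_expectation(150, 1000), 2))
```

Feeding in the publication counts and top-10% counts from the ranking for any pair of universities reproduces the kind of comparison shown in the downloadable sheet.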
An absolute value of z larger than 1.96 indicates that the difference between two ratings is statistically significant at the 5% level (p < 0.05); the critical value for a test at the 1% level (p < 0.01) is 2.576. However, in a series of tests across many institutions, a more stringent significance level than 5% must be chosen for each individual test because of the family-wise accumulation of type-I errors (the so-called Bonferroni correction; cf. Leydesdorff et al. 2011).
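The Bonferroni correction divides the family-wise significance level by the number of comparisons, which raises the critical value of z accordingly. A small sketch using the Python standard library (the function name is ours):

```python
from statistics import NormalDist

def bonferroni_critical_z(alpha=0.05, m=1):
    """Two-sided critical z value when alpha is split
    over m comparisons (Bonferroni correction)."""
    return NormalDist().inv_cdf(1 - (alpha / m) / 2)

print(round(bonferroni_critical_z(0.05, 1), 2))   # single test: 1.96
print(round(bonferroni_critical_z(0.05, 10), 2))  # 10 comparisons
```

With, say, ten pairwise comparisons at a family-wise level of 5%, each individual test must clear a critical value of about 2.81 rather than 1.96.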
In summary, it seems fortunate to us that two major teams in our field (Granada and Leiden University) have agreed on using an indicator, for the Scopus and WoS databases respectively, that allows for testing differences in scientific performance for statistical significance. Of course, the problem of interdisciplinarity/multidisciplinarity remains when institutional units, such as universities, are ranked. This could be counteracted by field-normalization and perhaps by fractionation of citations (1/the number of references) in terms of the citing papers (Zhou and Leydesdorff 2011).
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
- Bornmann, L., de Moya-Anegón, F., & Leydesdorff, L. (2011, in press). The new excellence indicator in the world report of the SCImago institutions rankings. Journal of Informetrics. http://arxiv.org/abs/1110.2305.
- Sheskin, D. J. (2011). Handbook of parametric and nonparametric statistical procedures (5th ed.). Boca Raton: Chapman & Hall/CRC.