
How much do different ways of calculating percentiles influence the derived performance indicators? A case study


Abstract

Bibliometric indicators can be determined by comparing specific citation records with the percentiles of a reference set. However, the computation of percentiles is ambiguous because a significant number of papers with the same citation count are usually found at the border between percentile rank classes. The present case study of the citations to the journal Europhysics Letters (EPL), in comparison with all physics papers from the Web of Science, shows the deviations that occur due to the different ways of treating the tied papers when evaluating the percentage of highly cited publications. A strong bias can occur if the papers tied at the threshold number of citations are all considered highly cited or all considered not highly cited.
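The ambiguity can be made concrete with a small sketch. The following Python snippet is illustrative only: the function, the tie-handling rule names, and all citation counts are invented for this example, and the fractional rule shown is just one possible way of apportioning tied papers; it is not the computation or the data used in the paper.

```python
def top_share(journal_counts, reference_counts, top_fraction=0.1, ties="all_in"):
    """Share of a journal's papers counted as 'highly cited' relative to a
    reference set, under different treatments of papers tied at the border
    between percentile rank classes (all values here are illustrative)."""
    ranked = sorted(reference_counts, reverse=True)
    cutoff = int(top_fraction * len(ranked))   # number of reference papers in the top class
    threshold = ranked[cutoff - 1]             # citation count at the class border

    above = sum(1 for c in journal_counts if c > threshold)
    tied = sum(1 for c in journal_counts if c == threshold)

    if ties == "all_in":       # tied papers all count as highly cited
        hits = above + tied
    elif ties == "all_out":    # tied papers all count as not highly cited
        hits = above
    elif ties == "fractional": # tied papers count only in the proportion needed
        ref_above = sum(1 for c in ranked if c > threshold)
        ref_tied = ranked.count(threshold)
        weight = (cutoff - ref_above) / ref_tied
        hits = above + weight * tied
    else:
        raise ValueError(ties)
    return hits / len(journal_counts)

# Invented reference set of 20 papers; the top 10% class holds 2 papers,
# but four papers are tied at the border count of 25 citations.
reference = [30, 25, 25, 25, 25, 12, 10, 9, 8, 7, 6, 5, 5, 4, 3, 3, 2, 2, 1, 0]
journal = [30, 25, 25, 10, 4]
for rule in ("all_in", "all_out", "fractional"):
    print(rule, top_share(journal, reference, ties=rule))
# Prints 0.6, 0.2, and 0.3 respectively: the indicator changes by a factor
# of three depending solely on how the tied papers are treated.
```

The spread between the "all_in" and "all_out" results in this toy example is exactly the kind of bias the abstract refers to: when many papers share the threshold citation count, assigning them wholesale to one side of the border can dominate the indicator.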



Acknowledgments

I thank L. Waltman for his assistance in obtaining the citation data. Useful discussions with L. Bornmann, L. Leydesdorff, and L. Waltman are gratefully acknowledged.

Author information

Corresponding author

Correspondence to Michael Schreiber.


About this article

Cite this article

Schreiber, M. How much do different ways of calculating percentiles influence the derived performance indicators? A case study. Scientometrics 97, 821–829 (2013). https://doi.org/10.1007/s11192-013-0984-x

