How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations

Abstract

Although bibliometrics has been a distinct research field for many years, there is still no uniformity in the way bibliometric analyses are applied to individual researchers. This study therefore sets out proposals for how to evaluate individual researchers working in the natural and life sciences. 2005 saw the introduction of the h index, which expresses a researcher's productivity and the impact of his or her publications in a single number (h is the number of publications with at least h citations); however, a single number cannot capture the multidimensional complexity of research performance, nor does it permit inter-personal comparisons. The study accordingly recommends a set of indicators for evaluating researchers. Our proposals relate to the selection of the data on which an evaluation is based, the analysis of those data and the presentation of the results.
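The h index defined above (the largest h such that h publications have at least h citations each) is easy to compute from a list of per-paper citation counts. A minimal sketch, with an illustrative function name and sample data:

```python
def h_index(citations):
    """Largest h such that at least h publications
    have at least h citations each."""
    # Rank papers by citation count (descending); h is the last
    # rank at which the citation count still reaches the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how a single highly cited paper barely moves h, which is one reason the study argues that one number cannot capture the full citation distribution.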

Figs. 1–3 (figures not reproduced in this version)

Notes

  1. The results of studies on citing behaviour “suggest that not only the content of scientific work, but also other, in part non-scientific, factors play a role in citing behaviour. Citations can therefore be viewed as a complex, multidimensional and not a unidimensional phenomenon” (Bornmann and Daniel 2008, p. 69). According to van Raan (2005a) “there is, however, sufficient evidence that these reference motives are not so different or ‘randomly given’ to such an extent that the phenomenon of citation would lose its role as a reliable measure of impact” (p. 135). A prerequisite is that the publication set of the researcher is sufficiently large.

  2. Publications and citations are linked to scientific practice to varying degrees; otherwise, salami slicing (the salami style of publishing) could not exist (Bornmann and Daniel 2007a). Scientists have been found to slice up their data and interpretations into two, three, four, or more papers.

  3. As an alternative to the NJP, other methods for normalizing the JIF could be used. An overview of these methods can be found in Vinkler (2010, pp. 186–189). For example, an interesting alternative is the %Q1 indicator: the proportion of a researcher's publications that appeared in the most influential journals, i.e. those ranked in the first quartile (top 25 %) of their subject categories. An advantage of this indicator is that an expected value is available: one can expect 25 % of a researcher's publications to have been published in first-quartile journals.

    A disadvantage of all normalizing methods is that they rely on journal sets to delineate different fields. It is well known that these categories can be quite imprecise, especially in the case of multi-disciplinary journals and highly specialized fields of research (Bornmann et al. 2008). Thus, if a publication list contains publications from such journals, or the evaluated scientist is active in a highly specialized field, the use of journal metrics based on journal sets can be problematic.
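The %Q1 indicator described in note 3 can be sketched as follows; the function name and the per-paper quartile ranks (1–4, taken from the journal's position in its subject category) are illustrative assumptions:

```python
def q1_share(quartiles):
    """%Q1 indicator: share of a researcher's papers published in
    journals ranked in the first quartile of their subject category.
    `quartiles` holds the journal quartile (1-4) of each paper."""
    if not quartiles:
        return 0.0
    return sum(1 for q in quartiles if q == 1) / len(quartiles)

# Under the null expectation, 25 % of papers fall in Q1;
# a value above 0.25 signals above-average journal placement.
share = q1_share([1, 1, 2, 3, 1, 4, 2, 1])
print(round(share, 2))  # 0.5
```

Comparing the computed share against the 0.25 baseline is what gives this indicator its built-in expected value.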

  4. Citation is a probabilistic process, and the number of citations to a researcher's publications may therefore vary for all sorts of reasons that have nothing to do with cognitive impact (Bornmann and Daniel 2008). In addition, the measurement of citations inevitably entails measurement error. Hence, statistical estimates of the possible error involved, such as confidence intervals (Cumming 2012) or stability intervals (Waltman et al. 2012b), could be calculated around the values of citation indicators and reported alongside them.
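One common way to obtain such an interval is the percentile bootstrap: resample the researcher's papers with replacement and recompute the indicator many times. The sketch below applies this to the mean citation rate; the function name, sample data and parameters are illustrative, and this is a generic bootstrap rather than the specific procedure of Waltman et al. (2012b):

```python
import random

def bootstrap_interval(citations, stat=lambda xs: sum(xs) / len(xs),
                       n_boot=10_000, alpha=0.05, seed=42):
    """Percentile-bootstrap interval for a citation indicator
    (here the mean citations per paper), resampling papers
    with replacement n_boot times."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(citations) for _ in citations])
        for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Skewed citation counts give a wide, asymmetric interval,
# reflecting how unstable point indicators are for small sets.
lo, hi = bootstrap_interval([0, 1, 2, 3, 5, 8, 13, 40])
```

Reporting the interval rather than the bare mean makes visible how much a few highly cited papers dominate the indicator.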

References

  • Abramo, G., Cicero, T., & D’Angelo, C. A. (2011). Assessing the varying level of impact measurement accuracy as a function of the citation window length. Journal of Informetrics, 5(4), 659–667. doi:10.1016/j.joi.2011.06.004.

  • Abramo, G., & D’Angelo, C. (2011). Evaluating research: From informed peer review to bibliometrics. Scientometrics, 87(3), 499–514. doi:10.1007/s11192-011-0352-7.

  • Abramo, G., D’Angelo, C. A., & Costa, F. D. (2010). Testing the trade-off between productivity and quality in research activities. Journal of the American Society for Information Science and Technology, 61(1), 132–140.

  • Aksnes, D. W. (2003). A macro study of self-citation. Scientometrics, 56(2), 235–246.

  • Albarrán, P., & Ruiz-Castillo, J. (2011). References made and citations received by scientific articles. Journal of the American Society for Information Science and Technology, 62(1), 40–49. doi:10.1002/asi.21448.

  • Alonso, S., Cabrerizo, F. J., Herrera-Viedma, E., & Herrera, F. (2009). h-Index: A review focused in its variants, computation and standardization for different scientific fields. Journal of Informetrics, 3(4), 273–289. doi:10.1016/j.joi.2009.04.001.

  • American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association (APA).

  • Andres, A. (2011). Measuring academic research: How to undertake a bibliometric study. New York, NY: Neal-Schuman Publishers.

  • Azoulay, P., Graff Zivin, J. S., & Manso, G. (2009). Incentives and creativity: Evidence from the academic life sciences (NBER Working Paper No. 15466). Cambridge, MA: National Bureau of Economic Research (NBER).

  • Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45, 199–245.

  • Bornmann, L. (2013a). A better alternative to the h index. Journal of Informetrics, 7(1), 100. doi:10.1016/j.joi.2012.09.004.

  • Bornmann, L. (2013b). How to analyse percentile citation impact data meaningfully in bibliometrics: The statistical analysis of distributions, percentile rank classes and top-cited papers. Journal of the American Society for Information Science and Technology, 64(3), 587–595.

  • Bornmann, L. (2013c). The problem of citation impact assessments for recent publication years in institutional evaluations. Journal of Informetrics, 7(3), 722–729. doi:10.1016/j.joi.2013.05.002.

  • Bornmann, L., Bowman, B. F., Bauer, J., Marx, W., Schier, H., & Palzenberger, M. (in press). Standards for using bibliometrics in the evaluation of research institutes. In B. Cronin & C. Sugimoto (Eds.), Next generation metrics. Cambridge, MA: MIT Press.

  • Bornmann, L., & Daniel, H.-D. (2007a). Multiple publication on a single research study: Does it pay? The influence of number of research articles on total citation counts in biomedicine. Journal of the American Society for Information Science and Technology, 58(8), 1100–1107.

  • Bornmann, L., & Daniel, H.-D. (2007b). What do we know about the h index? Journal of the American Society for Information Science and Technology, 58(9), 1381–1385. doi:10.1002/asi.20609.

  • Bornmann, L., & Daniel, H.-D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45–80. doi:10.1108/00220410810844150.

  • Bornmann, L., & Daniel, H.-D. (2009). The state of h index research. Is the h index the ideal way to measure research performance? EMBO Reports, 10(1), 2–6. doi:10.1038/embor.2008.233.

  • Bornmann, L., de Moya Anegón, F., & Leydesdorff, L. (2012a). The new excellence indicator in the world report of the SCImago Institutions Rankings 2011. Journal of Informetrics, 6(2), 333–335. doi:10.1016/j.joi.2011.11.006.

  • Bornmann, L., de Moya-Anegón, F., & Leydesdorff, L. (2010). Do scientific advancements lean on the shoulders of giants? A bibliometric investigation of the Ortega hypothesis. PLoS ONE, 5(10), e11344.

  • Bornmann, L., & Marx, W. (in press). Distributions instead of single numbers: Percentiles and beam plots for the assessment of single researchers. Journal of the American Society for Information Science and Technology.

  • Bornmann, L., Marx, W., Gasparyan, A. Y., & Kitas, G. D. (2012b). Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatology International (Clinical and Experimental Investigations), 32(7), 1861–1867.

  • Bornmann, L., Marx, W., Schier, H., Rahm, E., Thor, A., & Daniel, H. D. (2009). Convergent validity of bibliometric Google Scholar data in the field of chemistry. Citation counts for papers that were accepted by Angewandte Chemie International Edition or rejected but published elsewhere, using Google Scholar, Science Citation Index, Scopus, and Chemical Abstracts. Journal of Informetrics, 3(1), 27–35. doi:10.1016/j.joi.2008.11.001.

  • Bornmann, L., & Mutz, R. (2013). The advantage of the use of samples in evaluative bibliometric studies. Journal of Informetrics, 7(1), 89–90. doi:10.1016/j.joi.2012.08.002.

  • Bornmann, L., Mutz, R., & Daniel, H.-D. (2008a). Are there better indices for evaluation purposes than the h index? A comparison of nine different variants of the h index using data from biomedicine. Journal of the American Society for Information Science and Technology, 59(5), 830–837. doi:10.1002/asi.20806.

  • Bornmann, L., Mutz, R., Hug, S. E., & Daniel, H. D. (2011a). A meta-analysis of studies reporting correlations between the h index and 37 different h index variants. Journal of Informetrics, 5(3), 346–359. doi:10.1016/j.joi.2011.01.006.

  • Bornmann, L., Mutz, R., Marx, W., Schier, H., & Daniel, H.-D. (2011b). A multilevel modelling approach to investigating the predictive validity of editorial decisions: Do the editors of a high-profile journal select manuscripts that are highly cited after publication? Journal of the Royal Statistical Society: Series A (Statistics in Society), 174(4), 857–879. doi:10.1111/j.1467-985X.2011.00689.x.

  • Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008b). Use of citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics, 8, 93–102. doi:10.3354/esep00084.

  • Bornmann, L., & Ozimek, A. (2012). Stata commands for importing bibliometric data and processing author address information. Journal of Informetrics, 6(4), 505–512. doi:10.1016/j.joi.2012.04.002.

  • Boyack, K. W. (2004). Mapping knowledge domains: Characterizing PNAS. Proceedings of the National Academy of Sciences of the United States of America, 101, 5192–5199.

  • Butler, L., & Visser, M. S. (2006). Extending citation analysis to non-source items. Scientometrics, 66(2), 327–343. doi:10.1007/s11192-006-0024-1.

  • Cole, S. (1992). Making science. Between nature and society. Cambridge, MA: Harvard University Press.

  • Coleman, B. J., Bolumole, Y. A., & Frankel, R. (2012). Benchmarking individual publication productivity in logistics. Transportation Journal, 51(2), 164–196.

  • Costas, R., van Leeuwen, T. N., & Bordons, M. (2010). A bibliometric classificatory approach for the study and assessment of research performance at the individual level: The effects of age on productivity and impact. Journal of the American Society for Information Science and Technology, 61(8), 1564–1581.

  • Council of Canadian Academies. (2012). Informing research choices: Indicators and judgment: The expert panel on science performance and research funding. Ottawa: Council of Canadian Academies.

  • Cronin, B., & Meho, L. I. (2007). Timelines of creativity: A study of intellectual innovators in information science. Journal of the American Society for Information Science and Technology, 58(13), 1948–1959. doi:10.1002/Asi.20667.

  • Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. London: Routledge.

  • Danell, R. (2011). Can the quality of scientific work be predicted using information on the author’s track record? Journal of the American Society for Information Science and Technology, 62(1), 50–60. doi:10.1002/asi.21454.

  • D’Angelo, C. A., Giuffrida, C., & Abramo, G. (2011). A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments. Journal of the American Society for Information Science and Technology, 62(2), 257–269. doi:10.1002/asi.21460.

  • de Bellis, N. (2009). Bibliometrics and citation analysis: From the Science Citation Index to Cybermetrics. Lanham, MD: Scarecrow Press.

  • de Moya-Anegón, F., Guerrero-Bote, V. P., Bornmann, L., & Moed, H. F. (2013). The research guarantors of scientific papers and the output counting: A promising new approach. Scientometrics, 97(2), 421–434.

  • Doane, D. P., & Tracy, R. L. (2000). Using beam and fulcrum displays to explore data. American Statistician, 54(4), 289–290. doi:10.2307/2685780.

  • Duffy, R., Jadidian, A., Webster, G., & Sandell, K. (2011). The research productivity of academic psychologists: Assessment, trends, and best practice recommendations. Scientometrics, 89(1), 207–227. doi:10.1007/s11192-011-0452-4.

  • Egghe, L. (2006). Theory and practise of the g-index. Scientometrics, 69(1), 131–152. doi:10.1007/s11192-006-0144-7.

  • Egghe, L. (2010). The Hirsch index and related impact measures. Annual Review of Information Science and Technology, 44, 65–114.

  • El Emam, K., Arbuckle, L., Jonker, E., & Anderson, K. (2012). Two h-index benchmarks for evaluating the publication performance of medical informatics researchers. Journal of Medical Internet Research, 14(5), e144. doi:10.2196/jmir.2177.

  • Franceschini, F., Galetto, M., Maisano, D., & Mastrogiacomo, L. (2012). The success-index: An alternative approach to the h-index for evaluating an individual’s research output. Scientometrics, 92(3), 621–641. doi:10.1007/s11192-011-0570-z.

  • Froghi, S., Ahmed, K., Finch, A., Fitzpatrick, J. M., Khan, M. S., & Dasgupta, P. (2012). Indicators for research performance evaluation: An overview. BJU International, 109(3), 321–324. doi:10.1111/j.1464-410X.2011.10856.x.

  • García-Pérez, M. A. (2010). Accuracy and completeness of publication and citation records in the Web of Science, PsycINFO, and Google Scholar: A case study for the computation of h indices in Psychology. Journal of the American Society for Information Science and Technology, 61(10), 2070–2085. doi:10.1002/asi.21372.

  • Garfield, E. (1979). Citation indexing—its theory and application in science, technology, and humanities. New York, NY: Wiley.

  • Garfield, E. (2002). Highly cited authors. Scientist, 16(7), 10.

  • Glänzel, W., Debackere, K., Thijs, B., & Schubert, A. (2006). A concise review on the role of author self-citations in information science, bibliometrics and science policy. Scientometrics, 67(2), 263–277.

  • Grupp, H., & Mogee, M. E. (2004). Indicators for national science and technology policy: Their development, use, and possible misuse. In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research. The use of publication and patent statistics in studies of S&T systems (pp. 75–94). Dordrecht: Kluwer Academic Publishers.

  • Haslam, N., & Laham, S. M. (2010). Quality, quantity, and impact in academic publication. European Journal of Social Psychology, 40(2), 216–220. doi:10.1002/ejsp.727.

  • Hemlin, S. (1996). Research on research evaluations. Social Epistemology, 10(2), 209–250.

  • Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102(46), 16569–16572. doi:10.1073/pnas.0507655102.

  • Jacso, P. (2009). Google Scholar’s ghost authors. Library Journal, 134(18), 26–27.

  • Jacso, P. (2010). Metadata mega mess in Google Scholar. Online Information Review, 34(1), 175–191. doi:10.1108/14684521011024191.

  • Korevaar, J. C., & Moed, H. F. (1996). Validation of bibliometric indicators in the field of mathematics. Scientometrics, 37(1), 117–130. doi:10.1007/Bf02093488.

  • Kosmulski, M. (2011). Successful papers: A new idea in evaluation of scientific output. Journal of Informetrics, 5(3), 481–485. doi:10.1016/j.joi.2011.03.001.

  • Kosmulski, M. (2012). Modesty-index. Journal of Informetrics, 6(3), 368–369. doi:10.1016/j.joi.2012.02.004.

  • Kreiman, G., & Maunsell, J. H. R. (2011). Nine criteria for a measure of scientific output. Frontiers in Computational Neuroscience, 5, 48. doi:10.3389/fncom.2011.00048.

  • Lamont, M. (2012). Toward a comparative sociology of valuation and evaluation. Annual Review of Sociology, 38(1), 201–221. doi:10.1146/annurev-soc-070308-120022.

  • Larsen, P. O., & von Ins, M. (2009). The steady growth of scientific publication and the declining coverage provided by Science Citation Index. In B. Larsen & J. Leta (Eds.), Proceedings of ISSI 2009—12th international conference of the international society for scientometrics and informetrics (Vol. 2, pp. 597–606). Leuven: Int Soc Scientometrics and Informetrics-ISSI.

  • Lehmann, S., Jackson, A., & Lautrup, B. (2008). A quantitative analysis of indicators of scientific performance. Scientometrics, 76(2), 369–390. doi:10.1007/s11192-007-1868-8.

  • Lewison, G., Thornicroft, G., Szmukler, G., & Tansella, M. (2007). Fair assessment of the merits of psychiatric research. British Journal of Psychiatry, 190, 314–318. doi:10.1192/bjp.bp.106.024919.

  • Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables in citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62(7), 1370–1381.

  • Martin, B. R., & Irvine, J. (1983). Assessing basic research—some partial indicators of scientific progress in radio astronomy. Research Policy, 12(2), 61–90.

  • Marx, W. (2011). Bibliometrie in der Forschungsbewertung: Aussagekraft und Grenzen. Forschung and Lehre, 11, 680.

  • Marx, W., & Bornmann, L. (2012). Der Journal Impact Factor: Aussagekraft, Grenzen und Alternativen in der Forschungsevaluation. Beiträge zur Hochschulforschung, 34(2), 50–66.

  • Marx, W., & Bornmann, L. (in press). On the problems of dealing with bibliometric data. Journal of the American Society for Information Science and Technology.

  • Meho, L. I., & Spurgin, K. M. (2005). Ranking the research productivity of library and information science faculty and schools: An evaluation of data sources and research methods. Journal of the American Society for Information Science and Technology, 56(12), 1314–1331.

  • Merton, R. K. (1957). Priorities in scientific discovery: A chapter in the sociology of science. American Sociological Review, 22(6), 635–659. doi:10.2307/2089193.

  • Merton, R. K. (1980). Auf den Schultern von Riesen ein Leitfaden durch das Labyrinth der Gelehrsamkeit. Frankfurt am Main: Syndikat.

  • Moed, H. F. (2005). Citation analysis in research evaluation. Dordrecht: Springer.

  • Moed, H. F., & Hesselink, F. T. (1996). The publication output and impact of academic chemistry research in the Netherlands during the 1980s: Bibliometric analysis and policy implications. Research Policy, 25(5), 819–836.

  • Moed, H. F., van Leeuwen, T. N., & Reedijk, J. (1996). A critical analysis of the journal impact factors of Angewandte Chemie and the Journal of the American Chemical Society—inaccuracies in published impact factors based on overall citations only. Scientometrics, 37(1), 105–116.

  • Norris, M., & Oppenheim, C. (2010). The h-index: A broad review of a new bibliometric indicator. Journal of Documentation, 66(5), 681–705. doi:10.1108/00220411011066790.

  • Nosek, B. A., Graham, J., Lindner, N. M., Kesebir, S., Hawkins, C. B., Hahn, C., et al. (2010). Cumulative and career-stage citation impact of social-personality psychology programs and their members. Personality and Social Psychology Bulletin, 36(10), 1283–1300. doi:10.1177/0146167210378111.

  • Opthof, T., & Wilde, A. A. M. (2011). Bibliometric data in clinical cardiology revisited. The case of 37 Dutch professors. Netherlands Heart Journal, 19(5), 246–255. doi:10.1007/s12471-011-0128-y.

  • Panaretos, J., & Malesios, C. (2009). Assessing scientific research performance and impact with single indices. Scientometrics, 81(3), 635–670. doi:10.1007/s11192-008-2174-9.

  • Pendlebury, D. A. (2008). Using bibliometrics in evaluating research. Philadelphia, PA: Research Department, Thomson Scientific.

  • Pendlebury, D. A. (2009). The use and misuse of journal metrics and other citation indicators. Archivum Immunologiae Et Therapiae Experimentalis, 57(1), 1–11. doi:10.1007/s00005-009-0008-y.

  • Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications—theory, with application to literature of physics. Information Processing and Management, 12(5), 297–312.

  • Retzer, V., & Jurasinski, G. (2009). Towards objectivity in research evaluation using bibliometric indicators: A protocol for incorporating complexity. Basic and Applied Ecology, 10(5), 393–400. doi:10.1016/j.baae.2008.09.001.

  • Ruiz-Castillo, J. (2012). The evaluation of citation distributions. SERIEs: Journal of the Spanish Economic Association, 3(1), 291–310. doi:10.1007/s13209-011-0074-3.

  • Sahel, J. A. (2011). Quality versus quantity: Assessing individual research performance. Science Translational Medicine, 3(84), 84cm13. doi:10.1126/scitranslmed.3002249.

  • Schubert, A., & Braun, T. (1993). Reference standards for citation based assessments. Scientometrics, 26(1), 21–35.

  • Schubert, A., & Braun, T. (1996). Cross-field normalization of scientometric indicators. Scientometrics, 36(3), 311–324.

  • Smith, A., & Eysenck, M. (2002). The correlation between RAE ratings and citation counts in psychology. London: Department of Psychology, Royal Holloway, University of London.

  • StataCorp. (2011). Stata statistical software: Release 12. College Station, TX: Stata Corporation.

  • Strotmann, A., & Zhao, D. (2012). Author name disambiguation: What difference does it make in author-based citation analysis? Journal of the American Society for Information Science and Technology, 63(9), 1820–1833. doi:10.1002/asi.22695.

  • Sugimoto, C. R., & Cronin, B. (2012). Biobibliometric profiling: An examination of multifaceted approaches to scholarship. Journal of the American Society for Information Science and Technology, 63(3), 450–468. doi:10.1002/asi.21695.

  • Taylor, J. (2011). The assessment of research quality in UK universities: Peer review or metrics? British Journal of Management, 22(2), 202–217. doi:10.1111/j.1467-8551.2010.00722.x.

  • Thompson, D. F., Callen, E. C., & Nahata, M. C. (2009). New indices in scholarship assessment. American Journal of Pharmaceutical Education, 73(6), 111.

  • Tijssen, R., & van Leeuwen, T. (2006). Centres of research excellence and science indicators. Can ‘excellence’ be captured in numbers? In W. Glänzel (Ed.), Ninth international conference on science and technology indicators (pp. 146–147). Leuven, Belgium: Katholieke Universiteit Leuven.

  • Tijssen, R., Visser, M., & van Leeuwen, T. (2002). Benchmarking international scientific excellence: Are highly cited research papers an appropriate frame of reference? Scientometrics, 54(3), 381–397.

  • van Raan, A. F. J. (1996). Advanced bibliometric methods as quantitative core of peer review based evaluation and foresight exercises. Scientometrics, 36(3), 397–420.

  • van Raan, A. J. F. (2000). The Pandora’s Box of citation analysis: Measuring scientific excellence—the last evil? In B. Cronin & H. B. Atkins (Eds.), The web of knowledge (pp. 301–319). Medford, NJ: Information Today Inc.

  • van Raan, A. F. J. (2005a). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133–143.

  • van Raan, A. F. J. (2005b). Measurement of central aspects of scientific research: Performance, interdisciplinarity, structure. Measurement, 3(1), 1–19.

  • van Raan, A. F. J. (2008). Bibliometric statistical properties of the 100 largest European research universities: Prevalent scaling rules in the science system. Journal of the American Society for Information Science and Technology, 59(3), 461–475. doi:10.1002/asi.20761.

  • Vinkler, P. (2010). The evaluation of research by scientometric indicators. Oxford: Chandos Publishing.

  • Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., van Eck, N. J. et al. (2012a). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Retrieved February 24, from http://arxiv.org/abs/1202.3941.

  • Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., van Eck, N. J., et al. (2012b). The Leiden ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419–2432.

  • Waltman, L., & van Eck, N. J. (2012). The inconsistency of the h-index. Journal of the American Society for Information Science and Technology, 63(2), 406–415. doi:10.1002/asi.21678.

  • Wang, J. (2013). Citation time window choice for research impact evaluation. Scientometrics, 94(3), 851–872. doi:10.1007/s11192-012-0775-9.

  • Weingart, P. (2005). Das Ritual der Evaluierung und die Verführbarkeit. In P. Weingart (Ed.), Die Wissenschaft der Öffentlichkeit: Essays zum Verhältnis von Wissenschaft, Medien und Öffentlichkeit (pp. 102–122). Weilerswist: Velbrück.

  • Zhang, L., & Glänzel, W. (2012). Where demographics meets scientometrics: Towards a dynamic career analysis. Scientometrics, 91(2), 617–630. doi:10.1007/s11192-011-0590-8.

Acknowledgments

We thank two anonymous reviewers for their recommendations, which significantly improved the manuscript.

Author information

Corresponding author

Correspondence to Lutz Bornmann.

About this article

Cite this article

Bornmann, L., Marx, W. How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations. Scientometrics 98, 487–509 (2014). https://doi.org/10.1007/s11192-013-1161-y


Keywords

  • Bibliometrics
  • Publications
  • Productivity
  • Citations
  • Percentiles
  • Researchers