Scientometrics, Volume 98, Issue 1, pp 487–509

How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations

Article

Abstract

Although bibliometrics has been a separate research field for many years, there is still no uniformity in the way bibliometric analyses are applied to individual researchers. This study therefore aims to set out proposals for how to evaluate individual researchers working in the natural and life sciences. The h index, introduced in 2005, captures a researcher's productivity and the impact of his or her publications in a single number (h is the number of publications with at least h citations); however, a single number cannot cover the multidimensional complexity of research performance, nor does it support meaningful inter-personal comparisons. This study accordingly includes recommendations for a set of indicators to be used when evaluating researchers. Our proposals relate to the selection of the data on which an evaluation is based, the analysis of the data and the presentation of the results.
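To make the definition above concrete, the following minimal Python sketch computes an h index from a list of citation counts and, purely for illustration, a simple citation percentile. The citation counts and the citation_percentile helper are hypothetical examples and not the indicators or reference sets actually used in the study.

# Minimal sketch (assumption: hypothetical citation counts; this is not the
# authors' implementation, only an illustration of the definitions above).

def h_index(citations):
    """h is the largest rank such that h publications have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def citation_percentile(paper_citations, reference_set):
    """Share of papers in a reference set cited less often than the given paper.
    Real percentile indicators use field- and publication-year-specific
    reference sets; this toy version only illustrates the idea."""
    below = sum(1 for c in reference_set if c < paper_citations)
    return 100.0 * below / len(reference_set)

if __name__ == "__main__":
    cites = [25, 17, 12, 8, 5, 3, 1, 0]        # hypothetical publication record
    print(h_index(cites))                      # -> 5 (five papers with >= 5 citations)
    print(citation_percentile(12, cites))      # -> 62.5 within this toy reference set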

Keywords

Bibliometrics · Publications · Productivity · Citations · Percentiles · Researchers


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2013

Authors and Affiliations

  1. Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Munich, Germany
  2. Information Retrieval Services (IVS-CPT), Max Planck Institute for Solid State Research, Stuttgart, Germany
