Volume 73, Issue 2, pp 175–214

Publication, cooperation and productivity measures in scientific research

  • Marianne Gauffriau
  • Peder Olesen Larsen
  • Isabelle Maye
  • Anne Roulin-Perriard
  • Markus von Ins


The literature on publication counting demonstrates a variety of terminologies and methods. Many scientific publications give no information at all about the counting methods used. There is a lack of knowledge and agreement about the sort of information provided by the various methods, about their theoretical and technical limitations, and about the size of the differences between results obtained with different methods. The need for precise definitions and terminology has been expressed repeatedly, but without success.

Counting methods for publications are defined and analysed with the use of set and measure theory. The analysis depends on definitions of basic units of analysis (three chosen for examination), objects of study (three chosen for examination) and score functions (five chosen for examination). The score functions define five classes of counting methods. However, in a number of cases different combinations of basic units of analysis, objects of study and score functions give identical results. The result is therefore a characterization of 19 counting methods: five complete counting methods, five complete-normalized counting methods, two whole counting methods, two whole-normalized counting methods, and five straight counting methods.
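As an illustration of the five classes of counting methods, the score functions can be sketched for a single publication whose basic units of analysis are author-address countries. This is a hedged reading of the standard definitions, not the paper's formalism; the address list and country codes are invented:

```python
from collections import Counter

def counting_scores(addresses):
    """Score each country on one publication under five counting methods.

    `addresses` is the ordered list of author-address countries on the
    paper, e.g. ["DK", "DK", "CH"].  (Illustrative helper, not from the
    paper itself.)
    """
    n = len(addresses)
    occurrences = Counter(addresses)  # country -> number of addresses
    return {
        # complete: one point per address of the country
        "complete":            {c: k for c, k in occurrences.items()},
        # complete-normalized: each address scores 1 / (total addresses)
        "complete-normalized": {c: k / n for c, k in occurrences.items()},
        # whole: one point per country appearing on the paper
        "whole":               {c: 1 for c in occurrences},
        # whole-normalized: 1 / (number of distinct countries) each
        "whole-normalized":    {c: 1 / len(occurrences) for c in occurrences},
        # straight: only the first-listed unit receives a point
        "straight":            {addresses[0]: 1},
    }

scores = counting_scores(["DK", "DK", "CH"])
print(scores["complete"])  # {'DK': 2, 'CH': 1}
print(scores["whole"])     # {'DK': 1, 'CH': 1}
print(scores["straight"])  # {'DK': 1}
```

Note that the complete-normalized and whole-normalized scores of one paper each sum to 1 across countries, whereas the complete and whole scores of a multinational paper sum to more than 1.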

When scores for objects of study are added, the value obtained can be identical to or higher than the score for the union of the objects of study. Therefore, some classes of counting methods (among them the classes of complete, complete-normalized and straight counting methods) are additive, while others (among them the classes of whole and whole-normalized counting methods) are non-additive.
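The additivity distinction can be demonstrated on a toy data set (the papers and country codes below are invented; the function names are ours, not the paper's). Summing complete-normalized (fractional) country scores reproduces the score of the union of the countries, while summing whole-counting scores overshoots it, because a multinational paper is counted once per participating country:

```python
# Each paper is the ordered list of its author-address countries.
papers = [["DK", "CH"], ["DK"], ["CH", "CH", "FR"]]
countries = ["DK", "CH", "FR"]

def whole_score(papers, country):
    # whole counting: a paper scores 1 if the country appears at all
    return sum(1 for p in papers if country in p)

def whole_union(papers, countries):
    # whole counting for the union of several countries
    return sum(1 for p in papers if any(a in countries for a in p))

def fractional_score(papers, country):
    # complete-normalized counting: each address scores 1 / len(paper)
    return sum(p.count(country) / len(p) for p in papers)

def fractional_union(papers, countries):
    return sum(sum(1 for a in p if a in countries) / len(p) for p in papers)

sum_whole = sum(whole_score(papers, c) for c in countries)
sum_frac = sum(fractional_score(papers, c) for c in countries)
print(sum_whole, whole_union(papers, countries))        # 5 3  (non-additive)
print(round(sum_frac, 9), fractional_union(papers, countries))  # 3.0 3.0
```

With three papers, the whole scores of the three countries add up to 5, exceeding the 3 papers of their union; the fractional scores add up exactly to 3.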

An analysis is presented of the differences between scores obtained by different score functions and therefore between results obtained by different counting methods. In this analysis we introduce a new kind of object of study, the class of cumulative-turnout networks for objects of study, containing full information on cooperation. The cumulative-turnout network for an author, an institution or a country consists of all authors, institutions or countries contributing to that author's, institution's or country's publications. The analysis leads to an interpretation of the results of score functions and to the definition of new indicators for scientific cooperation.

We also define a number of other networks: internal cumulative-turnout networks, external cumulative-turnout networks, underlying networks, internal underlying networks and external underlying networks. These networks open new opportunities for quantitative studies of scientific cooperation.
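Under our reading of the abstract, a country's cumulative-turnout network gathers every country appearing on that country's publications, and an external variant excludes the country itself. A minimal sketch (the data and helper names are invented for illustration):

```python
# Each paper is the list of its author-address countries.
papers = [["DK", "CH"], ["DK"], ["CH", "CH", "FR"], ["US"]]

def cumulative_turnout(papers, country):
    # All countries co-occurring on any paper with `country`,
    # including the country itself.
    return {a for p in papers if country in p for a in p}

def external_turnout(papers, country):
    # External variant: the network without the object of study itself.
    return cumulative_turnout(papers, country) - {country}

print(sorted(cumulative_turnout(papers, "DK")))  # ['CH', 'DK']
print(sorted(external_turnout(papers, "DK")))    # ['CH']
```

The size of such a network, compared with the country's publication scores, is the kind of quantity from which cooperation indicators can be built.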







Copyright information

© Springer Science+Business Media B.V. 2007

Authors and Affiliations

  • Marianne Gauffriau (1)
  • Peder Olesen Larsen (2)
  • Isabelle Maye (3)
  • Anne Roulin-Perriard (3)
  • Markus von Ins (3)

  1. Technical Knowledge Center of Denmark, D'ARC – DTU Analysis & Research Promotion Center, Technical University of Denmark, Lyngby, Denmark
  2. Hellerup, Denmark
  3. Center for Science and Technology Studies (CEST), Bern, Switzerland
