
The bibliometric assessment of UK scientific performance: a reply to Braun, Glänzel and Schubert

Published in Scientometrics.

Abstract

In 1987, an analysis of the CHI/NSF Science Literature Indicators Data-Base by the author and his colleagues suggested that the UK's percentage share of the world publication and citation totals had continued to fall over 1981–84, although at a slower rate than previously. That finding has recently been challenged by Braun, Glänzel and Schubert who, by combining 28 publication-based indicators, concluded that there was no statistically significant evidence for such a decline. This paper examines the reasons for the discrepancy. It is argued that the methodology of Braun et al. is seriously flawed, as well as being inconsistent with work that they have published elsewhere. By adopting a more consistent and realistic set of indicators and applying them to the data of Braun et al., one arrives at results entirely consistent with those derived from the CHI/NSF data-base.


Notes and references

  1. T. Braun, W. Glänzel, A. Schubert, ‘Assessing assessments of British science: some facts and figures to accept or decline’, Scientometrics, 15 (1989) 165–170.

  2. B.R. Martin, J. Irvine, F. Narin, C. Sterritt, ‘The continuing decline of British science’, Nature, 320 (1987) 123–126.

  3. Quite what epistemological assumptions Braun et al. are making in putting forward the notion of “the correct proof” we leave for others to speculate. Suffice it to say that it is not a concept to which we adhere. Furthermore, in all our analyses of research performance, we have always stressed the imperfect or partial nature of bibliometric indicators, and the fact that they can only be used to suggest (rather than to “prove”) certain conclusions. For example, see B.R. Martin, J. Irvine, ‘Assessing basic research: some partial indicators of scientific progress in radio astronomy’, Research Policy, 12 (1983) 61–90.

  4. Braun et al., op. cit. note 1, 165.

  5. Ibid., 170.

  6. J. Irvine, B.R. Martin, P.A. Isard, Investing in the Future: An International Comparison of Government Funding of Academic and Related Research, Elgar, Cheltenham, 1990.

  7. See, for example, B.R. Martin, J. Irvine, Research Foresight: Priority-Setting in Science, Pinter Publishers, London, 1989.

  8. J. Irvine, B.R. Martin, ‘Is Britain spending enough on science?’, Nature, 323 (1986) 591–594. See also Irvine et al., op. cit. note 6.

  9. See, for example, Advisory Board for the Research Councils, Science and Public Expenditure 1987, Advisory Board for the Research Councils, London, 1987.

  10. Business Statistics Office, Industrial Research and Development Expenditure and Employment 1985, Business Monitor MO14, HMSO, London, 1988.

  11. B.R. Martin, J. Irvine, R. Turner, ‘The writing on the wall for British science’, New Scientist, 104 (8 November 1984) 25–29.

  12. J. Irvine, B.R. Martin, T. Peacock, R. Turner, ‘Charting the decline in British science’, Nature, 316 (1985) 587–590.

  13. D.C. Smith, P.M.D. Collins, D.M. Hicks, S.M. Wyatt, ‘National performance in basic research’, Nature, 323 (1986) 681–684.

  14. Martin et al., op. cit. note 2.

  15. A more recent analysis suggests that the decline in the UK's publication share levelled off between 1984 and 1986, although its citation share continued to fall slowly — see B.R. Martin, J. Irvine, F. Narin, C. Sterritt, K. Stevens, ‘Recent trends in the output and impact of British science’, Science and Public Policy, 17 (1990) 14–26.

  16. L. Leydesdorff, ‘Problems with the “measurement” of national scientific performance’, Science and Public Policy, 15 (1988) 149–152.

  17. J. Anderson, P.M.D. Collins, J. Irvine, P.A. Isard, B.R. Martin, F. Narin, K. Stevens, ‘On-line approaches to measuring national scientific output — a cautionary tale’, Science and Public Policy, 15 (1988) 153–161. The claim by Braun et al. (op. cit. note 1, 165) that this debate was “inconclusive for any of the participants” seems difficult to reconcile with Leydesdorff's own admission that “I accept most of their points on the measurement techniques of using ‘on-line’ databases for this purpose” — see L. Leydesdorff, ‘Performance figures for British science’, Science and Public Policy, 15 (1988) 270.

  18. L. Leydesdorff, ‘The Science Citation Index and the measurement of national scientific performance in terms of numbers of scientific publications’, Scientometrics, 17 (1989) 111–120.

  19. One of the most recent and comprehensive is to be found in A. Schubert, W. Glänzel, T. Braun, ‘World flash on basic research: scientometric datafiles — a comprehensive set of indicators on 2649 journals and 96 countries in all major science fields and subfields, 1981–1985’, Scientometrics, 16 (1989) 3–478.

  20. Our earlier findings dealt only with the period up to the end of 1984 (see Martin et al., op. cit. note 2), not 1985 as Braun et al. (op. cit. note 1, 165) imply.

  21. Braun et al. omit the second of these three options in their classification scheme (see op. cit. note 1, 166). However, it must be included to arrive at their total of 72 possible variants of publication-based indicators.

  22. See, for example, Schubert et al., op. cit. note 20.

  23. For a detailed description of the data-base, see Data-Users Guide to the National Science Foundation's Science Literature Indicators Data-Base, CHI Research, Haddon Heights, New Jersey, 1987; and Data Users Guide to the Science Literature Indicators SP2 Subfield Citation Tape (1981–86), CHI Research, Haddon Heights, New Jersey, 1989.

  24. A fourth option here (but one not mentioned by Braun et al.) would be to include research articles only, a course which is investigated later in the paper. Yet another possibility would be to include several publication types but to give them different weights. Thus, Braun's list of 72 variants is by no means exhaustive.

  25. The statement by Braun et al. that they produced “140 individual indicator values characterizing the world share of British publications” (op. cit. note 1, 166) is therefore incorrect; only half (i.e. 70) relate to the UK's share of the world total.

  26. See Martin and Irvine (op. cit. note 3), where we argue that partial indicators can only be used for comparative purposes.

  27. Braun et al., op. cit. note 1, 166.

  28. See Anderson et al., op. cit. note 18, 156.

  29. Schubert et al., op. cit. note 20, 7.

  30. Leydesdorff has tried to justify the fact that the world total for ‘all author’ counts cannot be adjusted using his on-line approach in the following way: “Fractional counting is based on dividing the world total according to a pie-model, while integer counting allows for intersections, and should therefore be visualized in terms of Venn-diagrams” (op. cit. note 18, 114). However, he goes on to admit that “in the Venn diagram picture, one is no longer able to combine the contributions of various countries to the world share without corrections to the intersection” (ibid.). Furthermore, he fails to point out that an apparent rise in his unadjusted figure for the UK's ‘percentage’ share of the world total has little significance if the corresponding figures for most other major countries have been growing faster over the same period.

  31. Braun et al., op. cit. note 1, 166.

  32. J. Irvine, B.R. Martin, ‘International comparisons of scientific performance revisited’, Scientometrics, 15 (1989) 369–392 (see note 58); Schubert et al., op. cit. note 20, 6; Anderson et al., op. cit. note 18, 154.

  33. Leydesdorff, op. cit. note 19, 113.

  34. Schubert et al., op. cit. note 20, 8.

  35. Braun et al., op. cit. note 1, 168.

  36. Schubert et al., op. cit. note 20, 6.

  37. This view is not, however, shared by Leydesdorff (op. cit. note 19, 113). In attempting to rationalize the limitations of his on-line search method, he argues somewhat unconvincingly that all document types covered by the SCI should be included because “we do not yet know how to attribute relative weights to types of documents”. Then, taking a somewhat different tack, he continues: “In general one should prefer aggregated data for inferences at the aggregate level, since otherwise methodological problems of inference may emerge”. This would seem to imply a somewhat simplistic notion that bigger samples are always preferable to smaller ones, regardless of whether the latter are less relevant and chosen in a less rigorous manner.

  38. None of these nine categories “is relevant in impact oriented evaluations” (Schubert et al., op. cit. note 20, 6).

  39. To support this argument, they have recently produced data on the citations of 1981 and 1982 publications in 1983, broken down by different types of publication (see T. Braun, W. Glänzel, A. Schubert, ‘Some data on the distribution of journal publication types in the Science Citation Index database’, Scientometrics, 15 (1989) 325–330). The results apparently show that ‘letters’ are comparatively well cited. However, these data are highly selective. It is well known that letters often have a more immediate impact but that their citation record then tends to tail off fairly rapidly. By choosing to look at citation records two years or less after publications appear, Braun et al. are focussing on figures that are almost certainly biased in favour of the ‘letters’ category. (The author is grateful to Dr. F. Narin for drawing this point to his attention.)

  40. T. Braun (private correspondence, 17 July 1989).

  41. The exception concerns ‘fractional author’ counts in 1982, where the difference between ‘tape year’ and ‘publication year’ indicators is some 0.08% larger than for the ‘all author’ and ‘first author’ indicators. This discrepancy can also be seen in Table 2, where the difference between the pairs of indicators 2 and 3, 5 and 8, 10 and 11, and 13 and 14 is a fairly constant 0.10% (±0.02%) except for ‘published year’ data in 1982, where the gap closes to 0.00%. On the face of it, this would seem to suggest an error has been made by Braun et al. in processing their ‘publication year/fractional author’ data, perhaps by inadvertently omitting a few hundred UK papers. This explanation is apparently borne out by the fact that Braun's Fig. 3 (reproduced in the text) shows the same constant gap of 0.1% between ‘first author’ and ‘fractional author’ counts but with no coming together of the two graphs in 1982. Yet when this discrepancy between the information supplied to us and Fig. 3 was pointed out, Braun stated that the data were correct (ibid.). Instead, it was Fig. 3 in his published paper which was wrong, we were informed, because “our draughtsman, apparently for misconceived esthetic reasons, moved [the two 1982 points] a bit apart”!
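The kind of cross-check applied in this note can be sketched as follows; the indicator values are invented for illustration, with only the qualitative pattern (a roughly constant 0.10% gap that vanishes in 1982) taken from the note.

```python
# Hypothetical (first-author %, fractional-author %) UK share pairs by year.
pairs = {
    1981: (8.45, 8.35),
    1982: (8.40, 8.40),  # the expected ~0.10% gap has vanished
    1983: (8.30, 8.21),
    1984: (8.25, 8.14),
}

# The gap between paired indicators should be fairly constant:
# about 0.10 percentage points, plus or minus 0.02.
expected_gap, tolerance = 0.10, 0.02

# Flag any year whose gap falls outside the expected band.
anomalies = [
    year
    for year, (first, fractional) in pairs.items()
    if abs((first - fractional) - expected_gap) > tolerance
]

print(anomalies)  # [1982] -- the suspect data point
```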

  42. See Table 1 in Martin et al., op. cit. note 2, 124.

  43. See, for example, Schubert et al., op. cit. note 20, 7.

  44. Leydesdorff, op. cit. note 19, 117.

  45. The difference in size is, in any case, fairly trivial. The ‘fixed journal set’ for 1981–85 used by Schubert et al. (op. cit. note 20, 6) consisted of 2649 journals out of a total of 3711 which appeared over the five years (i.e. 1062 did not appear in at least one of the five years). However, those 2649 journals accounted for 94% of the papers (and received 98% of all citations).

  46. In 1974, 19% of articles, notes and reviews in the SCI were in non-English-language journals, but by 1986 this had almost halved to 10% (F. Narin, private communication, 1990). While this is probably largely the result of authors in countries such as the Federal Republic of Germany and France publishing more frequently in English, it may also be partly due to more ‘second-rate’ English-language journals having been added to the SCI data-base.

  47. Schubert et al., op. cit. note 20, 6.

  48. This assumes that the approach of Braun et al. in calculating the “annual mean relative change” and using the associated standard deviation as a measure of significance is valid. However, as was observed in note 29 above, the indicators are not independent. In this case, all six include data relating to numbers of articles, four contain data relating to notes and reviews, and two contain data for letters. By taking the average of the six, one is therefore giving greatest weight to research articles and least to letters, with notes and reviews in between, the implicit weighting ratio being 3∶1∶2. Nevertheless, such a set of weights is not unreasonable in the light of the earlier discussion of the average scientific contribution of each research article compared with other types of publication.
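The implicit weighting can be verified with a short sketch. The assignment of document types to the six indicators below is hypothetical, chosen only to match the counts stated in the note (articles in all six, notes and reviews in four, letters in two).

```python
from functools import reduce
from math import gcd

# Hypothetical composition of the six indicators: the set of document types
# each one covers (consistent with the counts given in the note).
indicators = [
    {"articles"},
    {"articles"},
    {"articles", "notes_and_reviews"},
    {"articles", "notes_and_reviews"},
    {"articles", "notes_and_reviews", "letters"},
    {"articles", "notes_and_reviews", "letters"},
]

# Averaging the six indicators weights each document type in proportion
# to the number of indicators that include it.
weights = {}
for indicator in indicators:
    for doc_type in indicator:
        weights[doc_type] = weights.get(doc_type, 0) + 1

# Reduce the weights 6 : 2 : 4 to lowest terms.
common = reduce(gcd, weights.values())
ratio = {t: w // common for t, w in weights.items()}

# articles : letters : notes and reviews = 3 : 1 : 2, as stated in the note.
print(ratio["articles"], ratio["letters"], ratio["notes_and_reviews"])  # 3 1 2
```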

  49. See Martin et al., op. cit. note 16.

  50. Leydesdorff (op. cit. note 19) continues to maintain that the reason why he obtains an increase in Britain's world share compared with the decrease suggested by the CHI data-base is because the latter is derived from a ‘fixed journal set’. However, this does not square with the fact that every one of the nine ‘all journal’ indicators in Table 4 shows a decline over 1981–85, the “annual mean relative change” being −0.5% (±0.2%).

  51. The author is indebted to Professor R. Johnston for this point.

  52. T. Braun, A. Schubert, S. Zsindely, ‘The decline of British analytical chemistry: fact or artifact?’, Analytical Proceedings, 26 (1989) 87–91.

  53. Martin et al., op. cit. note 2, 126.

  54. Braun et al., op. cit. note 1, 170.

  55. Schubert et al., op. cit. note 20, 472.

  56. The ‘activity index’ is defined as a country's share of the world publication total in a given field divided by that country's share of the world publication output for all fields. The ‘attractivity index’ is defined in the same way but for citation shares rather than publication shares.
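The definition above can be sketched as a small computation; the function name and all the counts below are invented for illustration and do not come from the paper.

```python
def activity_index(country_field, world_field, country_all, world_all):
    """Country's publication share in one field divided by its share over
    all fields. Values above 1 indicate relative specialisation in the
    field; values below 1, relative inactivity. Applied to citation
    counts instead, the same formula gives the 'attractivity index'."""
    return (country_field / world_field) / (country_all / world_all)

# Hypothetical counts: a country producing 10% of world papers overall
# but 12% of world papers in this particular field.
index = activity_index(
    country_field=1200, world_field=10000,   # 12% share in the field
    country_all=50000, world_all=500000,     # 10% share over all fields
)
print(round(index, 6))  # 1.2 -> relatively specialised in this field
```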

  57. Braun et al., op. cit. note 1, 170.

  58. A.J. Nederhof, ‘Change in publication patterns of biotechnologists: an evaluation of the impact of government stimulation programs in six industrial nations’, Scientometrics, 14 (1988) 475–485; the quotation reproduced in the text appears on page 484.


Additional information

A response to this paper can be found in Scientometrics, 20 (1991) 463.


Cite this article

Martin, B.R. The bibliometric assessment of UK scientific performance: a reply to Braun, Glänzel and Schubert. Scientometrics 20, 333–357 (1991). https://doi.org/10.1007/BF02017524
