Scientometrics, Volume 36, Issue 3, pp. 343–362

The use of multiple indicators in the assessment of basic research

  • B. R. Martin

Abstract

This paper argues that evaluations of basic research are best carried out using a range of indicators. After setting out the reasons why assessments of government-funded basic research are increasingly needed, we examine the multi-dimensional nature of basic research. This is followed by a conceptual analysis of what the different indicators of basic research actually measure. Having discussed the limitations of various indicators, we describe the method of converging partial indicators used in several SPRU evaluations. Yet although most of those who now use science indicators would agree that a combination of indicators is desirable, analysis of a sample of Scientometrics articles suggests that in practice many continue to use just one or two indicators. The paper also reports the results of a survey of academic researchers. They, too, are strongly in favour of research evaluations being based on multiple indicators combined with peer review. The paper ends with a discussion of why multiple indicators are not used more frequently.
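
The idea behind converging partial indicators can be illustrated with a small numerical sketch. The code below is not from the paper; it is a minimal illustration, assuming hypothetical research groups with made-up scores on three partial indicators (papers, citations per paper, peer-review rating), of one way to check whether the indicators rank a set of groups consistently, here via pairwise Spearman rank correlations. The paper's own method combines such indicators more qualitatively, alongside peer review.

```python
# Illustrative sketch only: hypothetical data and a hypothetical
# convergence check, not the paper's exact procedure. Each partial
# indicator gives an imperfect picture; the question is whether the
# rankings they imply agree ("converge").

def ranks(values):
    """Rank scores in descending order (1 = best), averaging ties."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank over a tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation between two indicator score lists."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five research groups on three partial indicators.
indicators = {
    "papers":              [120, 95, 60, 80, 40],
    "citations_per_paper": [4.1, 5.0, 2.2, 3.0, 1.8],
    "peer_review_rating":  [4.5, 4.8, 2.9, 3.6, 2.1],
}

names = list(indicators)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho = spearman(indicators[names[i]], indicators[names[j]])
        print(f"{names[i]} vs {names[j]}: rho = {rho:.2f}")
# High positive correlations across all pairs would suggest the partial
# indicators converge on a similar ranking of the groups; divergence
# would be a signal to investigate, not to average away.
```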


Notes and references

  1. E.g. J. H. Westbrook, Identifying significant research, Science, 132 (1960) 1229–1234; N. Wade, Citation analysis: a new tool for science administrators, Science, 188 (1975) 429–432.
  2. J. Irvine, B. R. Martin, What direction for basic scientific research?, Chapter 5 in M. Gibbons, P. Gummett, B. M. Udgaonkar (eds.), Science and Technology Policy in the 1980s and Beyond, London, Longman, 1984, pp. 67–98.
  3. B. R. Martin, J. Irvine, Assessing basic research: some partial indicators of scientific progress in radio astronomy, Research Policy, 12 (1983) 61–90.
  4. E.g. Anon, Is your lab well cited?, Nature, 227 (1970) 219; Anon, More games with numbers, Nature, 228 (1970) 698–699.
  5. E.g. D. Lindsey, Production and citation measures in the sociology of science: the problem of multiple authorship, Social Studies of Science, 10 (1980) 145–162.
  6. The sixth, CERN, was left to a subsequent study two years later. The results were published in a series of three articles: B. R. Martin, J. Irvine, CERN: past performance and future prospects: I. CERN's position in world high-energy physics, Research Policy, 13 (1984) 183–210; J. Irvine, B. R. Martin, CERN: past performance and future prospects: II. The scientific performance of the CERN accelerators, Research Policy, 13 (1984) 247–284; and B. R. Martin, J. Irvine, CERN: past performance and future prospects: III. CERN and the future of world high-energy physics, Research Policy, 13 (1984) 311–342.
  7. J. Irvine, B. R. Martin, A methodology for assessing the scientific performance of research groups, Scientia Yugoslavica, 6 (1980) 83–95.
  8. Martin, Irvine, op. cit., note 3. This article appeared in April 1983 even though it had been accepted for publication in September 1980.
  9. J. Irvine, B. R. Martin, P. A. Isard, Investing in the Future: An International Comparison of Government Funding of Academic and Related Research, Aldershot and Brookfield, Vermont, Edward Elgar, 1990.
  10. Irvine, Martin, op. cit., note 2.
  11. H. F. Hansen, B. H. Jørgensen, Science Policy & Research Management: Can Research Indicators Be Used?, Institute of Political Science, University of Copenhagen, Copenhagen, 1995, p. 1.
  12. R. N. Kostoff, The Handbook of Research Impact Assessment (Fifth Edition), DTIC Report Number ADA296021, 1995.
  13. For example, the evaluation of government-funded applied research in Norway employed a combination of peer review and 'customer review'; see J. Irvine, B. R. Martin, M. Schwarz, K. Pavitt, R. Rothwell, Government Support for Industrial Research in Norway: A SPRU Report, Oslo, Universitetsforlaget, Norwegian Official Publication NOU 30B, 1981.
  14. See Fig. 1 on p. 64 in Martin, Irvine, op. cit., note 3.
  15. A good example here would be popular books by scientists such as Stephen Hawking.
  16. Ibid., note 3, p. 64.
  17. Kostoff, op. cit., note 12, p. 8.
  18. Martin, Irvine, op. cit., note 3, p. 75.
  19. R. Miller, The influence of primary task on R&D laboratory evaluation: a comparative bibliometric analysis, R&D Management, 22 (1992) 3–20.
  20. For an example of how the educational and technological outputs from basic research may be assessed, see the references cited in note 21.
  21. J. Irvine, B. R. Martin, The Economic Effects of Big Science: The Case of Radio Astronomy, Proceedings of the International Colloquium on the Economic Effects of Space and Other Advanced Technologies, Strasbourg, 28–30 April 1980, Paris, European Space Agency, ESA SP-151, 1980; and B. R. Martin, J. Irvine, Spin-off from basic science: the case of radio astronomy, Physics in Technology, 12 (1981) 204–212.
  22. Over 150 scientists were interviewed in the 'Big Science Project'.
  23. This section draws heavily on Martin, Irvine, op. cit., note 3, pp. 61–90.
  24. See, for example, the discussion in ibid., p. 67.
  25. T. Luukkonen, The cognitive and social foundation of citation studies: why we still lack a theory of citation, submitted to Science, Technology and Human Values (1995).
  26. W. R. Shadish, D. Tolliver, M. Gray, S. K. Sen Gupta, Author judgements about works they cite: three studies from psychology journals, Social Studies of Science, 25 (1995) 477–498 (quote on p. 481).
  27. See also the related distinction between 'quality' and 'relevance' in Hansen, Jørgensen, op. cit., note 11, p. 3.
  28. Martin, Irvine, op. cit., note 3, p. 70.
  29.
  30.
  31.
  32. Examples of this in the field of experimental high-energy physics can be found in Martin, Irvine, op. cit., note 6.
  33. T. S. Kuhn, The Structure of Scientific Revolutions, Chicago, University of Chicago Press, 1970.
  34. Martin, Irvine, op. cit., note 3; idem, op. cit., note 6.
  35. Martin, Irvine, op. cit., note 3.
  36. J. Irvine, B. R. Martin, Assessing basic research: The case of the Isaac Newton Telescope, Social Studies of Science, 13 (1983) 49–86.
  37. B. R. Martin, J. Irvine, Internal criteria for scientific choice: an evaluation of the research performance of electron high-energy physics accelerators, Minerva, XIX (1981) 408–432.
  38. Martin, Irvine, op. cit., note 6.
  39. L. M. Baird, C. Oppenheim, Do citations matter?, Journal of Information Science, 20 (1994) 2–15 (quote on p. 13).
  40. Kostoff, op. cit., note 12, p. 37.
  41. Ibid., p. 118.
  42. A. H. Rubenstein, E. Geisler, Evaluating the outputs and impacts of R&D/innovation, International Journal of Technology Management, 6 (1991).
  43. Not all scientometric analysts are guilty of this. For example, the ISI analysts who periodically publish lists of leading research institutes in Science Watch normally use three indicators: papers, citations, and citations per paper.
  44. Full details of the study and the results can be found in B. R. Martin, J. E. F. Skea, Academic Research Performance Indicators: An Assessment of the Possibilities, Brighton, SPRU, 1992.
  45. See Table 12 in ibid.
  46. Kostoff, op. cit., note 12, p. 8.
  47. Hansen, Jørgensen, op. cit., note 11, p. 5.
  48. Martin, Skea, op. cit., note 44, p. 75.
  49. Ibid., p. 75.
  50. J. P. de Greve, A. Frijdal, Evaluation of scientific research: profile analysis, a mixed method, Higher Education Management, 1 (1989) 83–90.
  51. Kostoff, op. cit., note 12, p. 9.

Copyright information

© Akadémiai Kiadó 1996

Authors and Affiliations

  • B. R. Martin
  1. ESRC Centre on Science, Technology, Energy and Environment Policy, Science Policy Research Unit, University of Sussex, Brighton, UK
