Abstract
The development of methods for quantifying research impact has taken a variety of forms: the impact of research outputs on other research, through various forms of citation analysis; the impact of research on technology, through patent-derived data; the economic impact of research projects and programs, through a variety of cost-benefit analyses; the impact of research on company performance, where no relationship with profit has been found, but a strong positive correlation with sales growth has been established; and calculations of the rates of social return on investment in research.
However, each of these approaches, which have met with varying degrees of success, is being challenged by a substantial revision in our understanding of the ways in which research interacts with, and contributes to, other human activities. First, advances in the sociology of scientific knowledge have revealed the complex negotiation processes involved in the establishment of research outcomes and their meanings; in these processes, citation is little more than a peripheral formalisation. Second, the demonstrated limitations of neo-classical economics in explaining the role of knowledge in the generation of wealth, together with the importance of learning processes and interaction in innovation within organisations, have finally overturned the linear model on which so many research impact assessments have been based. A wider examination of the political economy of research evaluation itself reveals the growth of a strong movement towards managerialism, with the application of a variety of mechanisms — foresight, priority setting, research evaluation, research planning — to improve the efficiency of this component of economic activity. There are, however, grounds for questioning whether the resulting efficiency gains have in fact improved overall performance. A number of countries are currently experimenting with mechanisms that provide both the desired accountability and direction for research, but that rely less on the precision of measures and more on promoting a research environment conducive to interaction, invention, and connection.
Johnston, R. Research impact quantification. Scientometrics 34, 415–426 (1995). https://doi.org/10.1007/BF02018009