Abstract
Amid current widespread calls for evidence-based decision-making on public investments in science and technological innovation, frequently interpreted to imply the employment of some bundle of output, outcome, productivity, or rate-of-return measures, the promises and limitations of performance measures, singly or collectively, vary greatly across contexts. The promises reflect belief in, scholarly research supportive of, and opportunistic provision of performance measures that respond or cater to executive and legislative branch expectations or hopes that such measures will facilitate evidence-based decision-making. The limitations reflect findings from research on the dynamics of scientific discovery, technological innovation, and the links between the two: even when well done and used by adepts, performance measures at best provide limited guidance for future expenditure decisions and at worst are rife with potential for incorrect, faddish, chimerical, and counterproductive decisions. As a decision-making enhancement, performance measurement techniques have problematic value when applied to the Big 3 questions of U.S. science policy: (1) what is the optimal size of the Federal government’s investments in science and technology programs; (2) how should these investments be allocated among missions, agencies, and programs (and thus fields of science); and (3) how should performers, funding mechanisms, and the criteria used to select projects and performers be chosen?
Notes
See Geisler (2000, pp. 254–255), for example, for a catalogue of 37 “core” metrics (e.g., number of publications in refereed journals; number of patents; number of improved or new products produced; cost reductions from new and improved products/processes; higher incomes) that encompasses reasonably well most variables found in JTT articles.
The link between the requirements and demands for such evidence and the use, rejection, or misuse of this evidence in recent U.S. science and technology policy decisions is a separate topic, never to be overlooked, especially as it has so often led to frustration on the part of researchers and evaluators, but too complex to be dealt with within the space constraints of this article.
References
Adams, J., & Pendlebury, D. (2010). Global research report: United States. Thomson Reuters.
Aghion, P., David, P., & Foray, D. (2009). Can we link policy practice with research on “STIG” systems? Toward connecting the analysis of science, technology and innovation policy with realistic programs for economic development and growth. In D. Foray (Ed.), The new economics of technology policy (pp. 46–71). Cheltenham: Edward Elgar.
American Association for the Advancement of Science. (2009). Research and development FY2010. Washington, DC: AAAS.
Auranen, O., & Nieminen, M. (2010). University research funding and publication performance—An international comparison. Research Policy, 39, 822–834.
Behn, R. (1994). Here comes performance assessment-and it might even be good for you. In A. Teich, S. Nelson, & C. McEnaney (Eds.), AAAS science and technology policy yearbook-1994 (pp. 257–264). Washington, DC: American Association for the Advancement of Science.
Borner, K., Contractor, N., Falk-Krzesinski, H., Fiore, S., Hall, K., Keyton, J., et al. (2010). A multi-systems perspective for the science of team science. Science Translational Medicine, 2, 1–5.
Boroush, M. (2010a). “New NSF estimates indicate that U.S. R&D spending continued to grow in 2008” NSF Infobrief 10–32. Arlington, VA: National Science Foundation.
Boroush, M. (2010b). NSF Releases New Statistics on Business Innovation, NSF Info Brief 11–300. Arlington, VA: National Science Foundation.
Boskin, M., & Lau, L. (2000). “Generalized solow-neutral technical progress and postwar economic growth” NBER Working Paper 8023. Cambridge, MA: National Bureau of Economic Research.
Boyack, K., Klavans, R., & Borner, K. (2005). Mapping the backbone of science. Scientometrics, 64, 351–374.
Cohen, W. R. (2005). Patents and appropriation: Concerns and evidence. Journal of Technology Transfer, 30, 57–71.
Cohen, W., Nelson, R., & Walsh, J. (2002). Links and impacts: The influence of public research on industrial R&D. Management Science, 48, 1–23.
Crespi, G., & Geuna, A. (2008). An empirical study of scientific production: A cross country analysis, 1981–2002. Research Policy, 37, 565–579.
David, P. (1994). Difficulties in Assessing the Performance of Research and Development Programs. In AAAS Science and Technology Policy Yearbook-1994, op. cit., 293–301.
Evenson, R., Ruttan, V., & Waggoner, P. E. (1979). Economic benefits from research: An example from agriculture. Science, 205, 1101–1107.
Executive Office of the President, Office of Management and Budget. (2010). Science and technology priorities for the FY2012 budget, M-10-30.
Feller, I. (2002). Performance measurement redux. American Journal of Evaluation, 23, 435–452.
Feller, I., Chubin, D., Derrick, E., & Pharityal, P. (2010). The challenges of evaluating multipurpose cooperative research centers. In C. Boardman, D. Gray, & D. Rivers (Eds.), Cooperative research centers and technical innovation: Government policies, industry strategies, and organizational dynamics. Springer (forthcoming).
Feuer, M., & Maranto, C. (2010). Science advice as procedural rationality: Reflections on the National Research Council. Minerva, 48, 259–275.
Freeman, C., & Soete, L. (2009). Developing science, technology and innovation indicators: What we can learn from the past. Research Policy, 38, 583–589.
Freeman, R., & Van Reenen, J. (2008). Be careful what you wish for: A cautionary tale about budget doubling. Issues in Science and Technology, Fall. Washington, DC: National Academy Press.
Gault, F. (2010). Innovation strategies for a global economy. Cheltenham: Edward Elgar.
Geisler, E. (2000). The metrics of science and technology. Westport, CT: Quorum Books.
Gladwell, M. (2011) “The Order of Things”. New Yorker, 68ff.
Goldston, D. (2009) “Mean what you say” Nature 458, 563 (Published online 1 April 2009).
Gross, C., Anderson, G., & Powe, N. (1999). The relation between funding by the National Institutes of Health and the burden of disease. New England Journal of Medicine, 340, 1881–1887.
Haltiwanger, J., Jarmin, R., & Miranda, J. (2010). “Who creates jobs? Small vs. large vs. young”, National Bureau of Economic Research Working Paper 16300. Cambridge, MA: National Bureau of Economic Research.
Heisey, P., King, J., Rubenstein, K., Bucks, D., Welsh, R. (2010). Assessing the benefits of public research within an economic framework: The Case of USDA’s Agricultural Research Service. United States Department of Agriculture, Economic Research Service, Economic Research Report Number 95.
Jarmin, R. (1999). Evaluating the impact of manufacturing extension on productivity growth. Journal of Policy Analysis and Management, 18, 99–119.
Kettl, D. (1997). The global revolution in public management: Driving themes, missing links. Journal of Policy Analysis and Management, 16, 446–462.
Lane, J., & Bertuzzi, S. (2011). Measuring the results of science investments. Science, 331(6018), 678–680.
Link, A. (2010) Retrospective benefit-cost evaluation of U.S. DOE vehicle combustion engine R&D investments: Impacts of a cluster of energy technologies (U.S. Department of Energy/Energy Efficiency and Renewable Energy).
Mansfield, E. (1991). Social returns from R&D: Findings, methods and limitations. Research Technology Management, 34, 6.
Moed, H. (2005). Citation analysis in research evaluation. Dordrecht: Springer.
Murphy, K., & Topel, R. (2006). The value of health and longevity. Journal of Political Economy, 114, 871–904.
National Academies. (1999). Evaluating federal research programs. Washington, DC: National Academy Press.
National Academies. (2007a). Rising above the gathering storm. Washington, DC: National Academies Press.
National Academies. (2007b). A strategy for assessing science. Washington, DC: National Academies Press.
National Science Foundation. (2007). Changing U.S. output of scientific articles: 1988–2003—Special Report. Arlington, VA: National Science Foundation.
Office of Management and Budget. (2008). Program Assessment Rating Tool Guidance, No. 2007-02.
Organisation for Economic Cooperation and Development. (2005). Modernising government. Paris: Organisation for Economic Cooperation and Development.
Perrin, B. (1998). Effective use and misuse of performance measurement. American Journal of Evaluation, 19, 367–379.
Radin, B. (2006). Challenging the performance movement. Washington, DC: Georgetown University Press.
Rosenberg, N. (1972). Technology and American economic growth. New York: Harper Torchbooks.
Ruegg, R., & Feller, I. (2003). A toolkit for evaluating public R&D investment, NIST GCR 03–857. Gaithersburg, MD: National Institute of Standards and Technology.
Ruttan, V. (1982). Agricultural research policy. Minneapolis, MN: University of Minnesota Press.
Sarewitz, D. (1996). Frontiers of illusion. Philadelphia, PA: Temple University Press.
Schmoch, U., Schubert, T., Jansen, D., Heidler, R., & von Gortz, R. (2010). How to use indicators to measure scientific performance: A balanced approach. Research Evaluation, 19, 2–18.
Stephan, P. (2012). How economics shapes science. Cambridge, MA: Harvard University Press.
Stevens, A. J., Wyller, J., Kilgore, K., Chatterjee, S., & Rohrbaugh, M. (2011). The role of public-sector research in the discovery of drugs and vaccines. New England Journal of Medicine, 364, 535–541.
Sumell, A., Stephan, P., & Adams, J. (2009). Capturing knowledge: The location decision of new Ph.D.s working in industry. In R. Freeman & D. Goroff (Eds.) Science and engineering careers in the United States: An analysis of markets and employment (pp. 257–287). Chicago, IL: University of Chicago Press.
Weingart, P. (2005). Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics, 62, 117–131.
Feller, I. Performance measures as forms of evidence for science and technology policy decisions. J Technol Transf 38, 565–576 (2013). https://doi.org/10.1007/s10961-012-9264-9
Keywords
- Science and technology policy
- Performance measurement
- Evidence based decision making
- New public management