Summary
The ultimate social test of the information produced by any investor in research or information production is its third-party utility: the difference the information makes in the behavior or decisions of users. This chapter describes some prototype "strategies" and criteria for evaluating the utility of individual research projects and "portfolios" of projects. Portfolio impacts should be the most significant factor in guiding an investor's resource allocation, ex ante, and in demonstrating prudent expenditures, ex post. Consequently, the chapter discusses how to judge whether existing portfolios are the best that can be obtained.
This chapter describes procedures for assessing three different kinds of research: (1) basic research; (2) “innovation” research, aimed at developing new or improved products, services, or processes; and (3) policy research aimed at informing or affecting public decisions. Since research investors frequently involve themselves on the supply side of the research market as well as the demand side, the chapter also describes best practice in evaluating science education and manpower training programs.
© 1993 Springer Science+Business Media New York
Cite this chapter
Averch, H.A. (1993). Criteria for Evaluating Research Projects and Portfolios. In: Bozeman, B., Melkers, J. (eds) Evaluating R&D Impacts: Methods and Practice. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-5182-6_14
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4419-5135-9
Online ISBN: 978-1-4757-5182-6