The Challenges of Evaluating Multipurpose Cooperative Research Centers

Abstract

This chapter addresses the conceptual and methodological challenges that confront evaluators of cooperative research centers supported by major government centers programs. Irwin Feller, Daryl Chubin, Ed Derrick, and Pallavi Phartiyal begin with a brief historical, institutional, and policy account of the origins of the morphing of evaluation criteria for such centers. They then address three analytical and methodological issues encountered in evaluating centers: aggregation and weighting of outcomes from multipurpose centers; deconstructing and operationalizing the meanings of the currently widely used performance criteria of value-added and transformative research; and construction of comparison groups. The first and second of these topics apply broadly across evaluations of the multiobjective cooperative research centers funded by the US government as well as by national governments abroad. The third is also a general topic, but the special focus in this chapter is on technical issues that arise in comparing outputs from centers and from individual-investigator awards. For a complementary examination, see the chapter on challenges in evaluating the economic impacts of cooperative research centers, by Roessner and colleagues.

This chapter draws freely in places on the American Association for the Advancement of Science’s evaluation of the National Science Foundation’s Science and Technology Centers Program under NSF Grant No. 0949599 (Chubin et al. 2010). The views expressed in the chapter are those of the authors alone and do not necessarily reflect those of any NSF official. We are indebted to Michael McGeary, Daryl Farber, and an anonymous referee for their assistance.

Notes

  1.

    As articulated in the National Science Board’s (NSB) 1988 report, Science and Technology Centers: Principles and Guidelines, centers represent a means of exploiting opportunities in science where the complexity of the research problem can benefit from the sustained interaction among disciplines and/or subdisciplines, and a cost-effective means of conducting large-scale research associated with high capital costs (NSB 1988).

  2.

    “Research in many fields has increasingly involved collaboration of researchers, whether on large or small projects. Funding entities often encourage collaborative research, which can bring together people of different disciplines, different types of institutions, different economic sectors, and different countries” (NSB 2010, pp. 5–28).

  3.

    Concerns, explicit or implicit, about quality control enter into requiring this comparison. The taint of lower intellectual merit or performer qualifications than if funds were allocated to single investigators lingers in discussions of the center model. Whereas individual-investigator proposals submitted to research directorates undergo merit review, research undertaken in university centers (after the initial selection stage) is perceived to be subject to less systematic and rigorous scrutiny. Whatever intellectual vitality may be present at the center’s inception is seen as subject to entropic effects, as the guarantee of multiyear funding, the difficulty of maintaining the affiliation of key researchers, and the overlay of bureaucratic procedures dull its initial potential for leading-edge research impacts. An opposing view also exists, however. Instead of being subject to fewer quality checks, research conducted under the center mantle may be said to have to pass through a four-tiered quality filter. First are the multitiered reviews of the proposed science conducted during the selection process. Second are the annual site visit reviews organized and managed by the Federal funding agency. Third are the internal reviews of priorities and productivity conducted by center directors, center executive committees, university administrators, and center-based external advisory committees (each of which holds the potential for redirecting research thrusts and shutting down and/or opening up revised lines of research). Fourth is the review applied to manuscripts submitted by center personnel for publication in refereed journals.

  4.

    To a degree, this displacement of attention onto design rather than criteria follows unavoidably from the fact that evaluators pick means, not ends. Evaluations of cooperative/university research centers are generally conducted under contracts or cooperative agreements issued by the agency sponsoring the center, with the criteria by which performance is to be evaluated specified in a request for proposals or negotiated during the proposal selection process.

  5.

    Portions of the chapter’s analysis are likely applicable to the university center programs of mission-oriented Federal agencies, e.g., the Department of Transportation’s University Transportation Centers program, but no attempt is made here to make these connections.

  6.

    Practical considerations also contribute to making NSF the central focus of this chapter’s analysis. Allowing for the degree of decentralization and autonomy exercised first by research directorates and then by program managers in setting priorities, making award decisions, and reviewing program performance, NSF is a relatively simpler organization to study than NIH. NIH comprises 27 institutes and centers, each of which possesses considerable budgetary and programmatic autonomy. A complex set of center award mechanisms is available across these institutes (NRC 2007, Appendix E), but specific center activities and outputs vary in ways that reflect institute missions and emphases on research, training, equipment, and clinical practice. Moreover, for NSF, as cited earlier, explicit, publicly accessible guidelines exist about how center evaluations are to be conducted. NIH has an extensive evaluation program, in part reflecting implementation of its legislatively authorized Evaluation Set-Aside Program. Interviews with NIH officials, however, indicate that no prescribed set of evaluation criteria exists that is applicable across all institutes. Finally, to the extent that the NRC’s 2004 report, NIH Extramural Center Programs: Criteria for Initiation and Evaluation, is cited as providing general NIH-wide guidelines, institute- and center-specific conversions of these guidelines into specific center criteria are described as works in progress.

  7.

    ERCs provide a classic example not only of the multiplicity and span of university center objectives but also of how this span can change over time, typically widening by accretion. The objectives set forth for NSF’s ERC program, for example, have broadened over time, expanding from an initial emphasis in the 1980s on interdisciplinary engineering research and education related to technological innovation and partnerships to include, in the most recent (2011) set of awards, added emphases on entrepreneurship, small business collaboration, and international partnerships.

  8.

    “NSF’s investment in centers should be reported as both a percentage of the R&RA account and as a percentage of the total NSF budget, with the range of support for NSF centers being 6–8 % of R&RA” (Appendix C to NSB-05-166).

  9.

    These debates continue. Contemporary debates, for example, about NSF’s National Ecological Observatory Network (NEON) initiative vibrate with sounds similar to those heard during the prelaunch of the ERC program. With an estimated initial cost of $20 million and a total estimated cost of $434 million, NEON is described as “ushering in a new era of large-scale environmental science.” But it also is seen as requiring a change in how ecologists do their science, requiring them to become part of a collective and posing out-year challenges to NSF program managers to cover annual operations and maintenance expenses of newly constructed facilities without “devouring their annual budgets which nurture thousands of individual investigators” (Pennisi 2010). See also NAS (1999, p. 2) for an account of the opposition of mathematicians to proposals to transfer funding from individual investigator awards to existing or additional mathematical sciences institutes.

  10.

    A fifth concept, legacy, which has entered into evaluations of NSF center programs in recent years, also has multiple meanings and can be measured in many ways. Given the multiple purposes for which a center program is established, legacy may mean new scientific and technological findings that approximate “paradigmatic shifts”; new institutionalized forums for knowledge exchange, such as new journals or scientific associations; permanent changes in academic curricula; new formalized, continuing relationships between host institutions and other educational institutions; lasting impacts on student career choices or faculty attitudes toward collaborative projects; and more. The core concept underlying the legacy criterion, though, appears to be that the center’s impacts extend beyond its agency-funded life.

  11.

    The differences among the four concepts are in part substantive and in part a matter of differences in semantic usage across settings. To take as an example similarities and differences between value added and additionality, the former term, frequently found in the framing of program evaluations in the US, may be narrowly construed to mean empirically estimated changes in the value for some output or outcome variable(s). It also may be interpreted broadly to encompass broader (societal) impacts. Additionality is a concept employed more frequently in European program evaluations. As with value added, it has been used to describe measured changes in the value of output or outcome variables. The primary added value of its introduction in this exegesis is that it provides for the explicit introduction of the concept of behavioral additionality, an effect that is (too) frequently obscured in evaluations of the impacts of center programs.

  12.

    The importance of these features is reported in the assessment offered of the United Kingdom’s Platform Grants program. Operated under the auspices of the UK’s Engineering and Physical Sciences Research Council (EPSRC), this program shares many features with the science thrust of university research centers. Considered as added-value features of the program “over standard research grants” were the following: flexibility, strategic vision, freedom, retention of staff, attracting and keeping stars, succession planning, new directions, taking risks, academic collaborations, prestige, interdisciplinarity, making links with emerging groups, industrial collaboration, outreach to the general public, leveraging funding, and external impact (EPSRC 2008, pp. 14–15).

  13.

    Narin’s (1989) study of the bibliometric origins of published papers linked to major advances in cancer research found that R01 research project grants and the National Cancer Institute’s intramural program each accounted for 10–25 % of the sample, but that the support of other research mechanisms was also evident, with contract funds supporting almost 20 % of the sample (Narin 1989). Each support mechanism also was found to produce highly cited papers, leading to the conclusion that “support mechanisms and institutional settings are not key factors affecting research advance. By implication, the procedures used to screen research proposals at the NCI are, in fact, choosing high impact research regardless of the support mechanisms and institutional settings” (1989, p. 132). Borner et al. report that “A study of more than 21 million papers worldwide from 1945 to the present reveals a fundamental and nearly universal shift in all branches of science: Teams increasingly dominate solo scientists in the production of high-impact, highly cited science; teams are growing in size; and teams are increasingly located across university boundaries rather than within them” (2010, p. 1; see also Jones et al. 2008).

  14.

    The downside of having several plausible comparison groups to choose from is that whichever one or subset of these possibilities is employed in an evaluation, it may be criticized for not considering the others. See NAS (1996, op. cit., Appendix A).

  15.

    For example, recent studies from the Netherlands that test whether bibliometric indicators can predict the success of research grant applications report that approved applicants have a higher average number of publications/citations than rejected applicants. However, according to Neufeld and von Ins (2011), this difference disappears or even reverses when the group of successful applicants is compared only to the best of the rejected applicants.
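    A minimal numerical sketch of this comparison-group sensitivity is given below. It uses synthetic publication counts generated in Python, not the Dutch data analyzed by Neufeld and von Ins (2011); the only point illustrated is that the approved-versus-rejected gap depends on which rejected applicants serve as the comparison group.

```python
# Illustrative sketch with synthetic data; all counts and thresholds are
# hypothetical and are not drawn from the cited studies.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical prior publication counts for grant applicants.
approved = rng.poisson(lam=12, size=200)   # funded applicants
rejected = rng.poisson(lam=9, size=800)    # all rejected applicants

# Comparison 1: approved vs. all rejected applicants.
print("approved mean:      ", approved.mean())
print("all rejected mean:  ", rejected.mean())

# Comparison 2: approved vs. only the most productive rejected applicants
# (here, the top quartile of the rejected pool).
cutoff = np.quantile(rejected, 0.75)
best_rejected = rejected[rejected >= cutoff]
print("best rejected mean: ", best_rejected.mean())
# The approved-vs-all-rejected gap is sizable, while the gap against the
# best rejected applicants shrinks and may even reverse.
```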

  16.

    Differences were found for a number of STC centers between the number of faculty participants listed on their websites and the number reported to NSF, the latter being based on the NSF requirement that only faculty supported on the STC center budget for at least 160 h be counted as center participants (Chubin et al. 2010, D-6, Footnote 24).
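    A short sketch of how this counting rule produces such a discrepancy follows; the roster names and hours are invented for illustration and do not come from any STC’s actual reporting.

```python
# Illustrative sketch: hypothetical faculty roster with hours charged to the
# center budget. Names and numbers are made up.
website_roster = {
    "Alvarez": 40,    # listed on the center website, little budgeted support
    "Brown": 210,
    "Chen": 0,        # affiliated on the website only
    "Dube": 175,
    "Evans": 160,
}

THRESHOLD_HOURS = 160  # counting rule cited in this note

reported_to_nsf = [name for name, hours in website_roster.items()
                   if hours >= THRESHOLD_HOURS]

print("faculty listed on website:", len(website_roster))    # 5
print("faculty reportable to NSF:", len(reported_to_nsf))   # 3
```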

  17.

    Referring to respondents to their survey of participants in Australia’s Cooperative Research Centres program, Garrett-Jones, Turpin, and Diment write, “Our respondents tended to see the benefits of the CRC first in terms of advantage to their own research career and second in terms of the ‘scientific’ domain in which their career resided. Their most immediate concern seemed to be that of their own career - how they were able to perform their research, their conditions and rewards - their prospects for advancement” (2010, p. 542).

References

  • Armour-Garb A (2009) Should value-added models be used to evaluate teachers? J Policy Anal Manage 28:692–712
  • Azoulay P, Zivin J, Manso G (2011) Incentives and creativity: evidence from the academic life sciences. National Bureau of Economic Research Working Paper No. 15466. NBER, Cambridge, MA
  • Boardman C, Bozeman B (2007) Role strain in university research centers. J High Educ 78:430–461
  • Boardman C, Gray D (2010) The new science and engineering management: cooperative research centers as government policies, industry strategies, and organization. J Technol Transf 35:445–459
  • Boix-Mansilla V, Feller I, Gardner H (2006) Quality assessment in interdisciplinary research and education. Res Eval 15:69–74
  • Bonvillian W, Van Atta R (2011) ARPA-E and DARPA: applying the DARPA model to energy innovation. J Technol Transf 36(5):469–513
  • Borner K, Contractor N, Falk-Krzesinski H, Fiore S, Hall K, Keyton J, Spring B, Stokols D, Trochim W, Uzzi B (2010) A multi-level systems perspective for the science of team science. Sci Transl Med 2:1–5
  • Bozeman B, Boardman C (2003) Managing the new multipurpose, multidiscipline university research centers: institutional innovation in the academic community. IBM Center for the Business of Government, Washington, DC
  • Bozeman B, Boardman C (2004) The NSF engineering research centers and the university-industry research revolution: a brief history featuring an interview with Erich Bloch. J Technol Transf 29:363–375
  • Bozeman B, Boardman C (2009) Broad impacts and narrow perspectives: passing the buck on science and social impacts. Soc Epistemol 23:183–198
  • Burggren W (2009) Implementation of the National Science Foundation’s “broader impacts”: efficiency considerations and alternative approaches. Soc Epistemol 23:221–237
  • Chubin D, Derrick E, Feller I, Phartiyal P (2010) AAAS review of the NSF Science and Technology Centers integrative partnerships. Report to the National Science Foundation under NSF Grant No. 0949599. American Association for the Advancement of Science, Washington, DC
  • Coberly B, Gray D (2010) Cooperative research centers and faculty satisfaction: a multi-level predictive analysis. J Technol Transf 35:547–564
  • Cole S (1992) Making science. Harvard University Press, Cambridge, MA
  • Davis D, Bryant J (2010) Leader-member exchange, trust, and performance in National Science Foundation Industry/University Cooperative Research Centers. J Technol Transf 35:511–526
  • Edelman M (1985) The symbolic uses of politics. University of Illinois Press, Urbana, IL
  • Engineering and Physical Sciences Research Council (2008) Review of the platform grant scheme. Panel report, Swindon, UK
  • Feller I (2006) Multiple actors, multiple settings, multiple criteria: issues in assessing interdisciplinary research. Res Eval 15:5–15
  • Feller I (2011) Promises and limitations of performance measures. In: Olson S, Merrill S (rapporteurs) Measuring the impacts of federal investments in research: a workshop summary. National Academies Press, Washington, DC, pp 119–152
  • Frodeman R, Holbrook B (2011) NSF’s struggle to articulate relevance. Science 333:157–158
  • Garrett-Jones S, Turpin T, Diment K (2010) Managing competition between individual and organizational goals in cross-sector research and development centers. J Technol Transf 35:527–546
  • Georghiou L (2002) Impact and additionality of innovation policy. In: Boekholt P, Larosse J (eds) Innovation, science and technology, vol 40. IWT-Studies, Brussels, Belgium, pp 57–65
  • Harris D (2011) Value-added measures and the future of educational accountability. Science 333:826–827
  • Hart D (1998) Forged consensus. Princeton University Press, Princeton, NJ
  • Heinze T (2008) How to sponsor ground-breaking research: a comparison of funding schemes. Sci Pub Policy 35:302–318
  • Hicks D, Katz J (2011) Equity and excellence in research funding. Minerva 49(2):137–151
  • Hill H (2009) Evaluating value-added models: a validity argument approach. J Policy Anal Manage 28:700–709
  • Jacob B, Lefgren L (2011) The impact of NIH postdoctoral training grants on scientific productivity. Res Policy 40:864–874
  • Jaffe A (1998) Measurement issues. In: Branscomb L, Keller J (eds) Investing in innovation. Harvard University Press, Cambridge, MA, pp 64–84
  • Jones B, Wuchty S, Uzzi B (2008) Multi-university research teams: shifting impact, geography, and stratification in science. Science 322:1259–1262
  • Kohler R (1982) From medical chemistry to biochemistry. Cambridge University Press, Cambridge
  • Kuhn T (1970) The structure of scientific revolutions, 2nd edn. University of Chicago Press, Chicago, IL
  • Lamont M (2009) How professors think. Harvard University Press, Cambridge, MA
  • Medawar P (1967) The art of the soluble. Methuen & Co. Ltd, London
  • Narin F (1989) The impact of different modes of research funding. In: The evaluation of scientific research. Ciba Foundation Conference. Wiley, Chichester, pp 120–133
  • National Academies (1996) An assessment of the National Science Foundation’s science and technology centers program. National Academies Press, Washington, DC
  • National Academies (1999) U.S. research institutes in the mathematical sciences. National Academies Press, Washington, DC
  • National Academies (2000) Experiments in international benchmarking of US research fields. National Academies Press, Washington, DC
  • National Academies (2005) Facilitating interdisciplinary research. National Academies Press, Washington, DC
  • National Academies-Institute of Medicine (2004) NIH extramural center programs: criteria for initiation and evaluation. National Academies Press, Washington, DC
  • National Science Board (1988) Science and technology centers: principles and guidelines. National Academy Press, Washington, DC
  • National Science Board (2005) Approved minutes, open session, 389th meeting, Appendix C, 30 Nov to 1 Dec 2005, Arlington, VA
  • National Science Board (2007) Enhancing support for transformative research at the National Science Foundation, NSB-07-32. National Science Foundation, Arlington, VA
  • National Science Board (2008) National Science Foundation’s merit review process, fiscal year 2007, NSB-08-47. Revised 11 June 2008. National Science Foundation, Arlington, VA
  • National Science Board (2010) Science and engineering indicators 2010, NSB 10-01. National Science Foundation, Arlington, VA
  • National Science Foundation, Office of Inspector General (2007) Audit of NSF practices to oversee and manage its research centers programs
  • Neufeld J, von Ins M (2011) Informed peer review and uninformed bibliometrics? Res Eval 20:31–46
  • Office of Management and Budget (2009) Increased emphasis on program evaluation. Memorandum to heads of executive departments and agencies
  • Organisation for Economic Co-operation and Development (2006) Government R&D funding and company behaviour: measuring behavioural additionality. OECD Publishing, Paris
  • Orszag P (2009) Building rigorous evidence to drive policy. www.whitehouse.gov/omb/09/06/08; posted 9 June 2009
  • Pennisi E (2010) A groundbreaking observatory to monitor the environment. Science 328:418–420
  • Ponomariov B, Boardman C (2010) Influencing scientists’ collaboration and productivity patterns through new institutions: university research centers and scientific and technical human capital. Res Policy 39:613–624
  • Ponomariov B, Welch E, Melkers J (2009) Assessing the outcomes of student involvement in research: educational outcomes in an Engineering Research Center. Res Eval 18:313–322
  • Roessner D, Manrique L, Park J (2010) The economic impact of Engineering Research Centers: preliminary results of a pilot study. J Technol Transf 35:475–493
  • Schubert T (2009) Empirical observations on new public management to increase efficiency in public research - boon or bane? Res Policy 38:1225–1234
  • Schwartz L (2000) The interdisciplinary labs in materials science: a personal view. In: Roy R (ed) The interdisciplinary imperative: interactive research and education, still an elusive goal in academia. Writers Club Press, San Jose, CA, pp 129–133
  • Seglen P (1992) The skewness of science. J Am Soc Inf Sci 43:628–638
  • Slaughter S, Hearn J (2009) Centers, universities, and the scientific innovation ecology: a workshop. Final report to the National Science Foundation, Grant BCS-0907827. University of Georgia, Athens, GA
  • Smith B (1990) American science policy since World War II. Brookings Institution, Washington, DC
  • Stahler G, Tash W (1994) Centers and institutes in the research university: issues, problems and prospects. J High Educ 65:540–554
  • Stevens H (2003) Fundamental physics and its justifications, 1945–1993. Hist Stud Phys Biol Sci 34:151–197
  • Stokes D (1997) Pasteur’s quadrant. Brookings Institution, Washington, DC
  • Thomas J, Mohrman SS (2011) A vision of data and analytics for the science of science policy. In: Husbands-Fealing K, Lane J, Marburger J III, Shipp S (eds) The science of science policy. Stanford University Press, Stanford, CA, pp 258–281
  • U.S. Congress, Office of Technology Assessment (1991) Federally funded research: decisions for a decade. US Government Printing Office, Washington, DC

Author information

Corresponding author

Correspondence to Irwin Feller.

Copyright information

© 2013 Springer Science+Business Media LLC

About this chapter

Cite this chapter

Feller, I., Chubin, D., Derrick, E., Phartiyal, P. (2013). The Challenges of Evaluating Multipurpose Cooperative Research Centers. In: Boardman, C., Gray, D., Rivers, D. (eds) Cooperative Research Centers and Technical Innovation. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-4388-9_10
