Public Value Mapping and Science Policy Evaluation

Abstract

Here we present the framework of a new approach to assessing the capacity of research programs to achieve social goals. Research evaluation has made great strides in addressing questions of scientific and economic impacts. It has largely avoided, however, a more important challenge: assessing (prospectively or retrospectively) the impacts of a given research endeavor on the non-scientific, non-economic goals—what we here term “public values”—that often are the core public rationale for the endeavor. Research programs are typically justified in terms of their capacity to achieve public values, and that articulation of public values is pervasive in science policy-making. We outline the elements of a case-based approach to “public value mapping” of science policy, with a particular focus on developing useful criteria and methods for assessing “public value failure,” with an intent to provide an alternative to “market failure” thinking that has been so powerful in science policy-making. So long as research evaluation avoids the problem of public values, science policy decision makers will have little help from social science in making choices among competing paths to desired social outcomes.


Notes

  1.

    A distinction should be made between public opinion and public values: Whereas public opinion is highly volatile, both in its concerns and its directions, public values are much more stable. New public values may enter and old ones may exit but generally only after great social change and the passing of generations.

  2.

    By “research assessment,” which is not our focus in this paper, we mean an investigation with similar objectives that does not necessarily include data; it may be premised on indicators but involves no formal analysis.

  3.

    During the history of modern science and technology policy and research evaluation, the most prominent approach to assessment has been peer review. While recognizing that peer review is crucially important, the present study focuses on systematic and potentially quantitative or mixed-method approaches and, thus, does not discuss peer review approaches to research evaluation. Similarly, this paper does not deal with the many and increasingly useful bibliometric approaches to research evaluation.

  4.

    For a history of government-mandated program evaluation in Canada, including research evaluation, see Auditor General (1993). For a history of research evaluation activities in Canada, see Barbarie (1993).

  5.

    Several publications provide synoptic reviews of the history and methods of research evaluation in European nations; see, for example, Luukkonen (2002) and Callon, Laredo, and Mustar (1997).

References

  1. Adams, Guy B. 1992. Enthralled with Modernity: The Historical Context of Knowledge and Theory Development in Public Administration. Public Administration Review 52(4): 363–373.

  2. Adams, John. 2006. The Failure of Seat-Belt Legislation. In Clumsy Solutions for a Complex World, eds. M. Verweij and M. Thompson, 132–154. Houndmills, UK: Palgrave Macmillan.

  3. Anderson, Elizabeth. 1993. Value in Ethics and Economics. Cambridge, MA: Harvard University Press.

  4. Andrews, Frank M. 1979. Scientific Productivity: The Effectiveness of Research Groups in Six Countries. Ann Arbor, MI: University of Michigan Press.

  5. Auditor General. 1993. Program Evaluation in the Federal Government: The Case for Program Evaluation. Ottawa: Treasury Board of Canada.

  6. Audretsch, David B., Barry Bozeman, Kathryn Combs, Maryanne Feldman, Albert Link, Donald Siegel, Paula Stephan, Gregory Tassey, and Charles Wessner. 2002. The Economics of Science and Technology. Journal of Technology Transfer 27(2): 155–203.

  7. Barbarie, Alain. 1993. Evaluating Federal R&D in Canada. In Evaluating R&D Impacts: Methods and Practice, eds. Barry Bozeman and Julia Melkers, 155–162. Norwell, MA: Kluwer.

  8. Baumgartner, Frank R., and Bryan D. Jones. 1991. Agenda Dynamics and Policy Subsystems. Journal of Politics 53(4): 1044–1074.

  9. Bozeman, Barry. 2002. Public Value Failure and Market Failure. Public Administration Review 62(2): 145–161.

  10. Bozeman, Barry. 2003. Public Value Mapping of Science Outcomes: Theory and Method. In D. Sarewitz, et al., Knowledge Flows & Knowledge Collectives: Understanding the Role of Science & Technology Policies in Development 2(1).

  11. Bozeman, Barry. 2007. Public Values and Public Interest: Counter-balancing Economic Individualism. Washington, D.C.: Georgetown University Press.

  12. Bozeman, Barry, James Dietz, and Monica Gaughan. 2001. Scientific and Technical Human Capital: An Alternative Model for Research Evaluation. International Journal of Technology Management 22(7/8): 716–740.

  13. Bozeman, Barry, and Julia Melkers (eds.). 1993. Evaluating R&D Impacts: Methods and Practice. Boston: Kluwer.

  14. Bozeman, Barry, and Juan R. Rogers. 2002. A Churn Model of Scientific Knowledge Value: Internet Researchers as a Knowledge Value Collective. Research Policy 31(5): 769–794.

  15. Bozeman, Barry, and Daniel Sarewitz. 2005. Public Values and Public Failure in U.S. Science Policy. Science and Public Policy 32(2): 119–136.

  16. Braybrooke, David, and Charles E. Lindblom. 1963. A Strategy of Decision: Policy Evaluation as a Social Process. New York: Free Press.

  17. Budd, John, and Lynn Connaway. 1997. University Faculty and Networked Information: Results of a Survey. Journal of the American Society for Information Science 48(9): 843–852.

  18. Cummings, Ronald, and Laura Taylor. 1999. Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method. American Economic Review 89(3): 649–665.

  19. Feeney, Mary, and Barry Bozeman. 2007. The 2004–2005 Influenza Episode as a Case of Public Failure. Public Integrity 9(2): 179–195.

  20. Fischer, Ernest Peter. 1997. Beauty and the Beast: The Aesthetic Moment in Science. Trans. Elizabeth Oehlkers. New York: Plenum Trade.

  21. Fisher, Erik, Catherine Slade, Derrick Anderson, and Barry Bozeman. 2010. The Public Value of Nanotechnology? Scientometrics 85(1): 29–39.

  22. Freeman, Christopher. 1992. The Economics of Hope: Essays on Technical Change, Economic Growth and the Environment. London: Pinter Publishers.

  23. Garrison, Jim. 2000. Pragmatism and Public Administration. Administration and Society 32(4): 458–478.

  24. Gaus, Gerald F. 1990. Value and Justification: The Foundations of Liberal Theory. New York: Cambridge University Press.

  25. Holdren, John P. 2009. Science and Technology Policy in the Obama Administration. Remarks for the Business Higher Education Forum, Washington, D.C., 16 June (PowerPoint presentation).

  26. Johnson, Harry G. 1965. Federal Support of Basic Research: Some Economic Issues. Minerva 3(4): 500–514.

  27. Jones, Charles I., and John C. Williams. 1998. Measuring the Social Return to R&D. Quarterly Journal of Economics 113(4): 1119–1135.

  28. Kevles, Daniel. 1995. The Physicists: The History of a Scientific Community in Modern America. Cambridge, MA: Harvard University Press.

  29. Kirlin, John. 1996. What Government Must Do Well: Creating Value for Society. Journal of Public Administration Research and Theory 6(1): 161–185.

  30. Kostoff, Ronald. 2001. The Metrics of Science and Technology. Scientometrics 50(2): 353–361.

  31. Leslie, Stuart W. 1993. The Cold War and American Science. New York: Columbia University Press.

  32. Link, Albert N. 1996a. Economic Performance Measures for Evaluating Government Sponsored Research. Scientometrics 36(3): 325–342.

  33. Link, Albert N. 1996b. Evaluating Public Sector Research & Development. New York: Greenwood.

  34. Luukkonen, Terttu. 2002. Research Evaluation in Europe: State of the Art. Research Evaluation 11(2): 81–84.

  35. Luukkonen-Gronow, Terttu. 1987. Scientific Research Evaluation: A Review of Methods and Various Contexts of Their Application. R&D Management 17(3): 207–221.

  36. Machlup, Fritz. 1962. The Production and Distribution of Knowledge in the United States. Princeton, NJ: Princeton University Press.

  37. Marburger, John. 2005. Speech at the 30th Annual AAAS Forum on Science and Technology Policy, Washington, D.C. (April 21). http://www.aaas.org/news/releases/2005/0421marburgerText.shtml.

  38. Marmolo, Elisabetta. 1999. A Constitutional Theory of Public Goods. Journal of Economic Behavior & Organization 38(1): 27–42.

  39. Martens, Karel. 2009. Equity Concerns and Cost-Benefit Analysis: Opening the Black Box. Washington, DC: Transportation Research Board, Paper #09-0586.

  40. Nye, Joseph. 1997. In Government We Don’t Trust. Foreign Policy 108(2): 99–111.

  41. OECD. 1997. The Evaluation of Scientific Research: Selected Experiences. Paris: OECD, Committee for Scientific and Technological Policy. Document OECD/GD(97)194. http://www.oecd.org/dsti/sti/s_t/scs/prod/e_97-194.htm.

  42. OECD. In press. Enhancing Public Research Performance through Evaluation, Impact Assessment and Priority Setting. Paris: OECD, Directorate for Science, Technology and Industry.

  43. Polanyi, Michael. 1962. The Republic of Science: Its Political and Economic Theory. Minerva 1(1): 54–73.

  44. Rosenberg, Nathan. 1982. How Exogenous is Science? In Inside the Black Box: Technology and Economics, 141–159. New York: Cambridge University Press.

  45. Rubenstein, Albert. 1976. Effectiveness of Federal Civilian-Oriented R&D Programs. Policy Studies Journal 5(2): 217–227.

  46. Ruegg, Rosalie. 1996. Guidelines for Economic Evaluation of the Advanced Technology Program. NIST Internal Report 5896.

  47. Ruttan, Vernon. 2006. Is War Necessary for Economic Growth? New York: Oxford University Press.

  48. Salasin, John, Lowell Hattery, and Ramsey Thomas. 1980. The Evaluation of Federal Research Programs. MITRE Technical Report MTR-80W123.

  49. Sarewitz, Daniel. 1996. Frontiers of Illusion: Science, Technology, and the Politics of Progress. Philadelphia, PA: Temple University Press.

  50. Shields, Patricia M. 1996. Pragmatism: Exploring Public Administration’s Policy Imprint. Administration and Society 28(3): 390–411.

  51. Shils, Edward. 1968. Introduction. In Criteria for Scientific Development: Public Policy and National Goals, ed. E. Shils, iv–v. Cambridge, MA: MIT Press.

  52. Solow, Robert M. 1957. Technical Change and the Aggregate Production Function. Review of Economics and Statistics 39(3): 312–320.

  53. Toulmin, Stephen. 1964. The Complexity of Scientific Choice: A Stocktaking. Minerva 2(3): 343–359.

  54. Van Deth, Jan W., and Elinor Scarbrough. 1995. The Impact of Values. Oxford: Oxford University Press.

  55. Van Houten, Therese, and Harry Hatry. 1987. How to Conduct a Citizen Survey. Washington, D.C.: American Planning Association.

  56. Weinberg, Alvin. 1963. Criteria for Scientific Choice. Minerva 1(2): 159–171.

  57. Woodhouse, Edward, and Daniel Sarewitz. 2007. Science Policies for Reducing Societal Inequities. Science and Public Policy 34(2): 139–150.

  58. Ziman, John. 1968. Public Knowledge: The Social Dimensions of Science. Cambridge: Cambridge University Press.

Acknowledgments

The authors acknowledge the support of the U.S. National Science Foundation’s “Science of Science Policy” program (award number 0738203, Arizona State University, “Public Value Mapping: Developing a Non-Economic Model of the Social Value of Science and Innovation Policy”). We are grateful for the assistance and ideas of the members of the “Public Value Mapping” project, including: Catherine Slade, Ryan Meyer, Erik Fisher, Genevieve Maricle, Walter Valdivia, Nathaniel Logar, Stephanie Moulton, Cynthia Schwartz, and David Guston.

Author information

Corresponding author

Correspondence to Barry Bozeman.

Cite this article

Bozeman, B., Sarewitz, D. Public Value Mapping and Science Policy Evaluation. Minerva 49, 1–23 (2011). https://doi.org/10.1007/s11024-011-9161-7

Keywords

  • Public values
  • Research choice
  • Research evaluation
  • Science policy
  • Market failure