Abstract
The function and legitimacy of values in decision making is a critically important issue in the contemporary analysis of science. It is particularly relevant for the more application-oriented areas of science, specifically decision-oriented science in the field of technological risk regulation. Our main objective in this paper is to assess the diversity of roles that non-cognitive values related to decision making can play in the kinds of scientific activity that underlie risk regulation. We proceed in three steps. First, we analyze the issue of values with the help of a framework taken from the wider philosophical debate on science and values. Second, we examine the principal conceptualizations of values advanced by scholars who have applied them to numerous case studies. Third, we appraise the links between those conceptualizations and learning processes in decision-oriented science, drawing on the concept of methodological learning, i.e., learning about the best methodologies for generating knowledge that is useful for science-based regulatory decisions. The main result of our analysis is that non-cognitive values can contribute to methodological improvements in science in three principal ways: (a) as a basis for critical analysis (to differentiate “sound” from “bad” science), (b) for contextualizing methodologies (by identifying links between methods and objectives), and (c) for establishing the burden of proof (in order to generate data that otherwise would not be generated).
Notes
Science whose objective is to inform public decision making (regulation, innovation, policy making, etc.).
What Jasanoff (1990) calls “regulatory science”.
In this paper we will consider two types of influence of non-cognitive values that are particularly relevant to decision-oriented science:
(a) Whenever non-cognitive values exert influence on the early stages of research, particularly on decisions about which topics to investigate and in which fashion, because this determines the available empirical evidence.
(b) Whenever non-cognitive values exert influence on cognitive values at any stage of scientific research, for example in the selection of methodologies, extrapolation models, standards of evidence, etc.
By contrast, we will not consider cases in which non-cognitive values exert direct influence on the research process and its results, for example when they lead to the outright acceptance or rejection of hypotheses because of their concordance or discordance with religious or ideological beliefs (as in the rejection of Darwinism), to the fabrication of data, or to scientists reporting results to which they are predisposed in a particular way (as in the case of N rays).
Laudan (2004) makes a distinction between epistemic and cognitive values (a point of view shared by Douglas). On his view, epistemic values are those most directly related to empirical support, while cognitive values refer to all the other values related to scientific knowledge (simplicity, scope, etc.). While Laudan’s distinction is relevant in certain cases, in this paper we will not differentiate between epistemic and cognitive values.
From the point of view of a particular social actor, certain non-cognitive values may be understood as “good” or “positive”, while others may be interpreted as “bad” or “negative”. Such a differentiation is, however, always context-dependent. In other words, to consider that, for instance, the promotion of innovation is “better” than the promotion of public health depends on contextual, ideological and moral considerations. In fact, many of the controversies related to scientific-technological products and systems (genetically modified organisms, pharmaceuticals, human biotechnology, climate change, etc.) focus on such contextually situated issues. Here, we will not take into consideration such context-dependent differentiations in the evaluation of non-cognitive values.
This terminology introduced by Mayo makes reference to idealized stances. Even though we cannot expect the authors we will analyze here to conform completely to any of those three idealizations, we still consider this classification useful for situating the authors and pointing to the main differences between them.
The “positivist” stance, obviously, must not be understood as referring to logical positivism. At best, it could be understood in terms of the classical positivist point of view in philosophy, represented by authors like Comte. In any case, its use here merely has the function of a—very broad—label. We could equally well use the term “technocratic” to characterize this stance.
Alternatively, this point of view could be labeled “relativist”.
Steel (2010), against Laudan, argues that there do exist ways of adjusting the distribution of errors without negatively affecting error minimization.
The latter point is a generalization of Laudan’s (2001) naturalistic thesis that determining the best methodology for satisfying a number of given cognitive values is an empirical question. This same thesis has been applied here to the issue of non-cognitive values.
Jeffrey’s (1956) classical work (in response to Rudner, 1953, among others) constitutes a radicalized version of this thesis: the task of science is limited to collecting and characterizing evidence in support of or against various hypotheses, but never to accepting or rejecting those hypotheses. Laudan would probably not agree with this interpretation (see, for example, Laudan 2010).
Steele (2012) would disagree. On her account, transposing the climate scientists’ complex beliefs into a (cruder) format that can usefully be communicated to policy makers by itself implies value judgments. The latter are unavoidable because the scientists have to make (value-laden) decisions on how to match their beliefs to the format required by other social actors.
Short-term tests (STTs), in the regulatory context, are indirect tests for evaluating, for example, the carcinogenic potential of chemical substances, by way of methods like bacterial and mammalian mutagenesis, cell transformation, and animal DNA assays. They substitute for standard, full-blown studies. The advantage of STTs is that the standard scientific methodology for obtaining similar (albeit somewhat more reliable) data requires full-scale animal assays, which are complex, expensive and time-intensive. As substitutes for standard studies, STTs are usually less accurate. However, in applications in which they are not used as substitutes for other kinds of methodology, for example when testing for immediate effects (acute toxicity), STTs may be as accurate as standard studies.
In risk assessment, mechanistic information refers to the understanding of individual processes in highly complex (chemical, biological) systems that produce an observed outcome, like mutagenicity. Such information comes in two types: a less detailed one (“mode of action” data), which refers to the sequence of stages in the interaction between an organism and a toxic product; and a more detailed one (“mechanism of action”), which comprises a highly detailed understanding, usually at the molecular level, of those events (White et al. 2009).
Shifting the burden of proof onto the technology developer (industry) could be understood as a form of science of “what if” (Ravetz 1997). The relevant question in this case is: what happens if we are wrong, and some of the technologies we think are safe turn out to be problematic? In other words, we would have to investigate the consequences of possible error.
These three perspectives would correspond to three different ways of understanding Laudan’s point about the distribution of errors.
As we have already seen, Douglas has adopted Laudan’s distinction between epistemic and cognitive values. This distinction is equivalent to the one that Steel (2010) makes between intrinsic and extrinsic epistemic values.
Allocation of the burden of proof has to be understood as a methodological question, not merely as a regulatory prescription specifying upon whom the burden of proof falls.
However, this issue has to be analyzed with care. There are cases in which non-cognitive values may appear to influence the choice of hypotheses, but this influence is either only apparent or very indirect. Take as an example the case of models for extrapolating from high to low doses of exposure. Here we are clearly faced with a methodological decision. But underlying any model of extrapolation there is an empirical hypothesis with respect to the interaction between a (chemical, radioactive, etc.) substance and humans. In this sense we could assert that non-cognitive values indeed are a source of legitimacy for the choice of hypotheses. It is important to point out, however, that in this case non-cognitive values are of import only because we do not have at our disposal any other criterion that would allow us to decide between the alternative models (which are underdetermined by cognitive values). In this case the choice of model, and indirectly also of its underlying empirical hypothesis, has a purely methodological function.
References
Ashford, N. A. (2005). Incorporating science, technology, fairness, and accountability in environmental, health, and safety decisions. Human and Ecological Risk Assessment, 11, 85–96.
Betz, G. (2013). In defence of the value free ideal. European Journal for the Philosophy of Science, 3, 207–220.
Churchman, C. (1948). Statistics, pragmatics, induction. Philosophy of Science, 15, 249–268.
Cranor, C. (1993). Regulating toxic substances. New York: Island Press.
Cranor, C. (1995). The social benefits of expedited risk assessment. Risk Analysis, 15, 353–358.
Cranor, C. (1997). The normative nature of risk assessment: Features and possibilities. Risk: Health, Safety and Environment, 8, 123–136.
Cranor, C. (1999). Asymmetric information, the precautionary principle, and burdens of proof. In C. Raffensperger & J. Tickner (Eds.), Protecting public health and the environment: Implementing the precautionary principle (pp. 74–99). Washington: Island Press.
Cranor, C. (2001). Learning from the law to address uncertainty in the precautionary principle. Science and Engineering Ethics, 7, 313–326.
Cranor, C. (2006). Toxic torts. Science, law and the possibility of justice. Cambridge: Cambridge University Press.
Cranor, C. (2011). Legally poisoned: How the law puts us at risk from toxicants. Cambridge, MA: Harvard University Press.
Dorato, M. (2004). Epistemic and nonepistemic values in science. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 52–77). Pittsburgh: University of Pittsburgh Press.
Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67, 559–579.
Douglas, H. (2004). Border skirmishes between science and policy. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 220–244). Pittsburgh: University of Pittsburgh Press.
Douglas, H. (2006). Norms for values in scientific belief acceptance. Contributed paper, 20th Biennial Meeting of the Philosophy of Science Association (PSA 2006), Vancouver, 2–14.
Douglas, H. (2007). Rejecting the ideal of value-free science. In H. Kincaid, J. Dupré, & A. Wylie (Eds.), Value-free science? (pp. 120–140). New York: Oxford University Press.
Douglas, H. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.
Douglas, M., & Wildavsky, A. (1982). Risk and culture: An essay on the selection of technical and environmental dangers. Berkeley: University of California Press.
Dupré, J. (2007). Fact and value. In H. Kincaid, J. Dupré, & A. Wylie (Eds.), Value-free science? (pp. 27–40). New York: Oxford University Press.
Elliott, K. (2000). Conceptual clarification and policy-related science: The case of chemical hormesis. Perspectives on Science, 8, 346–366.
Elliott, K. (2006). A novel account of scientific anomaly: Help for the dispute over low-dose biochemical effects. Philosophy of Science, 73, 790–802.
Elliott, K., & McKaughan, D. (2009). How values in scientific discovery and pursuit alter theory appraisal. Philosophy of Science, 76, 598–611.
Elliott, K. (2011a). Is a little pollution good for you? Incorporating societal values in environmental research. New York: Oxford University Press.
Elliott, K. (2011b). Direct and indirect roles for values in science. Philosophy of Science, 78, 303–324.
Elliott, K. (2013). Douglas on values: From indirect roles to multiple goals. Studies in History and Philosophy of Science, 44, 375–383.
Giere, R. (1991). Knowledge, values, and technological decisions: A decision theoretic approach. In D. G. Mayo & R. D. Hollander (Eds.), Acceptable evidence: Science and values in risk management (pp. 183–203). Oxford: Oxford University Press.
Haack, S. (2008). Proving causation: The holism of warrant and the atomism of Daubert. Journal of Health & Biomedical Law, 4, 253–289.
Hansen, S. F., von Krauss, M., & Tickner, J. A. (2007). Categorizing mistaken false positives in regulation of human and environmental health. Risk Analysis, 27, 255–269.
Hempel, C. (1981). Turns in the evolution of the problem of induction. Synthese, 46, 389–404.
Jasanoff, S. (1990). The fifth branch. Science advisers as policy makers. Cambridge, MA: Harvard University Press.
Jeffrey, R. (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science, 23, 237–246.
Kincaid, H., Dupré, J., & Wylie, A. (Eds.). (2007a). Value-free science? New York: Oxford University Press.
Kincaid, H., Dupré, J., & Wylie, A. (2007b). Introduction. In H. Kincaid, J. Dupré, & A. Wylie (Eds.), Value-free science? (pp. 3–23). New York: Oxford University Press.
Krimsky, S. (2005). The weight of scientific evidence in policy and law. American Journal of Public Health, 95, S129–S136.
Kuhn, T. S. (1977). Objectivity, value judgment, and theory choice. In T. S. Kuhn (Ed.), The essential tension (pp. 320–339). Chicago: University of Chicago Press.
Lacey, H. (1999). Is science value free? Values and scientific understanding. London: Routledge.
Lacey, H. (2005). Values and objectivity in science. Lanham: Lexington Books.
Laudan, L. (1984). Science and values. Berkeley: University of California Press.
Laudan, L. (2001). Epistemic crises and justification rules. Philosophical Topics, 29, 271–317.
Laudan, L. (2004). The epistemic, the cognitive and the social. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 14–23). Pittsburgh: University of Pittsburgh Press.
Laudan, L. (2008). Truth, error, and criminal law: An essay in legal epistemology. Cambridge: Cambridge University Press.
Laudan, L. (2010). Legal epistemology: The anomaly of affirmative defenses. In D. Mayo & A. Spanos (Eds.), Error and inference: Recent exchanges on experimental reasoning, reliability, and the objectivity and rationality of science (pp. 376–396). Cambridge: Cambridge University Press.
Laudan, L. (2011). Is it finally time to put ‘proof beyond a reasonable doubt’ out to pasture? In A. Marmor (Ed.), Routledge companion to philosophy of law. London: Routledge.
Lemons, J., Shrader-Frechette, K., & Cranor, C. (1997). The precautionary principle: Scientific uncertainty and Type I and Type II errors. Foundations of Science, 2, 207–236.
Levi, I. (1960). Must the scientist make value judgments? The Journal of Philosophy, 57, 345–357.
Longino, H. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton: Princeton University Press.
Longino, H. (2002). The fate of knowledge. Princeton, NJ: Princeton University Press.
Machamer, P., & Douglas, H. (1999). Cognitive and social values. Science & Education, 8, 45–54.
Machamer, P., & Wolters, G. (Eds.). (2004). Science, values and objectivity. Pittsburgh: University of Pittsburgh Press.
Mayo, D. G. (1991). Sociological versus metascientific views of risk assessment. In D. G. Mayo & R. D. Hollander (Eds.), Acceptable evidence: Science and values in risk management (pp. 249–279). Oxford: Oxford University Press.
Mayo, D. G. (1996). Error and the growth of experimental knowledge. Chicago: University of Chicago Press.
Mayo, D. G. (2010). Error and the law. Exchanges with Larry Laudan. In D. Mayo & A. Spanos (Eds.), Error and inference: Recent exchanges on experimental reasoning, reliability, and the objectivity and rationality of science (pp. 397–411). Cambridge: Cambridge University Press.
Mayo, D. G., & Hollander, R. D. (Eds.). (1991). Acceptable evidence: Science and values in risk management. Oxford: Oxford University Press.
Mayo, D. G., & Spanos, A. (2006). Philosophical scrutiny of evidence of risks: From bioethics to bioevidence. Philosophy of Science, 73, 803–816.
Mayo, D. G., & Spanos, A. (2008). Risk to health and risk to science: the need for a responsible ‘bioevidential’ scrutiny. BELLE Newsletter, 14, 18–21.
McMullin, E. (1983). Values in science. In P. Asquith & T. Nickles (Eds.), Proceedings of the 1982 PSA (pp. 3–28). East Lansing, MI: PSA.
Michaels, D. (2008). Doubt is our product. Oxford: Oxford University Press.
Michaels, D., & Monforton, C. (2005). Manufacturing uncertainty. American Journal of Public Health, 95(supplement 1), 39–49.
Mitchell, S. (2004). The prescribed and proscribed values in science policy. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 245–255). Pittsburgh: University of Pittsburgh Press.
Murphy, J., Levidow, L., & Carr, S. (2006). Regulatory standards for environmental risks. Social Studies of Science, 36, 133–160.
National Research Council. (1983). Risk assessment in the federal government. Washington, DC: National Academy Press.
Ravetz, J. (1997). The science of what if. Futures, 29, 533–539.
Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science, 20, 1–6.
Shrader-Frechette, K. (1989). Scientific progress and models of justification. In S. Goldman (Ed.), Science, technology, and social progress (pp. 196–226). London: Associated University Presses.
Shrader-Frechette, K. (1994). Ethics of scientific research. Lanham: Rowman & Littlefield.
Shrader-Frechette, K. (2001). Radiobiological hormesis, methodological value judgments, and metascience. Perspectives on Science, 8, 367–379.
Shrader-Frechette, K. (2004a). Using metascience to improve dose-response curves in biology: Better policy through better science. Philosophy of Science, 71, 1026–1037.
Shrader-Frechette, K. (2004b). Comparativist rationality and epidemiological epistemology: Theory choice in cases of nuclear-weapons risk. Topoi, 23, 153–163.
Shrader-Frechette, K. (2010). Conceptual analysis and special-interest science: Toxicology and the case of Edward Calabrese. Synthese, 177, 449–469.
Silbergeld, E. (1991). Risk assessment and risk management. An uneasy divorce. In D. G. Mayo & R. D. Hollander (Eds.), Acceptable evidence: Science and values in risk management (pp. 99–114). Oxford: Oxford University Press.
Solomon, M. (2001). Social empiricism. Cambridge, MA: MIT Press.
Steel, D. (2010). Epistemic values and the argument from inductive risk. Philosophy of Science, 77, 14–34.
Steel, D. (2011). Extrapolation, uncertainty factors, and the precautionary principle. Studies in History and Philosophy of Biological and Biomedical Sciences, 42, 356–364.
Steel, D. (2015). Acceptance, values, and probability. Studies in History and Philosophy of Science, 53, 81–88.
Steele, K. (2012). The scientist qua policy advisors makes value judgments. Philosophy of Science, 79, 893–904.
Stirling, A. (1999). On science and precaution in the management of technological risk, vol. 1. Brussels: EC Joint Research Center.
Wandall, B. (2004). Values in science and risk assessment. Toxicology Letters, 152, 265–272.
Wandall, B., Hansson, S. O., & Rudén, C. (2007). Bias in toxicology. Archives of Toxicology, 81, 605–617.
Weiss, C. (2006). Can there be science-based precaution? Environmental Research Letters, 1, 014003.
White, R. H., Cote, I., Zeise, L., Fox, M., Dominici, F., Burke, T., et al. (2009). State-of-the-science workshop report: Issues and approaches in low-dose–response extrapolation for environmental health risk assessment. Environmental Health Perspectives, 117, 283–287.
Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science, 40, 92–101.
Worrall, J. (1988). The value of a fixed methodology. British Journal for the Philosophy of Science, 39, 263–275.
Wynne, B. (1992). Risk and social learning: reification to engagement. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 275–297). Westport: Praeger.
Acknowledgments
This work has received support from the Spanish Government’s State Secretariat for Research, Development and Innovation (research projects La explicación basada en mecanismos en la evaluación de riesgos [FFI2010-20227/FISO] and La evaluación de beneficios como ciencia reguladora: las declaraciones de salud de los alimentos funcionales [FFI2013-42154-P]), and from European Commission FEDER funds.
Todt, O., Luján, J.L. Non-cognitive Values and Methodological Learning in the Decision-Oriented Sciences. Found Sci 22, 215–234 (2017). https://doi.org/10.1007/s10699-015-9482-3