Non-cognitive Values and Methodological Learning in the Decision-Oriented Sciences

Abstract

The function and legitimacy of values in decision making are critically important issues in the contemporary analysis of science. They are particularly relevant to the more application-oriented areas of science, specifically decision-oriented science in the regulation of technological risks. Our main objective in this paper is to assess the diversity of roles that non-cognitive values related to decision making can play in the kinds of scientific activity that underlie risk regulation. First, we analyze the issue of values with the help of a framework taken from the wider philosophical debate on science and values. Second, we study the principal conceptualizations used by scholars who have applied them to numerous case studies. Third, we appraise the links between those conceptualizations and learning processes in decision-oriented science. Here we draw on the concept of methodological learning, i.e., learning about the best methodologies for generating knowledge that is useful for science-based regulatory decisions. The main result of our analysis is that non-cognitive values can contribute to methodological improvements in science in three principal ways: (a) as a basis for critical analysis (differentiating “sound” from “bad” science), (b) by contextualizing methodologies (identifying links between methods and objectives), and (c) by establishing the burden of proof (generating data that would otherwise not be generated).

Notes

  1. Science whose objective is to inform public decision making (regulation, innovation, policy making, etc.).

  2. What Jasanoff (1990) calls “regulatory science”.

  3. In this paper we will consider two types of influence of non-cognitive values that are particularly relevant to decision-oriented science:

    (a) When non-cognitive values influence the early stages of research, particularly decisions about which topics to investigate and in what fashion, because these decisions determine the available empirical evidence.

    (b) When non-cognitive values influence cognitive values at any stage of scientific research, for example in the selection of methodologies, extrapolation models, standards of evidence, etc.

    By contrast, we will not consider cases in which non-cognitive values exert direct influence on the research process and its results, for example when they lead to the outright acceptance or rejection of hypotheses because of their concordance or discordance with religious or ideological beliefs (as in the rejection of Darwinism), to the fabrication of data, or to scientists reporting results to which they are predisposed (as in the case of N rays).

  4. Laudan (2004) makes a distinction between epistemic and cognitive values (a point of view shared by Douglas). On his view, epistemic values are those most directly related to empirical support, while cognitive values refer to all the other values related to scientific knowledge (simplicity, scope, etc.). While Laudan’s distinction is relevant in certain cases, in this paper we will not differentiate between epistemic and cognitive values.

  5. From the point of view of a particular social actor, certain non-cognitive values may be understood as “good” or “positive”, while others may be interpreted as “bad” or “negative”. Such a differentiation is, however, always context-dependent. In other words, to consider that, for instance, the promotion of innovation is “better” than the promotion of public health depends on contextual, ideological and moral considerations. In fact, many of the controversies related to scientific-technological products and systems (genetically modified organisms, pharmaceuticals, human biotechnology, climate change, etc.) focus on such contextually situated issues. Here, we will not take into consideration such context-dependent differentiations in the evaluation of non-cognitive values.

  6. This terminology introduced by Mayo makes reference to idealized stances. Even though we cannot expect the authors we will analyze here to conform completely to any of those three idealizations, we still consider this classification useful for situating the authors and pointing to the main differences between them.

  7. The “positivist” stance, obviously, must not be understood as referring to logical positivism. At best, it could be understood in terms of the classical positivist point of view in philosophy, represented by authors like Comte. In any case, its use here merely serves as a very broad label. We could equally well use the term “technocratic” to characterize this stance.

  8. Alternatively, this point of view could be labeled “relativist”.

  9. Steel (2010), against Laudan, argues that there do exist ways of adjusting the distribution of errors without negatively affecting error minimization.
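
    To make the statistical point concrete, here is a minimal numerical sketch (our illustration, not Steel's or Laudan's): with a fixed body of evidence, moving a test's decision threshold merely trades Type I errors (false positives) against Type II errors (false negatives), but gathering more evidence can reduce both at once, so the distribution of errors can indeed be adjusted without sacrificing overall error minimization.

    ```python
    # Minimal sketch (illustrative only): a one-sided z-test of H0: mu = 0
    # against H1: mu = 0.5, with unit-variance observations. The decision rule
    # accepts H1 when the sample mean exceeds a threshold.
    from scipy.stats import norm

    def error_rates(threshold: float, n: int, effect: float = 0.5):
        """Return (Type I, Type II) error rates for sample size n."""
        se = 1 / n ** 0.5                                   # std. error of the mean
        type_1 = 1 - norm.cdf(threshold, loc=0, scale=se)   # false positive rate
        type_2 = norm.cdf(threshold, loc=effect, scale=se)  # false negative rate
        return type_1, type_2

    # Fixed evidence (n = 20): shifting the threshold only redistributes errors.
    print(error_rates(0.25, 20))  # ~ (0.13, 0.13)
    print(error_rates(0.35, 20))  # ~ (0.06, 0.25): fewer false alarms, more misses

    # More evidence (n = 80): both error rates fall, under either threshold policy.
    print(error_rates(0.25, 80))  # ~ (0.01, 0.01)
    ```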

  10. The latter point is a generalization of Laudan’s (2001) naturalistic thesis that determining the best methodology for satisfying a number of given cognitive values is an empirical question. This same thesis has been applied here to the issue of non-cognitive values.

  11. Jeffrey’s (1956) classical work (in response to Rudner, 1953, among others) constitutes a radicalized version of this thesis: the task of science is limited to collecting and characterizing evidence for or against various hypotheses, but never extends to accepting or rejecting those hypotheses. Laudan would probably not agree with this interpretation (see, for example, Laudan 2010).

  12. Steele (2012) would disagree. On her account, transposing the climate scientists’ complex beliefs into a (cruder) format that can usefully be communicated to policy makers by itself implies value judgments. The latter are unavoidable because the scientists have to make (value-laden) decisions on how to match their beliefs to the format required by other social actors.
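
    A schematic sketch of the kind of coarsening at issue (our illustration, not taken from Steele's paper): mapping a scientist's precise degree of belief onto a cruder verbal scale, here the familiar IPCC likelihood categories, necessarily discards information, and the choice of category boundaries is itself a judgment about the communication format.

    ```python
    # Schematic sketch (illustrative only): translating a precise probability
    # into the coarse verbal likelihood scale used in IPCC-style policy
    # communication. The category boundaries embody a choice of format.
    IPCC_SCALE = [            # (lower probability bound, verbal label)
        (0.99, "virtually certain"),
        (0.90, "very likely"),
        (0.66, "likely"),
        (0.33, "about as likely as not"),
        (0.10, "unlikely"),
        (0.01, "very unlikely"),
        (0.00, "exceptionally unlikely"),
    ]

    def communicate(probability: float) -> str:
        """Coarsen a degree of belief into a verbal category for policy makers."""
        for bound, label in IPCC_SCALE:
            if probability >= bound:
                return label

    print(communicate(0.97))  # "very likely": the difference from 0.91 is lost
    print(communicate(0.40))  # "about as likely as not"
    ```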

  13. Rudner’s (1953) thesis was criticized by Jeffrey (1956). Hempel (1981) rejected both points of view.

  14. Short-term tests (STTs), in the regulatory context, are indirect tests for evaluating, for example, the carcinogenic potential of chemical substances, by way of methods like bacterial and mammalian mutagenesis, cell transformation, and animal DNA assays. They substitute for standard, full-blown studies. The advantage of STTs is that the standard scientific methodology for obtaining similar (albeit somewhat more reliable) data requires full-scale animal assays, which are complex, expensive and time-intensive. As substitutes for standard studies, STTs are usually less accurate. However, in applications in which they are not used as substitutes for other kinds of methodology, for example when they are applied to test for immediate effects (acute toxicity), STTs may be as accurate as standard studies.

  15. In risk assessment, mechanistic information refers to the understanding of the individual processes in highly complex (chemical, biological) systems that produce an observed outcome, like mutagenicity. Such information comes in two types: a less detailed one (“mode of action” data), which refers to the sequence of stages in the interaction between an organism and a toxic product; and a more detailed one (“mechanism of action” data), which comprises a highly detailed, usually molecular-level, understanding of those events (White et al. 2009).

  16. Shifting the burden of proof onto the technology developer (industry) could be understood as a form of science of “what if” (Ravetz 1997). The relevant question in this case is: what happens if we are wrong, and some of the technologies we think are safe turn out to be problematic? In other words, we would have to investigate the consequences of possible error.

  17. These three perspectives would correspond to three different ways of understanding Laudan’s point about the distribution of errors.

  18. As we have already seen, Douglas has adopted Laudan’s distinction between epistemic and cognitive values. This distinction is equivalent to the one that Steel (2010) makes between intrinsic and extrinsic epistemic values.

  19. Allocation of the burden of proof has to be understood as a methodological question, not merely as a regulatory prescription about who bears the burden of proof.

  20. However, this issue has to be analyzed with care. There are cases in which non-cognitive values may appear to influence the choice of hypotheses, but this influence is either only apparent or very indirect. Take as an example models for extrapolating from high to low doses of exposure. Here we are clearly faced with a methodological decision. But underlying any extrapolation model there is an empirical hypothesis about the interaction between a (chemical, radioactive, etc.) substance and humans. In this sense we could assert that non-cognitive values indeed are a source of legitimacy for the choice of hypotheses. It is important to point out, however, that in this case non-cognitive values matter only because we have no other criterion at our disposal for deciding between the alternative models, which are underdetermined by cognitive values, as the sketch below illustrates. In this case the choice of model, and indirectly of its underlying empirical hypothesis, has a purely methodological function.
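
    A hypothetical illustration of that underdetermination (ours, not the authors'; all parameter values are invented): a linear no-threshold model and a threshold model can both lie within the scatter of the available high-dose assay data, yet each embodies a different empirical hypothesis, and the two diverge sharply at the low doses relevant to regulation.

    ```python
    # Hypothetical sketch (invented parameters): two extrapolation models that
    # are roughly indistinguishable given noisy high-dose data, but that
    # disagree about risk at low environmental exposures.

    def linear_no_threshold(dose: float, slope: float = 0.020) -> float:
        """Empirical hypothesis: excess risk is proportional to dose down to zero."""
        return slope * dose

    def threshold_model(dose: float, slope: float = 0.022,
                        threshold: float = 10.0) -> float:
        """Empirical hypothesis: no excess risk below a threshold dose."""
        return max(0.0, slope * (dose - threshold))

    # At the high doses where (noisy) assay data exist, the models roughly agree...
    for dose in (50, 100, 200):
        print(dose, linear_no_threshold(dose), threshold_model(dose))
        # 50 -> 1.00 vs 0.88;  100 -> 2.00 vs 1.98;  200 -> 4.00 vs 4.18

    # ...but at a low dose they diverge qualitatively: some risk versus none.
    print(1.0, linear_no_threshold(1.0), threshold_model(1.0))  # 0.02 vs 0.0
    ```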

References

  • Ashford, N. A. (2005). Incorporating science, technology, fairness, and accountability in environmental, health, and safety decisions. Human and Ecological Risk Assessment, 11, 85–96.

  • Betz, G. (2013). In defence of the value free ideal. European Journal for the Philosophy of Science, 3, 207–220.

  • Churchman, C. (1948). Statistics, pragmatics, induction. Philosophy of Science, 15, 249–268.

  • Cranor, C. (1993). Regulating toxic substances. New York: Island Press.

  • Cranor, C. (1995). The social benefits of expedited risk assessment. Risk Analysis, 15, 353–358.

  • Cranor, C. (1997). The normative nature of risk assessment: Features and possibilities. Risk: Health, Safety and Environment, 8, 123–136.

  • Cranor, C. (1999). Asymmetric information, the precautionary principle, and burdens of proof. In C. Raffensperger & J. Tickner (Eds.), Protecting public health and the environment: Implementing the precautionary principle (pp. 74–99). Washington: Island Press.

  • Cranor, C. (2001). Learning from the law to address uncertainty in the precautionary principle. Science and Engineering Ethics, 7, 313–326.

  • Cranor, C. (2006). Toxic torts. Science, law and the possibility of justice. Cambridge: Cambridge University Press.

  • Cranor, C. (2011). Legally poisoned: How the law puts us at risk from toxicants. Cambridge, MA: Harvard University Press.

  • Dorato, M. (2004). Epistemic and nonepistemic values in science. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 52–77). Pittsburgh: University of Pittsburgh Press.

  • Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67, 559–579.

  • Douglas, H. (2004). Border skirmishes between science and policy. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 220–244). Pittsburgh: University of Pittsburgh Press.

  • Douglas, H. (2006). Norms for values in scientific belief acceptance. Contributed paper, 20th Biennial Meeting of the Philosophy of Science Association (PSA 2006), Vancouver, 2–14.

  • Douglas, H. (2007). Rejecting the ideal of value-free science. In H. Kincaid, J. Dupré, & A. Wylie (Eds.), Value-free science? (pp. 120–140). New York: Oxford University Press.

  • Douglas, H. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.

  • Douglas, M., & Wildavsky, A. (1982). Risk and culture: An essay on the selection of technical and environmental dangers. Berkeley: University of California Press.

  • Dupré, J. (2007). Fact and value. In H. Kincaid, J. Dupré, & A. Wylie (Eds.), Value-free science? (pp. 27–40). New York: Oxford University Press.

  • Elliott, K. (2000). Conceptual clarification and policy-related science: The case of chemical hormesis. Perspectives on Science, 8, 346–366.

  • Elliott, K. (2006). A novel account of scientific anomaly: Help for the dispute over low-dose biochemical effects. Philosophy of Science, 73, 790–802.

  • Elliott, K., & McKaughan, D. (2009). How values in scientific discovery and pursuit alter theory appraisal. Philosophy of Science, 76, 598–611.

  • Elliott, K. (2011a). Is a little pollution good for you? Incorporating societal values in environmental research. New York: Oxford University Press.

  • Elliott, K. (2011b). Direct and indirect roles for values in science. Philosophy of Science, 78, 303–324.

  • Elliott, K. (2013). Douglas on values: From indirect roles to multiple goals. Studies in History and Philosophy of Science, 44, 375–383.

  • Giere, R. (1991). Knowledge, values, and technological decisions: A decision theoretic approach. In D. G. Mayo & R. D. Hollander (Eds.), Acceptable evidence: Science and values in risk management (pp. 183–203). Oxford: Oxford University Press.

  • Haack, S. (2008). Proving causation: The holism of warrant and the atomism of Daubert. Journal of Health & Biomedical Law, 4, 253–289.

  • Hansen, S. F., von Krauss, M., & Tickner, J. A. (2007). Categorizing mistaken false positives in regulation of human and environmental health. Risk Analysis, 27, 255–269.

  • Hempel, C. (1981). Turns in the evolution of the problem of induction. Synthese, 46, 389–404.

  • Jasanoff, S. (1990). The fifth branch. Science advisers as policy makers. Cambridge, MA: Harvard University Press.

  • Jeffrey, R. (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science, 23, 237–246.

  • Kincaid, H., Dupré, J., & Wylie, A. (Eds.). (2007a). Value-free science? New York: Oxford University Press.

  • Kincaid, H., Dupré, J., & Wylie, A. (2007b). Introduction. In H. Kincaid, J. Dupré, & A. Wylie (Eds.), Value-free science? (pp. 3–23). New York: Oxford University Press.

  • Krimsky, S. (2005). The weight of scientific evidence in policy and law. American Journal of Public Health, 95, S129–S136.

  • Kuhn, T. S. (1977). Objectivity, value judgment, and theory choice. In T. S. Kuhn (Ed.), The essential tension (pp. 320–339). Chicago: University of Chicago Press.

  • Lacey, H. (1999). Is science value free? Values and scientific understanding. London: Routledge.

  • Lacey, H. (2005). Values and objectivity in science. Lanham: Lexington Books.

  • Laudan, L. (1984). Science and values. Berkeley: University of California Press.

  • Laudan, L. (2001). Epistemic crises and justification rules. Philosophical Topics, 29, 271–317.

  • Laudan, L. (2004). The epistemic, the cognitive and the social. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 14–23). Pittsburgh: University of Pittsburgh Press.

  • Laudan, L. (2008). Truth, error, and criminal law: An essay in legal epistemology. Cambridge: Cambridge University Press.

  • Laudan, L. (2010). Legal epistemology: The anomaly of affirmative defenses. In D. Mayo & A. Spanos (Eds.), Error and inference: Recent exchanges on experimental reasoning, reliability, and the objectivity and rationality of science (pp. 376–396). Cambridge: Cambridge University Press.

  • Laudan, L. (2011). Is it finally time to put ‘proof beyond a reasonable doubt’ out to pasture? In A. Marmor (Ed.), Routledge companion to philosophy of law. London: Routledge.

  • Lemons, J., Shrader-Frechette, K., & Cranor, C. (1997). The precautionary principle: Scientific uncertainty and Type I and Type II errors. Foundations of Science, 2, 207–236.

  • Levi, I. (1960). Must the scientist make value judgments? The Journal of Philosophy, 57, 345–357.

  • Longino, H. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton: Princeton University Press.

  • Longino, H. (2002). The fate of knowledge. Princeton, NJ: Princeton University Press.

  • Machamer, P., & Douglas, H. (1999). Cognitive and social values. Science & Education, 8, 45–54.

  • Machamer, P., & Wolters, G. (Eds.). (2004). Science, values and objectivity. Pittsburgh: University of Pittsburgh Press.

  • Mayo, D. G. (1991). Sociological versus metascientific views of risk assessment. In D. G. Mayo & R. D. Hollander (Eds.), Acceptable evidence: Science and values in risk management (pp. 249–279). Oxford: Oxford University Press.

  • Mayo, D. G. (1996). Error and the growth of experimental knowledge. Chicago: University of Chicago Press.

  • Mayo, D. G. (2010). Error and the law. Exchanges with Larry Laudan. In D. Mayo & A. Spanos (Eds.), Error and inference: Recent exchanges on experimental reasoning, reliability, and the objectivity and rationality of science (pp. 397–411). Cambridge: Cambridge University Press.

  • Mayo, D. G., & Hollander, R. D. (Eds.). (1991). Acceptable evidence: Science and values in risk management. Oxford: Oxford University Press.

  • Mayo, D. G., & Spanos, A. (2006). Philosophical scrutiny of evidence of risks: From bioethics to bioevidence. Philosophy of Science, 73, 803–816.

  • Mayo, D. G., & Spanos, A. (2008). Risk to health and risk to science: the need for a responsible ‘bioevidential’ scrutiny. BELLE Newsletter, 14, 18–21.

  • McMullin, E. (1983). Values in science. In P. Asquith & T. Nickles (Eds.), Proceedings of the 1982 PSA (pp. 3–28). East Lansing, MI: PSA.

  • Michaels, D. (2008). Doubt is our product. Oxford: Oxford University Press.

  • Michaels, D., & Monforton, C. (2005). Manufacturing uncertainty. American Journal of Public Health, 95(supplement 1), 39–49.

  • Mitchell, S. (2004). The prescribed and proscribed values in science policy. In P. Machamer & G. Wolters (Eds.), Science, values and objectivity (pp. 245–255). Pittsburgh: University of Pittsburgh Press.

  • Murphy, J., Levidow, L., & Carr, S. (2006). Regulatory standards for environmental risks. Social Studies of Science, 36, 133–160.

  • National Research Council. (1983). Risk assessment in the federal government. Washington, DC: National Academy Press.

  • Ravetz, J. (1997). The science of what if. Futures, 29, 533–539.

  • Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science, 20, 1–6.

  • Shrader-Frechette, K. (1989). Scientific progress and models of justification. In S. Goldman (Ed.), Science, technology, and social progress (pp. 196–226). London: Associated University Presses.

  • Shrader-Frechette, K. (1994). Ethics of scientific research. Lanham: Rowman & Littlefield.

  • Shrader-Frechette, K. (2001). Radiobiological hormesis, methodological value judgments, and metascience. Perspectives on Science, 8, 367–379.

  • Shrader-Frechette, K. (2004a). Using metascience to improve dose-response curves in biology: Better policy through better science. Philosophy of Science, 71, 1026–1037.

  • Shrader-Frechette, K. (2004b). Comparativist rationality and epidemiological epistemology: Theory choice in cases of nuclear-weapons risk. Topoi, 23, 153–163.

  • Shrader-Frechette, K. (2010). Conceptual analysis and special-interest science: Toxicology and the case of Edward Calabrese. Synthese, 177, 449–469.

  • Silbergeld, E. (1991). Risk assessment and risk management. An uneasy divorce. In D. G. Mayo & R. D. Hollander (Eds.), Acceptable evidence: Science and values in risk management (pp. 99–114). Oxford: Oxford University Press.

  • Solomon, M. (2001). Social empiricism. Cambridge, MA: MIT Press.

  • Steel, D. (2010). Epistemic values and the argument from inductive risk. Philosophy of Science, 77, 14–34.

  • Steel, D. (2011). Extrapolation, uncertainty factors, and the precautionary principle. Studies in History and Philosophy of Biological and Biomedical Sciences, 42, 356–364.

  • Steel, D. (2015). Acceptance, values, and probability. Studies in History and Philosophy of Science, 53, 81–88.

  • Steele, K. (2012). The scientist qua policy advisor makes value judgments. Philosophy of Science, 79, 893–904.

  • Stirling, A. (1999). On science and precaution in the management of technological risk, vol. 1. Brussels: EC Joint Research Center.

  • Wandall, B. (2004). Values in science and risk assessment. Toxicology Letters, 152, 265–272.

  • Wandall, B., Hansson, S. O., & Rudén, C. (2007). Bias in toxicology. Archives of Toxicology, 81, 605–617.

  • Weiss, C. (2006). Can there be science-based precaution? Environmental Research Letters, 1, 014003.

  • White, R. H., Cote, I., Zeise, L., Fox, M., Dominici, F., Burke, T., et al. (2009). State-of-the-science workshop report: Issues and approaches in low-dose–response extrapolation for environmental health risk assessment. Environmental Health Perspectives, 117, 283–287.

  • Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science, 40, 92–101.

  • Worrall, J. (1988). The value of a fixed methodology. British Journal for the Philosophy of Science, 39, 263–275.

  • Wynne, B. (1992). Risk and social learning: reification to engagement. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 275–297). Westport: Praeger.

Acknowledgments

This work has received support from the Spanish Government’s State Secretariat for Research, Development and Innovation (research projects La explicación basada en mecanismos en la evaluación de riesgos [FFI2010-20227/FISO] and La evaluación de beneficios como ciencia reguladora: las declaraciones de salud de los alimentos funcionales [FFI2013-42154-P]) and from European Commission FEDER funds.

Author information

Correspondence to Oliver Todt.

Cite this article

Todt, O., Luján, J.L. Non-cognitive Values and Methodological Learning in the Decision-Oriented Sciences. Found Sci 22, 215–234 (2017). https://doi.org/10.1007/s10699-015-9482-3
