Since the mid-twentieth century, insulating science from social and ethical values has been something of an obsession for philosophers of science.Footnote 1 Philosophers articulated, and then staunchly defended, a value-free ideal for science. This ideal did not insulate science completely from societal influence. Philosophers were willing to concede the “context of discovery” to the influence of values (which, in contemporary parlance, includes scientists deciding upon research projects and methodologies), but argued that the “context of justification” had no place for social values. This view was supported by three ideas: (1) that societal values can add no confirmatory weight to empirical claims (and that to think otherwise is to confuse “ought” claims with “is” claims); (2) that values distinctive to scientific theory choice could guide scientists when faced with inferential decisions (i.e., epistemic or cognitive values); and (3) that the authority of science in the public sphere rested on the separation and disentanglement of science from social and ethical values. This final presumption was bound up with hopes for science as a resource in public debates that could transcend divergent societal interests—that science could be a “value-neutral” resource in our democratic discourse.

I will argue that there is something to the first idea—that there is an important conceptual difference between normative and descriptive claims, although in practice each is often used to support the other. Yet because of their logical structures, normative claims cannot provide sole support for descriptive claims, and vice versa. I will argue that the second idea is crucially incomplete—that although there are distinctive epistemic values in science, they cannot decisively guide inference. And, finally, I will argue that the third idea is inadequate as well—that we need a more complex understanding of why we grant science general epistemic authority, with multiple bases supporting that authority.

Descriptive and Normative Claims

It is a standard presumption in philosophy that one cannot derive “is” claims from “ought” claims, nor can one derive “ought” claims from “is” claims. I can make arguments about how one ought to value science or how one ought to value democracy, but that does not mean that the people to whom I am making these arguments do value science or democracy. Similarly, I can describe the way the world is in great detail, but someone can always respond, “Yes, but that is not how it ought to be.” The difference between “is” and “ought” claims seems crucial for giving us the space to imagine what a better world might be like, even in the face of an (often grim) accurate and detailed description. It also provides space for resisting the automaticity that can follow from a particular “ought” claim. That the world is not as it should be (in some people’s eyes) may be a good thing in our view, and we might use that descriptive difference, or the projected costs of change that arise from a detailed description, to resist a normative appeal.

Philosophers debate whether the practical difference between these types of claims is grounded in some metaphysical difference in the nature of normativity. Is the true different from the good? Is the beautiful different from the just? I have no wish to wade into such debates, although it certainly seems plausible that the answer to both these questions is yes. The world does not seem so unified that all the normativities line up. For my purposes here, it suffices to note that making a whole set of descriptive claims (with nothing else) does not make an adequate argument for a normative claim; nor does a whole set of normative claims (with nothing else) make an adequate argument for a descriptive claim. Neither kind of claim can serve as sole justification for the other. They simply don’t interact that way.

They do, however, interact. For example, it is difficult to see how to make an argument about how the world ought to be (or, more pointedly, how we ought to act) without relying upon some descriptive claims about the way the world is. We need empirical information about what causes pain, for example, if we are to craft a world with less pain in it. Arguments about what we should do rely upon descriptions of what we can do, what is feasible, what is readily achievable, what comes at higher costs, and what those costs are. That we need both kinds of claims in our arguments indicates that there are in fact two kinds of claims.

Conversely, the question of whether one can make an argument for a descriptive claim without normative claims is a central concern in the current values in science debate, particularly as science is now a major source for our descriptive claims. At issue is whether descriptive arguments rely on (without being built wholly out of) normative claims. The decisions about what to study, how to study it, and when to say the study is complete (when the evidence is sufficient) suffuse normative presuppositions into our descriptive statements. These presuppositions are not there on the surface (just as descriptive claims are not there on the surface of normative claims) and they do not suffice on their own for arguing for (or supporting) the descriptive claims, but they are part of the overall argument for, and process of, generating the descriptive claims.

But to say that normative claims have a role to play in generating descriptive claims is not to give away the difference altogether. When Carl Hempel argued back in the 1960s that normative claims can provide no confirmatory weight to a descriptive claim, he was right.Footnote 2 Saying that the world ought to be a particular way is not a good argument that the world is actually that way. In this form of argument, such normative expressions are more pious hopes than reasons for the accuracy of descriptions. And this is a gap in kind that we want to preserve. The world is often not how we want it to be, and keeping this difference is essential for being able to perceive and to say that.

With this conceptual distinction in place, we can now address the debate over values in science. Acknowledging that there is a conceptual difference between descriptive and normative claims, we can examine more closely how they might (and should) interact in producing science. (For those who are not convinced there is a difference between descriptive and normative claims, the value-free ideal for science doesn’t make any sense. In the arguments that follow, I will presume a conceptual difference, but show a practical interdependence, between the two.)

Values in Science

Science is, of course, a human practice. And when we do science, we entangle values, including social and ethical values, in that science. The questions are how values interweave with science, whether that entanglement is legitimate and necessary, and ultimately what to do about it.

Critics of the value-free ideal for science initially pointed out how values (particularly social and ethical values) influence the practice and products of science, because science is performed by humans. Feminist philosophers of science showed how sexist values blinded scientists to alternative explanations of phenomena or directed the attention of scientists to some narrow subset of data, a fuller examination of which produced rather different interpretations and results. Examples from archaeology (explanations of how tool use developed), cellular biology (explanations of fertilization processes), and animal biology (explanations of duck genital morphology and mating behavior) demonstrate such influences of values on science in spades.Footnote 3 Feminists were quick to point out that problematic science of this sort was not obviously bad science (scientists were not making up data, nor were they engaged in pseudoscientific practices immune from criticism and revision), but its limitations (and the value influences on it) became obvious once better science was pursued.Footnote 4 Looking back on the cases critiqued by feminists, the science looks woefully inadequate and blinkered.

Several feminist philosophers proposed addressing these issues by focusing on the social nature of science.Footnote 5 Because science requires communities of scientists, in critical dialogue with each other, feminist scholars looked to the structure of those communities for answers. Improving the diversity of scientists, many argued, would improve the range of explanations pursued and the kinds of phenomena examined, bolstering the epistemic reliability of the sciences. And with more diversity of participants in science, and more diversity of values among those participants, value judgments and influences could be more readily spotted by someone in the scientific community rather than disappearing from view by virtue of universal acceptance among scientists. In addition, on such an agenda, the virtues of the just and the true could be aligned, as breaking down the barriers to participating in scientific research would both be fairer and produce more accurate science.Footnote 6

While this is certainly a worthwhile approach to addressing many issues of justice in science and much epistemically inadequate science, it does not take on the value-free ideal directly. One could argue that the reason for increasing diversity in science is to ferret out those hidden value presuppositions that were distorting the search for truth.Footnote 7 Once made clear, one could hope that the values could be removed from the scientific explanations. The called-for diversity in science could be made to serve the ultimate aim of a value-free science. What the feminist critiques showed (for some) is not a problem with the value-free ideal per se but with the past practices of science. The cases of sexist science were weak science, empirically feeble science, and the pursuit of new theories and evidence made science stronger. Stronger science could still aim to be value-free.

Another reason the value-free ideal remained mostly unscathed was that it was narrowly focused on when values need to be kept out. One could still argue that science should be value-free in its justifications, that regardless of how the theories and explanations of empirical phenomena were developed (and feminist critiques showed we needed to improve this process substantially), what mattered when making inferences in science—when deciding what the evidence said—is that scientists try to keep values out of that process, and just focus on the evidence at hand (perhaps bolstered by a sense that with diverse participants in science, the evidence at hand is the best available set). The value-free ideal was articulated as being about the moment of inference in science, of being about the practices of justification at one particular point. The idea was that if values were kept out at this point, science could serve as the pure fulcrum for later decisions, that it could be universal and authoritative if and only if values were not part of the justificatory inference. And indeed, the idea that values can offer no confirmatory weight to the pile of evidence, and that if they did we would be blurring the important difference between descriptive and normative claims, added further reason to support the value-free ideal. Scientists needed to make inferences (and justify those inferences) with no regard to social and ethical values, according to the value-free ideal. Maintaining this ideal was crucial to the authority of science, which rested on purity from societal influences at the point of inference.

To upend the value-free ideal, and its presumptions about the aim of purity and autonomy in science, one needs to tackle the ideal qua ideal at the moment of justification. This is the strength of the argument from inductive risk. It points to the inferential gap that can never be filled in an inductive argument, whenever the scientific claim does not follow deductively from the evidence (which in inductive, ampliative sciences it almost never does). A scientist always needs to decide, precisely at the point of inference crucial to the value-free ideal, whether the available evidence is enough for the claim at issue. This is a gap that can never be filled, but only stepped across. The scientist must decide whether stepping across the gap is acceptable. The scientist can narrow the gap further with probability statements or error bars to hedge the claim, but the gap is never eliminated.

How is a scientist to decide that the available evidence is enough? That the gap is worth stepping across? That a claim is worth accepting? Some have suggested that epistemic and/or cognitive values can do this. It is time to examine whether there are “canons of inference” that can fulfill this role.

Epistemic and Cognitive Values: What Guidance?

When Isaac Levi suggested in 1960 that there were “canons of inference” that guided decisions of acceptance in science, and that these were sufficient for theory assessment in science, he helped to put in place a crucial piece of the value-free ideal.Footnote 8 There has been voluminous work on what became known as “epistemic values” (for some, “cognitive values”) in science. Some of the work has focused on particular attributes (e.g., What is the value of simplicity? Does prediction matter more than accommodation? What constitutes a good explanation?),Footnote 9 and discussions initially described a collective soup of values that scientists held.Footnote 10 More recent work has involved unpacking nuance among the values considered constitutive of science.Footnote 11

It has helped enormously to consider what these values are good for. Instead of merely noting their pervasive importance in science (historically and currently), one could attend to differences in why particular values might be central to science. For example, successful prediction and explanation are values that organize the evidence in relation to theory, and as such help to structure how we assess the strength of the available evidence.Footnote 12 Precision in successful explanation and prediction similarly helps assess how strong the evidence is—if precise theories explain or predict precise evidence, we think the evidential support is so much the stronger for the theory. Theories that successfully predict or explain a broad scope of evidence (across a range of phenomena), or theories that successfully predict or explain complex phenomena with simpler theoretical apparatus, also are judged to be supported more strongly by the evidence than competitors without these virtues. These kinds of values are properly epistemic, as they help us judge how good a theory is at this moment, and how strong the currently available evidence is.

Note that while these virtues are very helpful in assessing the strength of the available evidence, they are mute on whether the available evidence is strong enough to warrant acceptance by scientists. Such epistemic values do not speak to this question at all.

Other traditionally constitutive values in science are more future oriented, and direct our attention to the promise of a theory in the future. These values, such as broad scope over potential (but as yet ungathered) evidence, fecundity in producing predictions (as yet untested), and explanatory power (as yet uninstantiated), are suggestive of the general fruitfulness of a theory. But such future fruitfulness is only a reason to keep working on a theory, to use that potential fecundity to explore the world further, to accept it as a basis for further research, not to accept it generally for other decision-making. I have called these values “cognitive values,” because their presence means that a theory will be easier to work with going forward, and thus they have a pragmatic research value for scientists.Footnote 13 They are not epistemic, as they do not indicate the general reliability of a theory—they do not tell us that a theory is well supported and likely to produce accurate predictions. They do, however, indicate good research bets. Thus, neither the set of epistemic nor the set of cognitive values can tell us when we have enough evidence. They simply do other jobs.

There are two sources of trouble here for seeing the normative entanglement of science and social values. The first is local: that many of the cognitive values have the same name as the epistemic values, and thus are readily conflated. Predictive power could be a name for past successes (and thus be epistemic) or could be a name for future fecundity (and thus be cognitive). The same goes for explanatory power or scope or even precision and simplicity. That there is a sense of these values that is directed to past success in grappling with and organizing actual evidence (an epistemic sense) and that there is a sense of these values that speaks of the future promise of a theory (a cognitive sense) confuses things. It also makes it seem as though the general list of such values is indeed sufficient for science—for what else do scientists qua scientists need but to assess the strength of evidence and to decide upon the future promise of potential research questions?

The second source of trouble is broader: such a conception of scientific practice neglects that we want something else from scientists; that, indeed, science is not pursued for scientists alone. We need to know what to think about the world right now, and not just which theories are promising for future research. And we need to know more than how strong the evidence is for a particular theory—we need to know whether it is strong enough to use for deciding what to do in the wider world beyond the endeavors of scientific researchers. The inductive gap remains, despite the utility of epistemic and cognitive values, and we have to know what to do about it. Should it be stepped across or not? Even with probability statements or error bars, does the available evidence support the claim enough? Epistemic values can help assess how strong the evidence is; cognitive values can help assess where to place bets for future research. But for the assessment of evidential sufficiency in the moment, we need to look beyond epistemic and cognitive values.

The Necessity of Social and Ethical Values in Science

How do social and ethical values help with this inductive gap? While they can’t fill it, they are crucial for deciding when the evidence available (the strength of which is assessed using epistemic values) is strong enough. Strong enough for what? What is this assessment of sufficiency? How does a scientist decide that the inductive gap is small enough to step across acceptably? It is here that philosophers and scientists must stop looking at the purely internal practices of science and instead answer with respect to a full understanding of science as it operates within society, rather than isolated from it. When scientists decide the evidence is strong enough, they are deciding not just for themselves, but for anyone who wants to rely upon science for guiding decisions in the broader world. For that, the internal practices and values of science are not sufficient.

Social and ethical values, however, do help with this decision. They help by considering the consequences of getting it wrong, of assessing what happens if it was a mistake to step across the inductive gap (i.e., to accept a claim), or what happens if we fail to step across the inductive gap when we should. In doing so, such values help us to assess whether the gap is small enough to take the chance. If making a mistake means only minor harms, we may be ready to step across it with some good evidence. If making a mistake means major harms, particularly to vulnerable populations or crucial resources, we should raise our evidential standards accordingly. Social and ethical values weigh these risks and harms, and provide reasons for why the evidence may be sufficient in some cases and not in others.
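
To make the structure of this trade-off concrete, here is a minimal decision-theoretic sketch of the point. It is my own illustration, with hypothetical names and numbers, not a formula drawn from the literature discussed here: if wrongly accepting a claim carries one cost and wrongly withholding it carries another, the probability the evidence must confer before acceptance is warranted shifts with the ratio of those costs.

```python
# A minimal sketch of inductive risk as a decision-theoretic trade-off.
# The function name and the example numbers are hypothetical illustrations;
# the only point is that value judgments about the costs of error shift
# how much evidence counts as "sufficient."

def acceptance_threshold(cost_false_accept: float, cost_false_reject: float) -> float:
    """Probability a claim must reach before accepting it, if wrongly
    accepting costs `cost_false_accept` and wrongly withholding acceptance
    costs `cost_false_reject` (both on some common scale)."""
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# Symmetric stakes: ordinary evidential standards suffice.
print(acceptance_threshold(1, 1))  # 0.5

# Wrongly accepting would do serious harm (e.g., prematurely declaring a
# substance safe for a vulnerable population): demand much stronger evidence.
print(acceptance_threshold(9, 1))  # 0.9

# Wrongly withholding is the graver error (e.g., delaying a needed warning):
# weaker evidence may already be sufficient to act on.
print(acceptance_threshold(1, 9))  # 0.1
```

On this simple picture, epistemic values bear on estimating how probable the claim is given the evidence, while social and ethical values bear on where the threshold sits; neither can do the other’s job.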

The difficulty is that there are risks of error in all directions. There are risks of error in prematurely making a claim; there are risks of error in failing to make a claim soon enough; and there are risks of error in saying nothing while we wait for more evidence. There is no perfectly safe space in which to stand. Neither science nor logic can assure us of safety—indeed nothing can. There are no guarantees. What this examination of science, values, and inference can give us is not assurances of success, but assurances that we are doing the best we can—and what that best consists of. Doing our best in science requires the involvement of social and ethical values in the decision that evidence is sufficient.

There are alternatives to involving social and ethical values in evidential sufficiency assessments. We could simply toss a coin when deciding whether to accept or reject a claim. But this would be arbitrary, and thus irresponsible given the authoritative weight that science has in society. And we would still need to decide when the evidence was enough to warrant the coin toss! We could also set standards internal to science: What are the risks to scientific researchers and to the practice of science of accepting or rejecting a claim? But that is also arbitrary—arbitrarily insular: Why should impacts on scientists and research be the only impacts that count? Note that this too would still involve ethical values (some of the impacts on scientists would surely be ethically weighty), but we would be considering only scientists. Why should we do that? With science taking place within a broader society, why should only scientists count in making these decisions? We could ask that scientists never step across inductive gaps, but merely tell us the evidence and how strong they think it is. The practical difficulties of this are insurmountable. As I have argued, the moment of inference is not the only place where inductive risk considerations arise.Footnote 14 In addition, we would have to learn how to examine the evidence ourselves, as scientists would no longer be free to tell us what it means (that would be drawing the inference). Finally, we could require that scientists only step across the inductive gap when it is very, very small, and thus be as conservative in their risk-taking as possible. But why is this the right standard? Such a standard presumes that only the risks of making a claim incorrectly matter, and ignores the risks of not making a claim when it is true, of waiting too long.

To attempt to be value-free in the assessment of evidential sufficiency is to ignore the broader society in which science functions: by being arbitrary, by ignoring the full set of risks, or by ignoring the implications of scientific work for that society. If science is to be responsible to the broader society in which it functions, if it is to earn its authority, it should not be value-free at all. Instead, it needs to be value-responsive.

Suppose one still wanted to maintain the purity of science from social and ethical values, and that to do so one was willing to institutionally isolate scientists from society. This would involve not only making sure that only risks to scientists and to research were considered in assessments of evidential sufficiency, but also keeping scientists from saying anything publicly about their research. Others would need to maintain and police the border between science and society, deciding what bits of information, which pieces of scientific research, were ready for public consumption and which were not. Communication among scientists would need to fall behind a shroud of secrecy, insulating scientific meetings, publications, and debates from public view. Scientists could be free to pursue inquiry indefinitely, and someone else would need to decide when the evidence was enough to instigate other decisions or actions. Scientists would need to eschew the public eye, and would likely need to be physically isolated from the rest of society. We could sever science from society in this way, and thus keep scientists willfully ignorant of the societal implications of their research and keep them from thinking about those implications. We could have others trained to do this thinking for scientists and have those specialists decide when evidence was sufficient for a public communication of a claim.

I think we should view such an approach with alarm, and as indicative of a misplaced desperation to keep science “pure.” Not only would such isolation likely produce questionable science (because the forums for discourse would have to be closed to all but professional scientists, who would have to be more strictly credentialed than is currently the case, thus narrowing who was engaged in scientific discourse), but we would need to create and monitor an entirely new social institution. Who would keep track of those policing the boundary, and whether they were acting in the public interest or corrupted by a narrower interest? These would be very difficult issues to address. It would also be a very authoritarian institution, as it would require the end of the free exchange of information, and the sequestering of the entirety of empirical investigation under confidentiality wraps. The potential for abuse in such an institution is staggering. Despite the complexity we face with the demise of the value-free ideal, I think addressing the difficulties of relinquishing it is both more manageable and more desirable than a truly isolated scientific enterprise.

Nevertheless, the demise of the value-free ideal does leave us with a problem in thinking about science and values: What ideals should guide the interaction of science and values?

Searching for New Ideals

That we need some ideals for values in science seems clear. Social and ethical values can have distorting and problematic effects on science, as evidenced by the cases of sexist science uncovered by feminists. Such cases are just one way in which social and ethical values can distort science. Occurrences of manufactured doubt show the influence of social ideologies on scientific research. Because the purveyors of doubt care so much about protecting unfettered capitalism, they are willing to distort the scientific record to forestall unwelcome policies.Footnote 15 Social values such as making a profit can lead scientists in the employ of for-profit entities to bend science (e.g., by selectively reporting the results of clinical trials in medical research).Footnote 16 And some cases of scientific fraud can be viewed as a pernicious influence of social values, when scientists are so sure of how the world should be that they make up the data to show that it is that way (e.g., the psychologist Cyril Burt and the manufacture of twin data to support his beliefs about the heritability of intelligence).Footnote 17 Social and political values also drove such catastrophic cases as the influence of Trofim Lysenko on Soviet science under Stalin. We should not be sanguine about allowing social and ethical values into science unfettered. Such laissez-faire attitudes about values can make a mess of science.

Philosophers of science have offered several alternative ideals for thinking about how values should operate in science. I will articulate those ideals here and assess their strengths and weaknesses. We will see that there is no one all-encompassing ideal that can replace the traditional value-free ideal. What relinquishing the value-free ideal requires is that we grapple with a more complex terrain of science-society interactions.Footnote 18 Different ideals get at different aspects of scientific practice more or less effectively. Understanding their strengths and weaknesses allows us to see what they are useful for both philosophically and practically.

In the current literature (and I can make no claims to completeness in this fast-moving field), there are at least five different ideals (or norms) for values in science:

  1. Placing priority on epistemic values

  2. Role restrictions for values in science

  3. Getting the right values into science

  4. Ensuring proper community functioning

  5. Ensuring good institutional structures for scientific practice

Let me describe each, articulating their strengths and weaknesses, and then we can see how they fit together.

Placing Priority on Epistemic Values

Daniel Steel has suggested that the correct ideal for values in science is to make sure they do not hinder the attainment of truth (within the realm of “practically and ethically permissible” science).Footnote 19 Ethical values, of course, do restrict our methodologies and the kinds of science we pursue, so Steel does allow those kinds of restrictions on scientific research, even if they do hinder the discovery of new truths. But aside from this restriction, Steel wants no social or ethical values to interfere with the attainment of truth.

This is an interesting ideal, but presents some problems for practical guidance in science. It can be hard in practice to know whether a particular value judgment (whether social, ethical, or cognitive) is helping or hindering the attainment of truth in the middle of a research project or scientific debate.Footnote 20 Part of the excitement of science is not knowing where the truth lies, so whether a value is helping or hindering can be quite unclear without the benefit of hindsight. In addition, one can wonder whether this is the right approach to take even in cases where social and ethical values do hinder the attainment of truth. What counts as ethically permissible science is an ongoing contested arena (as the debate over gain-of-function viral research shows).Footnote 21 Sometimes ethical values can inhibit the attainment of truth (because researchers are following their conscience) before the ethical debate is settled, and we might be quite happy about that in retrospect. In short, this ideal works well only when we have settled both what the truth is and what the ethical boundaries of permissibility are, which means guidance in medias res is lacking. And we might decide in hindsight that some truths are not worth having, given the ethical costs of getting them. This ideal seems primarily useful for retrospective examinations of scientific debates.

Role Restrictions for Values in Science

In my work, I have emphasized distinct roles for values in science. I have argued that there are two roles for values in science: a direct role (where values serve as a reason to do something, and thus direct the decision) and an indirect role (where values serve to help assess whether the available evidence is sufficient for an inference or choice). I have argued that depending on where one is in the scientific process, different roles are acceptable. For example, a direct role for values is acceptable in deciding which research agenda to pursue (e.g., because the scientist cares about a particular issue) and in deciding which methodologies to employ (e.g., because a particular methodology is ethically preferable). An indirect role is acceptable in these instances as well. But at moments of data characterization and inference (the targeted terrain of the value-free ideal), I have argued that we can maintain scientific integrity while permitting social and ethical values by constraining such values to the indirect role only.Footnote 22 This role restriction is also an ideal that can help guide discourse on contentious scientific issues, as it allows for both the expression of values (“Because of this value, I find the evidence insufficient”) and guidance for productive debate (“What evidence would be convincing for you?”).

This ideal is a direct counter to the value-free ideal, and is targeted just as narrowly on these “internal” inferential moments. As such, it has little to say about the direction of research agendas. Further, it cannot help much with methodology selection (or distortion). Finally, it is not much of an ideal in the sense of something to strive for. It is more of a minimum floor; if one does not meet it, one is doing really poor science (such as writing down the data one wishes were accurate or making inferences one wishes were true). Although I think it is an important norm to hold, it will not suffice for guiding scientific practice.

Getting the Right Values in Science

Several philosophers of science have argued in recent years that the important thing to focus on for values in science is making sure that the right values are influencing scientific research.Footnote 23 Such authors have taken an “aims-oriented” approach to the problem of values in science. Janet Kourany, for example, has argued for a “joint satisfaction” ideal for values in science—that only when a decision meets both epistemic and ethical criteria is it a good decision. Kevin Elliott has called attention to the multiple goals of science, including both epistemic aims and social aims.

There are several things to note about this approach. The first is that all the authors who champion this ideal take pains to express concern for, and support of, the value of inquiry. Both the epistemic aim and the ethical/social aim must be met, for example, in Kourany’s joint satisfaction ideal. So this ideal is not just about social and ethical values, but about valorizing the general purpose of inquiry and discovery as well. The pursuit of truth matters a great deal to those who argue for this approach.

The second is that this approach successfully addresses concerns about research agenda choices and methodological choices, about which the role-restriction norm has little to say. Because both roles for values are acceptable for these choices, that approach has no normative bite at these stages. Arguing about what the right values are is exactly on target for these choices. For example, in cases where the methodological choices seem to be made to guarantee preselected outcomes, the get-the-right-values-in-science ideal can say that the decisions improperly neglect the value of inquiry, and thus are improper decisions.Footnote 24

Finally, the authors who support this approach tend to want the values utilized to themselves be the result of good inquiry—not necessarily of the same kind as empirical scientific research, but still informed by good empirical results and robust philosophical debate. On this ideal, values are not mere contaminants in our process of inquiry, but a strong support of it, as they too are open to inquiry.Footnote 25

However, despite its importance, it is doubtful that this ideal is enough. First, what the right values are is often hotly and openly contested. How we know we have the right values can be unclear. So guidance for scientists in practice can be lacking. Second, at the moment of inference (the moment of central concern to the value-free ideal and to the role-restriction ideal), this ideal provides either inaccurate or incomplete guidance. What are we to do when evidence arises that seems to challenge our value commitments? Suppose (and I think this unlikely) that we discover men and women really do have divergent mathematical abilities. Do we reject the evidence because it does not meet the joint satisfaction of ethical and epistemic values? Suppose it is strong evidence (and so meets the epistemic criterion). Do we reject it because it does not fit with our ethical commitments? This seems to conflate the “is” and the “ought,” and falls into the trap of wishful thinking and worrisome distortion that the value-free ideal was meant to ward off. It is also a case where the role-restriction ideal serves us well. We can say we want stronger evidence before we are willing to give up on our belief in the general equality of mathematical ability, and we can even say (one would hope) what such evidence should consist of. But rejecting the evidence because we do not like what it says is unacceptable. It is precisely this move that climate deniers often make, and we are rightly frustrated by that.

In short, for guiding scientists in practice, we need both of these ideals—the role-restriction ideal and the get-the-right-values-in-science ideal—in operation, although at different levels of granularity. At particular moments of inference, getting the roles right is important. And in general, having the right values is important. Indeed, one could justify the roles ideal in terms of the aims ideal—that valuing inquiry properly means, in part, keeping values in the right roles. But as noted above, there is often contention about what the right values are. To address this, we will need a broader communal perspective.

Ensuring Proper Community Functioning

One of the weaknesses of the get-the-right-values-in-science ideal is that it is mute when we don’t know what the right values are. What then? Or, what if the right values encompass a plurality of values, all legitimate, with good reasons to support them and reasonable disagreement among them? What kind of ideal can we articulate under these circumstances? Further, the previous ideals generally center on the impact of values on particular scientific choices. How can we ensure that the conditions that support the requisite critical debate and pluralistic reflection in science are in place?

Philosophers of science (led by feminists) have focused on describing the conditions for proper community functioning to address these concerns. Ensuring that one has a diverse scientific community—with clear forums for debate, expectations for the uptake of criticisms, and effective distribution of research efforts reflecting needed diversity so that alternative theories can be explored—serves to provide essential conditions for the robustness and reliability of science.Footnote 26 Such conditions also provide assurance that value judgments will be elucidated and examined within the scientific community, and that if there are disagreements about which values should be shaping research agendas, those debates can occur in an open and productive way. Having proper community functioning is essential to ensuring that, if there is general agreement on the values, the right values influence science, and, if there is not agreement on the values, some diversity of values will be deployed in making judgments in science.

Some minimum of effective community functioning is needed for producing acceptable science. But we can always do better by the lights of the ideals that philosophers like Miriam Solomon and Helen Longino provide for us. This set of ideals, focused as it is on how communities of scholars should work and distribute their efforts, complements ideals 2 and 3, which are more focused on how particular choices should be made in science. The communal functioning ideal calls for proper response and uptake of criticism, for example, but it is from ideal 2, from an articulation of how values can properly play roles in scientific reasoning, that we can see what proper response and uptake consist of. (It is not proper, for example, to say: “I don’t accept that empirical claim because it disagrees with my values.” It is proper to say: “I find that evidence insufficient because of my values and my concern over false positives, so I want stronger evidence before accepting that claim.”) That we need ideals both for governing particular choices and for guiding communities should not be surprising. What none of these ideals address, however, is how the scientific community should interact with the broader (democratic) public.

Ensuring Good Institutional Structures for Scientific Practice

While the social epistemological tendencies reflected in ideal 4 are useful for thinking about how we want our scientific communities to work, they do not help inform how the scientific community should think about its role and responsibilities to the broader society or how we want to structure the science-policy interfaces that so powerfully shape the pursuit and use of science. This area for ideals is the least developed.

Many of the ideals articulated by philosophers working on science policy focus on this kind of interaction.Footnote 27 The trouble is that the science-policy interface is multifaceted, and philosophers have yet to grapple with all the facets in articulating an ideal. What constitutes good institutional structure is very much up for debate.

For now, I hope I have shown that we need some set of nested ideals crafted from those described above. Ideal 2 is the most targeted response to the value-free ideal (both narrowly focused on inferences in science), but once we relinquish this ideal and confront the complexity of science in society, it seems obvious that no one ideal will suffice. Without the value-free ideal narrowing our focus, we have to think about and address all the ways in which values do influence science and consider how that should occur.

The Authority of Science and Ideals for Science

No one ideal for values in science will suffice. We need nested ideals, articulated for individual actors, communal practices, and science-society interfaces, in order to ground the authority of science.

The authority of science rests on the interlocking character of these norms. At the communal level, scientists are expected to continually question and critique each other’s work. They are expected to respond to criticisms raised, and to hold no scientific claim above criticism. Such mutual critique is a minimum for granting science prima facie epistemic authority. The more diverse the scientific community is, and the more reflective it is of the plurality of society, the more taken-for-granted assumptions and unexamined value commitments will (one hopes) be elucidated, and the more authority science should have.

But community practices need good individual reasoning practices with which to operate. Maintaining the proper roles for values in science keeps values from acting in place of evidence, which will support the critical interactions needed in science. New evidence should always be able to contest old positions, and this can only happen if values are not used to protect desired positions from unwanted criticism.Footnote 28 A scientist can point to their values to argue for why they require more evidence to be convinced, but they can never point to their values to argue for why evidence is irrelevant to the claims they make or protect. Asking for more evidence drives the inquiry dialectic; holding claims above evidential critique does not.

Further, it is not just in individual reasoning integrity (right roles) and communal practices, but in some shared values (operating within proper roles) that science gains its authority in a democratic society. Getting the values right, particularly in the realm of policy-relevant science, strongly supports scientific authority. That scientists are investigating questions we care about, using methodologies that we find morally acceptable and targeted at what we are concerned with, and using values we share for assessing evidential sufficiency, can and should make a big difference for what we think is epistemically authoritative. Thus, elucidating the proper roles and proper values for science is part of what makes science authoritative, rather than undermining the authority of science.

Finally, the authority of science also rests on its raw instrumental success. Relying upon scientific understandings of disease (e.g., in the instance of communicable diseases) has greatly increased lifespans; relying upon scientific understandings of materials has greatly increased the range of what we can manufacture; relying upon scientific understandings of what we can transmit through the air has transformed communication; and so forth. It is this raw instrumental success that is probably at the root of most of the trust that society places in science. But we are running into areas of science where success is not easily measured, especially in the short term, and the problems we are addressing seem more interrelated than ever. The challenge of science in democracy is still with us.

Implications

There is much work to be done in further fleshing out the ideals for individual, communal, and societal practices in science. We need these levels of norms to mesh together (at least somewhat), so that our communal expectations and societal practices do not place impossible burdens on individual scientists. We need to figure out how these norms align and how to encourage the pursuit of the ideals in real scientific practice.

But we also need to ensure that there is some space between what society might want and what scientists can pursue. While the full autonomy and isolation of science is undesirable, we also don’t want a science that only tells us what we want to hear. Some space is crucial for the practice of science. Keeping social values out of a direct role at the moment of inference is part of maintaining this space. Allowing scientists to have a say about research agendas (and to pursue some research for curiosity’s sake) is another.

Science cannot be just a mouthpiece for societal interests. If it becomes this, it will not have any claim to distinctive epistemic authority. While we need knowledge to help us pursue our social goals, we also sometimes need to know when such goals are not feasible or desirable (because of what else will come with their successful instantiation). Science needs to be able to tell us when we are running into such issues, to be able to “speak truth to power.” This ability is central to its authority in practice.