1 Introduction

The title of this paper, which connects three topics, leaves much leeway for its content. One reason is that there are apparently several different concepts of objectivity. For instance, Helen Longino identifies “two seemingly quite different senses” of objectivity ((Longino, 1990), pp. 170–171); Allan Megill claims that there are four senses of objectivity ((Megill, 1994), p. 1); Andrei Marmor distinguishes three “basically independent concepts of objectivity” ((Marmor, 2001), p. 113); Marianne Janack claims to have found 13 senses of objectivity in the literature ((Janack, 2002), p. 275); Heather Douglas claims that there are “eight operationally accessible and distinct senses of objectivity,” none of which “is strictly reducible to the others” ((Douglas, 2004), p. 453); Stephen Gaukroger claims that there are five “understandings” of objectivity ((Gaukroger, 2012), pp. 4–10); Peter Steinberger distinguishes three “kinds” of objectivity ((Steinberger, 2015), Chap. 1); and finally, in their Stanford Encyclopedia of Philosophy entry on Scientific Objectivity, Julian Reiss and Jan Sprenger discuss four different “conceptions” of objectivity ((Reiss & Sprenger, 2020), p. 2). By contrast, I shall defend the idea that there is one abstract and general core meaning of “objectivity,” and that what is seen as a variety of concepts or conceptions of objectivity are in fact mostly different criteria for, or indicators of, or means to achieve objectivity (Section 2).

Concerning the second concept in this paper’s title, value-free science, we must first determine what this ideal means, that is, which value influences it tries to bar from science (Section 3). It forbids the influence of non-scientific values on all questions of epistemic assessment but allows their influence on topic selection and the influence of scientific values at any stage of the research process. This ideal of value-free science is strongly related to the scientific goal of objectivity.

Then, we discuss the problem of inductive risk (Section 4). This is another name for Rudner’s famous objection against the value-free ideal, namely that “the scientist qua scientist makes value judgments” (Rudner, 1953). Rudner claimed that non-scientific values necessarily enter epistemic assessments, in clear violation of the value-free ideal, and that this intrusion presents a considerable danger to the objectivity of science. At this point, we can adduce our analysis of objectivity in Section 2, which showed that the relation between objectivity and the ideal of value-free science is contingent. This implies that any concrete violation of the ideal of a value-free science must be analyzed with respect to its consequences for objectivity. Surprisingly, the intrusion of non-scientific values due to inductive risk is not only not detrimental to objectivity but, on the contrary, increases the standards for objectivity (Section 5). I shall conclude the paper with a discussion of under-treated influences of non-scientific values on science, which indeed pose a threat to the objectivity of some scientific disciplines (Section 6).

Note that this paper deals with only one strand of the contemporary discussion about objectivity and the value-free ideal, in which the question is asked whether the value-free ideal can be upheld in view of the problem of inductive risk. Another important strand of the discussion, in which it is asked whether the traditional understanding of objectivity and the value-free ideal should be upheld, is not treated in this paper.Footnote 1

2 Objectivity

In the current literature, there is a broad spectrum of opinions on what objectivity is. This is partly due to the fact that we call various things “objective” (or say that they lack objectivity), for example reports, knowledge, truth, representations, reality, methods, procedures, processes, evaluations, people, councils, observations, criteria, and so on.Footnote 2 I propose that there is a general, abstract core meaning of “objectivity.” I shall illustrate this core meaning first for the special case of reports and will then generalize to the other things that may also be called objective (Section 2.1). Next, I shall discuss the positions of other authors regarding a core meaning of objectivity (Section 2.2). Then I will analyze the apparent multitude of meanings of “objectivity” (Section 2.3). Finally, I will discuss a common philosophical objection to objectivity (Section 2.4).

2.1 The abstract core meaning of “objectivity”

In the recent explications of the concept of objectivity, few authors begin the discussion of “objectivity” with the contrast between “objective” and “subjective”:Footnote 3 the objective somehow excludes the subjective (whatever that is precisely). Despite some complications regarding this contrast,Footnote 4 it still seems to me to be a good starting point for understanding the concept of objectivity.Footnote 5 Let us begin with a simple example. A report is called “objective” if it is free of subjective elements. What does this mean? It means that everything that is reported concerns properties of the reported object in question, and that properties, opinions, preferences, etc. of the reporting epistemic subject do not enter the content of the report. This means that an objective report is determined by the properties of the report’s object alone, and the reporting subject’s properties do not, in a specific sense, contribute to the report’s content. Of course, even the most objective report is not completely cut off from the reporter’s properties. The reporter’s properties necessarily come into play when, for instance, the appearance of a flock of rare birds is reported. Surely, a short-sighted reporter may have fewer details about the birds in her report than a reporter with extremely good eyesight. Thus, the reporter’s properties come into play when trying to get epistemic access to the object in question, and one reporter may have access to more properties of the respective object than another one. It is therefore misleading to say that an objective report must be free of any contributions by the epistemic subject. Some contributions may even increase the objectivity of the report (which will turn out to be important later, see Section 5). What must not happen in an objective report, however, is that properties are ascribed to the object that do not derive from it and thus distort the report. For instance, it is clear that a report of, say, a political demonstration is not objective if the number of demonstrators is exaggerated or if the report speculates without evidence about the intentions of the demonstrators. These would be distorting contributions coming from the epistemic subject that harm the objectivity of the report.

This is, in my view, the abstract general core meaning of “objective,” illustrated by a report. Objectivity means the absence of those contributions to the report by the epistemic subject that are not derived from the object in question and that are therefore distorting. In order to increase linguistic precision, I shall call such “subjective” factors (or contributions) “genetically subject-sided” factors, where “genetically” refers to their origin. They are opposed to “genetically object-sided” factors. These terms do not have the same meaning as the more common “subjective” and “objective,” which are laden with our everyday realism, whereas “subject-sided” and “object-sided” are neutral with respect to the spectrum of positions between all forms of realism and of anti-realism.Footnote 6

This explication of the core meaning of objectivity resonates well with what (Daston & Galison, 2007) preliminarily say about objectivity: “To be objective is to aspire to knowledge that bears no trace of the knower – knowledge unmarked by prejudice or skill, fantasy or judgment, wishing or striving” (p. 17), or with (Koskinen, 2020), p. 1189: “objective knowledge is knowledge about the object, untainted by distortions caused by our subjectivity” (similarly, but critically, also (Toole, 2022), p. 3). The negative meaning component of “objectivity,” the absence of distorting contributions by epistemic subjects, does not make “objectivity” a fundamentally negative concept, however, as (John, 2021), p. 14 supposes. On the contrary, the absence of genetically subject-sided factors is an indirect way of stressing the exclusive presence of genetically object-sided factors, that is, of factors that fundamentally belong to and are derived from the object in question.

This abstract core meaning of “objective” as applied to reports can now be generalized to other things that are similar to reports, like statements, stories, observations, etc., or more generally, to representations, which yields the general abstract core meaning of “objectivity”. It is primarily representations that have or fail to have the property of being objective, as also other authors note.Footnote 7 Representations are objective if they are free from distorting contributions by the epistemic subjects, thus only presenting features deriving from the object in question. Very similarly, (Gunton et al., 2022) state that “the core idea of objectivity” is “unbiasedness” (p. 942), and (Stamenkovic, 2022) speaks of “the core idea – present in the word objectivity itself – that what is objective does not depend on us (the subject), but describes something characteristic of the ‘object’ of our investigation.” (p. 4)

Due to its abstractness, this characterization of objectivity leaves many questions open. For instance, does the red color of an apple that I claim to be objective really “derive” from the apple? Does the same hold for my color-blind friend? Or does objectivity depend on what “normal” humans can perceive?Footnote 8 To answer such questions, the abstract concept of objectivity must be made more concrete by specifying the universe of discourse in which it shall be applied. I shall deal with this and related questions in Section 2.4 below.

In our language, the core meaning of “objectivity,” applicable to representations, has been transferred to processes and methods whose results are representationsFootnote 9 and to individuals and institutions that create (objective) representations (as also many other authors have concluded). Such things are derivatively called “objective” if the representations they generate are objective. Furthermore, there exists a somewhat watered-down meaning of “objective,” applicable to processes. Processes that run independently of any human intervention, “mechanically” so to speak, are also sometimes called “objective processes.” For instance, it could be stated that “evolution is an objective process,” meaning that the process runs without human influence.Footnote 10 However, in the following I shall always use the core meaning of “objectivity.”

Clearly, the notion of objectivity is somehow related to the notion of truth in the correspondence sense.Footnote 11 In both cases, some sort of reference to “mind-independent” or “subject-independent” facts or objects is implied (whatever this means concretely, see Section 2.4). However, an obvious difference concerns the possibility of comparative use, which “objective” smoothly allows (“a is more objective than b”) and which sounds somewhat awkward for “true” (“a is truer than b”). The reason is that “objective” contains the additional meaning component of fairness and balance, that is, the rejection of an unbalanced (“subjective”) selection of features of the represented object, and fairness and balance come in degrees. The additional meaning component of “objective” becomes especially obvious when we consider an extreme case. Imagine a report of a demonstration of 10,000 people in which 100 active rioters participate. Suppose the report hardly mentions the peaceful participants but extensively describes the actions of the rioters such that the impression arises that the demonstration was fairly violent. Although every single sentence in the report may be true, the report will not count as objective because the selection of reported features of the demonstration is unbalanced and one-sided. It may be noted that this meaning component of fairness and balance gives rise to an additional derived meaning of “objectivity,” applicable to cases that are neither representations nor representation-generating things. A court decision may be called “objective” if it is balanced and non-partisan; clearly, this is a derived meaning because a court decision does not represent anything in a straightforward sense.

Finally, I quickly note a special use of “objective” in our everyday language. There are cases in which the attribute “objective” is redundant, when “objective” reinforces something that is already implicated in the respective noun. For instance, “objective truth” seems to be pleonastic because strictly speaking, there is no such thing as “subjective truth”; “objective reality” is a similar case.

2.2 Other authors on a core meaning of “objectivity”

How do other authors deal with the question of the existence of an abstract core meaning of “objectivity”? In her very influential publications, Heather Douglas claims that there are eight distinct and mutually irreducible senses of objectivity, and what unites them “is the rhetorical force of ‘I endorse this and you should too’” ((Douglas, 2004), p. 453, similarly in (Douglas, 2009), p. 116).Footnote 12 Thus, what unites the different senses of objectivity according to Douglas is a particular function that any claim to objectivity supposedly has. However, I have difficulty accepting that “this sense of trust and endorsement provides a common meaning for objectivity” ((Douglas, 2009), p. 116, my ital.). First, there are other things besides objectivity that partake in a kind of reliability that may be the basis of trust and endorsement. If a person is known to keep promises or an institution is known to be reliable, this also provides a basis of trust and endorsement, but it may not make sense to describe this person or that institution as “objective.” Second, it is much more plausible to derive the trustworthiness of something objective from the core meaning of objectivity, namely the absence of distortive subjective factors. Thus, if I am interested in the object itself (and not in subjective opinions about it), then I can trust an objective representation of this object and can recommend the same trust to others. However, this trust and endorsement is not a conceptual component of objectivity but depends on a specific social constellation in which an interest in objective representations, that is, in representations that present the object without subjective distortion, is shared. Thus, the connection between trust and objectivity is contingent and not conceptual. Therefore, it is not the case that “trust […] provides a common meaning for objectivity” as (Douglas, 2009), p. 116 claims. This contingent function of trust in certain societies cannot generate the conceptual unity of the concept of objectivity.

Koskinen (2020) has improved Douglas’ analysis by stressing that it is not trust that the concept of objectivity is linked to, but reliance. This is certainly more precise than Douglas’ view. Koskinen claims, analogously to Douglas, that it is reliance “that we should talk about when trying to identify what the applicable senses of objectivity have in common” (p. 1192, also pp. 1201 and 1204). This, again, seems to be wrong because although the generation of reliance may be an important contingent function of seemingly different senses of objectivity, this does not make reliance a conceptually unifying ingredient of objectivity.Footnote 13

Reiss and Sprenger (2020) have developed four different “conceptions of objectivity” (pp. 2, 25, sometimes also called “concepts of scientific objectivity”, p. 24). They then ask the question of “how unified (or disunified) scientific objectivity is as a concept: Is there something substantive shared by all of these analyses?” They tentatively agree with Douglas (and others) that “perhaps what is unifying among the difference [sic] senses of objectivity is that each sense describes a feature of scientific practice that is able to inspire trust in science” (p. 24, my italics). However, at the beginning of their entry, they moved, if only indirectly, in the direction of the current paper when they called one of these conceptions “a natural conception of objectivity: faithfulness to facts” (p. 2, emphasis in the original) – implying that the other conceptions are less natural or even unnatural, which is somewhat awkward. Of course, the idea of faithfulness to facts is very close to what I claimed to be the abstract core meaning of objectivity, the absence of distorting contributions by the epistemic subject. Unfortunately, Reiss and Sprenger did not develop this thought but rather followed Douglas’ idea of multiple conceptions of objectivity, which I find wrong-headed (see below, Section 2.3).

What is yet missing in my presentation is a discussion of the relation that the claimed core meaning of objectivity has to what other authors call different “senses,” “conceptions,” “concepts,” “kinds,” or “understandings” of objectivity (see Section 1). This is the subject of the following section.

2.3 The apparent variety of senses of “objectivity”

Whereas the abstract idea that being objective means being free of distorting genetically subject-sided contributions may be clear and unambiguous, the application of this idea to concrete situations may not be obvious. This is partly due to the applicability of “objectivity” to a variety of different things, like representations, processes, institutions, and epistemic subjects (Section 2.1). However, even if one has mastered this challenge by having clarity about the core meaning of “objectivity”, the problem remains how to determine in concrete cases the objectivity status of a particular thing or how to improve upon it. Just knowing the core meaning of “objectivity” does not, for example, necessarily enable one to concretely evaluate a given report regarding its objectivity. What are signs of the absence of distorting contributions by the reporting subject? What are indicators of the one-sidedness of the report? In other words, one needs criteria for objectivity on the basis of which one can actually judge the objectivity status of a report, a statement, etc.Footnote 14 Note that such criteria may have different origins and strengths. They may be strictly valid (as necessary or sufficient) if they derive from the meaning of the concept itself, or their validity may be weaker if they are only contingently, that is empirically, connected with the concept. In the latter case, they may be mere indicators for (or symptoms of) objectivity whose increase or decrease may be correlated with an increase or decrease of some (possibly ill-defined) probability for the objectivity of the report, statement, etc. They may also function as means to increase objectivity.Footnote 15 For example, if you read in the Los Angeles Times that a certain process was such and such, and you compare this report with the report about the same process in the Wall Street Journal and they both concur, then it seems justified to conclude that the probability of the objectivity of these reports has increased (of course, if you believe that these two newspapers, like all mainstream media, belong to the lying press, your conclusion will likely be the opposite). What this example also shows is that such criteria of objectivity may serve as means to establish objectivity or at least to move in the direction of more objectivity. Take intersubjectivity as a more general example. It is often used as an indicator of objectivity (although it is neither necessary nor sufficient), for instance in assessing the consensual judgment of a competent scientific community. Therefore, in order to support objectivity, the responsibility for certain decisions is often transferred to teams who after deliberation are asked to reach a unanimous decision.Footnote 16 Of course, intersubjectivity certainly does not mean the same as objectivity, but intersubjectivity is possibly a good indicator of, or means to come close to, objectivity.

I have now made the distinction between the meaning (or sense) of objectivity on the one hand and criteria for objectivity on the other.Footnote 17 Clearly, it will depend on the particular situation which of the different criteria can be applied.Footnote 18 For instance, a single researcher may use different procedures in order to increase the probability of the objectivity of some result, whereas a scientific community may rely on the resource of intersubjectivity. Armed with this distinction, we may now come back to the many purportedly different meanings (or senses) of objectivity as stated by various authors. Let us begin by examining some of Douglas’ candidates for different senses of objectivity.

Take “manipulable objectivity” first ((Douglas, 2009), pp. 118–119). For instance, the objectivity of the claim “that DNA is the genetic material of the animal” gets support if we, on the basis of this claim, are able to manipulate the world “repeatedly and successfully.” This is correct, but it does not concern the meaning of objectivity. The claim that DNA is the genetic material of animals is objective if this is indeed the case and if the claim is not rooted in distorting contributions by the investigating epistemic subject. Whether the fact that DNA is the genetic material leads to successful genetic manipulability is a conceptually different question, whose answer depends on a myriad of contingent facts that are largely independent of DNA being the genetic material.

Take “convergent objectivity” next ((Douglas, 2009), pp. 119–121). If “multiple avenues” lead to the same result, we will “have increasing confidence in the reliability of the result”, that is, in its objectivity. This is correct, but it does not concern the meaning of objectivity. We may be unsure whether a result reached via a particular avenue is due to yet unknown faults of that particular avenue, that is, due to distorting contributions from the subject side. However, if a variety of avenues lead to the same result, we may (tentatively) explain this convergence by the dependence of the result on the investigated object alone, resulting in objectivity.

Take now “detached objectivity” ((Douglas, 2009), pp. 121–122). It means “the prohibition of using values in place of evidence”. This is correct, but it does not concern the meaning of objectivity. Objectivity means the absence of distorting genetically subject-sided contributions, so the so-called “detached objectivity” is a straightforward application of this core meaning of objectivity to individual epistemic subjects (Douglas acknowledges this in a way by qualifying it as the “least controversial and most crucial sense”Footnote 19).

Next, take “concordant objectivity” ((Douglas, 2009), pp. 126–127), which is roughly intersubjectivity. Clearly, intersubjectivity is an indicator of objectivity, although neither a sufficient nor a necessary one.Footnote 20 However, the meaning of intersubjectivity is clearly different from the meaning of objectivity, although even scientists sometimes equate the two.Footnote 21 Identifying intersubjectivity with objectivity may have misleading consequences. For instance, a paper entitled “The emergence of objectivity: Fleck, Foucault, Kuhn, and Hacking” (Sciortino, 2021) may elicit the expectation that the historical emergence of objective facts is discussed, as in (Fleck et al., 1979). However, the paper discusses how consensus emerges in scientific communities, and the author follows (unfortunately, in my opinion) Douglas’ lead in calling such consensus “objectivity.” (p. 130)

Douglas discusses more variants of objectivity, and my rejoinder is the same as in the previous cases. Where she thinks variants of the meaning of objectivity are at issue, I claim she deals with different criteria or indicators for one and the same concept of objectivity.Footnote 22

Let us now turn to the Stanford Encyclopedia of Philosophy entry on “Scientific objectivity.” (Reiss & Sprenger, 2020) As stated above, the authors claim that there are four different “conceptions” of objectivity. The first is “faithfulness to facts,” the second the “absence of normative commitments and value-freedom,” the third the “absence of personal bias,” and the fourth something that “is anchored in scientific communities and their practices.” (p. 2) The faithfulness-to-facts conception is very close to what is defended here as the abstract core meaning of objectivity, whereas the other conceptions are better seen as means to reach this goal, or, when present, indicators of objectivity.Footnote 23

I finally discuss the criterion of value-freedom that is seen by many authors as one of the senses of “objectivity”, or at least as a necessary condition on objectivity.Footnote 24 For instance, (Reiss & Sprenger, 2020) introduce the value-free ideal (VFI): “Scientists should strive to minimize the influence of contextual values on scientific reasoning, e.g., in gathering evidence and assessing/accepting scientific theories” (p. 9). They continue: “According to the VFI, scientific objectivity is characterized by absence of contextual values and by exclusive commitment to epistemic values in scientific reasoning.” In this view that identifies objectivity with the value-free ideal, any influence of (non-scientific) values upon the content of science is ipso facto an attack on science’s objectivity and may therefore trigger alarm. To be value-free appears then to be a goal in itself for science.Footnote 25

From the current perspective, however, it is a mistake to identify the concept of objectivity with the concept of value-freedom, or to advance the slightly weaker claim that value-freedom is a conceptual component of objectivity. “Value-freedom” should be understood as a criterion (of yet undetermined strength) of objectivity, or a means to reach objectivity. In this view, the value-free ideal is a possible instrument useful for the goal of objectivity. This instrument may be used in some contexts as a means to increase objectivity, in others to tentatively identify objectivity, and in still others it may be without any application.

Stating this difference between the value-free ideal as a conceptual component of, or even as conceptually equivalent to, objectivity and as a potentially useful criterion of objectivity is not an excess of philosophical pedantry. If value-freedom is conceptually connected with objectivity, any violation of value-freedom is necessarily a violation of objectivity. If, however, value-freedom is seen as (potentially) instrumentally useful for objectivity, then the connection between value-freedom and objectivity is contingent, and any violation of value-freedom must be analyzed regarding its consequences for objectivity. This is what I shall do in some interesting cases in Section 3. First, however, I will treat a philosophical conundrum that seems to make the concept of objectivity pragmatically worthless.

2.4 Objectivity in metaphysically different contexts

The problem I want to discuss in this section is described in a representative way in (Reiss & Sprenger, 2020), Section 2. They start there with what they call “a natural conception of objectivity: faithfulness to facts” (p. 2). As I already mentioned, this is very close to what I identified as the core meaning of objectivity. However, the problem is whether this ideal of objectivity as “the view from nowhere” (Nagel, 1986) is attainable. If claims to objectivity must be based on evidence, as is the case in science, then it is hard to see how the faithfulness-to-facts claim can ever be fully established, given that observation is theory-laden and that theory choice is value-laden. Because experimental results do not “reflect the facts alone” and are therefore not “aperspectival in an interesting sense” ((Reiss & Sprenger, 2020), p. 6), it seems that the aperspectival account of objectivity has to be given up. Other authors share this concern.Footnote 26

However, there is certainly no consensus about this matter in philosophy because realists, especially scientific and structural realists, have not given up the hope that theory-free facts are epistemically accessible and that therefore a concept of objectivity that operates with such facts is not empty. But we don’t have to decide this matter here because we may, in this discussion about objectivity, take a metaphysically neutral position with respect to the spectrum of positions between realism and antirealism. The question of objectivity comes up in metaphysically different contexts. By “metaphysically different contexts” I mean contexts in which different metaphysical assumptions are made, often as a matter of course and implicitly. These contexts differ in what is taken to be real and what the properties of and the relations between these real entities are.Footnote 27 For instance, in an everyday context, the statement “There is one blue and two green sweaters in my cupboard” may count as unproblematically objective if there are indeed one blue and two green sweaters in my cupboard. This is because in our everyday understanding of physical things, there really are colored objects and if there is appropriate light, we have epistemic access to their colors. Thus, in this context, we can often check unproblematically whether a certain report is objective or not, as in the report about sweaters in the cupboard. We just have to get epistemic access to the reported objects in question, find out what their properties are, and compare them with the reported properties. Roughly speaking, if the two coincide, the report is objective, otherwise not. Of course, this procedure presupposes that we really have epistemic access to the real properties of the objects, and this is indeed the case in many situations of our everyday life. Sweaters do have colors in this context, we are able to determine them, and therefore the application of the concept of objectivity may be unproblematic.

Let us now change the context to a physical investigation of the optical properties of surfaces. Now the objects investigated may not be taken to have colored surfaces because in this context, colors are seen as secondary qualities. Instead, surfaces have a specific pattern of spectral absorption and reflection and are colorless. Thus, in this context an objective report about these surfaces must not refer to colors because they are not properties of the objects in question.

Switch now to a philosophical context in which a realist and a Kantian argue about the concept of objectivity. For the realist, reality is purely object-sided, “objective reality”, completely void of any subject-sided contributions. Therefore, an objective report must not contain anything that is genetically subject-sided. By contrast, when the Kantian gives a report on the states of some physical bodies, time and space measurements may be part of it. For the Kantian, time and space are our contributions to the constitution of physical things, in other words, they are genetically subject-sided. However, for the Kantian these genetically subject-sided contributions are not at all detrimental to the potential objectivity of the report, quite on the contrary, they are constitutive of the thinghood of the physical things in question, and therefore a necessary part of an objective report. If one replaces the Kantian by a Kuhnian, the picture changes again. The Kuhnian is, to use Peter Lipton’s apt expression, a Kantian “on wheels.”Footnote 28 For the Kuhnian, the epistemic objects of science are constituted by contributions from historically changing paradigms, that is, by genetically subject-sided contributions. An objective description of, say, an electron in classical electrodynamics therefore necessarily embeds elements of the paradigm. Again, for Kuhnians these genetically subject-sided contributions are not detrimental to the intended objectivity about the state of the electron (which is so hard to swallow for realists).

The lesson is this. Whatever the context is in which the concept of objectivity is used, the core meaning of “objectivity” is the same: objective statements etc. must only contain what derives from the object itself and not from distorting subjective influences. However, what is taken to be an object and its properties may vary from context to context. Thus objectivity, properly understood, is itself a metaphysically neutral concept that is compatible with and applicable in metaphysically very different contexts. However, when metaphysical opponents discuss objectivity and related subjects like scientific progress, the term “objective” may become virtually useless. This is because they will assess some situations differently regarding their objectivity, not due to a fundamental difference in their understanding of the abstract core concept of objectivity, but due to their incompatible metaphysical presuppositions. This is especially obvious in the discourse between (scientific) realists and their opponents (for example, instrumentalists).

Now we can come back to the discussion of the ideal of value-freedom and its connections to objectivity.

3 The ideal of a value-free science

The defense of the ideal of a value-free science became prominent at the beginning of the 20th century mainly through the work of Max Weber; recently, this discussion has picked up speed again.Footnote 29 The postulate of value-freedom for science is, however, potentially misleading because there are two well-known and widely acknowledged avenues through which values necessarily enter scientific results.Footnote 30 First, in research there is a necessary choice of a research topic and of a correlated methodology. These choices reflect values that guide the researcher. There are many reasons why someone picks a certain research topic and a correlated methodology: personal curiosity, availability of financial support, career considerations, optimal use of intellectual resources, political acuteness (possibly enforced by economic incentives), moral commitment, pressure from superiors, job description, laziness (minimal effort), personal abilities, swimming with the stream, swimming against the stream, and many more. All these reasons reflect certain values on the part of the researcher, including social and moral values, that thereby enter science. Sometimes, such values are, in contrast to the “scientific” values to be discussed in a moment, called “contextual values” (see, e.g., (Reiss & Sprenger, 2020), pp. 7–8).

The second avenue by which values enter research derives from the necessity to evaluate every step in the research process regarding its correctness or suitability and its alignment with the goal of the respective project. The relevant values are usually called “scientific” values, and they comprise values like predictive power, accuracy, explanatory power, scope, consistency, simplicity, and others.Footnote 31 Among the scientific values, one may distinguish those that are presumed to be truth or objectivity conducive, like predictive power and accuracy, from those that are rather instrumental for research, like simplicity. For example, “choose the most accurate hypothesis!” serves the purpose of getting (roughly) true results, whereas “choose the simplest hypothesis!” may make research activities more effective. The first subgroup of scientific values may be called “epistemic scientific values,” the second subgroup “instrumental scientific values.”Footnote 32 Scientific values play a role in practically all stages of research processes; they have mostly been discussed when used in situations in which theories, hypotheses, or models are evaluated, typically in a comparative way (see, e.g., (Kuhn, 1977)).

The influence of a variety of values upon science via the two avenues cannot be denied, and usually is not.Footnote 33 The important point is that this particular influence of values on a science is not seen as in conflict with the intended objectivity of that science and is therefore not banned by the ideal of value-free science. First, the influence of all kinds of values on the choice of the research topic and an envisaged methodology is unavoidable. Regarding the choice of a research topic, there is nothing like an objective or a non-objective choice.Footnote 34 Objectivity can only come into play when the research process begins, with the choice of a methodology. Already at this point, violations of objectivity can occur due to intentional falsification. For instance, it may be known that a certain methodology is incapable of seeing certain effects, and if one intends to suppress such effects, one may choose this methodology for that reason (see, e.g., (Wilholt, 2009)).

The second avenue of value influence on a science by the scientific values discussed above is not only not in conflict with objectivity, but, on the contrary, it is an operationalization of objectivity. This is immediately obvious for the epistemic scientific values: to do scientific work that is guided by these values is the attempt to achieve the ideal of objectivity;Footnote 35 the instrumental scientific values are designed to make the research process more effective. Thus, the scientific values are not an unwanted value intrusion into science, but, on the contrary, an expression of and guide to its pursuit of objectivity.Footnote 36

What is meant by the postulate that science should be value-free is therefore that “the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values.” ((Betz, 2013), p. 208) This means that such values are not allowed to play any role in the epistemic assessment of data, hypotheses, theories, and the like. Especially, it is “the prohibition of using values in place of evidence.” ((Douglas, 2009), pp. 121–122) For instance, the social value of equality must not influence empirical results about existing inequalities in some society, nor must the belief in the value of the death penalty influence the results of an empirical investigation of its effectiveness. It is obvious why such an influence must be barred: it would compromise the claim to objectivity of the respective science.

However, the apparent clarity and validity of this specified ideal of value-free science came under attack in the 1950s in an article in which the author claimed that “the scientist qua scientist makes value judgements.” (Rudner, 1953) The claim was that non-scientific values necessarily enter scientific justificatory procedures of hypotheses. (Hempel, 1965 [1960]), p. 92 called this situation the problem of the “inductive risk” of a hypothesis.Footnote 37 Within the last two decades, the discussion of the problem of inductive risk has significantly intensified, triggered mainly by (Douglas, 2000).Footnote 38

4 The problem of inductive risk

4.1 What is the problem of inductive risk?

The problem of inductive risk results from the fact that data relevant to a hypothesis never logically compel the acceptance or rejection of the hypothesis. Thus, accepting a hypothesis on the basis of given data implies the risk of accepting a false hypothesis (a ‘type-1 error’), and rejecting a hypothesis on the basis of data implies the risk of rejecting a true hypothesis (a ‘type-2 error’). There are cases in which scientists cannot simply suspend judgement regarding acceptance or rejection because of serious practical consequences of both options. On what should they base their unavoidable decision? According to Rudner, this is where comparative value judgements regarding the severity of both kinds of error necessarily enter: the acceptance/rejection decision of the hypothesis depends on them. However, the comparative assessment of the severity of possible consequences is not based on scientific values, but on social ones. Now, the case is even worse, as Heather Douglas has convincingly argued (Douglas, 2000). Her argument is, as concisely summarized by (Reiss & Sprenger, 2020), p. 10

that the “acceptance” of scientific theories is only one of several places for values to enter scientific reasoning, albeit an especially prominent and explicit one. Many decisions in the process of scientific inquiry may conceal implicit value judgments: the design of an experiment, the methodology for conducting it, the characterization of the data, the choice of a statistical method for processing and analyzing data, the interpretational process of findings, etc. None of these methodological decisions could be made without consideration of the possible consequences that could occur.

Some authors believe that this is “one of the most forceful arguments for the inevitability of value-judgements within scientific research” ((Wilholt, 2009), p. 94). Note that we are talking about non-scientific social values here, and that their influence is not of the (apparently) harmless and accepted kind regarding the choice of research topics, discussed in Section 3. Rather, the claim is that social values co-determine the fate of hypotheses in science; they are operative in the context of justification, and this is where they are, according to the ideal of value-free science, forbidden.Footnote 39
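The structure of the argument can be made concrete with a minimal decision-theoretic sketch (my own illustration, not a formalization found in Rudner or Douglas). Suppose the evidence confers probability $p$ on a hypothesis $H$, that wrongly accepting $H$ would cause a loss $L_{\mathrm{acc}}$, and that wrongly rejecting $H$ would cause a loss $L_{\mathrm{rej}}$. A scientist who must either accept or reject will accept only if the expected loss of acceptance is the smaller one:

$$(1-p)\,L_{\mathrm{acc}} < p\,L_{\mathrm{rej}} \quad\Longleftrightarrow\quad p > \frac{L_{\mathrm{acc}}}{L_{\mathrm{acc}} + L_{\mathrm{rej}}}.$$

The evidential threshold for acceptance thus depends directly on the comparative severity of the two possible errors, and it is precisely this comparative severity that is assessed by social rather than scientific values.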

Naturally, this alleged problem of inductive risk provoked the defenders of the ideal of value-free science. There were several objections.

4.2 Attempted rebuttals of the problem of inductive risk

One strategy to neutralize the impact of the argument from inductive risk on the value-freedom of science was presented by (Jeffrey, 1956).Footnote 40 Jeffrey denied a fundamental presupposition of the alleged problem of inductive risk, namely, that scientists indeed accept or reject hypotheses. Instead, “the activity proper to the scientist is the assignment of probabilities (with respect to currently available evidence) to the hypotheses.” ((Jeffrey, 1956), p. 237) The acceptance and use of hypotheses does not belong to the scientist’s job but is left to the policy maker. As (Levi, 1962), p. 48 summarizes the position,

scientists never accept or reject hypotheses but merely assign degrees of confirmation to them. These assignments serve as guides to the policy maker in deciding on optimum policies for realizing goals.

If that view is correct, the value-free ideal does not come under pressure from the problem of inductive risk, whose existence is not denied, because the problem’s impact is felt only in the context of application, which lies outside of science.

The principal flaw of this argument lies in the fact that scientific hypotheses are not only “applied” outside of science, where the “policy makers” have their say, but also inside science, and this on a perfectly regular basis. As a matter of course, hypotheses are applied in the design of new experiments. For example, the hypothesis that the genetic material is made of DNA was “applied,” or rather its validity presupposed, in all experiments that tried to decipher the genetic code.Footnote 41 However, the damage produced in such cases by a false hypothesis will remain inner-scientific, mostly consisting in the fact that an experiment based on the false hypothesis will not work. But there are also examples from purely scientific contexts in which the inductive risk of a hypothesis is indeed evaluated based also on extra-scientific values. Take an experiment that is planned for epistemic reasons. Imagine that the experiment is practically feasible, but its performance may possibly produce serious damage, like an explosion. Before performing the experiment, one should therefore confirm the hypothesis that the experiment will, in fact, not produce the feared damage. The acceptable level of inductive risk of this hypothesis will depend on the amount of potential damage: the higher the damage, the lower the probability of its occurrence must be. It does indeed make a difference whether the maximal damage of the performance of the experiment consists in a broken Erlenmeyer flask or in blowing up the whole lab. Clearly, the evaluation of the undesirability of the damage is based on social values, not on scientific ones. Thus, social values enter the decision about the performance of experiments in pure science. The level of the acceptable inductive risk of the hypothesis that the experiment will not cause damage will depend on the severity of the potential damage. Note that the damage is not the sort of damage that Jeffrey discusses—damage caused by the application of a scientific hypothesis outside of science. Jeffrey does not seem to be aware that the problem of inductive risk also arises within science, even in cases of pure research far away from any possible application outside of science.

Here is a drastic example from application-oriented basic science. On July 16, 1945, a scientific experiment was performed at Alamogordo in New Mexico. It was the “Trinity Test” of the first atomic bomb ever built. The purpose of the experiment was to find out whether the bomb design worked and what the details of the explosion were. The danger was that this explosion might trigger a nuclear fusion reaction of the Earth’s atmosphere, transforming the atmosphere into a nitrogen fusion bomb, leading to complete destruction of the Earth in a nuclear fireball. Regarding this possibility, Edward Teller is quoted as saying “[t]his kind of thing had to be ruled out beyond a shadow of a doubt” ((Blumberg & Owens, 1976), p. 118). Or Oppenheimer: “Better be a slave under the Nazi heel than to draw down the final curtain on humanity.” (ibid., p. 117) However, when the issue was first discussed, “[n]o one [of the physicists who were there] considered the mathematical possibility higher than about one in three million. That would be a safe bet in any other enterprise, but such odds would be disturbingly low in the face of the consequences.” (ibid., p. 117) In other words: in all other situations in physics, the normal standard was to accept a hypothesis if the inductive risk of its falsehood was less than one in three million. In the case of the hypothesis that the fission reaction of the bomb will not ignite a fusion reaction of the Earth’s atmosphere, the probability of one in three million for its falsity was not good enough, because of social values. This is a clear case in which the potential damage influenced the level of the acceptable inductive risk.Footnote 42 This demonstrates that Jeffrey’s strategy to ban inductive risk considerations from science and delegate them to the policy makers does not work.Footnote 43

Another well-known objection against Rudner’s argument was published a little later than Jeffrey’s objection, in (Levi, 1960). The title question of this paper, “Must the scientist make value judgements?”, is answered in the negative. The result of Levi’s argument is that

given [the scientist’s] commitment to the canons of inference he need make no further value judgments in order to decide which hypotheses to accept and which to reject (p. 356).

We do not have to analyze Levi’s arguments because, as we have seen, there are absolutely compelling counter-examples against his conclusion: scientists must evaluate the dangerousness of experiments with inner-scientific aims and must use non-scientific values for this purpose.

The weakness of both Jeffrey’s and Levi’s arguments derives from the fact that they only consider the kind of hypotheses that scientists usually try to confirm or disconfirm about the subject matter their research field is concerned with. Both authors disregard the fact that scientists not only try to assess such substantive hypotheses of their research field, but also hypotheses about the course of events when an experiment is set in motion. Clearly, they must evaluate an experiment not only with respect to its epistemic utility, but also with respect to the damage that the performance of the experiment may cause. The latter evaluation is guided by social values.

Therefore, it seems undeniable that social values indeed enter the heart of even pure science. Jeffrey wanted to neutralize this by pushing the problem out of science to the policy makers; Levi denied it altogether. As these strategies are demonstrably inadequate, the problem of inductive risk is real and will stay. Rudner opined that if it is correct that “Scientists qua Scientists make value judgments […], then we are confronted with a first order crisis in science & methodology” ((Rudner, 1953), p. 6). Why should the necessary use of non-scientific values by scientists qua scientists wreak havoc on science?

5 The problem of inductive risk and the objectivity of science

Rudner is very explicit about the latter point:

The positive horror which most scientists and philosophers of science have of the intrusion of value considerations into science is wholly understandable. Memories of the […] conflict between science and, e.g., the dominant religions over the intrusion of religious value considerations into the domain of scientific inquiry, are strong in many reflective scientists. The traditional search for objectivity exemplifies science’s pursuit of one of its most precious ideals. ((Rudner, 1953), p. 6)

It is obvious that for Rudner, the intrusion of social values into science challenges the objectivity ideal of science, and knowledge that is as objective as possible is what science is all about. In other words, Rudner takes for granted that the ideal of value-freedom is a necessary component of the ideal of objectivity (also (Rudner, 1953), p. 2): any attack on the value-freedom ideal is ipso facto an attack on the ideal of objectivity. Loosely speaking, one might say that he “identifies” the ideal of objectivity with the ideal of value-freedom, or that the ideal of value-freedom expresses the ideal of objectivity. However, neither (Jeffrey, 1956) nor (Levi, 1960) uses the term “objectivity.” Levi instead often refers to the “value-neutrality thesis” that he defends. We can safely assume that Levi, too, takes the value-neutrality thesis to be an essential component of the objectivity ideal.

We can now come back to our earlier discussion on the relation between the value-free ideal and the scientific goal of objectivity (Section 2.3). There are two principal options. According to the first option, there is a conceptual nexus between the ideal of objectivity and the value-free ideal. The value-free ideal is then either a kind of objectivity, or an explication of objectivity, or at least a conceptual component of objectivity. In this view, any deviation from the value-free ideal is then necessarily a deviation from objectivity. According to the second option, the value-free ideal is an indicator (of yet undetermined strength) of objectivity, or a means to increase objectivity. The connection between objectivity and value-freedom is then seen as contingent. In this view, any deviation from the value-free ideal must be investigated with respect to its consequences for objectivity.

Clearly, Rudner, Jeffrey, and Levi, together with a plethora of other authors, adhere to the first option. For them, value neutrality appears itself to be a goal of science (as it is taken to be a conceptual part of the ideal of objectivity). Therefore, they are alarmed by any value intrusion into science. One strategy to avert this damage is to deny the intrusion of social values into the justificatory core of science. Jeffrey pushes the operation of social values to the application domain, away from science proper. Levi denies the intrusion of social values altogether: in their core business, scientists just use their professional standards, unimpressed by possible external damage. For defenders of the value-free ideal, this reaction seems natural; whether it is successful is another question.Footnote 44

However, I have argued for the second option regarding the connection between objectivity and the ideal of value-freedom (Section 2.3). In this view, the value-free ideal is an instrument potentially useful to achieve the goal of objectivity. Therefore, we will have to investigate what the effect of the problem of inductive risk on the objectivity of science is. How much damage does the problem of inductive risk inflict on the ideal of the objectivity of science? Does the dependence of the level of acceptable inductive risk on the anticipated severity of possible damage (social value) influence the operation of scientific values in such a way that their intended function of objectivity generation is jeopardized?

For simplicity, let us look at an example, the Trinity case. If the physicists had just been doing any other normal scientific job, and no one “considered the mathematical possibility higher than about one in three million” for a type-1 error, that “would be a safe bet in any other enterprise.” (Blumberg & Owens, 1976, p. 117) In other words, this would be the usual professional standard physicists apply to avoid a type-1 error. However, in the given case “such odds would be disturbingly low in the face of the consequences.” (ibid.) Thus, the usual professional level of trustworthiness of a scientific hypothesis is not high enough, given the potentially dire consequences of a type-1 error. In this case, therefore, the influence of the social values (avoid the annihilation of the whole Earth!) forced the physicists to increase the level of what one of the relevant epistemic values demanded. In other words, the effect of anticipated potential damage (social value) on the scientific values is not interference with their intended function, but, on the contrary, enforcement of them! To put it simply, the more dangerous an inner-scientific action is, the more careful scientists must be regarding their predictions that no damage will occur, or, in other words, the more strongly they must enforce the objectivity-conducive scientific value(s). Douglas gets this point exactly right when she states, following (Heil, 1983), that in cases of inductive risk “we can and do have legitimate motives for shifting the level of what counts as sufficient warrant for an empirical claim” ((Douglas, 2009), p. 97); she calls this sort of influence of non-epistemic values on cognitive values their “indirect role” in science (ibid., pp. 96–98, 103–108).
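The Trinity case can be connected to the decision-theoretic sketch given in Section 4.1 above (again, my own rough illustration, not a reconstruction of the physicists’ actual reasoning). If the loss attached to wrongly accepting the no-ignition hypothesis, $L_{\mathrm{acc}}$, is the destruction of the Earth, then the acceptance threshold $L_{\mathrm{acc}}/(L_{\mathrm{acc}}+L_{\mathrm{rej}})$ lies extremely close to 1, so the tolerable probability of a type-1 error,

$$1 - p \;<\; \frac{L_{\mathrm{rej}}}{L_{\mathrm{acc}} + L_{\mathrm{rej}}},$$

must fall far below the one in three million that “would be a safe bet in any other enterprise.” The social valuation of the potential damage thus does not replace the evidential standard but pushes it upward, which is exactly the “indirect role” of values described by Douglas.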

Note that this kind of influence of social values can only increase the threshold at which scientists are willing to accept a hypothesis, not decrease it. Scientists will not accept a hypothesis for which they believe not enough evidence is available just on the grounds that they might otherwise miss some beneficial applications of the hypothesis. The medical sciences provide ample illustration of this fact. There are medical treatments that have not yet been approved by the relevant regulatory authority because it is not yet known whether they fulfill the scientific standards of efficacy and safety. Nevertheless, medical doctors may sometimes use these treatments “off-label” because of a lack of alternative treatments. Of course, these physicians are aware of the missing scientific backing of the treatment, and they must get informed consent from their patients. In such cases, everyone knows that with that treatment one leaves the confines of science. However, scientists will accept, within their science, increased cognitive demands on a hypothesis whose potential falsehood would create great damage. This holds not only for the complete annihilation of the Earth, but also for comparatively trivial things like blowing up a whole lab or underestimating the medical or ecological dangerousness of some substance. In other words, the influence of social values on science in the relevant situations discussed here is not in conflict with the sciences’ goal to strive for objectivity. On the contrary, this influence only reinforces the sciences’ quest for objectivity by demanding a higher level of reliability than usual.

(Rudner, 1953) has brought to our attention that in some situations “the scientist qua scientist makes value judgements”. This has been widely perceived as a threat to the objectivity of science because of the identification of the value-free ideal with the scientific goal of objectivity. As soon as one realizes that this is not the only possible conceptualization of the relation between the value-free ideal and the goal of objectivity, and that the relation can also be seen as instrumental, the whole picture changes. Then one can investigate what the effect of this particular influence of social values is, and one does not have to immediately identify this influence as a violation of objectivity. The apparently paradoxical result is that under such an influence of social values, science is forced to increase its level of objectivity.

The mode of investigation just practiced, examining how the influence of values bears on the ideal of scientific objectivity, should not surprise us. The same mode of investigation was practiced in the discussion of the role of non-scientific values in topic selection and of the role of scientific values in general (Section 3). In such investigations, it is asked whether some particular value is detrimental to, neutral toward, or supportive of the objectivity goal of science, and it is then assessed accordingly.

6 An overlooked threat to the objectivity of science

We have seen in the previous discussion that, contrary to its common perception, the problem of inductive risk is not a threat to the objectivity of science. However, one important aspect potentially impairing the objectivity of science has, to the best of my knowledge, not been addressed in the most widely discussed writings on the objectivity of science. This aspect threatens the objectivity of science sometimes avoidably, sometimes unavoidably.

Like many other authors, I have claimed in Section 2.3 that the choice of a research topic, based on whatever values, cannot by itself collide with the postulate of objectivity. This, however, is only true if one considers research projects individually. If one aggregates the topics of research projects in some scholarly discipline, there may be such an imbalance that the aggregated results are deeply misleading about the subject matter of that discipline, even though every single investigation completely accords with the canons of objectivity.

Historiography provides an example of a discipline that realized that a systematically constrained selection of research topics had resulted in an imbalanced view of its subject matter. After the establishment of a canon of research methods for historical research in the early 19th century, historiography was centered on political history for much of that century. Towards the end of the 19th century, some historians began to doubt whether a historiography focusing on politicians, diplomats, and military leaders represented history in an objective way (see, e.g., (Iggers, 1983, 1984)). Instead, or in addition, anonymous social processes should be investigated, partly transforming historiography into a social science. During the 20th century, various such enterprises were launched in various countries, leading to new subdisciplines of historiography such as the history of mentalities, social history, cultural history, Alltagsgeschichte, and many more. From their points of view, the older tradition of politics-centered historiography as a whole did not objectively represent its subject matter, history, although every single historical investigation of a politician or a war may have been as objective as possible.

Similar distortions of objectivity can also be found in other disciplines, especially the social sciences. For instance, “feminists have detailed the historically gendered participation in the practice of science—the marginalization or exclusion of women from the profession and how their contributions have disappeared when they have participated.” ((Crasnow, 2020), p. 1) Likewise, in managerial science there are arguably far more marketing studies that aim to help producers optimize production and distribution than investigations that help consumers optimize consumption; marketing research may be a particularly compelling example of this imbalance.Footnote 45 The problem with this kind of distortion is that nobody is responsible for it. Whereas distortions of objectivity in any single investigation can be ascribed to its authors, the lopsidedness of a discipline or sub-discipline is only ascribable to the respective “scientific community,” a notoriously fleeting entity that is certainly not capable of being held accountable. In addition, in cases like that of historiography just discussed, the distortion of objectivity by omitted perspectives on a certain subject matter must first be discovered, and this may be a protracted scientific process.

However, there are also potential distortions of objectivity that are willfully introduced by scientific communities and societies as a whole and that are also based on non-scientific values. I am referring to research constraints introduced for moral reasons. There are numerous moral constraints on biological, ecological, medical, and psychological experimental research, which prevent the execution of many studies. For example, consider medications for children:

Many routinely prescribed medications have not been well studied in the pediatric population. This is particularly true for critically ill newborns, where up to 75% of the medications used have never been adequately studied.Footnote 46

This lack of studies leads to gaps in knowledge that may, when aggregated, result in a tilted representation of the respective subject matter as a whole, similar to the cases discussed above. As long as we keep to our moral standards, this kind of lopsidedness cannot be corrected and is therefore unavoidable. This substantive influence of non-epistemic values on science is only rarely discussed in connection with the objectivity of science and the value-free ideal, but it belongs in that discussion.

7 Conclusion

Given our analysis, it is clear that scientific and non-scientific values influence all sciences in various ways. The central question is therefore not whether values influence science, but how these values relate to the ideal of scientific objectivity; this is how the problem should be framed. Some of the values are constitutive of the objectivity-seeking scientific research process, namely, the scientific values. The influence of non-scientific values is harmless when they determine the choice of individual research projects. It is also harmless with respect to objectivity when inductive risk comes into play, because in such cases (dangerous experiments or applications) non-scientific values can only raise the standards of objectivity. The influence of non-scientific values is not harmless when the choices of research topics become so one-sided that a whole (sub-)discipline becomes lopsided. Every single investigation in such a situation may be objective, yet the (sub-)discipline as a whole may represent the research domain in a tilted or distorted way. This non-objectivity due to aggregation may be avoidable, at least in principle, in cases in which certain identifiable non-scientific interests dominate research agendas. Such non-objectivity may, however, also be at work without being intended, namely, when the contemporary epistemic situation narrows the scientific horizon. This kind of non-objectivity may be diminished in the longer run once scientists become aware of it. The final kind of non-objectivity due to aggregation results from moral constraints. Moral considerations prevent experimental research in many areas, not only on human subjects. We can only hope that in the longer run the resulting lopsidedness of the respective disciplines can be corrected by alternative, morally admissible methods.