The argument from inductive risk has become widely accepted as good reason to reject the value-free ideal. The literature that follows is then focused on where inductive risk judgements are required and whose values ought to determine them. The purpose of this paper is twofold. Firstly, I offer motivation for aiming at the value-free ideal in at least some areas of science, and therefore for avoiding inductive risk. To do so I show that there is a tension between principles in medical ethics and value encroachment because of inductive risk. Secondly, I offer a renewed defence of Jeffrey’s response to the argument from inductive risk. By appealing to theories in epistemology about rational belief modelling, I argue that the Bayesian belief model offers a suitable alternative to current belief modelling in science, despite criticism that is either explicit in the literature or fairly expected to arise.

This paper offers a renewed motivation and defence of the Bayesian response to the argument from inductive risk. First proposed by Rudner in 1953,Footnote 1 the argument from inductive risk is thought to be one of the most damning arguments against the realisability of the value-free ideal of science. However, as I show throughout this paper, the argument from inductive risk relies on faulty assumptions about the doxastic aims of science and the epistemic models that are most appropriate for scientists to follow. I put both into question and offer normative motivation for avoiding what I call “value encroachment because of inductive risk”. My concern is broadly that there is a tension between the respect for value plurality in medicine and value encroachment in biomedical sciences.

Having motivated the project of avoiding value encroachment, I then follow Jeffrey (1956) in offering a Bayesian model of belief as both a suitable alternative to current epistemic practices in science and an effective model for avoiding inductive risk gaps and therefore value encroachment. I offer an updated defence of Jeffrey by managing the second-order concern that both Rudner and Jeffrey raise against the probabilistic approach.

I acknowledge, however, that the Bayesian position is not without other controversy, and I identify and respond to some of what I take to be the relevant concerns raised against the Bayesian in both the epistemology and the philosophy of science literature. In Sect. 5 I address the concern that functioning only with credences is overly demanding on finite epistemic agents, i.e. the cognitive load argument. In Sect. 6 I address Douglas’ argument that the argument from inductive risk applies to pre-results stages in science. In Sect. 7 I address the concern that even the Bayesian method will be value-laden because of value encroachment when choosing priors.

While I take myself to successfully address these concerns, I ultimately concede that scientific communication might in some cases be subject to the argument from inductive risk in a way that scientific belief construction is not. I take it that further research is needed to determine in what cases the Bayesian approach will suitably avoid the argument from inductive risk and where it will not.

1 Inductive risk

The argument from inductive risk (AIR) is broadly concerned with the risk of error in accepting or rejecting a hypothesis when there is insufficient evidence in favour of or against the hypothesis. The argument is deployed to establish the necessity of value encroachmentFootnote 2 in science and is taken to be one of the most damning arguments against the value-free ideal.

The AIR follows from a general epistemic tension in science. It is both natural to suppose that science has the doxastic goal of knowingFootnote 3 (and therefore outright believing) and natural to think that scientific inquiry aims at accuracy. Additionally, it is commonly accepted that scientific practice involves accepting or rejecting hypotheses and that this is central to knowledge generation. However, it has also been largely accepted that evidence produced by scientific inquiry is often insufficient insofar as it is inconclusive. In other words, there is always some uncertainty as to whether the hypothesis is true given the evidence available. So, if scientists want to reach their doxastic goals by accepting and rejecting hypotheses, they have to manage this epistemic uncertainty. It is in scientists’ attempt to manage uncertainty that Rudner claims “the scientist qua scientist makes value judgements” (1953, p. 2).

The intuitive base of Rudner’s argument can be summarised as: “how sure we need to be before we accept a hypothesis will depend on how serious a mistake it would be” (Rudner, 1953, p. 2). That is to say, value judgements about the consequences of accepting or rejecting a hypothesis will determine what level of confidence we must have in a hypothesis before we make a definitive judgement on it. To take Rudner’s own “crude but not unimaginable” (ibid.) example, “if the hypothesis under consideration were to the effect that a toxic ingredient of a drug was not present in lethal quantity, we would require a relatively high degree of confirmation or confidence before accepting the hypothesis—for the consequences of making a mistake here are exceedingly grave by our moral standards” (ibid.). So, the level of confidence required to justify the acceptance of a hypothesis will be at least partly determined by value judgements about the possible consequences of wrongly accepting or rejecting it.

This claim is intuitively compelling but, at this stage, vague and riddled with ambiguities. What is it that ‘level of confidence’ refers to? Whose values ought to determine the value judgements? Does this apply to all science? I will address the latter two questions in the next section but for now let me point out that Rudner himself is non-committal in his handling of the first question.

In Rudner’s words, “the scientist must make a decision that the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis” (ibid., emphasis added)—with the operative word here being “or”. Rudner is presenting us with two interpretations of ‘level of confidence’; one concerned with how strong the evidence in favour of the hypothesis is and one concerned with how likely it is that the hypothesis is true. In contemporary terms, this looks like the distinction between the significance level of the evidence, on the one hand, and the credence level that it warrants, on the other. However, Rudner tends to run the two together and seems unaware that a statistically significant result from a study does not offer us any indication of the actual likelihood that a hypothesis is true (Bird, 2021; Papineau, 2018).

However, in his defence, he also reasonably claims that the AIR applies to either interpretation of ‘level of confidence’.Footnote 4 Rudner is right that both will be subject to the AIR, understood as an argument about scientists’ actual practice of attempted inference, so long as we take both as possible measures for grounding outright belief and accept that neither measure will ever provide us with conclusive results. In each case, the scientist is still saying the level of confidence is enough to justify accepting the hypothesis and using value judgements to determine what constitutes ‘enough’.

I start my criticism of the AIR by questioning the doxastic aim itself. In doing so I put into question Rudner’s claim that “rational reconstruction of the method of science must comprise the statement that the scientist qua scientist accepts or rejects hypotheses” (p. 4). This then opens the way to thinking about alternative doxastic systems for scientists to follow and asking whether they bypass the AIR.

It is worth noting that the inductive risk literature has significantly developed since Rudner. Much of the literature takes the AIR to convincingly tell us that the value-free ideal is unrealisable and, instead of contesting it, tries to smooth out some of the details and ambiguities in the argument itself. Is the AIR best understood in terms of belief, decision, assertion or something else entirely? (John, 2015; Steele, 2012; Ward, 2021). Is value encroachment because of the AIR just an inevitability because of epistemic uncertainty in science or do scientists have a moral responsibility to consider the value implications of their research? If so, what are the acceptable and unacceptable roles of values in science? (Douglas, 2000, 2009, 2016; Elliott, 2011). Further research has also used the AIR to point out where values play a role in specific fields of science (Biddle, 2016; Frank, 2019; Stegenga, 2017).

It is not possible to address all these questions here. Instead, I note the developments in the literature to acknowledge that what I say here might not fit well with some interpretations of the AIR. For example, some may not be convinced that the AIR is concerned with the epistemic dimensions of science and instead just with the communication of science in policy advisory environments and so be unsatisfied with my focus on scientific belief formation.

My aim here is to build on the response to Rudner that was first offered by Jeffrey in 1956 and has received some defence since (Betz, 2013). This response says that the AIR can be avoided if we follow the Bayesian and claim that the scientist ought to work with probabilities, not outright beliefs. Defences of this position in the literature have fallen short, and so my aim here is to offer a new defence of it. To motivate this project, I shall first show how the AIR leads to a species of value encroachment that is in tension with medical ethics. After that I shall show how the Bayesian option offers a way around the AIR.

2 Whose values? My values!

One of the questions that Rudner’s presentation of the AIR raises is: whose or what values ought to inform the value judgements of scientists? One answer that is subtly implied when Rudner claims “the scientist qua scientist makes value judgements” is that it is the scientists’ personal values that determine when they are in a strong enough epistemic position to accept or reject a hypothesis.Footnote 5

Now of course, if there are no consequences from accepting a hypothesis, there is no risk in the relevant sense from accepting it. So, the AIR is assuming, in addition to the assumption that scientists are in the business of accepting and rejecting hypotheses, that what scientists accept has consequences. Considering the guiding role of belief in action, it is fair to assume that beliefs do have consequences. If I believe it is going to rain today, then I am likely to take an umbrella. It is not a necessary relation – I can take an umbrella without fully believing it is going to rain just as I can fully believe it is going to rain without then deciding to take an umbrella. But, it is often thought that beliefs play an important role in our practical reasoning, i.e. deciding what to do.Footnote 6

The concern I have with the AIR then is how someone else’s values will play a role in determining what we believe given the role that beliefs then play in our personal decision making.Footnote 7 When spelt out like this, the claim that scientists’ personal values ought to be employed to make inductive risk judgements seems concerning, as the implication is that people might act according to beliefs that are laced with scientists’ values. If so, it will not only be their own values guiding their decision making.

It is worth noting that Douglas (2000, 2009, 2016), who has discussed the problem of ‘whose values’ in relation to the AIR at length, is similarly subject to my concern. She is concerned with the scientists’ personal values encroaching on the output of science, specifically on policy making. She rightly argues that policy decisions should be made by democratically elected officials, not people with science PhDs hiding in a lab who likely do not represent the public’s values. However, confronted with the concern that scientists’ personal values might encroach on science, she does not, as I will, argue against the AIR but instead argues that scientists ought to employ values that are reflective of the public. She attempts to avoid the ethical concerns we might have with the AIR by arguing that it shouldn’t be the scientists’ idiosyncratic dislike of, say, plastic straws that influences how much inductive risk is tolerable in climate science, but rather democratically determined values.

Putting to one side the practical questions about how we can establish values in a democratic way, the concern that I have with Rudner’s proposal can be extended to anyone who argues that democratic values ought to guide inductive risk judgements. After all, not all values will be represented by a democratic process and people with different personal values might rightly feel theirs are the ones that ought to guide decision making. Many cases in medicine are clear examples of this.

For example, consider the following case. Say there is a series of trials investigating the connections between caffeine and heart failure. The group of scientists leading the investigations have a full cohort of 100 people, 50 of whom are given high levels of caffeine over a set period of time and 50 of whom are given a low level. Their hearts are then monitored for abnormalities throughout the course of the trial and differences between the groups at the end of the trial are assessed as evidence regarding the hypothesis (h) “high levels of caffeine increase the rate of heart palpitations”. The results show that of the 50 in the control group, 8 presented with palpitations whereas in the active group, 20 presented with palpitations. According to this evidence, should we accept and believe h? Well, according to the Rudnerian position, it depends on the evaluation of the consequences of falsely accepting it or falsely rejecting it. Of course, how these consequences are evaluated will differ from individual to individual and, accordingly, so will the threshold that has to be passed for a ‘high enough level of confidence’ to be established.

So, say the threshold for accepting h is set at x by a group of scientists (where x can refer either to 1 − p, where p is a p-value generated from a statistical significance test, or to a degree of confidence in h, where full confidence is 1 and no confidence is 0). However, given the chance, an athlete who requires good heart health to succeed in their career would set it at x − n (lower, so that h is easier to accept). Alternatively, a barista who is not so concerned with heart health but is concerned with their ability to drink coffee would set it at x + n (higher, so that h is harder to accept). The stakes for these two individuals in persisting with drinking coffee if h is true are very different, and so too is their willingness to accept h given the implications of accepting h for their actions. Where the decision is not in their hands, and the willingness to accept the consequences of accepting h is not up to them, someone else is making value judgements on their behalf.
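To make the threshold-dependence concrete, here is a minimal Python sketch of the case above. It is my own illustration rather than anything in Rudner: it computes a p-value for the hypothetical 8/50 versus 20/50 results with Fisher's exact test, reads 1 − p as one version of Rudner's 'level of confidence', and checks it against three purely illustrative thresholds (the values of x and n are arbitrary placeholders).

```python
# Illustrative sketch only: how the same evidence can yield different
# accept/reject verdicts depending on where the acceptance threshold sits.
# The trial counts come from the hypothetical caffeine case above; the
# threshold values x and n are arbitrary placeholders.
from scipy.stats import fisher_exact

# 2x2 table: [palpitations, no palpitations] per group
table = [[20, 30],   # active group (high caffeine): 20 of 50 with palpitations
         [8, 42]]    # control group (low caffeine): 8 of 50 with palpitations

_, p_value = fisher_exact(table, alternative="two-sided")
confidence = 1 - p_value  # one reading of Rudner's 'level of confidence'

x, n = 0.95, 0.04  # hypothetical thresholds
for label, threshold in [("scientists (x)", x),
                         ("athlete (x - n)", x - n),
                         ("barista (x + n)", x + n)]:
    verdict = "accept h" if confidence >= threshold else "do not accept h"
    print(f"{label}: threshold {threshold:.2f} -> {verdict} "
          f"(confidence {confidence:.3f})")
```

Nothing in the data changes between the three runs; only the value-laden threshold does.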

The upshot of spelling out this case is that if we want to respect individuals’ autonomy over what they believe and do, then we ought to aim for science that does not include value-based justifications in the way that Rudner presents.Footnote 8

Further, in spelling out the implications of the AIR for the field of medicine, I offer normative motivation for questioning the AIR and the inevitability of inductive risk. As mentioned, to question the AIR we can first put into question the doxastic aim of establishing outright belief. In what follows, I establish that there are suitable alternative doxastic aims in science and argue that there are no compelling reasons to aim for outright belief in science. Thus, we can avoid the Rudnerian position. I appeal to the broader epistemology literature to do so.

3 The Bayesian alternative so far

Jeffrey in his 1956 response to Rudner questioned Rudner’s understanding of proper scientific practice. Unlike Rudner, who thinks that the activity proper to scientists is accepting or rejecting hypotheses, Jeffrey argues that “the activity proper to the scientist is the assignment of probabilities” (1956, p. 237). Taking this broadly Bayesian position, Jeffrey is making an alternative claim to Rudner about the doxastic aims of scientists. I argue that Jeffrey is on the right track in restating the doxastic aims of scientists but does not go far enough in showing how the “assignment of probabilities” can avoid inductive risk gaps and therefore the AIR.

The problem with Jeffrey’s claim was actually pre-empted by Rudner in his original paper and later recognised by Jeffrey. As Rudner points out, “for the determination that the degree of confirmation is say, p, or that the strength of evidence is such and such, which is on this view being held to be the indispensable task of the scientist qua scientist, is clearly nothing more than the acceptance by the scientist of the hypothesis that the degree of confidence is p or that the strength of the evidence is such and such” (1953, p. 4). Jeffrey describes this as the weightiest concern against his position, to the point that it might be “fatal” (1956, p. 246) for his argument that a probabilistic approach can avoid the AIR. However, I argue that Jeffrey spoke too soon.

Rudner is, in a sense, right. By moving to a probabilistic approach in which someone is still accepting or rejecting the likelihood of something, we just move the inductive risk gap to another place, i.e. to the risk of accepting or rejecting the probability function of h. However, this is not strictly the Bayesian’s position. Simply put, the Bayesian is not in the business of accepting probability functions, or indeed of accepting anything at all. In what follows, I spell out the Bayesian position via the literature in Bayesian epistemology. I show how it can be used to refute the AIR and avoid the Rudnerian concern against Jeffrey. Thus, I offer an updated defence of Jeffrey’s response to Rudner.

First, it is worth noting that there have been a few attempts in the inductive risk literature to defend a Jeffrey-like position. Most notably, Betz (2013) attempts a defence of Jeffrey with reference to the Intergovernmental Panel on Climate Change (IPCC)’s management of uncertain evidence. The IPCC does aim to communicate uncertainty in terms of probabilities, which leads Betz to claim that it is an example of Jeffrey’s position in practice. However, its method for communicating uncertainty is to place probability functions into probability intervals, i.e. ‘very likely’, ‘not likely’, etc. As Frank (2017) points out in response to Betz, this practice does not evade inductive risk, and therefore value encroachment, but instead requires that the climate experts make inductive risk judgements about whether evidence is deemed sufficient to claim something is ‘likely’. Individuals still have to determine whether evidence is sufficient for making a categorical claim; the claim is just no longer in the form of outright belief. So, as Frank rightly points out, neither the IPCC’s practice nor Betz’s argument evades inductive risk judgements. A similar argument is made by Steele (2012), who argues that Jeffrey’s position offers insufficient guidance for science policy advising generally. She too opts for using probability intervals in science advising but, unlike Betz, acknowledges that this will involve some value judgements.

In what follows, I offer a renewed defence of Jeffrey’s position with reference to literature in epistemology.

4 Bayesian epistemology and the Bayesian position revitalised

Bayesian epistemologists are concerned with the norms that govern belief formation and have stirred significant discussion in the epistemology literature (Clarke et al., 2022; Jackson, 2020; Kaplan, 1996; Staffel, 2019; Sturgeon, 2020). According to the Bayesian, beliefs should only come in degrees, such that our doxastic dispositions towards propositions track our level of confidence in that proposition being true. Underlying the Bayesian claim is the assumption that there is always uncertainty regarding the truth of a proposition. Our doxastic attitudes ought to reflect this.

In technical terms, belief states should be understood in terms of credences that represent the degree of belief you have towards a proposition. The credence (cr) you afford to a proposition (p) can be described as how sure you are of p: cr(p) = n, where n is in the unit interval [0,1]. Importantly for us, according to the Bayesian a rational epistemic agent ought to aim for an informed degree of belief regarding a proposition, not an outright belief. For example, the rational degree of belief to have in the proposition that a fair coin will land on heads, where 0 is to outright believe ~p and 1 is to outright believe p, is 0.5. Thus, when tossing a coin, my credence that the coin will land on heads is 0.5.Footnote 9

Following then another principle of Bayesianism, one’s credences ought to be continually updated as new evidence becomes available using the following calculation, known as Bayesian conditionalizationFootnote 10:

$$c = P(H \mid E) = \frac{P(E \mid H) \cdot P(H)}{P(E)}$$

where H is the hypothesis and E is new evidence.

So, the probability of H given E depends on both the prior probability of H and how likely the new evidence is given H. One’s credence ought then to match this new probability.

Following this, when establishing a belief state regarding a proposition, I will look at the evidence at any given time, determine how likely it is that the proposition is true and then match my credence to that likelihood. For example, I might have a credence of 0.4 that it is going to rain tomorrow, or a credence of 0.9 that caffeine increases the risk of palpitations. As new evidence becomes available, I will continue to update my credence using the calculation above. What is crucial is that at no point am I aiming to convert my credence into an outright belief, and so at no point am I asking what threshold of high enough credence (i.e. high enough level of confidence) would justify adopting an outright belief. Therefore, one is not confronted with inductive risk gaps.
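As a minimal sketch of a single conditionalization step (the likelihood figures below are invented purely for illustration and correspond to no actual study), the following Python snippet updates the prior credence of 0.9 in the caffeine hypothesis on one new piece of evidence, using the law of total probability for P(E) and Bayes’ theorem for the posterior.

```python
# Minimal sketch of one Bayesian conditionalization step.
# All numerical inputs are invented placeholders for illustration only.

def conditionalize(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) from a prior and the two likelihoods."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)  # total probability
    return p_e_given_h * prior_h / p_e                             # Bayes' theorem

# Hypothetical: prior credence 0.9 that caffeine increases palpitation risk,
# and a new trial result judged three times as likely if the hypothesis is true.
posterior = conditionalize(prior_h=0.9, p_e_given_h=0.6, p_e_given_not_h=0.2)
print(round(posterior, 3))  # the credence to adopt after the evidence comes in
```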

At this stage, it might still look like the Bayesian is subject to the Rudnerian criticism of the probabilistic approach: the Bayesian is still in the business of accepting or rejecting when establishing belief states; it is just that what is being accepted or rejected, and what the belief state looks like, has changed. However, this criticism of the Bayesian fails to recognise the distinction between subjective and objective probabilities. It is this distinction that is essential to understanding why the Bayesian evades the Rudnerian criticism.

While objective probabilities measure the real tendency or likelihood of something happening, subjective probabilities concern the agent’s expectation that some outcome will be the case, i.e. credences.Footnote 11 Of course, objective probabilities and subjective probabilities are not completely unrelated. Following Lewis’ Principal Principle (1980), we might want to say that our subjective probabilities (credences) ought to match the objective probability. It would seem very strange to say that an agent’s credence of 0.75 in the proposition ‘this fair coin will land heads’ is reasonable. If we think our doxastic states ought to match the available evidence, then our credences ought to match the available evidence regarding the objective probability of a proposition.

However, that we ought to match our credence to the best available evidence does not mean we have to accept any probability function to establish our doxastic attitude. Subjective probabilities are not doxastic states that represent the acceptance of a state of affairs, whether a probabilistic state of affairs or an absolute one, despite uncertainty. They are states that represent an individual’s attitude towards a proposition at any given time given the evidence available. Let us apply the Bayesian position to a hypothetical case in science to illustrate its success in avoiding inductive risk.

In Sect. 2, I outlined a case where scientists were investigating whether caffeine increases the risk of palpitations. I outlined what the scientists’ doxastic process would be according to the AIR. This included choosing a threshold (x) of high enough level of confidence that, once passed, could justify the acceptance of the claim that caffeine does increase the risk of palpitations. What I am proposing is that the scientists’ doxastic states do not aim at acceptance but instead continuously reflect the likelihood of the hypothesis given the evidence available. No longer will the scientist be using their own or others’ value judgements to determine how significant the results are. This resolves my concern about the individual values of the barista and the athlete not being respected. They can now impose their own thresholds that represent their values to establish what doxastic attitude they have towards the hypothesis. I say more on the doxastic aims of people qua people, as opposed to scientists qua scientists, in the next section.

To summarise, according to this construction of the Bayesian position, the AIR can be avoided. Of course, even if we can avoid the Rudnerian concern about the probabilistic approach and therefore revitalise Jeffrey’s response, there are still lingering concerns about the Bayesian position that can be found both in the epistemology and philosophy of science literature. The rest of this paper is dedicated to managing these concerns in order to defend the revitalised Bayesian approach. I address the following three concerns in turn:

1. Do we not need outright beliefs? I.e. are credences sufficient for what we require from doxastic states?

2. What about inductive risk at other stages of the scientific process?

3. Do we have to worry about the problem of the priors?

5 Why outright belief?

In a sense, the Bayesian response is quite radical. It is asking that scientists abandon their doxastic aims and accept a new system where concluding or accepting are actively avoided. But that the demand is radical is not reason alone to reject it. In this section I ask whether there are any good reasons to reject the new demands and continue to attempt to fulfil the well-accepted doxastic aim of science, i.e. to establish outright beliefs.

The general question of ‘why outright belief?’ has gained some attention in the literature, though more often it is the question of how suitable credences are as primary doxastic attitudes that has been raised. Generally, one would not argue that people do not have degrees of belief as well as outright beliefs, or vice versa. But whether we should only function with degrees of belief, as I propose scientists should, is what is in question. So, what is the benefit of outright beliefs, if any? And, importantly, do these benefits apply to scientists?

It is thought by some (Wedgwood, 2012; Williamson, 2020) that outright beliefs play an important role in our practical reasoning. For example, believing outright that (p) “this bus will get me to King’s College London”, as opposed to having a credence towards the proposition, is important for how I decide to act. Supposedly, outright believing p works as a premise in my practical reasoning such that I can infer my intentions to act partly from that belief. If it is the case that this practical role is unique to outright belief, then we perhaps find ourselves appealing to inductive risk judgements in determining what to believe when deliberating about our choices. For example, an 80% risk of rain on a day when I am meant to go to the beach should be sufficient evidence for me to use the premise “it is going to rain” in my practical reasoning and develop my intentions accordingly (e.g., replan my trip or simply take an umbrella and some wellington boots).

However, this example also illustrates just how unnecessary outright belief is in our practical reasoning. A weather report that says there is an 80% chance of rain tomorrow, when I have the plan to go to the beach tomorrow, need not mean that I ought to believe it is going to rain but instead that I ought to believe the likelihood of rain is high enough that I should take some provisions or change my plan. Not only are we able to practically reason in states of uncertainty, but we also often naturally do. Not only do we tend to claim that “we believe that Paris is the capital of France” or that “I am sure the bus takes me to KCL”; we also say things like “I think it is likely that it will rain” or “I am pretty sure that bus takes me to KCL”. Of course, sometimes we are too uncertain to act in a specific way. I want to be really sure that the bus will get me to KCL for my viva. Yet this is not to say that the level of evidence sufficient for what we ought to believe differs; rather, it is the level sufficient for what we ought to do that differs.Footnote 12 Concerns about the unique role that outright beliefs play in our practical reasoning are thus slightly misplaced.

However, my response to the claim that outright beliefs play a specific role in our practical reasoning is subject to the most common complaint against credences and one of the weightiest defences of incorporating outright beliefs into our doxastic practice. This is that credences are overly demanding of our rational faculties. It is thought that as epistemic agents we have cognitive limitations that are incompatible with the demands of the Bayesian. It is because of this overdemandingness of credences that we inevitably do function with outright beliefs.Footnote 13

When I decide to take a bus to KCL in the morning, I do so because I have an outright belief that the bus will get me to KCL. Now, as mentioned already, this might differ depending on the stakes associated with getting to KCL. If I am going for my PhD viva, I might be more aware of the uncertainty that the bus will get me to KCL on time and reflect this in my doxastic attitude by adopting a credence.Footnote 14 But, generally, in my day-to-day I function with outright beliefs as well as credences. It would be highly impractical to take into consideration all evidence and uncertainty at all stages of my reasoning. I am not, and practically cannot be, a Bayesian robot. On a descriptive and practical level then, we do just have outright beliefs, and this makes sense given concerns about our cognitive limitations.

Claims about the relevance of the cognitive load argument, as presented so far, for the Bayesian proposal in science are misplaced. One initial response to this argument is to point out (in Rudnerian terms) the difference between a scientist qua person and a scientist qua scientist. It is common and right to be more epistemically demanding of scientists (and other members of society like lawyers, politicians etc.) than of the average lay person. Part of what it is to be an expert is to be more epistemically precise about the field in which you are an expert. So, while the cognitive load argument as presented so far might very well justify establishing outright belief in our day-to-day reasoning, it is not clear that this maps directly onto scientific reasoning.

Of course, you could take the cognitive load argument to say that even the scientist, who is expected to engage in laborious epistemic practice, is being asked too much by the Bayesian. The worry is that the Bayesian scientist will be expected to take account of uncertainty regarding all auxiliary hypotheses used in reasoning. For the Bayesian, no one should ever assign certainty to any given hypothesis. In other words, no one should ever have a credence of 1 in any hypothesis. The scientist then ought not to take any claim or premise to be probability 1 in their reasoning. Instead, all premises in reasoning must be assigned a probability. This will include anything from the structure of a molecule to the mechanism of the most common drugs like penicillin. Treating all assumptions as probabilistic will require scientists to make near-impossible calculations and will be incredibly stifling for the progress and practice of science. So, the cognitive load argument cannot be so easily dismissed as only a problem for people in their day-to-day lives.

However, this concern is again slightly misplaced. The Bayesian does not need to operationally treat all assumptions in probabilistic terms in order to stick to the principle that credence 1 is never justified. In other words, assigning credence 1 to a hypothesis for operational reasons, where the difference between 1 and the appropriate credence is negligible in terms of calculating new probabilities, is permitted by the Bayesian. The scientist therefore need not take into account all uncertainty, including uncertainty regarding the most fundamental assumptions in the biological sciences about, say, the structure of a molecule. Instead, they are asked to incorporate the operationally relevant uncertainties into their epistemic practice. Again, the cognitive load argument is not as concerning for the Bayesian as some may think.
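A rough numerical illustration of why this operational idealisation is harmless (the auxiliary assumption and every number below are invented for the sketch): compare the posterior computed while treating an auxiliary assumption A, say that an assay behaves as specified, as certain with the posterior computed under a credence of 0.999 in A.

```python
# Sketch: treating a near-certain auxiliary assumption A as if it were certain
# changes the posterior only negligibly. All numbers are invented placeholders.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

def marginal_likelihood(p_given_a, p_given_not_a, p_a):
    # Average the likelihood over whether the auxiliary assumption A holds
    return p_given_a * p_a + p_given_not_a * (1 - p_a)

prior_h = 0.5
p_e_h_a, p_e_h_not_a = 0.8, 0.5          # P(E|H, A), P(E|H, ~A)
p_e_noth_a, p_e_noth_not_a = 0.2, 0.5    # P(E|~H, A), P(E|~H, ~A)

for p_a in (1.0, 0.999):  # treat A as certain vs. almost certain
    post = posterior(prior_h,
                     marginal_likelihood(p_e_h_a, p_e_h_not_a, p_a),
                     marginal_likelihood(p_e_noth_a, p_e_noth_not_a, p_a))
    print(f"P(A) = {p_a}: posterior for H = {post:.4f}")
```

The two posteriors differ by well under a thousandth, which is the sense in which rounding the auxiliary credence to 1 is operationally negligible.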

There are other convincing arguments for the importance of having outright beliefs in our epistemic systems. Owens (2017) makes the good point that outright belief enables emotional engagement with the world in a way that degrees of belief do not. For example, I cannot blame someone (and therefore have blame-like attitudes towards them like anger) for stealing my bike if I do not outright believe that they stole my bike. Again, outright beliefs are important parts of our epistemic systems generally, but this does not mean they must be for scientists. A high likelihood that smoking causes cancer is sufficient for developing levels of anxiety about the rates of smoking in a population, which is just to say that degrees of belief can fulfil the emotional engagement with the world that we think some scientific findings might require.

To summarise, there are strong and reasonable arguments for outright beliefs. Not only do we just have outright beliefs (and so should welcome work that tries to describe when these are rational and when they are not), but our limited cognitive capacities also make outright beliefs important on a functional level. Further considerations about our emotional engagement contribute to the justification of outright belief. Yet, these arguments do not need to apply to science or scientists, and so the fundamental move of dismissing outright belief as the doxastic aim of scientists can be defended. Thus, the Bayesian position stands.

Having justified my move to dismiss the requirement of outright belief in science, I anticipate that sceptics of the Bayesian proposal will argue that the Bayesian only scratches the surface of the scope of the AIR. The concern with the Bayesian position is not just about rewriting the doxastic aims of science but also about how successful the Bayesian is in avoiding value encroachment because of the AIR in the first place. So, as Bayesians we are not out of the woods yet.

Two potential lines of argument are as follows.

Value-laden categorising: The Bayesian still relies on categorising kinds in determining posterior credences. Categorising kinds can be influenced by values. Values can influence posterior credences.

The problem of the prior: The Bayesian requires a prior to conditionalize new evidence on and generate a posterior credence. What prior you choose can have significant impact on your posterior. Values can partly determine what prior you choose or end up with. Values can have significant impact on your posterior.

I address both in turn and then finally make some brief comments on the scope of my argument with reference to the different considerations of the AIR in science communication.

6 Value-laden categorising

So far, I have only argued that the Bayesian method bypasses the AIR at the point of determining whether results are significant for acceptance. However, Douglas influentially argues that “a scientist decides which empirical claims to make about the world. At each of these decision points a scientist may need to consider consequence of error” (2000, p. 104). The point is that, supposedly, there are stages prior to the point of determining the significance of results where inductive risk judgements inevitably have to be made. Thus, for the Bayesian method to succeed in fully bypassing the AIR, it has to deal with these earlier stages too.

To illustrate this claim, Douglas offers a case study of an experiment that looks to evaluate the toxic effects of dioxins. In the experiment, rats were exposed to incremental doses of dioxins over a period of time. The rats then underwent full-body autopsies, and all organs were examined for cancerous growths. The raw data produced by the experiment were then interpreted in 1978, 1980 and 1990 by different bodies of researchers with different stakes in either dioxin production or health. Essential to Douglas’ case is that at each point of interpreting the raw data, different results were obtained. In 1978, 34/50 rats were noted to have tumours after 100 ng/kg/day doses of dioxins, whereas in 1980, 33/47 were noted to have tumours and in 1990 only 18/50 rats were noted to have tumours after the same dose. Further, different conclusions were drawn about the toxicity of dioxins based on the number of tumours recorded in the rats. Her claim is that the different interpretations of the same raw data were partly (if not entirely) because of different inductive risk judgements at different stages in the experiment. One of the most important (and relevant for us here as it is not clearly managed by the Bayesian account offered) is that value judgements are made in evidence categorisation.

In the case she offers, the value judgements arise at the stage where pathologists determine whether a tissue sample has a cancerous lesion or not. That is, by judging the risk of including borderline non-cancerous tissue in the cancerous lesion category and excluding borderline cancerous tissues from the cancerous lesion category. Again, the borderline cases will be allocated (and thus the border determined) by the relevant risks: “If they prioritize the interests of the industry (the general public), they will tend to identify borderline cases as non-cancerous (malignant)” (Henschen, 2021). Again, value judgements are required.

Her example aims to show that the attitudes of the scientists to dioxin risk will have significant impact on results. If your interests lie with the industry, then you might be more willing to treat borderline cases as non-cancer cases and therefore have fewer positive cancer results. Conversely, if your interests lie in public health or cancer prevention, then you might be more willing to treat borderline cases as cancer cases and therefore have more positive cancer results.

Of course, it is worth noting that as understanding of the biology of cancer has developed, managing borderline cases with reference to facts about cancer has become more feasible. Accordingly, there will be less of an inductive risk gap when determining borderline cases as there will be less uncertainty. Still, though, the point of Douglas’ case and argument is that wherever there is uncertainty in any decision in an experiment there will be inductive risk, and therefore values, like pro-industry leanings, might impact results. The prima facie worry about bad value encroachment thus strikes again. But is there a better way to manage uncertainty when categorising evidence? If so, are there any reasons to continue with the current value-laden model as opposed to an alternative?

Well, one option is to abandon categories altogether: instead of having to put the rats into cancer or non-cancer categories, one could record borderline cases as just that, borderline. In practice, scientists could note down the actual change in the cancer cells relevant to the development of a tumour into cancer (e.g. rate of growth of cells, level of invasion into surrounding tissue) at each dosage for each rat. The data set that scientists will then end up with transparently represents the uncertainty regarding the appropriate category for each case. In fact, one could reject the need to put them into cancer or not-cancer categories in the first place. The purpose of the experiment is to test the carcinogenic effects of dioxins. Results that represent the average increase in cancer tissue for all rats would answer this question without making any flat-out claims about the number of cancer cases.
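As a toy illustration of the contrast (all measurements and thresholds below are made up), the following sketch shows how the same continuous tissue measurements yield different cancer counts depending on where a borderline threshold is drawn, while a summary of the raw measurements needs no such threshold at all.

```python
# Toy sketch: thresholding borderline cases changes the category counts,
# while reporting the raw continuous measurements requires no borderline call.
# All scores and thresholds are invented for illustration.

# Hypothetical tumour-growth scores for ten rats at a single dose level
growth_scores = [0.1, 0.2, 0.45, 0.5, 0.55, 0.6, 0.62, 0.8, 0.9, 0.95]

for threshold in (0.5, 0.6):  # two different ways of resolving borderline cases
    cancer_count = sum(score >= threshold for score in growth_scores)
    print(f"threshold {threshold}: {cancer_count} rats counted as cancerous")

# Reporting the average raw score involves no borderline judgement at all
mean_growth = sum(growth_scores) / len(growth_scores)
print(f"mean growth score: {mean_growth:.2f}")
```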

Much more detail is needed to make sense of this proposal, but even before that, we can see that this solution is incredibly demanding; combined with concerns about cognitive load and Bayesian models of rationality, it prima facie does not seem promising. There is another solution though, which is to change what categories we use.

Recall that our concern is not that anyone makes value judgements in determining what to believe given a set of uncertain evidence, but that scientists (or perhaps doctors) make those judgements on behalf of others. All that is needed to avoid this is to push the value judgements back. In order to do so, we could change what is being categorised at the scientific stage. Instead of scientists putting tumour scans into “cancerous” or “non-cancerous”, which are categories that are already value-laden and action-provoking, they should put tumour scans into non-value-laden descriptive categories. For example, instead of ‘cancerous’ the scientist could categorise according to different characteristics of tumour cells that are thought to be relevant to identifying cancer (e.g. tumour growth rate or border shape). These descriptive characteristics do not in themselves have implications for action or any value implications generally.

The concern about inductive risk judgements being required at the stage of determining how to categorise borderline evidence disappears if the categories being employed are not value-laden. Again, the Bayesian alternative withstands anticipated criticism.

7 The problem of the priors

The final concern that might be mounted against the Bayesian method as a response to the AIR is that the Bayesian method itself suffers from value encroachment. This will most likely be argued through the problem of the priors.

The problem of the priors for our purposes refers to the concern that values can partly (or fully) determine the prior probabilities that are taken as the base for conditionalization. More generally, the problem of the priors for the Bayesian is simply: where do the prior probabilities that have such significant impacts on posterior probabilities come from?

Say I take the prior probability of an individual having a disease to be 5%, given that 1/20 in the population have that disease. Then say I get them to take a test that has 95% sensitivity and, let us also suppose, a 5% false positive rate, and it returns a positive result. The probability that you will get a positive test given you have the disease is therefore 0.95. We can then do some simple calculations, based on what we know, to establish that the probability that one will get a positive test generally will be 0.095 (P(E) = P(E|H)*P(H) + P(E|~H)*P(~H) = 0.95*0.05 + 0.05*0.95 = 0.095). What now should I believe about the probability that the individual has the disease if they get a positive test? If we plug these numbers into the Bayesian calculus, we have the following:

$$c = P(H \mid E) = \frac{P(E \mid H) \cdot P(H)}{P(E)} = \frac{0.95 \times 0.05}{0.095} = 0.5$$

Despite the positive result, given the prior probability that the individual has the disease in the first place, the individual is only actually 50% likely to have the disease. If, however, the probability that the person has the disease is much higher in the first place, let us say 20%, then the probability that the person has the disease given a positive test will be much higher. Again, let us plug the numbers into the calculus (but recognise that P(E) is now higher given that P(H) is higher: P(E) = 0.95*0.2 + 0.05*0.8 = 0.23).

$$c = P(H \mid E) = \frac{P(E \mid H) \cdot P(H)}{P(E)} = \frac{0.95 \times 0.2}{0.23} \approx 0.826$$

When the probability that the person has the disease is 20% to start with, the probability that they have the disease given the positive test is about 83%. The point is that the prior probability we assign to the hypothesis in question will have an impact on the posterior probability we assign. And, if values play a role in determining what priors we choose, then the Bayesian is still subject to concerns about value encroachment.
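The prior sensitivity in the worked example can be reproduced in a few lines of Python, assuming, as the arithmetic above does, a 95% sensitivity and a 5% false positive rate:

```python
# Sketch of the disease-test example: the same positive result, different priors.
# Sensitivity and false positive rate follow the worked example above.

SENSITIVITY = 0.95          # P(positive test | disease)
FALSE_POSITIVE_RATE = 0.05  # P(positive test | no disease)

def posterior_given_positive(prior: float) -> float:
    p_e = SENSITIVITY * prior + FALSE_POSITIVE_RATE * (1 - prior)  # P(E)
    return SENSITIVITY * prior / p_e                               # P(H|E)

for prior in (0.05, 0.20):
    print(f"prior {prior:.2f} -> posterior {posterior_given_positive(prior):.3f}")
```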

The weight of ‘the problem of the priors’ for our debate here can be disputed on two grounds. Firstly, when talking about science we are not talking about individuals who are uninformed or (at least hopefully not) seeking to make value-laden claims. Scientists are not starting from nowhere and should not be picking priors that make their desired posterior probabilities more likely. We should be able to trust that scientists’ priors are continuations of investigation and work that has come before them and, over time, are results that have been generated in the Bayesian way.

However, even if some scientists are not as honest and well informed as I hope, it is important to point out that the problem of the priors is a distinct problem from the problem I tackle in this paper. The aim of this paper is not to argue that the Bayesian response to the AIR resolves all worries about value encroachment in science. There might be many other ways that values determine what we come to believe from scientific inquiry. More work will need to be done (and indeed is being done) to determine the extent to which values can be avoided at other stages in scientific practice. But, at least, this is a way to avoid them at the point of justifying the establishment of scientific beliefs.

8 Believing versus communicating

At the beginning of this paper, I made it clear that I would be talking about the AIR as understood in epistemic terms. I pointed towards ambiguities in the Rudnerian definition of the AIR and various discussions in the literature about what ‘inductive risk’ refers to. This is all to say that some may still be unconvinced by the Bayesian position I defend here because they are unconvinced by my reading of the AIR as being at least partly to do with our epistemic practices. John (2015), for example, takes the AIR to be an argument strictly about science communications.

It is worth noting that there is an interesting tension that arises out of the Bayesian position combined with the ethical motivation of this paper. We want to avoid value encroachment to ensure that people's autonomous decision making is not compromised by other people’s values, and I propose that a properly probabilistic practice in science can, at least, avoid value encroachment because of inductive risk. However, the Bayesian model I propose might face problems in practice when we take into account that individuals typically struggle to understand and retain lots of probabilistic information (as discussed when outlining the cognitive load argument). Trying to ensure that individuals can understand and use the science might take precedence in some cases over only functioning with probabilities. To illustrate, consider the following narrative.

Say I sit down with a clinician, and they show me the results of a biopsy they had conducted on a tumour in my gland. They tell me how developed they believe the tumour is and, based on various studies of tumours of the kind I have, what their credence is that it will develop into a cancerous tumour with long-term adverse health effects. After the doctor has finished talking, it would be totally normal for me to turn around and say, “what does this all mean?”. At which stage, the doctor might have to translate their credences into some categorical claims.Footnote 15

My concern is not about the evidence itself but about what the evidence means for me; moreover, as someone who is not well versed in the details of oncology, there is plenty of information that I will not understand. The same could be said for most members of the public and many scientific findings. Yes, in some cases, the actual probabilities might be very important for individuals. I want to know the actual likelihood that it will rain tomorrow or that caffeine increases the risk of heart palpitations, not someone else’s interpretation of the likelihood! But in others, practical considerations about understanding and usage might trump the want for value-freedom.

Moreover, as briefly mentioned, Steele (2012) specifically argues that the Bayesian position against the AIR will fail when it comes to scientific communications to policy makers, as communicators will inevitably have to communicate results in credal intervals instead of precise credences. Steele takes the ideal position to be to communicate in probabilities but takes it that this will be too costly for the policy advisors making decisions. Just as the doctor may need to translate the scientific evidence for the patient, so too might the scientist for the policy advisor.

Without offering comprehensive guidelines for when to assert in categorical terms and when to assert with probabilities, all I aim to highlight here is that there are some secondary pragmatic considerations that might mean that the Bayesian model defended in this paper cannot be followed in all science communications. If the priority in a clinical setting is to have the patient feel taken care of, then perhaps inductive risk judgements made by their physician on their behalf are legitimate. The same might go for a scientist explaining to a public group why policy makers have decided to regulate the aviation industry.

However, the reliance on inductive risk judgements in these cases requires some sort of secondary pragmatic reasoning. My overall claim then can be restricted, but not defeated, to: ‘there is good motivation to avoid value encroachment in science because of inductive risk, and the Bayesian model offers a suitable alternative for avoiding inductive risk, but, in some cases, other pragmatic considerations override the motivation for avoiding value encroachment’. More work needs to be done to establish when and what pragmatic considerations might override my argument from value pluralism. Should the scientists then be providing these practical predictions or outlining to the public the practical implications of scientific findings? If so, does this not involve inductive risk and prove the Bayesian method to be somewhat futile? I leave these questions for future research.

9 Concluding remarks

I have offered a renewed defence of the Bayesian response to the AIR. I have motivated the response by illustrating the practical implications of the AIR for individuals’ autonomous decision making and arguing that these are ethically problematic. I propose that the Bayesian response to the AIR withstands various criticisms that are either explicit in the literature or fairly anticipated considering common arguments in the philosophy of science and epistemology literature. While this paper broadly sits on the side of value-freedom, it does not claim to offer a comprehensive defence of value-free science. Further, it also does not claim to speak to all versions of the AIR. As I have made clear, the AIR might play a role in science communication in ways that are less easily avoided by the Bayesian response. More research is needed to test the limits of the Bayesian response to the AIR, but I settle with having offered a renewed defence of the Bayesian response and generally pushing back on the wide acceptance of the AIR found in the literature.