The value of cost-free uncertain evidence

We explore the question of whether cost-free uncertain evidence is worth waiting for in advance of making a decision. A classical result in Bayesian decision theory, known as the value of evidence theorem, says that, under certain conditions, when you update your credences by conditionalizing on some cost-free and certain evidence, the subjective expected utility of obtaining this evidence is never less than the subjective expected utility of not obtaining it. We extend this result to a type of update method, a variant of Judea Pearl’s virtual conditionalization, where uncertain evidence is represented as a set of likelihood ratios. Moreover, we argue that focusing on this method rather than on the widely accepted Jeffrey conditionalization enables us to show that, under a fairly plausible assumption, gathering uncertain evidence not only maximizes expected pragmatic utility, but also minimizes expected epistemic disutility (inaccuracy).

selected urn. Yet Ann knows in advance that the lighting is so dim that it would be difficult to discern what her experience says: given her background information about the lighting conditions, she expects that her learning experience would make her uncertain about whether the drawn marble is blue, B, or violet, V.
A classical result in Bayesian decision theory (Savage 1954, ch. 4; Raiffa and Schlaifer 1961, ch. 4.5; Good 1967; Ramsey 1990), known as the value of evidence theorem (VET), says that, under certain conditions, when an agent updates her credences upon the receipt of cost-free evidence, the subjective expected pragmatic utility of obtaining this evidence is never less than the subjective expected pragmatic utility of not obtaining it. That is, expecting to obtain cost-free evidence cannot lead you to expect to make worse practical decisions. 1 The original VET, however, is limited to cases where the agent learns a proposition E for certain from a set E of mutually exclusive and jointly exhaustive propositions, and hence may update her credences by Bayesian conditionalization (BCondi, for short). Crucially, in such cases the agent is certain ex ante that exactly one proposition E from E will be true. But often, as in Ann's case, we undergo learning experiences where it is hard to discern what that proposition is, and so we become uncertain about which element of E is true. Could cost-free uncertain evidence so understood be worth waiting for in advance of making a decision?
This question has not gone unnoticed in the literature. Graves (1989) showed that we can extend VET so that it holds for cases where we become uncertain about what the logically strongest proposition we learn is, and update our credences by using a rule called Jeffrey conditionalization (JCondi, for short). This rule requires uncertain evidence to be specified as a redistribution of the agent's credences over the propositions in some partition E of a set of possibilities, without assigning absolute certainty to any particular proposition (hereafter, a Jeffrey shift). For example, Ann's learning experience can be understood as a Jeffrey shift over the partition {B, V}. To accommodate this type of uncertain evidence, Graves's argument, as we will argue, is driven by two conceptual moves. The first is that any Jeffrey shift can be specified as a sort of propositional certainty, i.e. as a proposition that receives posterior credence 1 in an enriched subjective probability space. This enrichment is achieved by adding to the original smaller space propositions about one's posterior credences attached to the members of a partition E. The second key move is to show that, under certain conditions, BCondi on the proposition specifying the posterior credences over E in the enriched space is equivalent to JCondi in the original small space.
After challenging Graves's argument, this paper offers an alternative extension of VET to the case of learning from uncertain evidence. To preview, instead of recasting uncertain evidence as certain in an enriched subjective probability space, the proposed view retains the uncertainty of one's evidence in the original smaller space, and provides a specification of this uncertainty by utilizing the method of virtual evidence proposed by Pearl (1988, 1990) and developed in Chan and Darwiche (2005). According to this method, uncertain evidence can be specified as a set of likelihood ratios, where each likelihood ratio tells you how well some virtual evidence fits with some proposition in partition E as compared to how well it fits with another proposition in that partition. The virtual evidence is meant to be an auxiliary proposition that bears on the truth of propositions in E. We supply this method with a specific understanding of what the auxiliary proposition could be in the context of learning from uncertain evidence. Our proposal is that it can be understood as the proposition U^E_{c_λ} which says that you take your evidence to be E and adopt the posterior credence function c_λ in response. For short, we will refer to this proposition as saying that you update on E. 2 Importantly, when you take E as your evidence and adopt a posterior credence function in response, you foresee the possibility that U^E_{c_λ} could be true, even if E is in fact false. This is because, in the context of learning from uncertain evidence, you may mistake the true evidence E′ ∈ E for E. So, for example, in Ann's case, when looking at the drawn marble in a dim light, she might take her evidence to be B, and adopt the posterior c_λ in response, when in fact V is true.
There is, we suggest, a way to incorporate this possibility of mistake in one's learning into the update mechanism. The key idea is that we can express it by determining the extent to which the proposition U^E_{c_λ} is more likely under E than under E′, or the extent to which U^E_{c_λ} favours E over E′. Intuitively, if Ann has updated on B, she can express the extent to which U^B_{c_λ} is more likely under B than under V. One natural way to determine this extent is to settle on a likelihood ratio which tells Ann to what extent the proposition U^B_{c_λ} is more expected under B than under V. This likelihood ratio can be any non-negative number she considers reasonable in the light of her background knowledge. As will be explained in more detail later on, there is a reasonable way to plug these likelihood ratios into an update rule, without the need to determine the absolute likelihoods for U^E_{c_λ}. As will be shown, the proposed update method gives the expected result in Ann's case in the sense that it leaves B and V uncertain in Ann's posteriors.
Armed with the method of virtual evidence so understood, we show how VET can be extended to the context of uncertain evidence. Three basic ideas underpin this extension. First, updating on a set of likelihood ratios for the proposition U^E_{c_λ} can be modelled as a variant of what Pearl called 'virtual conditionalization' (VCondi, for short). Second, under a plausible assumption, updating by this variant of VCondi is equivalent to a version of Bas van Fraassen's (1984) reflection principle. This principle says that one's posterior credence function should be equal to one's prior conditional on the proposition U^E_{c_λ}. Third, once we assume that the propositions U^E_{c_λ} form a partition, we can show that the expected worth of accommodating cost-free uncertain evidence by the proposed variant of VCondi cannot be negative. And this expectation is calculated relative to the agent's prior credences over the propositions U^E_{c_λ}.

We proceed as follows. In Sect. 2, we discuss three different ways of specifying uncertain evidence that play a crucial role in both Graves's and our alternative extension of VET. In Sect. 3, we present three updating rules, each attuned to a different way of specifying uncertain evidence. In Sect. 4, we spell out in detail Graves's extension of VET. In Sect. 5, we show that Graves's argument is problematic when we care not only about the practical rationality of our decisions, but also about the epistemic rationality of our beliefs. In Sect. 6, we lay down our alternative extension of VET to the case of learning from uncertain evidence and show that it dovetails with a purely epistemic approach that aims to vindicate updating on uncertain evidence. Section 7 concludes.
2 Specifying uncertain evidence: three ways

Let (W, F, c) be an agent's subjective probability space (her credal space), where W represents the possibilities that the agent can distinguish between, F is an algebra of subsets of W that can be understood as the propositions the agent can express, and c is the agent's credence function, which assigns numbers from [0, 1], called credences, to propositions in F. We will assume throughout that, at any given time, the credence function c is a probability function over F. Since we will be mostly interested in the dynamics of credences in a decision context, we require that F includes a finite partition S of W, S ⊆ F, which contains propositions S representing the states of the world upon which the consequences of the agent's actions depend. As will be apparent later on, we also need to require that the algebra F is sufficiently rich so that it includes a finite number of propositions U^E_{c_λ}. Recall that a proposition of this sort says that you update on E.

Now, learning experience can provide an agent with various types of evidential input λ that may prompt a revision of their credence in any X ∈ F, c(X), resulting in a posterior credence in X, c_λ(X). But how can we characterize an evidential input more precisely? There are at least two ways of answering this question that are well entrenched in Bayesian epistemology. Both assume that learning experiences do not provide evidential inputs all by themselves. Rather, they provide evidential inputs because they impact on our credences about evidence propositions, which may in turn provide support for other propositions. However, these two approaches differ on how this impact should be cashed out.
According to the first, somewhat more popular view, any evidential input takes the form of a direct change in one's credences over some set of propositions. Thus, any evidential input acts like a constraint on the set of possible credence functions C and restricts the candidates for the agent's posterior credence function. More formally, an evidential input of this sort can be understood as a set of posterior credence functions over F, C_λ, which contains those possible credence functions of the agent that are consistent with this input, that is, C_λ ⊆ C (see, e.g. van Fraassen 1989, ch. 13; Uffink 1996; Joyce 2010; Dietrich et al. 2016; van Fraassen and Halpern 2017). Importantly, this view allows us to treat as evidential input information or data that cannot be expressed as propositional evidence.
According to the second view, we may think of evidential input as a sort of update factor, which can then bring about changes in one's credences (see, e.g. Wagner 2002; Hawthorne 2004). Some update factors may be essentially relative, i.e. they tell us how well an outcome of one's learning experience fits with the proposition X as compared to how well it fits with the proposition Y. A classical example of such a factor is given by the likelihood ratio of propositions X and Y given the evidence proposition E, c(E|X)/c(E|Y) (see, e.g. Good 1950). A more general representation of this factor is given by the odds ratio of X and Y, also called the Bayes factor: 3

Bayes factor: If c_λ is a posterior credence function and X, Y ∈ F, then the Bayes factor of X and Y is given by:

β(X, Y) = (c_λ(X)/c_λ(Y)) / (c(X)/c(Y)).

That is, the Bayes factor of X and Y is a ratio of new-to-old odds for X against Y, where the new odds are c_λ(X)/c_λ(Y), and the old odds are c(X)/c(Y). This ratio is meant to capture the factor by which the old odds for X against Y can be multiplied to get the new odds. Observe that the agent's Bayes factors do not uniquely specify her posterior credences, but only impose overall constraints on these posteriors.
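As a quick illustration, the Bayes factor can be computed from old and new credences in a few lines of Python; the numbers below are hypothetical and serve only to illustrate the definition:

```python
def bayes_factor(new_x, new_y, old_x, old_y):
    """Ratio of new odds for X against Y to the old odds for X against Y."""
    return (new_x / new_y) / (old_x / old_y)

# Hypothetical credences: prior c(X) = 0.2, c(Y) = 0.5;
# posterior c_lam(X) = 0.4, c_lam(Y) = 0.4.
print(round(bayes_factor(0.4, 0.4, 0.2, 0.5), 6))  # 2.5: the odds for X rose
```

Note that many different posterior credence functions share these odds, which is the sense in which Bayes factors constrain, but do not uniquely fix, the posteriors.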
In order to show that the likelihood ratio, which isolates the full import of learning experience with prior credences factored out, is an instance of the Bayes factor, assume that one's posterior credence for any X ∈ F, c_λ(X), comes from one's prior credence in X by conditionalizing on the proposition E ⊆ W, i.e. c_λ(X) = c(X|E). Then, by Bayes' theorem,

β(X, Y) = (c(X|E)/c(Y|E)) / (c(X)/c(Y)) = c(E|X)/c(E|Y).

Given the above preliminaries, how can we represent uncertain evidence? In what follows, we will characterize three ways of specifying uncertain evidence: the first one is due to Jeffrey (1983), the second one is due to Skyrms (1980), and was used in Graves's extension of VET, and the third one is an amended method proposed in Pearl (1988, 1990) and developed in Chan and Darwiche (2005).

Jeffrey (1983) argued that in many cases learning experience does not constrain an agent's credences in a way that is tailor-made for orthodox Bayesian conditioning, that is, when the evidential input can be modelled as the logically strongest evidence proposition E in W that receives the posterior credence of 1, or simply as the Bayes input:

Bayes input: C_λ = {c′ ∈ C : c′(E) = 1}.

Jeffrey believed that, in cases like Ann's, although learning experience does not single out an evidence proposition E that receives posterior credence 1, c_λ(E) = 1, it nevertheless directly affects the agent's prior credences over the propositions in some set E, which is a partition of W, shifting them to posterior credences c_λ(E), for all E ∈ E. Thus, uncertain evidence so understood can be modelled as the following evidential input:

Jeffrey shift: C^E_λ = {c′ ∈ C : c′(E) = c_λ(E), for all E ∈ E}.

That is, uncertain evidence so understood is a redistribution of the agent's credences over the propositions in some partition E of the set of worlds she considers possible.
To provide Skyrms's characterization of uncertain evidence, we need to consider a particular extension of (W, F, c). Given two algebras F and F*, let the injective map * : F → F* be an algebra embedding, that is, a function that preserves all Boolean operations. Then, let (W*, F*, c*) be an extension of (W, F, c) such that, for some E ⊆ F and a finite set of posterior credence functions C_λ, F* contains a finite number of propositions R^E_{c_λ}. That is, the algebra F* in the extended credal space (W*, F*, c*) contains the copies X* of all the propositions X in the original smaller algebra F, and it also contains a finite number of propositions R^E_{c_λ}, each saying that the posterior credences over E are given by the credence function c_λ. Now, we can also consider learning experiences in the extended credal space that prompt revisions of c* resulting in a new credence function c*_λ. In particular, given the set of posterior credence distributions over F*, C*_λ ⊆ C*, that are consistent with the evidential input λ in that extended credal space, uncertain evidence may be presented as the following evidential input:

Skyrms input: C*^R_λ = {c′ ∈ C* : c′(R^E_{c_λ}) = 1}.

Thus, uncertain evidence modelled as a Skyrms input is the agent's assignment of posterior credence 1 in the extended algebra to the proposition R^E_{c_λ} that specifies a Jeffrey shift over some partition E.
Unlike Jeffrey's and Skyrms's methods, Pearl's method of virtual evidence specifies uncertain evidence as an evidential input that directly constrains quantities different from the absolute values of posterior credences. The core idea of Pearl's method is that we can interpret the uncertainty of every proposition E in some partition E of W as the uncertainty of E's relevance to some auxiliary proposition in W. And to specify how uncertain this relevance is, Pearl proposes to use a set of likelihood ratios. Here we will amend Pearl's method by assuming that the auxiliary proposition is the proposition U^E_{c_λ}, which says that you update on E. More precisely, a Pearl-style input is a set L^E = {α_E : E ∈ E} of likelihood ratios for the auxiliary proposition U^E_{c_λ}. Importantly, to determine a Pearl-style input, we only need to specify a set of likelihood ratios, not the absolute likelihoods for U^E_{c_λ} given E, for every E ∈ E. But since every likelihood ratio α_E in the set L^E is proportional to the absolute likelihood of U^E_{c_λ} given E, i.e. α_E ∝ c(U^E_{c_λ}|E), there exists a positive constant l such that, for all E ∈ E, c(U^E_{c_λ}|E) = l · α_E. Thus, the absolute likelihoods can be determined indirectly from the set of likelihood ratios, albeit not uniquely.
Before moving on, let us elaborate on the proposition U^E_{c_λ}. Recall that this proposition says that you take your evidence to be E and adopt the posterior credence function c_λ in response. Firstly, following Gallow (2019), we assume that 'taking your evidence to be E' does not involve any belief that E is true, and hence does not mean that you assign to E the posterior credence 1. Secondly, we want to emphasize that even if the agent already knows that she has updated on E, it does not follow that c(U^E_{c_λ}|E) = 1, for all E in E. That is, though U^E_{c_λ} can be regarded as 'old evidence' after the agent has updated on E, we can still reasonably inquire about its evidential impact on the propositions in E. 4 If so, we should not interpret the conditional credence c(U^E_{c_λ}|E) as the agent's actual credence in U^E_{c_λ}, supposing E to be true, for if you already know that U^E_{c_λ} is true, your prior actual credence in U^E_{c_λ} conditional on any proposition is 1. Instead, we should understand it as a kind of counterfactual credence. That is, in determining these credences, we should answer the question: how probable would the actual evidence U^E_{c_λ} be if E were true? Naturally, we might expect that assigning precise counterfactual credences to propositions involves a lot of conceptual and formal intricacies. 5 Fortunately, in our approach, there is no need to determine the precise values of these counterfactual credences. It suffices that the agent express only their ratios, leaving the precise values unspecified.
Let us now show how uncertain evidence recast as a Pearl-style input can be applied to Ann's case. Recall that Ann knows beforehand that, because she would observe the drawn marble in a dim light, she would neither become certain that B is true, nor that V is true. Nevertheless, foreseeing the possibility of error, she can take B or V as her evidence and adopt some posterior credences over F in response. Suppose that, after looking at the drawn marble, she updates on B. But, due to her knowledge about the dim lighting, she foresees the possibility that U^B_{c_λ} could be true, even if B is false. Still, she can interpret U^B_{c_λ} as providing evidence for B against V whose strength can be given by a set of likelihood ratios. Let us suppose that Ann thinks that it is twice as likely that U^B_{c_λ} would be true if B were true as if V were true. If this is so, then we may say that learning experience provides Ann with the likelihood ratio α_B = c(U^B_{c_λ}|B)/c(U^B_{c_λ}|V) = 2. This set of likelihood ratios enables her in turn to specify, though not uniquely, the absolute likelihoods for U^B_{c_λ}.

4 Consider the following analogy. You are about to toss a coin, but you don't know whether it is fair (H1) or double-headed (H2). Suppose that you have observed that the coin lands heads 100 times in a row (E). Even though you already know that E is true, your observation provides evidence in favour of H2. For the more times you see heads, the more evidence you have for the coin being double-headed. Hence, you are far from saying that c(E|H1) = c(E|H2) = 1.

5 See, e.g. Sprenger (2015).
The above understanding of the Pearl-style input can also be explained in terms of Ann's error credences, as shown in Table 1. Think of U^B_{c_λ} and U^¬B_{c_λ}, where ¬B = V, as the possible, mutually exclusive noisy signals of Ann's learning experience. That is, before looking at the drawn marble, Ann thinks that she could mistake B for ¬B, and likewise she could mistake ¬B for B. Thus, Ann thinks that there is some non-zero probability (Ann's false positive credence) that U^B_{c_λ} would be true even if B were not, and some non-zero probability (Ann's false negative credence) that U^¬B_{c_λ} would be true even if B were true. Then, when Ann takes B as her evidence and adopts the posterior c_λ in response, the differential support that U^B_{c_λ} provides can be expressed as a likelihood ratio α_B determined by her false negative and false positive credences.

Why should we think that the problem of specifying uncertain evidence is philosophically important? Firstly, as it turns out, the way we choose to specify uncertain evidence helps us to resolve the problem of non-commutativity of JCondi, which says that, once you update sequentially by dint of JCondi, switching the order in which a pair of Jeffrey shifts over partitions E and E′ is learned can yield different posterior credences in the end. This feature of JCondi is often regarded as its flaw. But, as shown by Field (1978) and developed by Wagner (2002), JCondi is commutative when identical learning is interpreted as identical Bayes factors. Note also that if uncertain evidence is represented by a Skyrms input, then sequential updating on such uncertain evidence by a variant of BCondi, as given in Sect. 3, would also be commutative, since BCondi is essentially commutative. Secondly, as argued in Wagner (2009), a certain parametrization of JCondi which uses the agent's Bayes factors rather than her posterior credences enables us to show that JCondi and so-called opinion pooling (a method of aggregating probabilistic credences) commute.
That is, the result of pooling and then updating by JCondi is the same as first updating by JCondi and then pooling.
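The commutativity contrast can be made concrete with a short Python sketch. The worlds, partition, and numbers below are our own toy choices, not the paper's: `jeffrey_fixed` specifies the input as fixed posterior values for a cell, while `jeffrey_bayes_factor` specifies it as a Bayes factor for E against ¬E.

```python
def jeffrey_fixed(c, E, q):
    """Jeffrey shift: set the posterior of cell E to q and of its complement
    to 1 - q, rescaling worlds rigidly within each cell."""
    pE = sum(c[w] for w in E)
    return {w: (c[w] / pE * q if w in E else c[w] / (1 - pE) * (1 - q))
            for w in c}

def jeffrey_bayes_factor(c, E, beta):
    """Jeffrey update specified by the Bayes factor beta for E against not-E:
    multiply weights inside E by beta and renormalize."""
    weights = {w: (beta if w in E else 1.0) * c[w] for w in c}
    z = sum(weights.values())
    return {w: weights[w] / z for w in weights}

c = {'w1': 0.1, 'w2': 0.2, 'w3': 0.3, 'w4': 0.4}
E = {'w1', 'w2'}  # prior c(E) = 0.3

# Fixed-posterior inputs: the later shift simply overrides the earlier one.
a = jeffrey_fixed(jeffrey_fixed(c, E, 0.7), E, 0.6)
b = jeffrey_fixed(jeffrey_fixed(c, E, 0.6), E, 0.7)
print(round(sum(a[w] for w in E), 3), round(sum(b[w] for w in E), 3))

# Bayes-factor inputs: the reweightings multiply, and multiplication commutes.
a = jeffrey_bayes_factor(jeffrey_bayes_factor(c, E, 3.0), E, 2.0)
b = jeffrey_bayes_factor(jeffrey_bayes_factor(c, E, 2.0), E, 3.0)
print(all(abs(a[w] - b[w]) < 1e-12 for w in c))  # True
```

With fixed posterior values the order of the two shifts matters (the final credence in E is 0.6 one way and 0.7 the other); with Bayes factors it does not.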
As we will show in what follows, the way we specify uncertain evidence also bears on whether or not updating on uncertain evidence both maximizes expected pragmatic utility and minimizes expected epistemic disutility (inaccuracy).

3 Updating on uncertain evidence
A widespread position in Bayesian epistemology is that JCondi is an appropriate update rule when the agent undergoes a learning experience which does not rationalize absolute certainty in any proposition. This rule may be presented as follows:

JCondi: Given a Jeffrey shift C^E_λ, the agent's posterior credence function c_λ should be such that, for every X ∈ F,

c_λ(X) = Σ_{E∈E} c(X|E) · c_λ(E).

Although any Jeffrey shift C^E_λ which does not rationalize absolute certainty in any proposition in E cannot be mediated by way of a Bayes input in the credal state (W, F, c), it might be tempting to think that it can be mediated by some other proposition in (W, F, c) that we learn for certain. But if this were possible, there would be no need for JCondi in the first place. After all, this rule was motivated by the thought that there is no proposition in F to conditionalize upon. Consider Jeffrey's candlelight case in which the agent inspects a piece of cloth by candlelight and gets the impression that it is green, although she concedes that it might be blue, or even violet. As Jeffrey (1983, p. 165) argued convincingly, there is no proposition that can convey the precise quality of this learning experience. And even if we allow for the possibility that the agent learns with certainty the proposition that the cloth appears green, this proposition would be too vague, for various learning experiences that fit this proposition would justify different credences in the proposition that the cloth is green (Christensen 1992). But we might well think that a Jeffrey shift can be mediated by the proposition R^E_{c_λ} that the agent learns with certainty in (W, F, c). After all, it seems plausible to think that, in the context of JCondi, the agent becomes certain that her credences over some partition E shifted in a certain way.
But it seems that, in cases like Jeffrey's candlelight example, this proposition does not describe the precise content of the agent's learning experience, but merely summarizes the effect of this experience on her posterior credences over E. And, again, if (W, F, c) included this kind of proposition, there would be no need for JCondi in the first place. After all, one could then just use BCondi and conditionalize on the proposition R^E_{c_λ}, so that, for every X ∈ F, c_λ(X) = c(X|R^E_{c_λ}).

Schwan and Stern (2017) have recently argued that what the agent learns with certainty in the context of JCondi can be represented by a dummy proposition, D, which says what the agent would have learned with certainty were she capable of expressing it. They claim that this in turn allows us to represent updating on uncertain evidence as conditionalization on a dummy proposition in F. For example, when Ann looks at the drawn marble in a dim light and undergoes a Jeffrey shift over {B, V}, we can represent her as if she becomes certain of some ineffable-colour-proposition D. But though this view has considerable merit when combined with Schwan and Stern's causal understanding of when the condition called rigidity 6 is satisfied, it does not provide much help in establishing VET in the context of uncertain evidence. For VET requires an agent to determine prior credences over various evidence propositions that she might learn in the future, but the ineffability of the dummy proposition prevents it from entering into the calculation of her prior credences. In particular, we cannot determine Ann's prior credence in the ineffable-colour-proposition D because we lack the ability to describe what Ann sees in a dim light.
Still, however, it seems reasonable to think that a Jeffrey shift can be mediated by way of a Skyrms input, C*^R_λ, i.e. by the proposition R^E_{c_λ} that one learns with certainty in the extended credal state (W*, F*, c*). 7 If so, then we may think that when the agent updates by JCondi, she ought to update as if she were (i) expanding F to F* and extending c to c*, (ii) conditionalizing c* upon some R^E_{c_λ} in F* to get the posterior c*_λ, and then (iii) recovering her posterior c_λ over F by restricting c*_λ to F. This idea is captured by extended Bayesian conditionalization:

EBCondi: Given a Skyrms input C*^R_λ, the agent's posterior credence function c_λ should be such that, for every X ∈ F, c_λ(X) = c*(X*|R^E_{c_λ}).

Skyrms (1980) has argued that, under some auxiliary assumptions, updating by EBCondi in (W*, F*, c*) is equivalent to JCondi in (W, F, c). This result can be stated more formally as follows. Suppose that c_λ comes from c* by EBCondi on R^E_{c_λ}, and suppose that c* satisfies the following two conditions:

(C1) c*(E*|R^E_{c_λ}) = c_λ(E), for all E ∈ E;

(C2) c*(X*|E* ∧ R^E_{c_λ}) = c*(X*|E*), for all X ∈ F and E ∈ E.

Then, for every X ∈ F, c_λ(X) = Σ_{E∈E} c(X|E) · c_λ(E); that is, EBCondi agrees with JCondi (Proposition 1).

Proof Given the conditions (C1) and (C2), the proof of Proposition 1 is straightforward:

c*(X*|R^E_{c_λ}) = Σ_{E∈E} c*(X*|E* ∧ R^E_{c_λ}) · c*(E*|R^E_{c_λ}) = Σ_{E∈E} c*(X*|E*) · c_λ(E) = Σ_{E∈E} c(X|E) · c_λ(E).

7 For this view, see Skyrms (1980).
More informally, Proposition 1 tells us that, given a fixed extended credal state (W*, F*, c*) which satisfies both (C1) and (C2), updating by EBCondi in that space gives the same result as if the agent had updated her credences by JCondi in the smaller credal state (W, F, c).
Note that condition (C1) may be understood as an instance of what Bas van Fraassen (1984) called the reflection principle, i.e. a principle which requires one's current credences to defer to one's future credences. After all, (C1) says that the agent's prior conditional credence in the proposition E*, for E ∈ E, given that her posterior credences over E are determined by c_λ, should be equal to the posterior credence in E, c_λ(E). The second condition (C2) tells us that once you learn that some E in the partition E is true, the information about your posterior credences over E should have no bearing on the truth of X.
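Proposition 1 can be checked numerically. In the sketch below (toy numbers of our own), conditions (C1) and (C2) jointly determine the credences conditional on R^E_{c_λ}, and the resulting probabilities coincide with the JCondi formula in the small space:

```python
# Toy numbers: a four-world space, a two-cell partition, and a Jeffrey
# shift assigning posteriors 0.7 and 0.3 to the two cells.
c = {'w1': 0.1, 'w2': 0.2, 'w3': 0.3, 'w4': 0.4}
partition = [{'w1', 'w2'}, {'w3', 'w4'}]
shift = [0.7, 0.3]

def prob(f, X):
    """Probability of event X (a set of worlds) under credence function f."""
    return sum(f[w] for w in X)

# (C1) fixes the credence of each cell given R at its Jeffrey value;
# (C2) keeps the relative weights of worlds within a cell unchanged.
# Together they pin down the credences conditional on R:
post = {}
for cell, q in zip(partition, shift):
    for w in cell:
        post[w] = (c[w] / prob(c, cell)) * q

# JCondi in the small space: c_lam(X) = sum_E c(X|E) * c_lam(E).
X = {'w1', 'w3'}
jcondi = sum(prob(c, X & cell) / prob(c, cell) * q
             for cell, q in zip(partition, shift))
print(abs(prob(post, X) - jcondi) < 1e-12)  # True
```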
Importantly, Proposition 1 is not the only way to establish how JCondi can be represented as Bayesian conditioning in some extended credal state. Another influential approach has been given by Diaconis and Zabell (1982, Theorem 2.1). They have identified a necessary and sufficient condition, sometimes called superconditioning, for one's posterior credence to be the result of conditioning one's prior credence in some larger credal space. It says that c_λ comes from c by conditioning in an extended credal space just in case there exists a number b ≥ 0 such that, for every X ∈ F, c_λ(X) ≤ b · c(X). However, contrary to Skyrms, Diaconis and Zabell's approach places no constraints on what the extended credal state should look like. It only says that we can construct an extended credal space by adding two propositions a and b to the original credal space, where a says that the agent had the learning experience she had, and b indicates its absence. For this reason, Skyrms's approach appears to be better suited for the task of providing VET in the case of uncertain evidence. For VET requires an agent to assign prior credences to propositions describing the possible outcomes of her learning experience. And in order to do this, the agent should be in a position to grasp the content of these propositions prior to her learning experience. But a is a proposition the agent is in a position to grasp only once she has already had the learning experience. Skyrms's approach appears to deliver the required element by telling us that these propositions, represented by the R^E_{c_λ}'s in the extended credal state, describe the possible Jeffrey shifts the agent might undergo, and hence their content can be grasped before the agent's learning experience.
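Diaconis and Zabell's superconditioning bound can likewise be verified on a toy Jeffrey posterior. When every world carries positive prior weight, the largest world-wise ratio c_λ(w)/c(w) serves as the witness b (our own illustrative numbers):

```python
# Toy prior and a Jeffrey posterior over a two-cell partition.
c = {'w1': 0.1, 'w2': 0.2, 'w3': 0.3, 'w4': 0.4}
partition = [{'w1', 'w2'}, {'w3', 'w4'}]
shift = [0.7, 0.3]

c_lam = {}
for cell, q in zip(partition, shift):
    p = sum(c[w] for w in cell)
    for w in cell:
        c_lam[w] = c[w] / p * q  # Jeffrey shift, rigid within each cell

# Witness for superconditioning: the largest world-wise ratio. Since any
# event's probability is a sum of world weights, c_lam(X) <= b * c(X)
# then holds for every event X.
b = max(c_lam[w] / c[w] for w in c)
print(all(c_lam[w] <= b * c[w] + 1e-12 for w in c))  # True
```

For these numbers b works out to 7/3, the ratio by which the first cell's probability is inflated.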
As we have argued in the previous section, we can also think of uncertain evidence in terms of a Pearl-style input. But how should the agent update her credences in response to this input? Here we propose a variant of what Pearl calls virtual conditionalization: 8

VCondi: Given a Pearl-style input L^E, an agent's posterior credence in every X ∈ F should be:

c_λ(X) = Σ_{E∈E} α_E · c(X ∧ E) / Σ_{E∈E} α_E · c(E).

Let us look at how VCondi works in Ann's case. Suppose that Ann assigns equal credences to the proposition that the selected urn is of type X (S) and to the proposition that the selected urn is of type Y (¬S). Given the experimental set-up she faces, her prior credences for the propositions B, V, S ∧ B, and S ∧ V are fixed by the compositions of the two urn types. Ann then looks at the drawn marble in dim light, takes B as her evidence and adopts the posterior credence function c_λ in response. This in turn provides her with evidence for B and V whose differential support is settled by Ann as α_B = 2.

8 Since the likelihood ratio α_E is a special kind of Bayes factor, our variant of VCondi can be understood as an instance of Wagner's (2002) updating, which says that, upon learning Bayes factors β_E over the partition E, an agent's posterior credence in every X ∈ F should be: c_λ(X) = Σ_{E∈E} β_E · c(X ∧ E) / Σ_{E∈E} β_E · c(E).
Note also that Wagner's updating rule is a reformulation of Field's (1978) updating, yet another update rule in the context of learning from uncertain evidence.
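Since the paper's exact urn compositions are not reproduced here, the following Python sketch assumes, purely for illustration, that a type-X urn holds 60% blue marbles and a type-Y urn holds 30%, with c(S) = 1/2 and Ann's likelihood ratios α_B = 2, α_V = 1:

```python
# Hypothetical prior credences over the four S/B cells.
c_joint = {
    ('S', 'B'): 0.5 * 0.6, ('S', 'V'): 0.5 * 0.4,
    ('notS', 'B'): 0.5 * 0.3, ('notS', 'V'): 0.5 * 0.7,
}
alpha = {'B': 2.0, 'V': 1.0}  # Ann's likelihood ratios after updating on B

def vcondi(c_joint, alpha, X):
    """VCondi: sum_E alpha_E * c(X & E) / sum_E alpha_E * c(E)."""
    num = sum(alpha[e] * p for (s, e), p in c_joint.items() if (s, e) in X)
    den = sum(alpha[e] * p for (s, e), p in c_joint.items())
    return num / den

B = {('S', 'B'), ('notS', 'B')}  # "the drawn marble is blue"
S = {('S', 'B'), ('S', 'V')}     # "the selected urn is of type X"
print(round(vcondi(c_joint, alpha, B), 3))  # 0.621
print(round(vcondi(c_joint, alpha, S), 3))  # 0.552
```

Under these assumed numbers, VCondi leaves Ann more confident in B than her prior c(B) = 0.45 yet well short of certainty, as the text leads us to expect.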
Similarly, VCondi shows that when, after looking at the drawn marble in a dim light, Ann updates on B, she should become more confident in B, though still less than certain: her posterior credence in B rises above her prior while remaining short of 1.

Although VCondi seems to deliver intuitively correct answers in the context of learning from uncertain evidence, one might still ask if it is a rational update method to follow. Specifically, VCondi hinges on the assumption that, after learning, the agent takes her evidence to be some E from E and adopts the posterior credence function c_λ in response. But why shouldn't we think that the agent adopts this posterior also for reasons other than taking E as her evidence? For example, suppose that you take E as your evidence and you believe it provides strong support for your scientific hypothesis. But you are a scientist in a small town, and you suspect that, like most small-town scientists, you will very soon come to justifiably doubt your hypothesis. So your posterior credence in X is not only a result of taking E as your evidence, but also a result of expecting evidence against X in the near future. But this is hardly acceptable, for expecting evidence against X is not the same as possessing or taking E as evidence against X. 9 Yet there is nothing in the update mechanism of VCondi that precludes this possibility.
But there seems to be a reasonable way to prevent the above possibility. It rests on the following condition:

CIndi: c(U^E_{c_λ} ∧ X|E) = c(U^E_{c_λ}|E) · c(X|E), for every X ∈ F and E ∈ E.

That is, CIndi, when imposed on one's prior credences, expresses a relation of conditional independence between U^E_{c_λ} and X given E. It says that once we suppose E, the propositions U^E_{c_λ} and X cease to bear any information about one another. Recall the small-town scientist example. Suppose you think that it is more likely that you have a high posterior credence in X when you don't expect evidence against X than when you do. What CIndi says is that, given E, your posterior credence function which assigns a high credence to X should be insensitive to whether or not you expect evidence against X. This seems intuitively right, for you adopt your posterior credence in X in response to taking E as your evidence, which provides strong support for X, and not in response to expecting evidence against X.
As it turns out, when we assume CIndi, an agent who updates by VCondi would satisfy an instance of the reflection principle which goes as follows: c(X | U^E_{c_λ}) = c_λ(X), provided that c(U^E_{c_λ}) > 0. That is, this principle says that your prior credence in X, conditional on your anticipated posterior credence function over F which results from taking E as your evidence, should be equal to your anticipated posterior credence in X. 10 Note that the proposition U^E_{c_λ} does not specify the mechanism by which you arrive at the posterior credence function in response to taking E as your evidence. It thus describes your anticipated posterior credence function after the black-box learning event, where the content of your learning experience is left opaque (see Huttegger 2017, ch. 5.4). For your learning experience does not tell you which proposition in E is true.
More precisely, we can state the following proposition. Proposition 2: if the agent's prior credence function c satisfies CIndi, then c(X | U^E_{c_λ}) = c_λ(X) for every X ∈ F, provided that c(U^E_{c_λ}) > 0.
It is important to emphasize that the above result may be understood as providing guidance for rational updating in the context of uncertain evidence. For it is often argued that reflection principles regulate rational learning in the sense that an agent cannot violate some instance of the reflection principle and at the same time think that she will form her posterior credences in a rational way (see, e.g., Skyrms 1990, 1997; Huttegger 2013, 2017). For example, when you update your credences in cases involving forgetting or memory loss, you often violate reflection principles, and hence you do not form your posterior credences by rational learning. 11 If so, then the above result tells us how learning uncertain evidence can be deemed rational. After all, it implies that when VCondi is combined with CIndi, the agent's credences satisfy an instance of the reflection principle, and hence are mandated by a rational learning process. 11 For an excellent survey of these cases, see Briggs (2009).

Graves's extension of VET
In this section, we present a variant of Graves's extension of VET. Before doing so, however, let us highlight a few assumptions behind this result.
First, we assume that an agent faces a decision problem ⟨A, S, c, u⟩, in which A is a finite partition of propositions A_1, ..., A_n representing the actions available to her, S is a finite set of propositions S_1, ..., S_n representing the possible states of the world upon which the consequences of the actions depend, c is the agent's credence function in her credal state (W, F, c), and u is the agent's utility function, which assigns to propositions of the form A ∧ S a number that measures the pragmatic utility that would result for the agent were the act A to be performed in state S. 12 Second, following Graves, we assume that before undergoing a learning experience, the agent contemplates a finite set R of possible Jeffrey shifts. Each of these possible Jeffrey shifts can be represented as a proposition R^E_{c_λ} that specifies the agent's assignment of posterior credences to the members of E. Thus, what is crucial to Graves's argument is that there exists an extension (W*, F*, c*) of (W, F, c) satisfying two conditions, (C1) and (C2). With the above in mind, observe first that, prior to undergoing a Jeffrey shift, the agent would choose the prior Bayes act, i.e. the act A that maximizes the prior expected utility

E_c[u(A)] = Σ_{S∈S} c(S) · u(A ∧ S). (3)
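The notion of a prior Bayes act can be given a minimal sketch; the acts, states, credences, and utilities below are hypothetical placeholders, not the paper's own example.

```python
# Minimal sketch of a decision problem <A, S, c, u> and the prior Bayes
# act: the act maximizing expected pragmatic utility relative to the
# prior credence function c. All numbers are hypothetical.

def expected_utility(act, credence, utility):
    # E_c[u(A)] = sum over states S of c(S) * u(A & S)
    return sum(credence[s] * utility[(act, s)] for s in credence)

def bayes_act(acts, credence, utility):
    # The act with the highest expected utility by the lights of c.
    return max(acts, key=lambda a: expected_utility(a, credence, utility))

# Hypothetical two-act, two-state problem.
acts = ["A1", "A2"]
credence = {"S1": 0.7, "S2": 0.3}
utility = {("A1", "S1"): 10, ("A1", "S2"): 0,
           ("A2", "S1"): 4,  ("A2", "S2"): 6}

# A1 has expected utility 7.0, A2 has 4.6, so A1 is the prior Bayes act.
assert bayes_act(acts, credence, utility) == "A1"
```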

By (G1), (3) is equivalent to its counterpart computed relative to the extended prior credence function c*: 13

E_{c*}[u(A)] = Σ_{S*∈S*} c*(S*) · u(A ∧ S*). (4)
After undergoing a learning experience which results in a Jeffrey shift C^E_λ, the agent would choose an act A to maximize

E_{c_λ}[u(A)] = Σ_{S∈S} c_λ(S) · u(A ∧ S). (5)

That is, she would choose the act A that maximizes expected pragmatic utility calculated relative to the agent's posterior credence function, c_λ, recommended by JCondi. By Proposition 1, (5) is equivalent to

E_{c*}[u(A) | R^E_{c_λ}]. (6)

Since we want to know ex ante whether updating by JCondi is always helpful in making practical decisions, we need to determine an expectation of max_{A∈A} E_{c*}[u(A) | R^E_{c_λ}], which is the maximal value of making a choice after undergoing a Jeffrey shift (the value of the posterior Bayes act). Because it is assumed that the agent considers ex ante a finite set of possible Jeffrey shifts, and these shifts can be represented by propositions R^E_{c_λ} from the partition R, this can be achieved by weighting the posterior value of the Bayes act by the prior credence of R^E_{c_λ}. Hence, the expectation of the posterior Bayes act is:

Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · max_{A∈A} E_{c*}[u(A) | R^E_{c_λ}]. (7)

Note that, by the equivalence of (5) and (6), (7) is equivalent to

Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · max_{A∈A} E_{c_λ}[u(A)]. (8)

Now let us introduce the quantity Δu_{c*_λ}(A_R, A_max): the difference between the values of the maximizer of (6), A_R, and the maximizer of (4), A_max, as assessed by the agent's posterior credence c*_λ(S*) = c*(S* | R^E_{c_λ}) for every S* ∈ F*:

Δu_{c*_λ}(A_R, A_max) = E_{c*_λ}[u(A_R)] − E_{c*_λ}[u(A_max)]. (9)

Importantly, notice that Δu_{c*_λ}(A_R, A_max) ≥ 0: if A_max = A_R, the difference is zero, and if A_max ≠ A_R, it is positive, since A_R maximizes expected utility by the lights of c*_λ. We can also determine the expectation of Δu_{c*_λ}(A_R, A_max) relative to the agent's extended prior function c* over R:

Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · Δu_{c*_λ}(A_R, A_max). (10)

Given the above notions, it is now not difficult to show that the maximal value of choosing now cannot be greater than the expected value of choosing after updating by JCondi. To begin with, observe that since Δu_{c*_λ}(A_R, A_max) ≥ 0, its expected value is also non-negative, and thus:

0 ≤ Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · Δu_{c*_λ}(A_R, A_max). (11)

Then, by the definition of (9), we get

0 ≤ Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · (E_{c*_λ}[u(A_R)] − E_{c*_λ}[u(A_max)]). (12)

By using (6), (12) can be rearranged as

Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · E_{c*_λ}[u(A_max)] ≤ Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · E_{c*_λ}[u(A_R)]. (13)

By the law of total probability we have c*(S*) = Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · c*(S* | R^E_{c_λ}), so the left-hand side of (13) is simply E_{c*}[u(A_max)], the value of the prior Bayes act as given by (4). And by (4) and (7), we get

max_{A∈A} E_{c*}[u(A)] ≤ Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · max_{A∈A} E_{c*}[u(A) | R^E_{c_λ}].

Hence, since (3) is equivalent to (4) and (5) is equivalent to (6), we get

max_{A∈A} E_c[u(A)] ≤ Σ_{R^E_{c_λ}∈R} c*(R^E_{c_λ}) · max_{A∈A} E_{c_λ}[u(A)],

as required.
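The inequality just derived can be spot-checked numerically. The sketch below assumes that, as the extended credal state requires, the prior over states is the mixture of the anticipated posterior credence functions weighted by the prior credences of the shifts; all numbers are hypothetical.

```python
# Numerical check of the value-of-evidence inequality for anticipated
# Jeffrey shifts: the prior Bayes-act value never exceeds the expected
# value of choosing after the shift, provided the prior is the mixture
# of the anticipated posteriors. All numbers are hypothetical.

def eu(act, cred, util):
    return sum(cred[s] * util[(act, s)] for s in cred)

acts = ["A1", "A2"]
util = {("A1", "S1"): 10, ("A1", "S2"): 0,
        ("A2", "S1"): 2,  ("A2", "S2"): 8}

# Anticipated shifts R1, R2: prior credences and posterior credences.
shifts = {"R1": 0.6, "R2": 0.4}
post = {"R1": {"S1": 0.9, "S2": 0.1},
        "R2": {"S1": 0.2, "S2": 0.8}}

# Prior over states via the law of total probability.
prior = {s: sum(shifts[r] * post[r][s] for r in shifts)
         for s in ["S1", "S2"]}

prior_value = max(eu(a, prior, util) for a in acts)
expected_post_value = sum(
    shifts[r] * max(eu(a, post[r], util) for a in acts) for r in shifts)

# Choosing after the shift is, in expectation, at least as good.
assert prior_value <= expected_post_value + 1e-12
```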
Thus, we have shown that if we utilize an extended credal state in which c* satisfies conditions (C1) and (C2), then the agent's update in response to uncertain evidence, as recommended by JCondi, won't result in making foreseeably harmful practical decisions. However, Graves's extension of VET seems problematic for at least two reasons. First, one might claim that it imposes unrealistic cognitive demands on decision-makers. That is, it requires that they have ex ante prior credences over all the possible Jeffrey shifts they could undergo in the future. But it is doubtful that, in Ann's case, before looking at the drawn marble, she is aware of all the possible ways in which her visual experience could prompt changes in her initial credences. For a Jeffrey shift is understood as the agent's probabilistic judgement over E, or what she takes away from experience rather than what experience delivers to her. And it is perfectly possible that, when undergoing her learning experience, Ann would actually judge probabilistically in a way that she has not considered prior to her learning experience. But how fatal is this objection?
In response, one may argue that this is just an instance of a more general problem: it is implausible that every piece of evidence you might learn is one about which you had a prior credence. After all, it is tempting to think that you might learn a proposition that you do not already grasp. But if this is so, then this problem also affects the original VET, which shows that updating by BCondi maximizes expected pragmatic utility. After all, it assumes that each proposition that you might conditionalize upon is already contained in the partition E over which you have prior credences. Hence, the cognitive demands imposed by Graves's extension of VET are no more unrealistic than those imposed by the original VET.
Still, however, we think that this problem afflicts Graves's approach more severely. For, as is often claimed, JCondi does not even tell us what partition of propositions should be affected by a given learning experience (Christensen 1992; Weisberg 2009). So, prior to her learning experience, the agent might not even be able to correctly identify the partition of propositions over which she would actually undergo a Jeffrey shift. That is, the agent might contemplate ex ante various Jeffrey shifts over a partition E when in fact her learning experience would directly affect an entirely different partition E′. For example, although in Ann's case she may stipulate ex ante that her learning experience would directly affect the partition {B, V}, there is no normative guidance as to how she could rule out the possibility that it would actually affect the partition {B ∧ L, V ∧ L}, where L is the proposition that the lighting is dim. Therefore, by assuming that, prior to a learning experience, the agent is always in a position to correctly grasp the set E that would be directly affected by her anticipated learning experience, Graves's approach imposes far more unrealistic cognitive demands than the original VET.
The second problem involves accuracy considerations, and, as we will argue, is more threatening to Graves's approach than the first one. If we grant that the only goal of updating our credences is to improve our practical decisions, i.e. to maximize expected pragmatic utility, then Graves's approach seems successful. But many philosophers claim that, alongside practical goals, we should update our credences in a way that could also be regarded as epistemically rational. In particular, some argue that our updated credences should minimize expected inaccuracy, i.e. they should be as close as possible to the truth, from the standpoint of our prior credences (e.g. Greaves and Wallace 2006;Leitgeb and Pettigrew 2010). If this is the case, then we will argue that Graves's extension of VET cannot guarantee that updating on uncertain evidence would be both practically and epistemically rational. Since this argument requires more scrutiny, we devote the next section to it.

Graves's extension of VET and accuracy
In the previous section, we saw that if the agent obeys JCondi, then updating on uncertain evidence will never lead, in expectation, to worse practical decisions. In this section, we argue that this approach is in tension with a purely epistemic approach which seeks to justify the agent's dynamic norms for credences as a consequence of the rational pursuit of accuracy. That is, if we apply the machinery of accuracy-first epistemology, then we must conclude that there appears to be no acceptable way to show that JCondi minimizes expected inaccuracy. Therefore, for an expected inaccuracy minimizer, Graves's extension of VET cannot be satisfactory, since it establishes VET for JCondi, and this update rule is hardly, if ever, justifiable in terms of the rational pursuit of accuracy.
To make this point more precise, we assume, following accuracy-firsters, that we have a local epistemic disutility function (or local inaccuracy measure) for each proposition X, s_X, which takes X's truth-value at w, w(X), and the credence c(X), and returns the local epistemic disutility (or inaccuracy) of having that credence in X at a world in which X's truth-value is w(X), s_X(w(X), c(X)). There are some desirable properties that this function should have, and they single out the class of strictly proper scoring rules. These properties are the following:

Extensionality: s_X is extensional if it can be thought of as two functions: s_X(1, x), which gives the local inaccuracy of having credence x in X when X is true, and s_X(0, x), which gives the local inaccuracy of having credence x in X when X is false.

Strict Propriety: for every probabilistic credence p and every x ∈ [0, 1],

p · s_X(1, p) + (1 − p) · s_X(0, p) ≤ p · s_X(1, x) + (1 − p) · s_X(0, x),

with equality iff x = p.
Continuity: s_X is a continuous function of x on [0, 1].
More informally, Extensionality says that the inaccuracy of having credence x in proposition X depends only on whether X is true or false. Strict Propriety tells us that an agent with probabilistic credence p in proposition X expects only that credence to have the lowest inaccuracy. And Continuity says that the inaccuracy of having credence x in X varies continuously with that credence.
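Strict Propriety is easy to verify numerically for the quadratic score discussed below; here is a small grid search, with p = 0.3 as an arbitrary example credence.

```python
# The quadratic score q(i, x) = (i - x)^2 is strictly proper: an agent
# with probabilistic credence p expects her own credence to be the least
# inaccurate. We check this on a grid; p = 0.3 is an arbitrary example.

def expected_q(p, x):
    # Expected local inaccuracy of credence x, by the lights of credence p.
    return p * (1 - x) ** 2 + (1 - p) * (0 - x) ** 2

p = 0.3
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda x: expected_q(p, x))

# The grid minimum sits exactly at the agent's own credence.
assert abs(best - p) < 1e-12
```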
A standard expected inaccuracy-minimization argument for an update rule says that, before the evidence is in, an agent should expect to have less inaccurate posterior credences recommended by that rule than by any other update rule. To establish such an argument, accuracy-firsters say that we should evaluate our posterior credences by looking at their expected inaccuracies, where the expectation is taken relative to a prior credence function. Can we establish such an argument in the case where your posterior credence in X is recommended by JCondi?
As shown by Leitgeb and Pettigrew (2010), the answer is negative. Their point is that, under the quadratic scoring rule, one's posterior credence in any X ∈ F that results from JCondi does not minimize the expected local inaccuracy given by

Σ_{w∈W} c({w}) · s_X(w(X), c_λ(X)).

That is, if our goal is to choose a posterior credence function c_λ that assigns credence c_λ(E) to each E ∈ E (i.e. satisfies the constraints imposed by a Jeffrey shift) and is minimal with respect to the expected local inaccuracy of the credence it assigns to each X ∈ F by the lights of one's prior credence function c over the set of possible worlds W, then this cannot be achieved by selecting the posterior credence function that results from JCondi. A crucial assumption of this result is that the local inaccuracy is measured by the quadratic scoring rule, q_X(i, x) = (i − x)², where i = 1 or 0. That is, this rule gives (i) the squared difference (1 − x)² between the credence x in X and the value w(X) = 1 of the indicator function of w (w(X) = 1 if X is true at w), and (ii) the squared difference (0 − x)² between the credence x in X and the value w(X) = 0 of the indicator function of w (w(X) = 0 if X is false at w). Importantly, q_X is a strictly proper scoring rule. To make Leitgeb and Pettigrew's point more concrete, let us first introduce their alternative update rule:

LPCondi: Given a Jeffrey shift C^E_λ, let d_E be the unique real number such that Σ_{w∈E} (c({w}) + d_E) = c_λ(E). Then, the agent's posterior credence function c_λ should be such that, for w ∈ E, c_λ({w}) = c({w}) + d_E.

Now, suppose that initially Ann does not rule out any of the following possible worlds:

• w_1, in which the selected urn is X and the drawn marble is blue.
• w_2, in which the selected urn is X and the drawn marble is violet.
• w_3, in which the selected urn is Y and the drawn marble is blue.
• w_4, in which the selected urn is Y and the drawn marble is violet.
More precisely, she assigns prior credences to these worlds. Ann then looks at the drawn marble in dim lighting, which results in a Jeffrey shift over the partition {B, V}. In response to this Jeffrey shift, Ann applies JCondi and obtains her posterior credences over w_1, ..., w_4. Let us then apply the quadratic scoring rule, q_{{w_1}}, as our local inaccuracy measure in order to determine the expected local inaccuracy of Ann's posterior credence in proposition {w_1} that results from JCondi, E_c[q_{{w_1}}(w({w_1}), c_λ({w_1}))]. If we let x = c_λ({w_1}), this expectation equals c({w_1}) · (1 − x)² + (1 − c({w_1})) · x², and it exceeds the minimum attainable under the Jeffrey-shift constraints; it follows that Ann's posterior credence in {w_1} determined by JCondi does not minimize the expected local inaccuracy. Moreover, as shown by Leitgeb and Pettigrew (2010), the situation is no better when, instead of focusing on the expected local inaccuracy, we focus on the expected global inaccuracy. Firstly, given a strictly proper scoring rule (or local inaccuracy measure) for proposition X, s_X, we may define a global inaccuracy measure for the credence function c as follows:

I_s(c, w) = Σ_{X∈F} s_X(w(X), c(X)).

If we define I_s in this way, we may say that the global inaccuracy measure is generated from s. As is easy to see, I_s is an additive inaccuracy measure, for the inaccuracy of c is the sum of the inaccuracies of the individual credences that c assigns to the propositions in F. Given an additive and strictly proper 14 I_s, we can define the expected global inaccuracy of c_λ from the standpoint of c as follows:

Σ_{w∈W} c({w}) · I_s(c_λ, w).

Specifically, Leitgeb and Pettigrew adopt the Brier score, i.e. a global inaccuracy measure generated from the quadratic scoring rule q_X:

B(c, w) = Σ_{X∈F} (w(X) − c(X))².

Secondly, suppose that you are governed by the following norm: you should adopt the posterior credence function that satisfies the constraints given by the c_λ(E)'s for all E ∈ E, and is minimal amongst the posterior credence functions thus constrained with respect to expected global inaccuracy given by Σ_{w∈W} c({w}) · B(c_λ, w). Then, Leitgeb and Pettigrew show that the posterior credence function that satisfies the above norm does not result from JCondi, but is mandated by LPCondi. So, again, if your goal is to minimize expected global inaccuracy, you should not update by JCondi. Is there any way to salvage the idea of inaccuracy minimization for the case of JCondi? Levinstein (2012) has suggested that this could be achieved if we replace the Brier score with the following logarithmic global inaccuracy measure:

L(c, w) = −ln(c({w})).

More precisely, L takes the inaccuracy of a credence function c at world w to be the negative of the natural logarithm, ln, of the credence it assigns to w. Armed with the inaccuracy measure L, Levinstein has shown that one's probabilistic posterior credence function recommended by JCondi satisfies the constraints given by the c_λ(E)'s for all E ∈ E and minimizes the expected global inaccuracy Σ_{w∈W} c({w}) · L(c_λ, w). However, this accuracy-based vindication of JCondi has a number of complications. First, L is not generated from a strictly proper scoring rule, and hence is not itself strictly proper. But without a strictly proper scoring rule, neither an expected-accuracy argument for BCondi (Greaves and Wallace 2006) nor an accuracy-dominance argument for BCondi (Briggs and Pettigrew 2020) can be established. Thus, if we adopt L, we rule out at least two well-trodden accuracy-based justifications of the most popular updating rule, BCondi. Second, L is not additive, for it only considers credences assigned to singleton propositions {w}, and says nothing about credences in the more coarse-grained propositions in F.
Consequently, this measure cannot distinguish between probabilistic and non-probabilistic credence functions, and hence cannot be used to establish the accuracy-dominance argument for probabilism, i.e. the norm which says that one's credences ought to be probabilities. 15 Thus, it appears that L is hardly defensible.
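The contrast between the two measures can be reproduced numerically. The priors over w_1, ..., w_4 and the Jeffrey shift below are hypothetical stand-ins (the paper's own figures are not reproduced here): LPCondi does better by summed expected quadratic inaccuracy over the singleton propositions, while JCondi does better by Levinstein's measure L.

```python
import math

# Compare JCondi and LPCondi posteriors under two inaccuracy measures,
# restricted to the singleton propositions {w} for simplicity.
# All numbers are hypothetical.

prior = {"w1": 0.2, "w2": 0.3, "w3": 0.3, "w4": 0.2}
partition = {"B": ["w1", "w3"], "V": ["w2", "w4"]}
shift = {"B": 0.7, "V": 0.3}  # assumed Jeffrey-shift constraints

jc, lp = {}, {}
for e, ws in partition.items():
    ce = sum(prior[w] for w in ws)
    d = (shift[e] - ce) / len(ws)         # LPCondi's uniform shift d_E
    for w in ws:
        jc[w] = prior[w] * shift[e] / ce  # JCondi posterior
        lp[w] = prior[w] + d              # LPCondi posterior

def exp_quadratic(post):
    # Summed expected quadratic inaccuracy over the singletons {w}.
    return sum(prior[w] * (1 - post[w]) ** 2
               + (1 - prior[w]) * post[w] ** 2 for w in prior)

def exp_log(post):
    # Expected logarithmic inaccuracy: -sum_w c(w) * ln(post(w)).
    return -sum(prior[w] * math.log(post[w]) for w in prior)

assert exp_quadratic(lp) < exp_quadratic(jc)  # LPCondi wins under Brier
assert exp_log(jc) < exp_log(lp)              # JCondi wins under L
```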
The above considerations show that an expected inaccuracy minimizer cannot accept JCondi as an updating rule, since it does not minimize expected inaccuracy under the class of strictly proper scoring rules, and even if it does so under the inaccuracy measure L, this accuracy-based justification of JCondi is hardly acceptable. Hence, although Graves's argument establishes VET in the context of uncertain evidence, it does so for JCondi, and so cannot be accepted by the expected inaccuracy minimizer. If we wish to show that gathering uncertain evidence is both practically and epistemically rational in expectation, we should consider some alternatives to Graves's approach.

An alternative approach
In this section, we present an alternative account of the idea that learning from uncertain evidence cannot lead an agent to expect to make worse practical decisions. Our approach utilizes the following assumptions. First, we assume that the agent assigns prior credences to propositions U^E_{c_λ} from the finite set U, which is a partition of W. And since U is a partition, exactly one proposition U^E_{c_λ} will be true after the agent's learning experience. Importantly, the finite set U is already included in W in the small space (W, F, c). Hence, contrary to Graves's approach, we do not need to invoke an extended credal state (W*, F*, c*) in order to account for the practical value of updating on uncertain evidence. Recall that, in Graves's approach, if the propositions R^E_{c_λ} were already included in the small space (W, F, c), then, as we have argued, there would be no need to use JCondi in the first place, and so Graves's approach would in fact establish VET for the case where one updates by conditionalizing on the proposition R^E_{c_λ}. But since we do not assume that the agent updates by JCondi, and hence do not assume that no proposition in (W, F, c) describing her experience can be learned for certain, we take it to be plausible that U is already included in W and that the agent learns exactly one proposition U^E_{c_λ} for certain. Second, our approach assumes that when experience results in the proposition U^E_{c_λ} being true, the agent settles on the likelihood ratios for U^E_{c_λ} and updates her credences by dint of VCondi. Moreover, in order to ensure that the agent adopts the posterior credence function c_λ in response to evidence E alone, we assume that her prior credences obey CIndi. Thus, we are assured that updating by dint of VCondi is a rational learning process.
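These assumptions can be given a small numerical sketch (all figures hypothetical, with states identified with the evidence partition for simplicity): two anticipated learning events carry reciprocal likelihood ratios, so the prior is the c(U)-weighted mixture of the anticipated VCondi posteriors, as the reflection property secured by CIndi requires; choosing after the update is then, in expectation, at least as good as choosing now.

```python
# VCondi update plus a value-of-evidence check, hypothetical numbers.
# States coincide with the evidence partition {E1, E2}; the anticipated
# events U1, U2 carry reciprocal likelihood ratios, so the prior is the
# mixture of the anticipated posteriors (mirroring what CIndi secures).

def vcondi(prior, alphas):
    # c_lambda(E) = alpha_E * c(E) / sum over E' of alpha_E' * c(E')
    z = sum(alphas[e] * prior[e] for e in prior)
    return {e: alphas[e] * prior[e] / z for e in prior}

def eu(act, cred, util):
    return sum(cred[s] * util[(act, s)] for s in cred)

prior = {"E1": 0.5, "E2": 0.5}
events = {"U1": {"E1": 4.0, "E2": 1.0},   # experience favouring E1
          "U2": {"E1": 1.0, "E2": 4.0}}   # experience favouring E2
c_U = {"U1": 0.5, "U2": 0.5}

acts = ["A1", "A2"]
util = {("A1", "E1"): 10, ("A1", "E2"): 0,
        ("A2", "E1"): 0,  ("A2", "E2"): 10}

posts = {u: vcondi(prior, events[u]) for u in events}
# Sanity check: the prior is the c_U-mixture of anticipated posteriors.
for e in prior:
    assert abs(sum(c_U[u] * posts[u][e] for u in events) - prior[e]) < 1e-12

prior_value = max(eu(a, prior, util) for a in acts)
expected_post_value = sum(
    c_U[u] * max(eu(a, posts[u], util) for a in acts) for u in events)
assert prior_value <= expected_post_value + 1e-12
```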
With the above assumptions in mind, let us now show how VET can be established for an agent who faces a decision problem ⟨A, S, c, u⟩ and updates, in response to uncertain evidence, by VCondi. Before receiving uncertain evidence, the agent would choose the act A which maximizes:

E_c[u(A)] = Σ_{S∈S} c(S) · u(A ∧ S). (26)

After undergoing a learning experience which provides a set of likelihood ratios for some U^E_{c_λ} in U, the agent would choose the posterior Bayes act, i.e. an act A which maximizes

E_{c_λ}[u(A)] = Σ_{S∈S} c_λ(S) · u(A ∧ S). (27)

Note that the posterior Bayes act is an act that maximizes expected pragmatic utility relative to the agent's posterior credence function, c_λ, mandated by VCondi. Since U is a partition, we can determine the expectation of the posterior Bayes act as follows:

Σ_{U^E_{c_λ}∈U} c(U^E_{c_λ}) · max_{A∈A} E_{c_λ}[u(A)]. (28)

Now the difference, Δu_{c_λ}(A_U, A_max), between the values of the maximizer of (27), A_U, and the maximizer of (26), A_max, as assessed by the agent's posterior credence c_λ(S) = Σ_{E∈E} α_E · c(S ∧ E) / Σ_{E∈E} α_E · c(E), can be given as follows:

Δu_{c_λ}(A_U, A_max) = E_{c_λ}[u(A_U)] − E_{c_λ}[u(A_max)]. (29)

Note that Δu_{c_λ}(A_U, A_max) ≥ 0, for if A_max = A_U, then Δu_{c_λ}(A_U, A_max) = 0, and if A_max ≠ A_U, then Δu_{c_λ}(A_U, A_max) > 0. We can also calculate the expectation of Δu_{c_λ}(A_U, A_max) as follows:

Σ_{U^E_{c_λ}∈U} c(U^E_{c_λ}) · Δu_{c_λ}(A_U, A_max). (30)

Now, in order to establish that, in expectation, updating on uncertain evidence by VCondi cannot lead you to make worse practical decisions, we will show that

max_{A∈A} E_c[u(A)] ≤ Σ_{U^E_{c_λ}∈U} c(U^E_{c_λ}) · max_{A∈A} E_{c_λ}[u(A)]. (31)

To begin with, observe first that since Δu_{c_λ}(A_U, A_max) ≥ 0, the expectation of its value must also be non-negative, and so:

0 ≤ Σ_{U^E_{c_λ}∈U} c(U^E_{c_λ}) · Δu_{c_λ}(A_U, A_max). (32)

By (29), we have

0 ≤ Σ_{U^E_{c_λ}∈U} c(U^E_{c_λ}) · (E_{c_λ}[u(A_U)] − E_{c_λ}[u(A_max)]). (33)

By using (27), we get

Σ_{U^E_{c_λ}∈U} c(U^E_{c_λ}) · E_{c_λ}[u(A_max)] ≤ Σ_{U^E_{c_λ}∈U} c(U^E_{c_λ}) · max_{A∈A} E_{c_λ}[u(A)]. (34)

Now, assuming CIndi, we get, by Proposition 2, that c_λ(S) = c(S | U^E_{c_λ}), so the left-hand side of (34) equals, by the law of total probability, E_c[u(A_max)] = max_{A∈A} E_c[u(A)]; hence (31) follows, as required. This is so because taking E as one's evidence is essentially different from taking E′ as one's evidence. 16 One may also worry that the use of VCondi in our approach is redundant, for if the agent learns a proposition U^E_{c_λ} for certain, she might simply conditionalize on U^E_{c_λ} and set her posterior credence function equal to c(· | U^E_{c_λ}). In particular, she might determine her posterior credences in every E ∈ E as follows: c_λ(E) = c(E | U^E_{c_λ}).
To answer this objection, we would like to stress that VCondi appears to be a more user-friendly update rule than Bayes conditioning on U^E_{c_λ}. After all, Bayes' theorem tells us that c(E | U^E_{c_λ}) = c(U^E_{c_λ} | E) · c(E) / c(U^E_{c_λ}), and so requires the agent to determine the absolute likelihood for U^E_{c_λ}, c(U^E_{c_λ} | E). And this is a hard task if one wants this quantity to be determined in a reasonable way. For example, it is hard to say what the absolute likelihood c(U^E_{c_λ} | B) should be in Ann's case. VCondi enables us to mitigate this problem, for it allows the agent to express only the likelihood ratios of the form c(U^E_{c_λ} | E) : c(U^E_{c_λ} | E′), without the need of determining the absolute likelihoods.
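That only the ratios matter is easy to confirm: rescaling every likelihood by a common factor leaves the VCondi posterior untouched, which is why the agent need never settle the absolute likelihoods. The priors and ratios below are hypothetical.

```python
# Only the ratios of the likelihoods matter to a VCondi update: rescaling
# all of them by a common factor leaves the posterior unchanged.
# All numbers are hypothetical.

def vcondi(prior, alphas):
    z = sum(alphas[e] * prior[e] for e in prior)
    return {e: alphas[e] * prior[e] / z for e in prior}

prior = {"B": 0.4, "V": 0.6}
post1 = vcondi(prior, {"B": 4.0, "V": 1.0})
post2 = vcondi(prior, {"B": 0.04, "V": 0.01})  # same 4:1 ratio

# Identical posteriors: the common scale factor cancels out.
for e in prior:
    assert abs(post1[e] - post2[e]) < 1e-12
```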

Conclusion
The question of how to capture learning without certainties is one of the pressing problems within Bayesian epistemology. A prevalent view among Bayesian epistemologists is that JCondi is an appropriate updating mechanism in the context of learning from uncertain evidence. This rule requires uncertain evidence to be specified as a redistribution of the agent's credences over the propositions in some partition of a set of possibilities. Within a decision-theoretic context, Graves has shown that gathering uncertain evidence so understood is pragmatically rational, i.e. updating by dint of JCondi is worth waiting for in advance of making a practical decision. We have argued that Graves's approach is problematic, for it imposes highly unrealistic demands on the agent, and, more importantly, is in tension with a purely epistemic vindication of updating on uncertain evidence. In its stead, we have suggested replacing JCondi with a different updating rule in the context of uncertain evidence, VCondi. This rule requires uncertain evidence to be specified as a particular set of likelihood ratios. As we have tried to show, VCondi gets cases like Ann's right, and when combined with a fairly plausible assumption, CIndi, is equivalent to a particular sort of reflection principle. Armed with this rule, we have shown how VET can be established when one acquires uncertain evidence, and how updating on uncertain evidence minimizes expected epistemic disutility. We want to emphasize that it was not our goal to show that VCondi is the correct updating rule in the context of learning from uncertain evidence. Rather, our aim was to show that the way we specify uncertain evidence matters to whether updating on uncertain evidence is both pragmatically and epistemically rational. When we specify uncertain evidence as a set of likelihood ratios and employ VCondi, we can show that gathering uncertain evidence maximizes expected pragmatic utility and minimizes expected epistemic disutility. But this goal is hard to achieve, if ever, when uncertain evidence is understood as a Jeffrey shift in response to which we use JCondi. 16 For a similar view, see Gallow (2019).