## Abstract

There is an ongoing debate in the philosophical literature about whether the conditionals that are central to deliberation are subjunctive or indicative conditionals and, if the latter, which semantics of the indicative conditional is compatible with the role that conditionals play in deliberation. We propose a possible-world semantics where conditionals of the form “if I take action *a* the outcome will be *x*” are interpreted as material conditionals. The proposed framework is illustrated with familiar examples, and both qualitative and probabilistic beliefs are considered. Issues such as common-cause cases and ‘Egan-style’ cases are discussed.


## Notes

As a reviewer pointed out, a decision/game theorist might argue as follows: “We have the decision matrix. It is understood that we can select an act but not a state, and that we have beliefs over states but not over acts. What more is needed?” The reviewer points out that one could read the decision matrix as a list of counterfactuals and take expected utility maximization to be a description of the process of choice. Why worry about more than that? The reviewer goes on to state “For many of us the answer is clear. We have inherent intellectual interest in modeling the reasoning process; we believe that it will be very useful to understand the psychology of decision making, and probably indispensable if we want to do Artificial Intelligence (AI).” However, the focus of this paper is not on the psychology of decision making, nor on its AI implementation, but rather, as explained below, on the complexity of the conceptual apparatus that one needs in order to model deliberation.

One then declares the sentence “if \(\phi \) were the case then \(\psi \) would be the case” to be true at a possible world \(\omega \) if \(\psi \) is true at the most similar world(s) to \(\omega \) where \(\phi \) is true.
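The closest-world truth clause can be sketched in code. This is a toy illustration, not part of the paper's framework: the three-world model, the similarity measure (crude agreement-counting on atomic facts), and the valuation are all hypothetical assumptions.

```python
# A toy sketch of the closest-world truth clause. A world is a dict of
# atomic facts; similarity to a base world is measured, crudely, by
# counting the atomic facts on which the two worlds agree.

worlds = [
    {"phi": False, "psi": False},  # w0
    {"phi": True,  "psi": True},   # w1
    {"phi": True,  "psi": False},  # w2
]

def similarity(w, base):
    # Higher = more similar; a world is maximally similar to itself.
    return sum(w[p] == base[p] for p in base)

def would(phi, psi, base):
    # "if phi were the case then psi would be the case" is true at `base`
    # iff psi holds at every most-similar world where phi is true.
    phi_worlds = [w for w in worlds if phi(w)]
    if not phi_worlds:
        return True  # vacuously true when phi holds at no world
    best = max(similarity(w, base) for w in phi_worlds)
    return all(psi(w) for w in phi_worlds if similarity(w, base) == best)

is_phi = lambda w: w["phi"]
is_psi = lambda w: w["psi"]
```

Note that when the base world itself satisfies `phi`, it is its own unique closest `phi`-world, so there the subjunctive conditional collapses to the material one.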

An analogy might be useful: suppose that a mathematical theorem has been proved using the “heavy” apparatus of algebraic topology and somebody then offers a much simpler proof that uses only elementary tools. Rather than asking “why should we care *how* we prove a theorem, if we already know that it is true?”, mathematicians would probably value the conceptual clarification provided by the elementary proof.

A reviewer observed that “Even though there seems to be no agreement whatsoever concerning the meaning of indicative conditionals, the idea that a natural language conditional is a material conditional is by far the hardest to maintain” and added “What would it mean for the proposal put forward in the paper? That a decision maker can make decisions by interpreting conditionals in a completely artificial way?”

Nothing depends on this assumption, but it makes for a more convenient graphical representation of the relation \({\mathcal {B}}\).

In other words, for any two possible worlds \(\omega \) and \(\omega ^{\prime }\) that are enclosed in a rounded rectangle, \(\{(\omega ,\omega ),(\omega ,\omega ^{\prime }),(\omega ^{\prime },\omega ),(\omega ^{\prime },\omega ^{\prime })\}\subseteq {\mathcal {B}}\) (hence the relation is total on the set of possible worlds contained in the rectangle) and if there is an arrow from a possible world \(\omega \) to a rounded rectangle then, for every \(\omega ^{\prime }\) in that rectangle, \((\omega ,\omega ^{\prime })\in {\mathcal {B}}\).

This basic—and very weak—definition of rationality seems to be uncontroversial. For example, it seems to be in accordance with the following definition proposed by Krzyżanowska (2018, p. 4): “If A, in a context C, accepts ‘If \(\phi \), \(\psi \)’, and desires \(\psi \) to be the case, it would be rational for A in C to (attempt to) make \(\phi \) true.”

Unlike Gibbard’s, DeRose’s version has the advantage of not requiring familiarity with the game of poker. DeRose’s example involves a deck with 100 cards.

In the original story, Zack is an accomplice of Pete’s and secretly communicates to Pete the value of Gus’s card. In our simplified version of the story, this is not necessary: Zack knows that Pete either has a 1 – in which case Pete knows that if he plays he loses – or Pete has a 3 – in which case Pete knows that if he plays he wins.

We have chosen utility numbers that would not create confusion with the numbers on the cards. Since the preferences of Pete are taken to be merely ordinal preferences—in the sense that all that is expressed by the utilities is that Pete prefers *W* to *D* and *D* to *L*—any three numbers would do.

Suppose that, contrary to what Zack believes, Pete is *not* rational and chooses to play, so that he loses: the true or actual world turns out to be \(\omega _3\). Can we still claim that Zack is in a position to assert ‘if Pete plays, he will win’? Those who require knowledge in order to validate an assertion as permissible would answer ‘No’; however, we side with those (e.g. Reuter and Brössel 2019) who take the view that the norm of assertion is justified belief: truth or knowledge are not required.

If an action is available to the DM, but he is unaware of it, then it cannot enter into his deliberation. On the other hand, the DM might mistakenly believe that he can perform an action which—as a matter of fact—is *not* available to him (e.g. shooting a gun that he believes to be loaded, whereas in fact it has no bullets), in which case he *will* consider the consequences of taking that action (and, if he attempts to take it, he will be surprised by the outcome).

Edgington (2011, p. 84) writes that “A properly causal decision theory should be up-front about causation”. The function *R* captures the objective causal link from environment and action to outcome. Kyburg (1988) might say that the function *R* represents the DM’s *power* to bring about outcome *z* by taking action *a* in environment *e*. The constraints on how the DM should perceive this causal link are discussed below.

In this vein, Ahmed rewords Egan’s psychopath example as follows (emphasis added): “[...] If you do push button A then it is 99 to 1 that you are a psycho. *This is not because pushing the button makes you a psychopath (it does not)* [...]”

As a matter of fact, at \(\omega _1\) the DM decides to smoke and gets cancer.

As Edgington explains (2011, p. 77), under the hypothesis that smoking and cancer are effects of a common cause, but one does not cause the other,

> The conditional probability of getting cancer, given that you smoke, may still be considerably higher than the conditional probability of getting cancer, given that you don’t smoke (because smoking is a sign that you have the bad gene). But this is no longer a reason not to smoke. You either have the gene or you don’t, and [if you do] refraining from smoking isn’t going to reduce your chances of getting cancer.
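Edgington's point can be checked numerically. The sketch below is a minimal common-cause model with purely illustrative numbers (not taken from Edgington or from the paper): a "bad gene" raises both the probability of smoking and the probability of cancer, while smoking itself has no causal influence on cancer.

```python
# Illustrative common-cause model: smoking and cancer are independent
# conditional on the gene, yet correlated unconditionally.

p_gene = 0.2
p_smoke_given_gene = {True: 0.8, False: 0.2}    # the gene disposes one to smoke
p_cancer_given_gene = {True: 0.5, False: 0.05}  # cancer depends on the gene only

def joint(gene, smoke, cancer):
    # Joint probability of one full specification of the three variables.
    pg = p_gene if gene else 1 - p_gene
    ps = p_smoke_given_gene[gene] if smoke else 1 - p_smoke_given_gene[gene]
    pc = p_cancer_given_gene[gene] if cancer else 1 - p_cancer_given_gene[gene]
    return pg * ps * pc

def p_cancer_given(smoke):
    # Conditional probability of cancer given the smoking status.
    num = sum(joint(g, smoke, True) for g in (True, False))
    den = sum(joint(g, smoke, c) for g in (True, False) for c in (True, False))
    return num / den
```

With these numbers, smoking is evidence for cancer (`p_cancer_given(True)` is about 0.275 versus about 0.076 for `p_cancer_given(False)`) even though, holding the gene fixed, smoking makes no difference to the chance of cancer.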

In Egan’s smoking scenario, assuming that the DM attaches sufficiently high probability to *not* having the genetic condition, smoking has a higher causal expected utility than not smoking; this is the reason why CDT would recommend smoking. However, Egan claims that the rational decision in this case is to refrain from smoking because, even if the DM thinks that he probably does not have the genetic condition (and thus that smoking would not cause cancer), matters are different if the DM supposes that he will smoke. For, *on the assumption that he will smoke*, he is likely to have the genetic condition, and thus is likely to get cancer. It should be noted that, while some authors agree with Egan (e.g. Edgington 2011), others do not (e.g. Joyce 2012; Williamson 2019).

With the usual understanding that if \(\omega \models a\) then \(\omega \) itself is the unique closest world where *a* is true.

By transitivity of \({\mathcal {B}}_0\) (positive introspection of belief), for every formula \(\phi \), the formula \(B_0\phi \rightarrow B_0B_0\phi \) is valid, that is, true at every possible world; in particular, \(B_0(a\rightarrow x)\rightarrow B_0B_0(a\rightarrow x)\) is valid. As noted above, for every formula \(\phi \), the formula \(B_0\phi \rightarrow B_1\phi \) is valid; in particular, \(B_0B_0(a\rightarrow x)\rightarrow B_1B_0(a\rightarrow x)\) is valid. Finally, since, by (12), \(B_0(a\rightarrow x)\) is equivalent to \(a\leadsto x\), it follows that \(B_1B_0(a\rightarrow x)\) is equivalent to \(B_1(a\leadsto x)\). Thus, for every \(\omega \in \Omega \), \(\omega \models a\leadsto x\) if and only if \(\omega \models B_0(a\rightarrow x)\) only if \(\omega \models B_0B_0(a\rightarrow x)\) only if \(\omega \models B_1B_0(a\rightarrow x)\) if and only if \(\omega \models B_1(a\leadsto x)\).
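The divergence between CDT and evidential reasoning in the Egan smoking note can be sketched as a back-of-the-envelope computation. All probabilities and utilities below are hypothetical illustrations, not taken from Egan or from the paper; cancer ensues (with high probability) only if the DM both smokes and has the genetic condition.

```python
# Causal reasoning holds the DM's prior credence in the genetic condition
# fixed; evidential reasoning conditions that credence on the act itself.

p_gene_prior = 0.05      # DM's unconditional credence in the genetic condition
p_gene_if_smoke = 0.9    # but deciding to smoke would be strong evidence for it
p_cancer_if_both = 0.99  # chance of cancer given smoking + condition

U_SMOKE, U_REFRAIN, U_CANCER = 10, 0, -100  # ordinal stand-ins

def eu_smoke(p_gene):
    # Expected utility of smoking, for a given credence in the condition.
    bad = p_cancer_if_both * U_CANCER + (1 - p_cancer_if_both) * U_SMOKE
    return p_gene * bad + (1 - p_gene) * U_SMOKE

causal_eu_smoke = eu_smoke(p_gene_prior)        # positive: CDT says smoke
evidential_eu_smoke = eu_smoke(p_gene_if_smoke) # far below 0 = EU of refraining
```

With these numbers, `causal_eu_smoke` is about 4.56, above the utility of refraining, while `evidential_eu_smoke` is about −88, well below it: holding the prior credence fixed favours smoking, whereas conditioning the credence on the act favours refraining.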

In the example of Fig. 8, we have that \(L\leadsto H\) and \(R\leadsto C\) are true at every world and thus believed at date 1, that is, \(\omega \models B_1(L\leadsto H)\wedge B_1(R\leadsto C)\) for every \(\omega \in \{\alpha ,\beta ,\gamma ,\delta \}.\)

We put quotation marks around the word ‘counterfactual’ to stress that we are not thinking of such formulas as counterfactuals in the sense that is normally understood, namely in the objective sense of the Stalnaker–Lewis theory.

Note, however, that this would still be a *subjective* counterfactual: it might very well be that both faucets were connected to hot water and thus the stated counterfactual would be objectively false, but still subjectively true and thus assertable by Bob.

## References

Aumann, R. (1987). Correlated equilibrium as an expression of Bayesian rationality. *Econometrica*, *55*, 1–18.

Aumann, R. (1995). Backward induction and common knowledge of rationality. *Games and Economic Behavior*, *8*, 6–19.

Battigalli, P., & Bonanno, G. (1999). Recent results on belief, knowledge and the epistemic foundations of game theory. *Research in Economics*, *53*, 149–225.

DeRose, K. (2010). The conditionals of deliberation. *Mind*, *119*, 1–42.

Douven, I. (2016). *The epistemology of indicative conditionals*. Cambridge: Cambridge University Press.

Edgington, D. (2007). On conditionals. In D. M. Gabbay & F. Guenthner (Eds.), *Handbook of philosophical logic* (2nd ed., Vol. 14, pp. 127–222). Berlin: Springer.

Edgington, D. (2011). Conditionals, causation and decision. *Analytic Philosophy*, *52*(2), 75–87.

Egan, A. (2007). Some counterexamples to causal decision theory. *The Philosophical Review*, *116*(1), 93–114.

Gibbard, A. (1981). Two recent theories of conditionals. In W. L. Harper, R. Stalnaker, & G. Pearce (Eds.), *Ifs: Conditionals, belief, decision, chance, and time* (pp. 211–247). Dordrecht: Springer.

Gibbard, A., & Harper, W. L. (1978). Counterfactuals and two kinds of expected utility. In W. L. Harper, R. Stalnaker, & G. Pearce (Eds.), *Ifs: Conditionals, belief, decision, chance, and time* (pp. 153–190). Dordrecht: D. Reidel.

Gilboa, I. (1999). Can free choice be known? In C. Bicchieri, R. Jeffrey, & B. Skyrms (Eds.), *The logic of strategy* (pp. 163–174). Oxford: Oxford University Press.

Ginet, C. (1962). Can the will be caused? *The Philosophical Review*, *71*, 49–55.

Goldman, A. (1970). *A theory of human action*. Princeton: Princeton University Press.

Joyce, J. (2012). Regret and instability in causal decision theory. *Synthese*, *187*, 123–145.

Krzyżanowska, K. (2018). Deliberationally useless conditionals. *Episteme*, *17*(1), 1–27.

Krzyżanowska, K., Wenmackers, S., & Douven, I. (2014). Rethinking Gibbard’s riverboat argument. *Studia Logica*, *102*(4), 771–792.

Kyburg, H. E. (1988). Powers. In W. L. Harper & B. Skyrms (Eds.), *Causation in decision, belief change, and statistics: Proceedings of the Irvine conference on probability and causation* (pp. 71–82). Dordrecht: Springer.

Ledwig, M. (2005). The no probabilities for acts-principle. *Synthese*, *144*, 171–180.

Levi, I. (1986). *Hard choices*. Cambridge: Cambridge University Press.

Levi, I. (1997). *The covenant of reason: Rationality and the commitments of thought*. Cambridge: Cambridge University Press.

Lewis, D. (1973). *Counterfactuals*. Cambridge: Harvard University Press.

Nozick, R. (1969). Newcomb’s problem and two principles of choice. In N. Rescher (Ed.), *Essays in honor of Carl G. Hempel: A tribute on the occasion of his sixty-fifth birthday* (pp. 114–146). Dordrecht: Springer.

Peterson, M. (Ed.). (2015). *The prisoner’s dilemma. Classic philosophical arguments*. Cambridge: Cambridge University Press.

Reuter, K., & Brössel, P. (2019). No knowledge required. *Episteme*, *16*(3), 303–321.

Spohn, W. (1977). Where Luce and Krantz do really generalize Savage’s decision model. *Erkenntnis*, *11*, 113–134.

Spohn, W. (1999). *Strategic rationality. Forschungsberichte der DFG-Forschergruppe Logik in der Philosophie* (Vol. 24). Konstanz: Konstanz University.

Stalnaker, R. (1968). A theory of conditionals. In N. Rescher (Ed.), *Studies in logical theory* (pp. 98–112). Oxford: Blackwell.

Williamson, T. L. (2019). Causal decision theory is safe from psychopaths. *Erkenntnis*. https://doi.org/10.1007/s10670-019-00125-2


## Acknowledgements

I am grateful to three anonymous reviewers for helpful and constructive comments.


## About this article

### Cite this article

Bonanno, G. The Material Conditional is Sufficient to Model Deliberation.
*Erkenn* **88**, 325–349 (2023). https://doi.org/10.1007/s10670-020-00357-7
