1 Introduction

Uncertainty is a pervasive feature of life. From the smallest choices to the big issues of our society, we may not be sure about what we want, what to do, and what will happen. The extent of our doubts can be paralysing—and yet in the face of all this uncertainty we still need to take action.

The pervasiveness of uncertainty makes it central to many different fields, from philosophy and economics to climate science, medicine, psychology, and management. This means that the concept has been addressed from different perspectives and using different labels. Discussions on uncertainty include mentions of risk, ignorance, ambiguity, unawareness, and indeterminacy, as well as distinctions between epistemic, aleatory, external, internal, fundamental, procedural, objective, subjective, ontological, normative, moral, ethical, additive, multiplicative, Keynesian, Knightian, severe, deep, great, strong, empirical, and modal uncertainty—among others.

My starting point is that uncertainty is problematic because it makes it hard to choose what to do. Ultimately, we need to act effectively in our environment, and conditions of uncertainty hinder our efforts to move adequately in the world. We constantly face practical uncertainty (Peter, 2021), or uncertainty about what to do. In order to resolve it, we need principles to guide our decision making. And for that, we need to understand the uncertainty we face.

Thus, the aim of this paper is to propose a definition of uncertainty in decision-making. Unitary conceptions of uncertainty tend to trace it back to a limitation in the agent’s knowledge due to some lack of information (Dow, 2012; Winkler, 1996), and often see typologies as at most pragmatically useful when dealing with complex issues. But given the relevance that different types of uncertainty may have for decisions, I will present a unitary notion that at the same time illuminates important differences between types of uncertainty. I take uncertainty to be a matter of conflicting attitudes (Sect. 2): on this account, uncertainty stems from disagreement, and dealing with uncertainty means dealing with disagreement. But this disagreement can be radical (Sect. 3), so one key implication of my account is that uncertainty due to radically conflicting attitudes cannot be expected to be resolved through empirical and logical inquiry. I then proceed to explore the mechanisms of this account by building an illustrative typology of uncertainty (Sect. 4). Finally, I apply the analysis to decision-making (Sect. 5) and identify the different types of uncertainty that agents can face, which go beyond the one typically modelled in decision theory.

2 Uncertainty

When thinking about uncertainty, an obvious starting point is to take it to be the opposite of certainty. Reed (2021) distinguishes between two senses of certainty. Psychological certainty is the feeling of a person who is perfectly convinced of the validity of their opinions; uncertainty would then be the feeling of having some degree of doubt. While interesting per se and relevant to understanding human decision processes, this is not the sense of uncertainty that we try to capture here. Epistemic certainty, on the other hand, is a property of beliefs that is usually taken to be stronger than knowledge, because it concerns beliefs that are in some sense indubitable or infallible (Reed, 2021). While according to Reed (2021) we may not currently have a satisfactory account of epistemic certainty, this general picture suggests that uncertainty cannot simply be taken to be lack of knowledge, as knowledge itself can fall short of certainty.

However, Reed’s distinction between psychological and epistemic uncertainty is not exhaustive, because the concept of uncertainty as an epistemic property does not cover the possibility that uncertainty may concern attitudes beyond beliefs. Attitudes are standardly divided into cognitive and non-cognitive, where the first are those that purport to represent reality (e.g., belief, suspicion) and the second are those that do not (e.g., desire, hope, aversion). Among the doubts explored in Sect. 5, it seems possible that at least some concern non-cognitive attitudes—or at least, they could reasonably be interpreted as such. If this is so, then an account of uncertainty that covered doubts about both types of attitudes would be preferable, as it would allow for some degree of doubt about all sorts of judgements, including those expressing non-cognitive attitudes.

As we have seen, the question of uncertainty arises because we need to act effectively in the environment. Thus, all the uncertainty faced by the agent is problematic insofar as it hinders their ability to deal with practical uncertainty, or uncertainty over what they should do. If the agent is uncertain about what to do, this means that there is more than one mutually exclusive alternative, and the agent has (inconclusive) reasons for more than one of them. If either of these conditions failed, the agent would not be uncertain about what to do: the uncertainty arises from conflicting reasons, that is, reasons supporting mutually exclusive alternatives.

Makins (2021) proposes to interpret uncertainty over non-cognitive attitudes by appealing to the psychological notion of ambivalence, the predicament of an agent who has pro tanto competing reasons for alternative options. The agent has pulls in opposing directions—a situation different from indifference, where they have no significant pulls either way. Like uncertainty, ambivalence is a gradable notion, the degree of which is determined by the balance and weight of the competing reasons: the closer the balance, and the stronger the weight, the higher the ambivalence. Thus, for Makins ambivalence is a form of doubt that, unlike uncertainty, is due not to lack of information but to the presence of competing reasons.

However, the opposition between uncertainty for beliefs and ambivalence for other attitudes may not be so strict. Information, evidence, theoretical considerations—these can all be reasons to believe something. The agent is uncertain about some proposition because they have reasons to believe it, but also reasons not to. When uncertain about tomorrow’s weather, I have reasons to believe that it may rain—that we are in a wet season and that it rained today, for instance—and also reasons to believe that it may not—this may be what the forecast says. If all my information points in the direction of rain, then my uncertainty will be significantly lower. I may still consider the information to be inconclusive, which may in itself be a reason not to believe that it will rain. In other circumstances, scarcity of information itself may be a reason not to believe something, and considering something to be possible may be a reason to believe it may happen.

We can then generalise the intuition that uncertainty is a matter of conflicting reasons and propose the following definition:

Uncertainty. An agent is uncertain about some attitude if and only if (i) there are some mutually exclusive alternatives to it, and (ii) the agent does not have conclusive reasons for any of them.

Some clarifications are due in order to understand this definition. By (cognitive or non-cognitive) attitude I refer to an intentional mental state, i.e., a mental state about some content. For cognitive attitudes, I assume that this content is propositional, i.e., that what is believed are propositions: that all attitudes have propositional content is not uncontroversial (Grzankowski, 2015), but it is a simplification that I do not extend to non-cognitive attitudes. Finally, notice that this definition does not capture Reed’s psychological uncertainty. It is possible for someone to feel doubtful even in the absence of competing reasons, just as it may be possible to feel certain even when there is some conflict. Instead, the definition aims to capture the notion of uncertainty with respect to both cognitive attitudes (epistemic uncertainty) and non-cognitive ones (Makins’ ambivalence).

A few words about reasons. As mentioned above, the reasons involved in this account are motivating, rather than normative (Alvarez, 2017; Dancy, 2000)—namely, reasons in the eyes of the agent, rather than whatever the agent should consider a reason. Besides this, the nature of reasons has long been investigated in philosophy. They have been taken to be facts (Raz, 1999), evidence (Broome, 2013), and propositions (Dietrich & List, 2013; Sher, 2019), among other things, and their connection with internal motivations is highly debated (Korsgaard, 1996; Williams, 1979). In this text, I will follow Scanlon (1998) and limit myself to saying that reasons for something are considerations counting in favour of that something (p. 17). Nothing more substantial about the nature of reasons is required by this account of uncertainty.

In deliberation, the agent will consider all the reasons available in support of the different alternatives, both for and against (for the difference between reasons for and reasons against, see Snedegar, 2018). Whether this set of reasons is deemed to tilt conclusively in favour of some alternative will depend on the agent’s circumstances and on context: holding an attitude with higher stakes may require more reasons (or weightier reasons; see Makins, 2023) than one with lower stakes.

Note that the case where there are simply not enough reasons in favour of any alternative falls under condition (ii), as lack of sufficient reason to hold some attitude counts as a reason against holding it. Let me try to justify this claim. Reasons for a certain option are indispensable for taking an agent’s action as intentional: an action is intentional only if it is done for a reason (Alvarez, 2005; Anscombe, 1957; Audi, 1986; Miller, 2008). Indeed, the absence (or the insufficiency) of reasons for a certain action can be used to explain or justify not doing it—the agent did not do a because they had no reason to do a. Among cognitive attitudes, absence of evidence that p is a reason not to believe that p. In this sense, the absence of (sufficient) reasons for a certain option is a reason against that option. There is a significant asymmetry here: while lack of reasons for a certain attitude is a reason not to hold that attitude, lack of reasons against it is not a reason to hold it. If I have no reason to do something, then this is a reason not to do it; on the other hand, if I have no reason against doing something, that in itself is not a reason to do it.

Thus, from this perspective uncertainty arises from the presence of conflicting reasons, i.e., reasons that support incompatible alternatives (Sher, 2019), and it is resolved whenever either of the two conditions listed in the definition fails—either because there is only one possible remaining option, or because the balance of reasons becomes conclusively tilted in favour of one of them.

Like Makins’ ambivalence, this is a graded notion of uncertainty. The severity of the uncertainty increases the more balanced the reasons are and the farther they fall from the threshold of sufficiency, and it is particularly severe in the extreme case of a complete absence of reasons in favour of any of the alternatives. Consideration of new reasons may thus reduce the agent’s uncertainty by pushing the balance towards one alternative, and even resolve it if they push it beyond the required threshold. The extent to which this is possible depends on the nature of the disagreement between the reasons involved.
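To illustrate the mechanics of this graded notion, here is a minimal computational sketch. It is not part of the account itself: the numeric weights, the sufficiency threshold, and the particular way of aggregating balance and shortfall are all illustrative assumptions of mine.

```python
# A minimal sketch of disagreement-based, graded uncertainty.
# Reasons are modelled as numeric weights attached to mutually exclusive
# alternatives; the agent counts as uncertain when no alternative's support
# clears a context-dependent sufficiency threshold (higher stakes, higher
# threshold). All numbers and the aggregation rule are hypothetical.

def is_uncertain(support: dict[str, float], threshold: float) -> bool:
    """The agent is uncertain iff no alternative has conclusive support."""
    return not any(weight >= threshold for weight in support.values())

def severity(support: dict[str, float], threshold: float) -> float:
    """Rough degree of uncertainty: higher when the reasons are more balanced
    and when the strongest support falls farther below the threshold."""
    weights = sorted(support.values(), reverse=True)
    best = weights[0]
    runner_up = weights[1] if len(weights) > 1 else 0.0
    total = best + runner_up
    balance = 1.0 - (best - runner_up) / total if total > 0 else 1.0
    shortfall = max(0.0, (threshold - best) / threshold)
    return (balance + shortfall) / 2

# The weather example: conflicting evidential reasons for rain and no rain.
support = {"rain": 0.6, "no_rain": 0.5}
print(is_uncertain(support, threshold=1.0))  # True: neither side is conclusive
print(severity(support, threshold=1.0))      # ~0.65 on this ad hoc scale
```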

3 Disagreement

If uncertainty is a matter of conflicting reasons, then dealing with uncertainty means dealing with disagreement—albeit not between people, but between reasons. If this is so, then understanding the nature of the underlying disagreement may be necessary to properly approach decision making under uncertainty.

In general, if there is a disagreement, we may think that there is a mistake somewhere: one of the sides may be misled by some biases or cognitive shortcomings. Once these are removed, the disagreement may persist due to epistemic limitations, like lack of some information or ignorance of relevant facts. If this is the case, then eliminating these epistemic limitations should eventually eliminate the disagreement as well. I will call this sort of disagreement amenable. But in some circumstances the disagreement may survive even under ideal cognitive and epistemic circumstances. This sort of disagreement can be called radical (Tersman, 2006), and cannot be entirely eliminated by increases in evidence or other epistemic progress.

The nature of this distinction is such that the two types of disagreement are not approachable in the same way. While we can expect evidence and epistemic investigation in general to be able to resolve, at least in principle, amenable disagreement, the same cannot be said for radical disagreement. Some disagreements may have both amenable and radical components, so that better epistemic conditions may reduce the overall disagreement by dispelling the conflict over some aspects of the matter at hand; but the radical components will persist.

If there are different types of disagreement, and uncertainty is a matter of disagreeing reasons, then we may expect to find corresponding types of uncertainty, as long as the reasons underlying uncertainty can stand in both amenable and radical disagreement. Uncertainty arising from radical disagreement will not be resolvable through improvements in the epistemic or cognitive conditions of the agent, while uncertainty coming from amenable disagreement will be. We can expect to resolve uncertainty by gathering evidence or removing biases only if the underlying conflict between reasons is amenable; otherwise we should expect it to persist even under ideal conditions.

What is crucial now is to understand in which cases, if any, we can have radical disagreement between reasons. As we have seen, there can be uncertainty over both cognitive and non-cognitive attitudes. A central difference between the two is the “direction of fit”, which is mind-to-world for those attitudes whose content should conform to the world and world-to-mind for those whose aim is for the world to conform to their content (Björnsson & McPherson, 2014). This means that while cognitive attitudes are evaluated in terms of the accuracy of their fit with reality, non-cognitive attitudes are not. For this reason, we say that a belief is true or false, but not a desire.

Given that cognitive attitudes are evaluated on the accuracy of their content, we can expect evidence and other epistemic considerations to be relevant to their assessment. They can provide reasons to believe something, eliminate reasons in favour of some option, and ultimately settle the question of what the content of a cognitive attitude should be, even though we will see that the extent to which this is possible depends crucially on this content. On the other hand, non-cognitive attitudes are not evaluated in terms of accuracy, so the alternatives are not favoured in virtue of their correspondence with reality. If this is so, then radical disagreement is always possible with non-cognitive attitudes: for them, disagreement may be due to reasons that support different alternatives on grounds other than their accuracy, meaning that they would not change under better epistemic conditions.

But radical disagreement may not be the prerogative of non-cognitive attitudes; it may be possible even with cognitive ones. For instance, it could arise when the proposition over which there is disagreement—i.e., the content of the attitude—is neither true nor false, perhaps because there is no corresponding fact of the matter. In that case, more evidence will not settle the disagreement over whether that proposition is true or false. However, a supporter of the classical principle of bivalence will deny that any such proposition exists, since on that view all propositions are either true or false. And many-valued logics rejecting the principle do not provide much consolation, because rejecting bivalence does not imply that some proposition lacks a truth value: rather, it means that there are values beyond true and false. If this is so, then disagreement over the proposition could still be resolved by the assignment of a specific third value. Similarly, on views allowing indeterminacy, disagreement over indeterminate propositions may be resolved by recognising them as such.

We do not need here to take a stance on the principle of bivalence, or even on whether all propositions have a truth value. Even if they did, this would not imply that all propositions are equally epistemically accessible. Fitch’s paradox, for instance, is a challenge to the claim that all truths are knowable (Brogaard & Salerno, 2019). Moreover, moral sceptics may think that moral judgements can never be known or justified (Bambrough, 2020), and the value of transformative experiences may be inaccessible ex ante (Paul & Quiggin, 2018). If a proposition is intrinsically inaccessible, then no amount of evidence could ever settle the matter of its truth: disagreement over epistemically inaccessible propositions may be radical.

Again, someone may be sceptical about the existence of inaccessible propositions (Van Ditmarsch et al., 2012; Edgington, 1985; Van Benthem, 2004). And again, we do not need to settle the issue here. After all, we are trying to understand the notion of uncertainty for decisions—which means that we may considerably restrict our scope. It may be that no proposition is inaccessible in principle, and yet some propositions are hard enough that they cannot be known within the horizons of the decision, and can therefore be considered inaccessible for the purposes of the decision at hand. A moral realist could think that there are moral facts, that propositions expressing them are either true or false, and that these facts are accessible to our knowledge—and still believe that we are nowhere close to discovering such facts. Uncertainties over moral facts within a decision could then safely be treated as if those facts were inaccessible, given that no epistemic investigation is likely to settle disagreements over the issue within the time horizon of the decision.

I am not contending that this is actually the case—that even if there are moral facts we cannot know them in time for our decisions. I am not making claims about what is metaphysically or epistemically the case; rather, I am tracing the route by which radical disagreement could come to exist. Disagreement can always be radical when it concerns alternative non-cognitive attitudes; but it could arise even with cognitive ones. Disagreement could be radical when the proposition at stake does not have a truth value (if any such proposition exists), given that then no amount of evidence could settle the issue of its truth. It could also be radical with propositions that do have a specific truth value but are not epistemically accessible (if any such proposition exists), given that then no information could make them knowable to us. Furthermore, for the purposes of decision making, disagreement could be radical with propositions that have a truth value and are in principle accessible, but not within the horizons relevant to the decision at hand.

The fact remains that there are cases in which disagreement does not tend to go away with the removal of cognitive or epistemic obstacles, even for cognitive attitudes. Whatever the metaphysics or epistemology behind it, the phenomenon persists (and may even be rational under some circumstances, see Nielsen & Stewart, 2020), and there is no reason why we should not expect disagreements between reasons to show the same behaviour. Moreover, admitting the possibility of radical disagreement does not lead to scepticism. The phenomenon could well be very limited in scope, far from implying that we cannot know anything. Finally, it is important to note that none of these conditions implies radical disagreement: it is not the case that whenever they hold there must be some radical disagreement. What they do is allow for its possibility, and they are entirely consistent with a lack of disagreement or with a purely amenable one.

Reasons disagree by supporting or undermining mutually exclusive alternatives. The uncertainty is resolved when the balance of conflicting reasons is tilted enough in favour of one of the alternatives for the agent’s purposes. Remember that, as we have seen in Sect. 2, if all the reasons are in favour of one alternative but are somehow insufficient or inconclusive, the insufficiency itself can be a reason against that alternative, so that the uncertainty could still persist. If the disagreement is amenable, we can expect that the removal of some cognitive or epistemic limitation will move the balance of reasons towards one alternative. In the case of radical disagreement, there is no reason to expect a convergence, and therefore no resolution of the uncertainty.

4 Types of uncertainty

Our discussion of disagreement tells us that there are uncertainties that we cannot expect to resolve with more logical or empirical inquiry. If this is so, then it is crucial for an agent to understand the uncertainty they face, and to set their expectations—and course of action—accordingly. In this section, I will characterise different types of uncertainty depending on whether they concern cognitive or non-cognitive attitudes, and on whether the contended matter is one over which there may be radical disagreement. Even though I try to make my analysis compatible with a variety of positions, I do not expect the reader to agree with it, nor do I tie my account of uncertainty to it. Rather, I hope to demonstrate how one can draw a map of types of uncertainty that helps identify those potentially based on radical disagreement. The purpose of this section is thus to illustrate the mechanisms of the account of uncertainty presented, so that the reader can derive from it the typology that best suits their commitments and contextual needs. As we have seen in the introduction, the focus on different types of uncertainty is widespread in discussions of uncertainty, which employ a plurality of labels to identify various kinds of uncertainty and contexts of application. This section will allow us to see how some of these approaches can be understood within the framework proposed in Sects. 2 and 3. Moreover, it will provide us with some vocabulary to address our starting concern, i.e., the question of uncertainty in decision-making. We will do that in Sect. 5.

Cognitive attitudes. Cognitive attitudes purport to represent reality, and their content is apt for truth value, which they inherit. But it is important to remember that aptness for truth values does not necessarily imply that the truth value is determined or accessible. We can therefore distinguish between determinate and indeterminate propositions, where the second are those whose truth value is not available to a perfectly informed agent.

The value of determinate propositions can usually be settled with evidence: there is some amount of information that could, at least in principle, resolve the agent’s uncertainty over their value. Reasons provided by evidence tend to support the proposition’s true value. We can call this empirical uncertainty. Alternatively, the value can be settled with logic: tautologies and contradictions have determinate truth values independently of empirical evidence. Uncertainty over them would be logical uncertainty. Given their sensitivity to evidence and cognitive conditions, disagreements over these propositions—due, e.g., to conflicting pieces of evidence—will be amenable. However, the same cannot be said for propositions with indeterminate value (Pravato, 2020).

Let us now move to indeterminate propositions. Propositions can be indeterminate for several reasons. First, they can include vague concepts (Sorensen, 2018). These are concepts that have borderline cases, as with the predicates “old” or “child”: there is no specific age benchmark after which one is old or stops being a child. Someone may be uncertain whether Bob should be considered old or not. Let us assume that borderline cases are neither true nor false. Then the proposition “Bob is old” may be true, false, or neither true nor false. An agent can legitimately be uncertain whether Bob is old or whether he falls in a borderline case, and therefore whether it is neither true nor false that he is old. Some people may have reasons to believe that Bob is a clear example of an old person; others may have reasons to be certain that he is a borderline case. With vague concepts, the borders between clear and borderline cases are themselves vague, so that there may be legitimate disagreement and uncertainty about how to consider some specific instance. Note that no amount of conceptual analysis or empirical research can settle the question. We could stipulate that “old” starts right after one passes the middle of the average life expectancy, but we would be creating a new concept: considering someone young one day and old the day immediately after does not correspond to our shared understanding of the concept “old”. We can call uncertainty concerning the status of borderline cases of vague concepts vague uncertainty. Disagreement concerning borderline cases can persist even under ideal epistemic and cognitive conditions, and so will the uncertainty.

Propositions may also be indeterminate because their truth value depends on some non-deterministic aspect of reality. I will call the corresponding uncertainty ontic. In the literature, this type of uncertainty has long been recognised in opposition to uncertainty as a property of the agent (see, e.g., Davidson, 1996; Dequech, 2004; Fishburn, 1994; Kahneman & Tversky, 1982; Kozyreva & Hertwig, 2021; Perlman & McCann, 1996). Even though distinctions along these lines are quite widespread in the literature on uncertainty, appearing under labels like epistemic/aleatory or internal/external, the traditional framing in terms of a reality/agent opposition is confusing: as we have defined it, uncertainty is always a property of the agent, not of the world. The relevant distinction concerns the source of the agent’s uncertainty. Ontic uncertainty does not mean that the world is uncertain, but that the agent is uncertain because the world is non-deterministic—assuming, of course, some indeterminacy in the world, which could be due for instance to the actions of sentient beings or to the non-deterministic nature of the world that seems to be suggested by quantum physics.

The relevant aspect of ontic uncertainty is that it concerns some state of the world that does not exist yet, which means that no amount of information can resolve the uncertainty before the relevant state obtains. Notice that the fact that the event has not happened yet does not mean that we know nothing about its possibility. We may have sufficient information to assign precise probabilities to its occurrence, and if we do not, we may look for it and thus reduce our uncertainty. We may have accurate information regarding the chance that a certain quantum event happens, and if we do not, we may ask experts for their opinion or study the phenomenon further. The uncertainty over the probability of the event may be empirical, but the uncertainty over the event itself is ontic: we have (possibly unbalanced) reasons to believe both that it will happen and that it will not, which means that we have conflicting reasons for two alternative contents of our belief and, consequently, are uncertain about it. This uncertainty cannot be resolved with evidence.

Of course, that the world is genuinely non-deterministic is an assumption that many may decide not to share. The debate on whether free will exists and, if so, whether it is compatible with determinism is very much alive in philosophy (see O’Connor and Franklin (2021) for an overview and List (2019) for a recent contribution), and so is the debate on the meaning of quantum mechanics in physics: while some take it to show that there is genuine chance in the physical world (e.g., in the modal tradition: see Lombardi and Castagnino (2008)), others take it to be compatible with a deterministic world (e.g., in the Everettian tradition: see Albert and Loewer (1988) and Saunders (2010)). The possibility of adopting a fully deterministic view of reality remains open. In that case, ontic uncertainty would reduce to empirical uncertainty. In general, the scope of issues over which one can have ontic uncertainty depends on the extent of one’s determinism.

Non-cognitive attitudes. Let us now move to uncertainty about non-cognitive attitudes. Tastes and emotions can be considered to be non-cognitive: basic emotive expressions like “ouch” or “yuck” are not taken to express beliefs.

One could wonder whether uncertainty over tastes or emotions is possible at all. It seems that one could have nuanced emotions, or feel more or less strongly about something—but there is no sense, the objection goes, in which it is possible to be uncertain about one’s own emotions or tastes: any doubt should be settled with a little introspection. However, given our account of uncertainty, we can ask whether it is possible to disagree over tastes. Consider the following exchange:

A: Coffee is tasty.

B: No, it’s not. It’s too bitter.

There is a sense in which the exchange is an instance of disagreement, as the two agents disagree over whether coffee is tasty. However, it seems that neither is at fault, in the sense of making a mistake with their position: we would then have a case of faultless disagreement (Huvenes, 2014), a concept that some view as intrinsically contradictory (e.g., Iacona, 2008). If no disagreement is possible, then no uncertainty is possible. Nonetheless, while we may not have interpersonal disagreement over tastes, the situation may be different for intrapersonal disagreement: two people may not disagree, but the reasons behind a single person’s attitudes can. I can enjoy the smell of coffee, for instance, while disliking its bitterness; I can like the taste of some food while finding its texture off-putting. These are conflicting reasons: smell is a reason to like coffee, while bitterness is a reason not to. In cases like these, my attitudes are ambivalent in Makins’ (2021) sense, and thus I can be uncertain about my taste for coffee because I have conflicting attitudes towards it. Assuming that I have thorough experience of coffee and no taste dysfunctions, we can expect this disagreement and the consequent uncertainty to persist even under ideal conditions. I will call this emotive uncertainty.

Moral uncertainty. Another set of attitudes are those expressed by moral statements. Disagreement and uncertainty about moral issues are widespread (Tersman, 2021). While some argue that a range of improvements in human conditions count as moral progress (Sauer et al., 2021), people often disagree about what is good or right, and this disagreement seems to be importantly radical. The very persistence of philosophical debates that show no sign of converging on a consensus is evidence of how resistant disagreement over moral questions is to empirical inquiry and expertise. Consequently, moral uncertainty cannot usually be entirely resolved with evidence. Stylised examples are provided by philosophical thought experiments like the famous Trolley Problem (Foot, 1967). In these scenarios, all the relevant information is provided to the reader, but the uncertainty (and the debate) over the moral thing to do is not resolved, with each side presenting conflicting reasons in the form of, e.g., arguments, value judgements, or experiences.

Whether moral judgements are cognitive or not is a very open debate (see van Roojen, 2018), and for this reason I discuss them in a separate category. While moral cognitivists claim that moral considerations are apt for truth values, non-cognitivists deny that, and claim that moral statements express attitudes other than belief. Non-cognitivism is a version of anti-realism about ethics, as it implies that there are no moral properties or facts to have beliefs about. But cognitivists can be anti-realists too: error theorists claim that while moral claims do express beliefs, they are all false, because the objects they are about do not exist. On the other hand, moral realism contends that there are objective moral truths, and that therefore moral considerations purport to represent aspects of reality.

Moral realism is not sufficient to put moral uncertainty on the same level as empirical uncertainty (Shafer-Landau, 1994). Assuming the existence of moral properties and facts, and taking them to be part of our environment, does not necessarily imply that we are also able to know them in the way we know other aspects of reality. If we cannot, then moral uncertainty, while cognitive, would be intrinsically different from empirical uncertainty because its objects are in principle unknowable. Note that moral scepticism of this kind is open to all the main positions on the metaphysics of morality: if we take the traditional view of knowledge as justified true belief, we see that moral knowledge is excluded for the non-cognitivist, who takes moral claims not to express beliefs, and for the error theorist, because moral claims are never true. But even the realist assumption that there are true moral beliefs does not exclude the possibility that these true beliefs can never be justified (Joyce, 2021).

Without taking a stance on these complex debates, we can see that the possibility of non-cognitive uncertainties depends on one’s metaphysical and metaethical commitments. One can be a full cognitivist by assuming that moral considerations are cognitive and that uncertainty over non-cognitive attitudes is not possible. But even then, that is still not sufficient to justify treating moral uncertainty in the same way as empirical uncertainty, because the epistemology of moral and empirical considerations may still be significantly different. Be that as it may, widespread disagreement and uncertainty over moral questions seem to be the norm. This means that, even if there were moral facts and these were accessible, they would probably not be so within the time horizon of a decision, given that metaethical debates are far from settled; for decision making purposes, they can be treated as if they were inaccessible.

While I have here discussed moral judgements, much of what I have said can be applied to all normative considerations: judgements on aesthetics or rationality, for instance, may be taken to express cognitive or non-cognitive attitudes and to be epistemically accessible or not—in any case, they tend to be questions over which we expect disagreement to persist under ideal circumstances, at least for the horizons relevant to much decision making.

In the next section, I will apply this account of the different types of uncertainty to decision-making. The labels I have introduced—empirical, logical, vague, ontic, emotive, moral—will be used as shortcuts to the corresponding discussions in this section.

5 Uncertainty in decisions

We can now try to understand how the proposed notion works by applying this tentative map to decision making. The pluralist perspective on uncertainty presented here allows us to expand the understanding of uncertainty in decision-making beyond the standard assumptions of decision theory. By distinguishing different types of uncertainty based on the underlying conflicting reasons, we can identify which components of the agent’s practical uncertainty can be expected to be resolvable with evidence. In doing so, we can have a first test of the implications that the account presented here has for decision making.

Let us then have a look at a textbook decision and at how decision theory understands the uncertainty involved. Imagine that you are leaving for work and have to decide whether to walk there or take the bus. Taking the bus is faster—unless there is a traffic jam, in which case walking is faster. Decision theorists would model your decision problem in a table that could look like this one:

Table 1 Possible decision model

               | Traffic jam        | No traffic jam
Take the bus   | Slow, delayed trip | Quick trip
Walk           | Timely walk        | Timely walk

In Table 1, the left column lists the alternative options you face, the top row lists the different conditions you may find, and the other cells contain the outcomes resulting from performing some option under some condition. In mainstream decision theory (specifically, the expected utility tradition following more or less loosely von Neumann & Morgenstern, 1944; Savage, 1954), you should then assign numerical values to the outcomes depending on their desirability and, for each act, sum them weighted by the probability of the condition under which each obtains. Then, you should choose the act that maximises that sum (or, more precisely, your choices could be represented as if you made these computations, as long as they follow some axioms).
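As a concrete illustration, here is a minimal sketch of this expected-utility computation for the bus/walk example. All the numbers, the utilities and the probability of a traffic jam, are hypothetical placeholders of mine, not values suggested by any theory.

```python
# A worked instance of the expected-utility rule described above.
# Utilities encode the desirability of each outcome; a single probability
# function covers the states. All numbers are hypothetical.

states = {"traffic jam": 0.3, "no traffic jam": 0.7}  # one probability function

# utility[act][state]: desirability of the outcome of doing act in that state
utility = {
    "take the bus": {"traffic jam": 2, "no traffic jam": 10},
    "walk":         {"traffic jam": 6, "no traffic jam": 6},
}

def expected_utility(act: str) -> float:
    return sum(p * utility[act][s] for s, p in states.items())

for act in utility:
    print(act, expected_utility(act))   # bus: 7.6, walk: 6.0

print("choose:", max(utility, key=expected_utility))
# With these numbers the bus wins; if P(traffic jam) rose above 0.5,
# walking would come out on top instead.
```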

This is how standard, textbook decision theory models decisions. This default view (Bradley & Drechsler, 2014) makes two claims: first, for the purposes of decision-making all practical uncertainty can be reduced to uncertainty over the state of the world—the possibility that some event or condition obtains or not. Second, uncertainty over the state of the world can be represented with a single probability function. Thus, probabilities are supposed to be able to represent entirely the agent’s uncertainty.

Most discussions over the default view have focused on the second claim. In some situations, the information available may be such that it does not warrant a representation of uncertainty with a single probability function. Objections to the (normative and descriptive) adequacy of the default view to cover these cases have a long history (see e.g., Knight (1921) and Luce and Raiffa (1957)), giving rise to severity-based types of uncertainty like ambiguity (Ellsberg, 1961), ignorance (Bradley, 2017; Einhorn & Hogarth, 1986), and unawareness (Schipper, 2014; Steele & Stefánsson, 2021), as well as to decision theories developed to deal with each of them.

The first claim has received less attention; and yet, knowledge of the state of the world in itself does not imply any choice. Even if the agent had perfect information, they may still have other doubts regarding what they should do. If this is so, then the default view—whatever its merits in dealing with uncertainty over the state of the world—is not a complete representation of the uncertainty decision makers could face.

Let us go back to our example on the bus/walk decision. In this example, all the uncertainty faced by the agent at the moment of leaving for work has been represented with probability values assigned to possible traffic conditions. The only thing that the agent has doubts about is whether there will be traffic or not; the rest is taken as given, as certain. But the agent may also be uncertain about whether they should include weather considerations in their deliberation, which alternatives they have, how to evaluate possible consequences, or how reliable their probabilities are. In order to understand the role that uncertainty plays in decision making and how to best respond to it, it is important to understand how it can enter into decisions. Thus, I will explore the different ways in which uncertainty can involve different elements of the decision, and analyse it in terms of the disagreement-based notion of uncertainty.

States. Uncertainty over the actual state of the world—i.e., uncertainty over what is the case or will be the case—concerns the agent’s beliefs, and is therefore a cognitive type of uncertainty. There are mutually exclusive alternatives, each of which is supported by some (evidential, theoretical) reasons. The uncertainty may concern empirical aspects, but also logical propositions or the status of borderline cases of vague concepts, as well as indeterminate aspects of the world: the uncertainty can therefore be of all the cognitive types analysed above. Second-order uncertainty over probabilities is mostly cognitive as well, as it regards beliefs over beliefs; however, the source of the uncertainty over these second-order beliefs may be uncertainty over the selection of experts, an issue which may include normative considerations over which there may be radical disagreement.

Model. Your decision about how to get to work could have been modelled in uncountably many other ways. There may be events that you did not think were relevant for your decision: being uncertain about what the possible states are is what Bradley and Drechsler (2014) refer to as state space uncertainty. Our example includes the possibility of traffic, but it does not include the possibility of rain, for instance, or of meeting friends on the way. One could have included those considerations as well, and obtained a more detailed model for a more thorough decision. In fact, there are indefinitely many things that it has not included, the majority of which are entirely irrelevant to your decision, or just not worthy of your reflection. There are many alternative models for the same decision, and these may even lead to different recommendations: the possibility of traffic may induce you to walk, while the possibility of rain may push towards taking the bus. Models that include one and not the other may yield different conclusions, as the sketch below illustrates.
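Here is a hedged sketch of that point: two equally coherent models of the same decision, differing only in which states they take into account, recommend different acts. The states, probabilities, and utilities are all hypothetical.

```python
# State space uncertainty: alternative models of the same bus/walk decision.
# Model A tracks traffic only; Model B tracks rain only. Each is internally
# coherent, yet they recommend different acts. All numbers are hypothetical.

def best_act(states: dict, utility: dict) -> str:
    eu = {act: sum(p * u[s] for s, p in states.items())
          for act, u in utility.items()}
    return max(eu, key=eu.get)

# Model A: only traffic matters.
states_a = {"traffic": 0.6, "no traffic": 0.4}
utility_a = {
    "bus":  {"traffic": 2, "no traffic": 10},
    "walk": {"traffic": 6, "no traffic": 6},
}

# Model B: only rain matters.
states_b = {"rain": 0.5, "no rain": 0.5}
utility_b = {
    "bus":  {"rain": 8, "no rain": 8},
    "walk": {"rain": 1, "no rain": 7},
}

print(best_act(states_a, utility_a))  # "walk": traffic makes the bus risky
print(best_act(states_b, utility_b))  # "bus": rain makes walking unpleasant
```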

Moreover, the agent may have doubts about the identification of the alternatives at their disposal. They may be unsure whether some option is actually available, in terms of feasibility as well as permissibility. Perhaps using a bike would be the best option, if they managed to find one, or maybe roller-blades, if they still knew how to skate. Or they may consider whether stealing a bike to get to a crucial meeting in time could be permissible.

This means that how the decision is modelled is a normatively important matter, and that there may be better and worse models. But if this is so, how do you know whether the model you are using is in fact the one you should use? More specifically, how do you know what to include in your decision model? This is uncertainty regarding the selection of elements for the three sets of the decision model. Which possible conditions should the agent consider? Which possible outcomes should they evaluate? Which actions should they take as their options?

Uncertainty over the selection of acts can involve both cognitive and non-cognitive attitudes. If it is a question of the act’s feasibility, then it will be a primarily empirical and in general cognitive concern. If, instead, it is a question of the act’s permissibility, then the concern will be mostly normative. The question can even have emotive traits, as the agent may consider discarding some otherwise available act on the grounds of it being too unpleasant to perform. As for uncertainty over the identification of states and consequences, it is importantly uncertainty over what matters to the agent: the consequences of the alternative options that should be considered are those that the agent wants to avoid or achieve, and the states to include in the model are those required to bring about those consequences. The agent can thus be uncertain about the model because they are uncertain about what matters to them in the decision or because they are uncertain about which conditions are required to obtain a certain outcome.

Utility. The agent may be uncertain about how to evaluate some possible outcome. This uncertainty can bear on two different dimensions: the single values and the overall function. Let me try to make this distinction clearer.

A utility function is an assignment of values to possible outcomes. These values may be perfectly precise, but they can also be entirely unknown or known only to fall within a range. Ceteris paribus, an agent with imprecise or unknown utilities will be more uncertain about their decision than one with precise values. This uncertainty may be due to doubts about whether some outcome has a certain property, and therefore about its proper evaluation. Or it can be due to the complexity of outcomes composed of a variety of aspects: the agent may like some of them and dislike others, to the extent that it is hard to form an overall, all-things-considered evaluation. When too many things are at stake, uncertainty about the net value of the alternative is a possibility. This uncertainty can have empirical or ontic elements—if the agent is unsure about some factual aspect of the consequences they are evaluating, for instance—but primarily it will concern emotive and/or normative aspects. The former will be relevant if the uncertainty comes from doubts concerning the agent’s own tastes, the latter if it comes from doubts about what different normative considerations say about the alternative consequences.

However, the agent may also be uncertain between two sets of equally well-defined values. For instance, a woman considering pregnancy may not know whether she should evaluate the prospect of motherhood according to her current utility function or according to the one she would have if she had the child: this is an example of a transformative decision, i.e., a choice that could change what you value (Pettigrew, 2019). Assuming that she knows both her current utilities and those she would have as a mother, her uncertainty is different from one that only concerns the precision of values. This sort of uncertainty can also arise because the agent is uncertain about the relevant moral or aesthetic considerations: they may not know the answer to an ethical dilemma because they are unsure whether they should be utilitarian about it (and not because they do not know which values a utilitarian would assign), or they may be uncertain about the value of a painting because they do not know whether to evaluate it in terms of the pleasure they obtain from it or of its originality (and not because they are ignorant of art history and do not know how to evaluate the painting’s originality). This uncertainty is primarily normative, as it arises from doubts about the relevant moral, aesthetic, or broadly evaluative considerations.
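To see how this differs from imprecise values, here is a minimal sketch of the transformative case: two perfectly precise utility functions that rank the same prospects in opposite ways. The prospects and numbers are hypothetical illustrations of mine.

```python
# Utility uncertainty in a transformative decision (cf. Pettigrew, 2019):
# the agent knows two candidate utility functions precisely, but not which
# one should govern the evaluation. All values are hypothetical.

prospects = ["have a child", "remain child-free"]

u_current   = {"have a child": 3, "remain child-free": 8}  # pre-choice values
u_as_parent = {"have a child": 9, "remain child-free": 4}  # post-choice values

for p in prospects:
    print(f"{p}: current={u_current[p]}, as parent={u_as_parent[p]}")

# The two functions reverse the ranking of the prospects. Since both are
# fully precise, no further evidence about the outcomes can settle which
# function to use: the residual uncertainty is normative, not empirical.
```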

Probability. The agent may not know whether their probability assignment is correct because they may be uncertain about which credences they have, and therefore whether the assignment accurately represents them, or about which credences they should have, and therefore how much confidence they should place in this set of probabilities. This is uncertainty about one’s uncertainty: the probability assignment is supposed to reflect the agent’s uncertainty over the possible states of the world, but the agent may be uncertain about the reliability of the information they have (see Hansson (1996)’s uncertainty of reliance), or be aware that their probability judgements rest on very poor evidential grounds. It will primarily have cognitive components, but it may include normative considerations about expertise.

Thus, uncertainty over probability is second-order uncertainty over states. This sort of uncertainty can be represented with second-order probabilities, or with weights over different probability distributions (e.g., Gärdenfors & Sahlin, 1983; Klibanoff et al., 2005; Chateauneuf & Faro, 2009); a sketch of the latter follows below. Distinctions of order can go beyond the second and into a potentially indefinite progression (Dow, 2012). Moreover, they are not necessarily limited to uncertainty over states. For instance, the agent may not know what to do—but they may also not know what to do about this practical uncertainty. When you face a decision, you not only face the uncertainty regarding that decision: you may also face uncertainty regarding how to go about resolving that uncertainty (Smith, 1991). You may wonder, for instance, whether the decision is strategic, and therefore whether it requires game-theoretical solutions. Or you may face a decision concerning a plurality of stakeholders, and wonder about the right social choice procedure to address it.
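Here is a hedged sketch of the weights-over-distributions idea, loosely in the spirit of Klibanoff et al. (2005) with a linear second-order attitude (which in effect averages the candidate priors; a concave transformation would model ambiguity aversion). The priors, weights, and utilities are all hypothetical.

```python
# Second-order uncertainty over states: the agent entertains several
# first-order probability functions and weighs them by second-order
# confidence. With a linear attitude this reduces to averaging the
# first-order expected utilities. All numbers are hypothetical.

priors = [
    ({"traffic": 0.2, "no traffic": 0.8}, 0.5),  # (distribution, weight)
    ({"traffic": 0.7, "no traffic": 0.3}, 0.5),
]

utility = {
    "bus":  {"traffic": 2, "no traffic": 10},
    "walk": {"traffic": 6, "no traffic": 6},
}

def second_order_value(act: str) -> float:
    """Confidence-weighted average of the first-order expected utilities."""
    return sum(w * sum(dist[s] * utility[act][s] for s in dist)
               for dist, w in priors)

for act in utility:
    print(act, second_order_value(act))  # bus: 6.4, walk: 6.0
```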

This overview does not necessarily present a complete taxonomy of the uncertainties that agents may face in decision making: some of these may overlap, and perhaps others are possible. What is important, however, is that while mainstream debates tend to focus on uncertainty over possible states, other elements of a decision can be the object of some uncertainty.

As a final note, it is worth mentioning that uncertainty with respect to decisions can be uncertainty on the part of the agent or uncertainty on the part of the modeller, in case the two differ—as is usually the case with decision-theoretical models used in economics. In that case, the modeller has to take into consideration all the uncertainty that the modelled agent may face, while facing uncertainties of their own, regarding for instance what they do not know about the agent or the adequate theoretical tools to use. However, we are here concerned with uncertainty exclusively from the perspective of the agent, given that we are looking at uncertainty as an obstacle to effective decision making, so we will leave this distinction aside.

6 Conclusions

Arguably, any decision happens under some degree of uncertainty: the agent can have doubts concerning a variety of aspects of the decision, and each of these can make them unsure about the best course of action. When dealing with uncertain decisions, the obvious thing to do seems to be to try to reduce the uncertainty. Understanding the nature of that uncertainty can help in doing so effectively.

The default view in decision theory takes the agent’s practical uncertainty to be entirely reducible to uncertainty over the state of the world and representable with a single probability function. While the debate on the limits of this view has been long and lively, it has focused primarily on cases where the uncertainty is too severe for probabilistic representations to be justified. However, there may be cases in which the default view fails not because the uncertainty is too severe, but because it is not of the right type. The variety of doubts that an agent can have in a decision seems to confirm the plurality of types of uncertainty. Agents can be uncertain over elements of the decision beyond the state of the world, from the way they modelled the problem to the utility and probability functions employed. However, this plurality does not mean that there is no unitary concept of uncertainty.

Expanding on Makins (2021), I have proposed that someone is uncertain about some (cognitive or non-cognitive) attitude whenever they have inconclusive motivating reasons for mutually exclusive alternatives. This means that uncertainty arises from disagreement between reasons. In some cases, this disagreement can be resolved with an improvement in the cognitive or epistemic conditions of the agent: removing biases or learning something new can change the set of reasons to the point that it becomes conclusive. However, in other cases the disagreement is radical, in the sense that it persists under ideal conditions. An implication of this is that, whenever the uncertainty arises from radically disagreeing reasons, we cannot expect evidence to resolve it.

I have argued that radical disagreement is always possible over non-cognitive attitudes, and it may even be possible over cognitive attitudes given some epistemic or metaphysical assumptions. The type and the content of the attitudes generate a typology of uncertainties, some of which will (at least in principle) be sensitive to evidence, and some of which will not. All of these types can be at play in a decision, which means that not all the uncertainty faced by an agent will necessarily be reducible with empirical evidence.

The typology presented illustrates how different types of uncertainty can be derived from a unitary notion of disagreement-based uncertainty. Different typologies can be more or less adequate depending on one’s assumptions and contextual needs. Typologies of this sort do not contradict other proposals in the literature. Other authors have focused on other variable components of uncertainty: for instance, Bradley and Drechsler (2014) talk of uncertainty varying in severity and in nature, while both Hansson (1996) and Hansson and Hirsch Hadorn (2016) provide non-exhaustive lists of possible uncertainties that have been ignored by the mainstream debate. For all the uncertainties that these typologies present, one can ask what type of attitude they concern and with which content, and thus whether the disagreement at their root is radical. This is useful because it tells us which of their doubts the agent can work on in order to reduce the uncertainty, and which of their uncertainties may persist even under ideal conditions.

Moreover, uncertainty is a complex notion that presents both unitary aspects and significant variability; and yet, given the importance of uncertainty in our lives and in our decisions, attempts to capture both these characteristics together have been surprisingly sparse. In this proposal, the variability of uncertainty has been directly connected to a unitary account of uncertainty. In turn, this account connects the discussion of types of uncertainty in decision making with fields as diverse as the philosophy of reasons, the nature of disagreement, and the debates on cognitivism. In discussing the metaphysical and epistemic assumptions of my account, I have presented some of the connections that uncertainty has with more general issues in philosophy—uncertainty is, after all, a pervasive feature of life.