Vagueness is a ubiquitous phenomenon in language and thought, which gives rise to the notorious sorites paradox and related puzzles. In view of recent philosophical theories such as epistemicism (Williamson), supervaluationism (Fine), and so-called degree-theoretic approaches (Edgington; Smith), it is fair to say that the problem of vagueness has taken center stage in the philosophy of language, logic, and epistemology. With formal epistemology having emerged as one of the major methodological paradigms in contemporary analytic philosophy, it is only natural, for one thing, to look more closely into possible applications of probability theory to the epistemology of vague languages. For another, the phenomenon of indeterminacy in credence judgements (broadly conceived) is receiving increasing attention in formal epistemology (as well as in other fields such as statistics, behavioural economics, and the psychology of reasoning). Conversely, formal frameworks of vagueness can be brought to bear on the modelling of credal indeterminacy. This special issue offers a collection of recent contributions on the philosophy of vagueness and probability, focussing especially on probabilistic models of vagueness and on models of indeterminacy and vagueness in probability judgements. It emerged from the Probability & Vagueness conference which took place on March 20–21, 2013, at the University of Tokyo. The contributions may be subdivided into four thematic groups.

1. Foundations of vagueness. Vagueness has commonly been conceptualised in terms of definite truth. According to this, a general term (such as red) is vague only if it has borderline applications, that is, applications that are neither definitely true nor definitely false. On this account, the seeming absence of a cut-off point between true and false application cases of a vague general term is explained by the existence of borderline cases, which implies that there is no cut-off point between the definitely true and the definitely false application cases. Major figures in the philosophy of vagueness such as Russell and Dummett famously suggested that in cases of genuine vagueness, vagueness reemerges indefinitely for the associated definitisations (definitely red, definitely definitely red, definitely definitely definitely red, etc.).Footnote 1 This notion of non-terminating higher-order vagueness has been challenged, however, by various impossibility results purporting to show that vagueness in this sense cannot exist. Kit Fine presents a novel account of the conceptual problem of higher-order vagueness, which involves a new non-classical logic for vagueness. On Fine’s account, the essential flaw in previous approaches is the idea that vagueness is a local notion that can be described in terms of borderline cases. Rather, he argues, vagueness in the genuine informal sense is a global notion, which applies not to single application cases but to sets of application cases that are under consideration in a context.

Vague predicates seem to be tolerant with respect to relevant dimensions of variation, in the sense that they do not distinguish between sufficiently similar individuals in a domain. Peter Pagin’s central gap semantics offers a model of vagueness that accommodates tolerance as a valid principle. The basic idea is to interpret predicates together with a contextually given quantifier domain restriction. Each restriction involves a central gap (with respect to the relevant dimension of variation), which makes tolerance safe from contradiction. Whilst in Pagin’s earlier work this idea is formalised for simple languages of predicate logic, in his contributed paper it is further developed for languages in which definite truth is expressible. On Pagin’s account, definitisations of predications come with an increment in the required gap between true and false applications. The presented framework offers a new model of higher-order vagueness, and with it also a new perspective on the related impossibility results.
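
For illustration (this is only a schematic gloss, not Pagin’s own formulation), tolerance for a predicate F along a dimension with sufficient-similarity relation \(\sim\) can be stated as the schema \[ \forall x \forall y\, ((Fx \wedge x \sim y) \rightarrow Fy), \] and the central gap idea is, roughly, that the contextually restricted domain over which the quantifiers range excludes a stretch of the relevant scale wide enough that no pair of individuals in the domain straddles the boundary between the true and the false applications of F; the schema then has no counterexamples in the restricted domain.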

2. Probabilistic models of vagueness. The label ‘degree theories for vagueness’ is an umbrella term for continuum-valued semantics, which evaluate vague languages in terms of reals in the unit interval [0, 1]. There is no common ground on the supposed structure of degrees. Whilst some authors (most notably, Edgington) have proposed structural constraints that are genuinely probabilistic, proponents of fuzzy semantics have argued that degrees obey compositionality principles which (in some way or other) generalise the classical truth-value tables for the logical connectives that are adopted in bivalent semantics. Some implications of compositional degree semantics for vagueness have met with fierce criticism. For example, if degrees are truth-value functional, it is suggested that contradictions of the form \(\ulcorner P\) and not \( P\urcorner \) receive a positive value if P has a positive value lower than 1, an implication that has been criticised as blatantly counterintuitive. In his contribution, Nicholas J.J. Smith takes a closer look at the major variants of the objection to fuzzy semantics from truth-functionality. The gist of his argument is that this objection can be put to rest, whatever position one takes on the relevance and soundness of the alleged data that seem to conflict with truth-value functionality: either the relevance of the data (including the individual intuitions of single philosophers) can be disputed in general, or the data as such can be disputed, or they can even be accommodated by fuzzy-semanticist tools.
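
To see how the worry arises, consider the most common (min/max) style of fuzzy semantics, on which a negation takes one minus the value of the negated sentence and a conjunction takes the minimum of the values of its conjuncts (other t-norm semantics differ in detail): \[ v(P \wedge \neg P) = \min(v(P),\, 1 - v(P)) > 0 \quad \text{whenever } 0 < v(P) < 1. \] For instance, if \(v(P) = 0.4\), the contradiction receives the value 0.4 rather than 0.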

On the classical model of subjective probability, the credence in a sentence or proposition can be interpreted as a measure of the expectation of (its semantic value being) truth.Footnote 2 More generally, allowing for the case that sentences may take any value in the unit interval, and not merely the extremal values 0 and 1, one can interpret credence as measuring the expected semantic value. Importantly, depending on which semantic framework we adopt for modelling credence as an expected semantic value, we may end up with different accounts of the structure of credence. In some of his earlier work, Smith adopts this idea in order to motivate an account of rational credence that deviates from the classical probability constraints.Footnote 3 In her contributed paper, Rosanna Keefe makes a case against Smith’s account of credence. She challenges not only the adequacy of his particular model; she also raises considerations against the more general idea that formal semantics may provide a sufficient basis for modelling credence as an expected semantic value.
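
For reference, the idea of credence as expected semantic value can be spelled out as follows (a sketch only; the details vary across frameworks and are precisely what is at issue in this debate): where \(\Pr\) is a probability distribution over worlds and \([A]_w\) is the semantic value of A at world w, credence is the expectation \[ C(A) = \sum_{w} \Pr(w) \cdot [A]_w, \] which reduces to the probability that A is true when semantic values are confined to the extremal values 0 and 1, but yields non-classical constraints on credence when intermediate semantic values are admitted.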

Degree theories of the probabilistic variety are of special theoretical interest in that they provide a unified solution strategy for the Sorites, Lottery, and Preface paradoxes.Footnote 4 However, the question of what exactly degrees are supposed to be, and of why they should have a probabilistic structure, has only recently received due attention. Dan Lassiter and Noah D. Goodman offer a positive account of degrees for scalar adjectives (such as tall or happy), a prominent type of vague expression. The basic idea is that listeners and speakers maintain probabilistic models of each other’s utterance and interpretation choices, and that these models are leveraged for making the choices that are most useful for the communicative purposes of each party. Specifically, on Lassiter and Goodman’s model, degrees are rational credences that capture the listener’s uncertainty about the information intended by an utterance of the relevant sentence. Their theory brings to bear recent work on Bayesian pragmatics, which combines Gricean and game-theoretic influences with an approach to inference and decision-making under uncertainty that has been influential in recent cognitive science.
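
In rough outline (this is only an illustrative threshold-based rendering of the idea, not the authors’ full model), the degree to which a listener counts an individual a as tall, upon hearing ‘a is tall’, can be glossed as the listener’s posterior credence that a’s height exceeds a contextually uncertain threshold \(\theta\): \[ \mathrm{deg}(\mathit{tall}(a)) = P(h_a > \theta \mid \text{‘a is tall’ is uttered}), \] where the posterior is obtained by updating a prior over heights and thresholds on a probabilistic model of the speaker’s utterance choices.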

Paul Egré presents and defends a different type of probabilistic degree theory of vague judgements, inspired by an idea of Émile Borel’s. On that account, judgements involving vague predicates rest on a two-stage mental mechanism: first, a stimulus (some magnitude of height, brightness, loudness, or the like) is mapped onto an inner scale of magnitude, which provides a mental representation of that magnitude with some approximation; second, that representation is compared with a distinguished value, which can be understood as a threshold for mental representations to be categorised in a certain way. As Egré shows, this idea can be formalised in terms of signal detection theory, a major theory of imperfect discrimination. According to this, judging whether an object is tall involves essentially the same sort of mental mechanism as is involved when a subject has to detect the presence of a signal against a background of white noise.
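
In the simplest, equal-variance Gaussian instance of such a model (given here only to illustrate the general approach), a stimulus of true magnitude h is mapped onto a noisy inner representation \(X \sim N(h, \sigma^2)\), and the object is judged tall just in case X exceeds the threshold \(\theta\); the probability of a positive judgement is then \[ P(\text{‘tall’} \mid h) = \Phi\!\left(\frac{h - \theta}{\sigma}\right), \] where \(\Phi\) is the standard normal cumulative distribution function. The resulting response curve increases smoothly with h instead of jumping from 0 to 1 at a sharp cut-off, which is one way of capturing the gradedness of vague judgements.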

The contributions by Igor Douven and Masaki Ichinose explore various ways of bringing probabilistic degrees fruitfully to bear on particular philosophical problems involving vagueness. As Douven shows, probability logic not only offers a useful tool for modelling the Sorites, Lottery, and Preface paradoxes; it may also be adopted for a new account of the paradox of Theseus’s Ship, a paradox of material constitution. Specifically, it is shown how Edgington’s degree-theoretic strategy for the Sorites carries over to Theseus’s Ship. In effect, the paradox can be resolved in a way that is almost non-revisionary, in the sense that the recommended remedy is just a mild restriction of some of the inferential rules involved.
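
The key constraint exploited in Edgington-style treatments is that in a classically valid argument the uncertainty of the conclusion (one minus its degree, or ‘verity’) cannot exceed the sum of the uncertainties of the premises: \[ 1 - v(C) \;\le\; \sum_i \bigl(1 - v(P_i)\bigr). \] In a long chain of stepwise replacements, each individual conditional premise may thus be almost perfectly acceptable even though the conclusion is not acceptable at all, since many tiny shortfalls add up over the chain (this is only a sketch of how the strategy carries over, not a substitute for Douven’s own treatment).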

Ichinose’s paper models the vague distinction between descriptivity and normativity in probabilistic terms. According to this, descriptivity and normativity may come in degrees that have a probabilistic structure: claims may be descriptive to some positive degree and at the same time normative to some positive degree. Ichinose furthermore suggests a model of the way in which degrees of normativity and descriptivity relate to each other.

3. Indeterminacy and vagueness of credence. Orthodox probabilistic models of credence describe states of information by way of a probability function, which assigns to any event in a logical space a single real value, even in cases where there seems to be no warrant for such a determinate attitude. For example, what should our rational degree of belief be that the global mean surface temperature will have risen by more than four degrees by 2070? Should it be 0.75? And how about 0.75000001? Or 0.74? There seems to be no warrant for saying that our available evidence warrants any particular credence function. Rather, it seems more appropriate to say that credences may be imprecise. More precisely, one may say that doxastic states should rather be modelled in terms of credal sets, which may contain more than one credence function. It has been suggested that cases of imprecision in probability may vary in nature: in epistemic instances, there is a fact of the matter as to which precise probability regarding an event is to be assigned relative to a doxastic state; it is just due to the imperfect introspectability of this state that there is no warrant for assigning any such value to the event. In non-epistemic instances, on the other hand, there is simply no fact of the matter that would determine a precise probability.Footnote 5
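
A minimal formal rendering (a sketch, not tied to any particular author’s preferred formulation): a doxastic state is represented by a credal set \(\mathcal{C}\), a set of probability functions, and an event A is associated with the lower and upper probabilities \[ \underline{P}(A) = \inf_{P \in \mathcal{C}} P(A), \qquad \overline{P}(A) = \sup_{P \in \mathcal{C}} P(A), \] so that imprecision shows up as a gap between the two values, and the orthodox single-function model is recovered as the special case in which \(\mathcal{C}\) is a singleton.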

One may have various sorts of motivation for adopting an imprecise probability approach. One motivation is the idea that probability orderings can be incomplete. Specifically, it has been suggested that there are cases of incommensurability in comparisons of probability: our expectation of rain, when we set out for a walk, may sometimes be neither more likely than not, nor less likely than not, nor as likely as not, just because different parts of the evidence may point in opposite directions (the barometer reading may be high, but the clouds black).Footnote 6 As in the related discussion of value relations, it is controversial whether there are cases of genuine incommensurability in comparisons of probability: to wit, one may suggest explaining supposed cases of incommensurability away as cases where it is in fact only vague whether one event is more likely than, less likely than, or as likely as another.Footnote 7 Wlodek Rabinowicz’s supervaluationist intersection account offers a novel model of probability relations. It not only accommodates the distinction between incommensurability and vagueness, but introduces even more refined distinctions that go beyond what could be modelled in previous accounts. The basic idea is to model one event’s being likelier than (less likely than, as likely as) another as its being assigned a higher (lower, equal) credence by all members of the class of permissible credence assignments.
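
In schematic form (this is only a gloss on the basic idea, not Rabinowicz’s full apparatus), with K the class of permissible credence assignments, \[ A \succ B \iff C(A) > C(B) \text{ for all } C \in K, \qquad A \approx B \iff C(A) = C(B) \text{ for all } C \in K, \] and the more refined distinctions, including those between incommensurability and vagueness, can then be drawn in terms of the different patterns of agreement and disagreement among the members of K.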

Another motivation for imprecise probability models is the idea that credence should reflect evidence. Compare, for example, a case where a coin is known to be fair with a case where a coin of unknown bias is tossed, and consider in either case the hypothesis that the coin lands heads (H) and the hypothesis that it lands tails (T). In either case it seems appropriate to have, on balance, the same degree of belief in H as in T. At the same time, however, the evidence in the first case seems to have more weight. One way of modelling this intuitive difference between the balance and the weight of evidence is to represent the two cases by the different probability assignments P(H) \(=\) {0.5} and P(H) \(=\) [0, 1], respectively.Footnote 8 Aidan Lyon argues for an even more refined imprecise probability model of credence. The crucial point is that doxastic states may be genuinely fuzzy, in the sense that they cannot be fully adequately represented in terms of a single credence function, but equally cannot be adequately represented in terms of a single credal set. Rather, according to Lyon, in cases of genuine fuzziness credence functions can at best be assigned intermediate degrees of membership in one’s credal set. The presented fuzzy credal set account provides a more refined toolbox for modelling the balance and weight of evidence.
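
A rough sketch of the contrast (the notation is merely illustrative): the fair coin is represented by a credal set containing only functions with \(P(H) = 0.5\), the coin of unknown bias by a credal set containing, for every \(x \in [0,1]\), a function with \(P(H) = x\); the balance of evidence is the same in both cases, but the spread of the set records the difference in weight. A fuzzy credal set adds a membership function \[ \mu : \mathcal{C} \rightarrow [0,1], \] assigning each candidate credence function a degree of membership, so that, for instance, functions with \(P(H)\) close to 0.5 may belong to the set to a higher degree than functions assigning extreme values to H.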

Agents’ credences typically represent reality only imperfectly; if they were perfectly accurate, they would assign the maximal value to all and only the truths. Notwithstanding the actual inaccuracy of agents’ credences, the question arises whether their credences could possibly meet certain standards of accuracy, in the sense that some possible credences of theirs represent the actual facts perfectly accurately, or at least to some sufficient degree. Eleonora Cresto discusses the question of what follows if this question is answered in the positive. The gist of her discussion is that modal constraints on the accuracy of credence with respect to the actual facts may have non-trivial formal implications for admissible credence assignments to logically contingent propositions (such as it rains and one’s degree of confidence that it rains is not smaller than 50 percent). This discussion ties in closely with discussions of the knowability paradox (in epistemic logic) and of the reflection principle (in formal epistemology). In effect, while violations of the reflection principle are shown to be coherent, certain probabilistic versions of the knowability paradox do arise. Cresto’s framework of modal logic for a language containing probability functions is broad in scope, allowing for a unified discussion of, and perhaps a unified solution to, the paradox of knowability and its probabilistic relatives.
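
For reference, the reflection principle at issue requires, in its standard rough formulation, that one’s current credence in a proposition A, conditional on one’s credence in A at a later time being x, be x itself: \[ Cr_{t_0}(A \mid Cr_{t_1}(A) = x) = x. \] (This is the generic formulation from the formal-epistemology literature, given here only for orientation.)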

4. Foundations of probability. Conventional wisdom has it that conditional probabilities are by definition certain ratios of unconditional probabilities, and that probabilistic independence is likewise defined in terms of unconditional probabilities (two events being independent just in case their joint unconditional probability equals the product of their individual unconditional probabilities). Alan Hájek and Branden Fitelson argue that there are good reasons to put this conventional wisdom to rest, and with it the standard (Kolmogorovian) approach to probability and independence. On their account, more promising options for modelling independence are available within a more general framework of conditional probability, in which conditional probabilities may be defined even when the condition has unconditional probability zero. In effect, Hájek and Fitelson’s case for a non-standard approach to independence is, by the same token, a case for a non-standard (Popperian) framework of probability.
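
For concreteness: on the standard Kolmogorovian approach, conditional probability is defined by the ratio formula and independence by factorisation, \[ P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad (P(B) > 0), \qquad A \text{ and } B \text{ are independent iff } P(A \cap B) = P(A)\,P(B), \] so that \(P(A \mid B)\) is simply undefined whenever \(P(B) = 0\). On a Popper-style approach, by contrast, the two-place function \(P(\cdot \mid \cdot)\) is taken as primitive and governed by its own axioms, so that conditional probabilities can be well defined even when the condition has unconditional probability zero.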