Philosophical Studies (2012), Volume 161, Issue 3, pp 453–470

Modelling vagueness: what can we ignore?

Rosanna Keefe
Department of Philosophy, University of Sheffield

DOI: 10.1007/s11098-011-9750-1

Abstract

A theory of vagueness gives a model of vague language and of reasoning within the language. Among the models that have been offered are Degree Theorists’ numerical models that assign values between 0 and 1 to sentences, rather than simply modelling sentences as true or false. In this paper, I ask whether we can benefit from employing a rich, well-understood numerical framework, while ignoring those aspects of it that impute a level of mathematical precision that is not present in the modelled phenomenon of vagueness. Can we ignore apparent implications for the phenomena by pointing out that it is “just a model” and that the unwanted features are mere artefacts? I explore the distinction between representors and artefacts and criticise the strategy of appealing to features as mere artefacts in defence of a theory. I focus largely on theories using numerical resources, but also consider other, related theories and strategies, including theories appealing to non-linear structures.

Keywords

Vagueness · Modelling · Sorites paradox · Degree theories · Artefacts

1 Introduction

A theory of vagueness aims to illuminate the workings of a vague language by giving a model of vague language. This involves identifying a model-theoretic structure which dictates the range of truth-values and the relations between simple and compound sentences and determines which arguments are valid. One range of theories offers numerical models that assign each sentence a value between 0 (falsity) and 1 (truth). The numbers are often interpreted as degrees of truth and I call a theory that offers such a model a Degree Theory of Vagueness. There is some intuitive appeal to building into one’s theory of vagueness the way in which, e.g., baldness, tallness and redness come in degrees. And this can be done in many different ways, yielding a range of different models of the phenomenon. In this paper, I ask whether we can benefit from employing a rich, well-understood numerical framework, while ignoring those aspects of it that impute a level of mathematical precision that is not present in the modelled phenomenon of vagueness. Can we ignore apparent implications for the phenomena by pointing out that what we are offering is “just a model”?

The chosen numerical model can be one of the many alternative many-valued logics, according to which, for example, the value of a conjunction is the minimum of the values of the conjuncts and the value of a disjunction is the maximum of those values.1 Many philosophers have objected that the various different truth-functional definitions for the connectives all deliver highly counter-intuitive results, however: to take just one example, contradictions can come out less than completely false.2 Dorothy Edgington offers an alternative, probabilistic framework, providing non-truth-functional definitions of the connectives. I briefly sketch her theory here: it illustrates one form a degree theory of vagueness can take; and Edgington herself is explicit about adopting the “just a model” response to certain objections to the use of numbers.
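
For instance, applied to a borderline sentence, the min rule lets a contradiction come out as much as half true rather than completely false. A quick illustration in Python (the value 0.5 is just an example):

    # The standard min/max rules applied to a contradiction A & not-A.
    v_A = 0.5                 # a borderline sentence
    v_not_A = 1 - v_A
    print(min(v_A, v_not_A))  # 0.5: the contradiction is not completely false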

Instead of talking about degrees of truth, Edgington employs a notion of “verities”, which are degrees of closeness to clear truth. I will use v(p) to represent the verity of p, or, more generally within other frameworks, the number between 0 and 1 that is assigned to p. Edgington recognises that the verity of A&B doesn’t just depend on the verities of A and of B, but on the relations between them (just as the probability of A&B isn’t settled by the probabilities of each conjunct alone if A and B aren’t independent). To determine the verity of A&B, we need to consider an analogue of conditional probability (e.g. the probability of A given B), which she calls “conditional verity”, where the conditional verity of A given B is the verity A would have if B were to count as true. For example, if a is borderline tall where v(a is tall) = v(a is not tall) = 0.5 and b is slightly shorter, so v(b is tall) = 0.45, the conditional verity v(a is tall given b is tall) must be one: if b is to count as tall, then, being taller, a certainly counts as tall. The verity of a conjunction A&B is then the conditional verity of A given B multiplied by the verity of B. Disjunctions and conditionals can also be defined in terms of conditional verity, while Edgington preserves the truth-functional definition of negation such that v(not-A) = 1 − v(A).
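
To make the shape of these clauses vivid, here is a minimal sketch in Python using the example values above. It is purely illustrative, and the conditional verities are supplied by hand rather than computed, which is precisely the sense in which the account is not truth-functional.

    # A sketch of Edgington-style verity rules, for illustration only.
    # Conditional verities must be given as inputs: they are not recoverable
    # from the unconditional verities alone (the non-truth-functional point).

    def verity_not(v_a):
        # v(not-A) = 1 - v(A)
        return 1 - v_a

    def verity_and(v_a_given_b, v_b):
        # v(A & B) = v(A given B) * v(B)
        return v_a_given_b * v_b

    # The example from the text: a is borderline tall, b slightly shorter.
    v_a_tall = 0.5
    v_b_tall = 0.45
    v_a_tall_given_b_tall = 1.0  # if b counts as tall, the taller a certainly does

    print(verity_and(v_a_tall_given_b_tall, v_b_tall))  # 0.45: as true as "b is tall"
    print(verity_not(v_a_tall))                         # 0.5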

Whatever the specifics of the numerical framework chosen, the employment of those resources is threatened by certain pressing concerns. In brief, one compelling worry is that an account dealing in the assignment of real numbers to vague sentences imposes an unrealistic and unacceptable level of precision—e.g. a sentence is assigned 0.679 instead of 0.678—and it thereby badly misrepresents vagueness.3 This problem has been associated with problems concerning higher-order vagueness: there seems to be a tension in accommodating vagueness by denying that all sentences are assigned either 0 or 1 but then assigning some unique value between 0 and 1 to each sentence.

In response, Edgington appeals to the idea that numerical assignments are merely models and she thereby hopes to avoid certain commitments that would follow from a more realistic construal of the model. This approach has been taken in relation to other specific degree-theoretic models and could be taken with any such models, and perhaps with models of other types. Can it provide a satisfactory way to avoid objections to various features of a theory that advocates such a model?

The general idea is that in response to a criticism of some aspect of the model, a theorist may respond that it is merely an artefact of the model.4 When you provide a model of something, you can take some aspects of the model seriously but not others. Some features may be, for example, merely an insignificant by-product of the way you’ve formulated it, just as its being written down in green is irrelevant. To this end, we can distinguish between representors within someone’s model and artefacts of that model. And the degree theorist may claim that the unwanted precision of a numerical model is merely an artefact of the model. By focusing on the representors in the theory, a theorist can hope to tell a plausible story about vague language and solve the paradox without incurring unwanted commitments.

This paper explores the distinction between representors and artefacts and criticises the strategy of appealing to features as mere artefacts in defence of a theory. I focus largely on theories using numerical resources, but also consider other, related theories and strategies including those using a non-linear structure of values rather than the numerical scale.

2 Representors and artefacts

How should we characterise the crucial distinction between representors and artefacts? Roy Cook, who is sympathetic to Edgington’s approach, characterises representors as “aspects of the model that are intended to correspond to real aspects of the phenomenon being modelled” (2002, pp. 236–237). This contrasts with Shapiro’s characterisation as those aspects that do correspond to aspects of the phenomenon (Shapiro 1998, p. 139 and 2006, p. 50). Cook justifies the modification by noting “the importance of the distinction between those aspects that do correspond but are not recognized as doing so and those correspondences that we recognize” (p. 237). But we surely need to allow for unrecognized representors. We want to deduce things about the phenomenon being modelled given assumptions about some of the representors, which amounts to concluding that some feature is a representor regardless of the theorist’s intentions about it. Indeed, Cook follows exactly this practice in, for example, arguing that certain features of Edgington’s account aren’t artefactual because they are based on representational features (e.g. the accounts of the connectives).5

There are different problems with Shapiro’s alternative characterisation that something is a representor if and only if it suitably corresponds to the phenomenon. For we want to allow that the same model can be regarded in different ways by different theorists who recognise different features of the model as representational (e.g. the difference between a degree theorist who takes exact values seriously and one who does not). A feature of the model might happen to correspond to the phenomenon even though the theorist explicitly regards it as merely artefactual.

On an intermediate option, representors are those features that the modeller intends to correspond to features of the phenomenon, plus any further features that must also so correspond if the features explicitly taken as representors do. This fits with the practice of deducing the representational nature of certain features from assumptions about other representors.

Now, it would be possible on this third characterisation for theorists to regard a feature of the model as artefactual, when in fact it can be shown to be representational given that they take certain other features to be representational. On the one hand, we may put that aside in the hope that our theorists aren’t too badly mistaken about what is artefactual and can perhaps live with the consequences of the unintended representational nature of some features of their account. On the other hand, this strategy cannot be assumed to be unproblematic. I have argued elsewhere (Keefe 1998 and chapter 5 of 2000) that degree theorists who defend the standard many-valued definitions of the connectives cannot coherently maintain that the assigned numbers are arbitrary in the way they often want to claim: the preservation of the relations between the compound sentences and their components, along with certain plausible assumptions, serves to pin them down to a unique set of numerical assignments. Some of the same problems may arise for Edgington—as I will argue in Sect. 5 in relation to the definition of negation—even if they are not quite so severe.

All of these possible characterisations of the representor/artefact distinction raise further questions, however. What is meant by “correspondence” between aspects of the model and the phenomenon? On some natural understandings, correspondence will be too easy to come by. The numerical values assigned to sentences of the form “x is tall” may be said to correspond to something in the phenomena because there is a function from height to these truth-values. The abundance of mathematical functions and sets makes this form of correspondence guaranteed, but any selection of truth-value assignments meeting certain minimal conditions will count as corresponding in this sense. Something more is needed, but it isn’t clear what.

Similarly, Shapiro says that units of measurement are artefacts because “they do not correspond to anything in real physical systems” (Shapiro 2006, p. 50). But there is a physical property of length corresponding to the unit of one metre, and, moreover, the relation between it and other physical properties corresponds to the relations between the numbers (that’s why it is a successful system of measurement). One natural response to explain why these kinds of correspondences fall short of what is needed for a representor would be to point out the arbitrariness of the choice of physical property to correspond to the number 1. A different choice of unit would have been equally successful and perhaps recognising this fact is recognising that the chosen unit is a mere artefact of the model.

More generally, perhaps the artefacts of a model are those features that vary between different acceptable models. The justification for ignoring some feature of a model because it is a mere artefact is often and naturally illustrated by the fact that the feature is not common to all the models that could have been used. This may suggest that we employ supervaluationary techniques to yield a familiar sort of story on which we trust what is common to all models of a certain type. A standard supervaluationist theory quantifies over a range of models of a certain type and declares a sentence true (false) iff it is true (false) in all those models. For example, the standard supervaluationist theory of vagueness quantifies over all the admissible precisifications of our language—the classical models corresponding to acceptable ways to make the language precise—and advocates the corresponding supervaluationary truth-conditions. A degree theorist may take the models quantified over to be infinite-valued models reflecting degrees of truth, and the supervaluationary technique could work equally well in this framework, delivering, for example, the result that “v(p) = 0.7642” will not be true in all models and so will count as artefactual.
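
The quantificational idea can be pictured with a toy implementation. In the sketch below (illustrative only; the sentences, values and set of models are invented), a claim counts as non-artefactual just in case every acceptable model verifies it:

    # Acceptable models represented simply as assignments of values to sentences.
    acceptable_models = [
        {"p": 0.7642, "q": 1.0},
        {"p": 0.7640, "q": 1.0},
        {"p": 0.7651, "q": 1.0},
    ]

    def holds_in_all(claim):
        # Supervaluationary test: trust only what every acceptable model agrees on.
        return all(claim(model) for model in acceptable_models)

    print(holds_in_all(lambda m: m["p"] == 0.7642))  # False: the exact value varies
    print(holds_in_all(lambda m: m["q"] == 1.0))     # True: agreed on by all models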

If we are only to trust what is true according to all the relevant models, then it may seem that the supervaluationary model gives us a definite answer to what we should trust, so that we can regard that model as capturing the truth about the phenomenon. For anything true in only some of the acceptable models and so not true in the supervaluationary model is something not true of the phenomenon, in contrast with the truths of the supervaluationary model. But, whatever its merits, this way to employ the supervaluationary technique does not give us a version of a view that regards elements of a model as mere artefacts. The supervaluationist move gives us a model—the one resulting from the quantification—all of which we can trust, and thus the appeal to artefacts drops out. The idea of the strategy in question was that our vague language is successfully modelled by a model of the desired type—even though some elements of it are not representational—not that we should look to a model constructed by reference to a range of models of the desired type.

Perhaps instead we should see the supervaluationist model as only providing truth-conditions for claims (of the metalanguage) about what is and isn’t artefactual rather than also reflecting object-language claims (those the degree-theoretic models are designed to model). We can then deny that the supervaluationist model provides the right model at the object-language level: it is the original degree-theoretic models which do that. But modellers who seek to appeal to artefacts may also resist this supervaluationary move as capturing the representors because they claim that a statement can come out true on all models even though it just reflects an artefactual feature of the modelling. For example, taking the degree theorist’s models as those to be quantified over, “every sentence has a unique, precise value” would come out as non-artefactual, because true on all models (even though on each model sentences will have different values). More generally, the modeller might hope to leave space for something that is common to all the relevant models and yet still an artefact of the modelling. This leaves us without a characterisation of what the artefacts of a model are, since it involves rejecting their characterisation in terms of variation between models. It also means rejecting a clear and appealing story about what to trust in the theory, which leaves a gap that needs to be filled, as pursued in the next section.

We have not, then, settled on a definitive characterisation of when something counts as an artefact. As a reply to an objection, the claim that something is merely an artefact is, at best, slippery: it is not clear quite what it amounts to. And the reply doesn’t come for free: work needs to be done to show that you can have the artefacts you want while your theory still delivers what you need.

3 Threats and challenges for an appeal to artefacts

Suppose a theorist wants to meet an objection by claiming that some feature of their model is merely an artefact, so can be ignored. In that situation:

(i) Their account may be threatened by the objection that the feature in question must be taken seriously, given other features of the theory and what the theory seeks to achieve.

(ii) The theorist must meet the challenge of giving a compelling story about what we can trust in the theory and show that a substantive theory of vagueness is thereby given.
I will argue that (i) and (ii) serve to undermine degree theories. First, as a point of contrast, I consider supervaluationism. Shapiro regards the supervaluationist’s precisifications as artefacts of the system, since they are “a tool in figuring the truth conditions of vague sentences” (2006, p. 72). But the lack of clarity over the characterisation of artefacts renders this questionable: there may be space for them to be “theoretical constructs” in some other sense. For example, we can take precisifications to be—and so correspond to—set-theoretic entities whose existence we may have no inclination to deny. Similarly, assuming there’s no objection to classical valuations—and there shouldn’t be if we consider them for modelling precise language, which is a way language could be—then appeal to lots of them in a supervaluationary semantics is also reasonable and so we needn’t regard them as things to be used but not taken seriously. On the other hand, it does not seem appropriate to treat precisifications as representors, given that they do not correspond to aspects of the phenomenon in any significant sense. They are perhaps like units in a measurement theory, which help us develop a story about the phenomenon without corresponding to something that in itself plays a significant role in the phenomenon.6

Opponents of supervaluationism have sometimes objected that the supervaluationist’s appeal to these perfectly precise valuations is problematic because, first, we could not possibly use them; or second, they play no psychological role in our understanding or use of vague sentences; or, third, their employment presents vagueness as “eliminable in principle” (Edgington 1996, p. 316). Supervaluationists typically respond that precisifications do not need to play these roles. We can accept the commitment to precisifications without being committed to them playing any psychological role or representing how the language could practically have been or some such. The debate may then be framed in terms of (i): are supervaluationists committed to a more substantive role for precisifications given their other claims about what the theory delivers? I claim not, but I will not pursue this discussion here.

More importantly, however, even if precisifications are considered as artefacts, the supervaluationist can meet the challenge (ii). Whether or not we count as ignoring precisifications because we don’t need something in the world corresponding to them to play any substantive role, it is clear exactly what we do take seriously and what we can trust, namely the outputs about truth-values and logic. As I will argue, degree theorists are not so well placed. So, let us return to (i) and (ii) in relation to degree theories.

Responses to (i) and (ii) can interact: an answer to (ii) may prompt an objection of the form of (i), while an objection of the form of (i) might be met by providing a different answer to (ii). My arguments (Keefe 1998) take the form of (i): in virtue of their employment of the numerical framework and their definitions of the connectives within that framework, typical degree theorists are committed to unique values for vague sentences. For example (in the light of considerations from measurement theory, and in line with the theorists I was discussing), I took the ordering of sentences by value as a representational feature. This led to various problems, including the fact that if there is a definite answer to the comparative relations between the values of any two sentences, then there is a determinate set of sentences true to value 1, which considerations of higher-order vagueness suggest should not be the case.

A degree theorist may seek to accommodate more flexibility as to ordering. For example, although there may be a clear answer to whether the value of “a is tall” is greater than that of “b is tall” when both are borderline—as dictated by whether a is taller than b—the comparison of the value of “Amy is tall” and “Betty is thin” may be less clear when both are borderline cases. To impose a definite answer to this question may be thought to go beyond what is determined by the phenomena, so we may hope to be able to regard the answer provided within a model as one of the artefacts of that model. This also opens up the possibility of regarding the boundary to the value 1 cases within any model as a mere artefact too.

Such an approach prompts the question of what is left of the numerical scale to do the work, if ordering isn’t to be taken seriously either. A detailed response to (ii) is needed to deliver the answer here. So, in the next section, I ask what the representors within a degree theory can or should be.

4 What the representors could be

Which features of a model such as Edgington’s should be taken as representational? Cook says “The assignment of a particular real number is not representative”, but “some part of the ordering must be representative” (Cook 2002, p. 241).7 Some part, note, not all. Similarly, for Edgington, something of the form “the verity of p is greater than the verity of q” is representational in some cases and not in others. In a recent unpublished paper, Edgington says: “If the number assigned to a sentence is significantly smaller than that assigned to another, there is a real difference between the sentences. A small difference, however, is not necessarily indicative of an actual difference in verity” (unpublished ms, p. 24, see also Cook p. 241). As before, where Amy is borderline tall and Betty is borderline thin, there is no significance to the decision about which of “Amy is tall” or “Betty is thin” has the higher verity value. Modelling it one way rather than the other is a mere artefact.

If there is a property, F, that is always representational, then when we use the model to deduce that some sentence or sentences are F, we know we have reached a conclusion about the phenomenon (rather than just about the artefacts of the model). But if F is only sometimes representational, we cannot draw this conclusion when our model declares that some sentence is F. Of course, all is not lost, since sometimes we may be able to show that the necessary features of our premises are all representational, and then we can similarly trust the conclusion. But this would, at best, make for a much more cumbersome task in employing the model. Although F is sometimes representational, we cannot treat it as such in reasoning with the model unless we know the specifics of the case.

Another important property that, according to Edgington, is sometimes representational and sometimes not is having verity 1. If a is an absolutely clear case of F, then it will have verity 1, where there is no vagueness or flexibility to this and the assignment reflects an important feature of the phenomenon. But, the boundary to the clear cases is merely artefactual, corresponding to the phenomenon of higher-order vagueness, since “it is unclear where clear truth leaves off and something very close to it begins” (1996, p. 298). She says, “the use we make of the framework should not be sensitive to that distinction [between verity 1 and verity less than 1]”. But the framework is highly sensitive to the difference: premises that are just less than verity 1 can lead to a false conclusion (with verity 0) in valid arguments; premises that are clearly true cannot. For short inferences there may be little difference as the drop in verity will be so slight—as she illustrates—but there’s a significant difference in the impact of reasoning overall. If there is an optimum model in which all premises of a valid argument have verity 1, does that guarantee the truth of the conclusion? If so, we cannot be casual about whether they have verity 1, but if not, how can we trust the model at all?
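
The asymmetry can be put numerically. The sketch below assumes a constraint of the kind Edgington’s probabilistic analogy suggests and which the remarks above presuppose: the conclusion of a valid argument falls short of verity 1 by at most the sum of the premises’ shortfalls. The particular numbers are invented for illustration.

    # Worst-case verity of a valid argument's conclusion, assuming the
    # constraint that its shortfall from 1 is at most the sum of the
    # premises' shortfalls.

    def worst_case_conclusion(premise_verities):
        shortfall = sum(1 - v for v in premise_verities)
        return max(0.0, 1 - shortfall)

    print(worst_case_conclusion([0.999, 0.999]))  # ~0.998: short inferences barely suffer
    print(worst_case_conclusion([0.999] * 2000))  # 0.0: a sorites-length chain can bottom out
    print(worst_case_conclusion([1.0] * 2000))    # 1.0: verity-1 premises never erode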

One response to these concerns might be to maintain that the arguer does not, in fact, use a single model. Instead, perhaps we should consider all the legitimate models; that way they will not rely on the artificial decision to assign some sentence verity 1 rather than fractionally less than 1. We could then take a supervaluationary route and accept only things on which all those models agree, as discussed in Sect. 2. But this is to radically change the story about how to understand reasoning with vague language. This isn’t Edgington’s strategy and it isn’t the one we’re interested in here.

What about the verities themselves? Even if the exact numbers assigned in the model are not to be taken seriously, some features of those verities must surely be representational. Here’s Cook’s line:

Truth comes in degrees. Thus the fact that degree-theoretic semantics represents truth as coming in more varieties than the traditional two (absolute truth and absolute falsity) is representative; in other words, the assignment of verities is a representor, and there are real verities in the world. (p. 239)

In other words, there is a structure of truth-values between truth and falsity that shares some of the features of the real numbers between 0 and 1 and so is well modelled by them, even though that model imports extra structure and non-representative relations.

But if there are genuine varieties of truth instantiated by sentences and artefactual features arise as a result of labelling them by numbers, then there must be determinate answers to questions of whether sentences instantiate the same variety. But that generates problems for several features that the position sought to treat as artefacts of the model. Consider the case with “Amy is tall” and “Betty is thin”, where Amy is around the middle of the borderline cases for “tall” and Betty around the middle of the borderline “thin” cases, so it is not clear whether the sentences should be assigned the same value or not, and if not which should be assigned the higher value. The advantage of the modelling approach with this kind of case was supposed to be that, although, on a particular model, the relations between these values will be settled one way or another, we can merely treat that as an artefact of the model. But, on Cook’s picture here, we can ask of the actual verities instantiated by those sentences whether they are the same (even if we grant it is acceptable to model them either as the same or not) and there must be an answer one way or the other. The appeal to artefacts may ensure a flexibility in how it is modelled, but if we are still committed to a fact of the matter about whether the actual verities are the same, then that seems to defeat the purpose. Now, the response for this kind of case may be to deny that “Amy is tall” and “Betty is thin” do have the same actual verity (even if it would be acceptable to model them as having the same verity). To respect the uncertainty about their ordering, we might then deny that there is a fact of the matter about which actual verity is greater: perhaps the assumption that one of the two must be greater could be put down to an unreasonable assumption that the actual verities share the linear structure of the real numbers. The structure of the actual verities may then be non-linear; in Sect. 7, I consider theories that explicitly employ such structures as part of their modelling. Some of the concerns raised there would carry over to the view in question here.

Can the above line taken by Cook allow for the desired flexibility over the cases with verity 1? The hope was to accommodate the lack of sharp boundary to the clear cases by maintaining that the boundary to those cases was merely artefactual. But, take some borderline clear case, “Harry is bald”: there must be a fact of the matter about whether or not it instantiates the same verity as an absolutely clear case such as “Hank is bald” (where Hank has no hair at all). Again, we may be able to model them either way, but there has to be an answer to whether their real verities do coincide. We can’t capture the uncertainty over this as an artefact of how we label its verity.

Could Cook reply here that this assumes there is a unique “top” value, whereas this is a feature of the real number interval that is not shared by the structure of real verities? In other words, “Harry is bald” does not instantiate the same verity as “Hank is bald” (where Hank has no hair at all), but that does not mean that “Harry is bald” doesn’t also count as definitely true. More generally, there may be flexibility over which verities count as “top” elements. But, at the very least, this position is in tension with Cook’s natural and appealing presentation of the view. For it seems crucial to Cook’s framework that we can make sense of absolute truth and falsity. He says, for example, the “verities were introduced to give us intermediate values other than the traditional two” (p. 241) and he talks of truth as “coming in more varieties than the traditional two” (p. 239, my emphasis), committing himself to the limit values and other verities as varieties between them. (So, the defence cannot be, for example, that there is no top value, as there might be if you thought, for example, that no-one can count as tall to the greatest degree because there could always be someone taller). But then, on this view, sentences which instantiate a verity other than the limit one must fail to fall into the top category and there is a fact of the matter that they do, so no flexibility about the cases instantiating the traditional truth-values.

Edgington cannot accept Cook’s general picture about verities anyway, since she denies that truth comes in more varieties than the traditional two (see especially her unpublished manuscript; also 1996, p. 299). Note that degrees of closeness to clear truth aren’t merely epistemic for Edgington either (e.g. coinciding with how close to true you think the sentence is). I’m not sure if I understand the middle ground whereby failing to be clearly true is neither a way of failing to be true nor a way of failing to meet some more demanding condition (e.g. being knowably true). Moreover, the solution to the sorites paradox is hard to understand on this interpretation: if it isn’t truth that seeps away (as on the more standard interpretation of degrees of truth), why does the conclusion end up false? But anyway, this view does not help in our attempt to clarify what we should take seriously in the model. Perhaps there are real degrees of closeness to clear truth that are neither epistemic nor partial truth, but we have little grasp on these and to be told that they share some, but only some, of the features of the structure of real numbers really does not help.

5 Rules for the connectives

The rules for verities of the connectives are intended as representors. It is hard to see how a degree theorist could deny this, given the centrality of such definitions to the degree theorist’s account, and Cook and Edgington are both explicit that those rules are to be taken seriously. But what are those definitions of the connectives representing? It can’t be that the numerical relations in the model capture literally numerical relations among the verities themselves. And whereas it is easy to see what the greater than and less than relations between numbers correspond to if there is something like ordering in the phenomenon itself, multiplication doesn’t have such a natural analogue in a non-numerical setting like this. Here is Cook’s answer (p. 243): “once… numbers are assigned, the rules produce exactly the right orderings between the verities of, and exactly the right logical entailments between any compound statement and its subsentences.”

Let us consider the claim that the rules produce exactly the right orderings between the verities of any compound statement and its subsentences. For a truth-functional theory, this kind of claim is threatened by the typical arguments against those truth-functional definitions. For example, if v(a is tall) = 0.5 and b is slightly shorter, so v(b is tall) = 0.45, then one would expect v(b is tall and a is not tall) to be 0 and certainly less than v(b is tall). But by the standard definition of conjunction, its value is equal to v(b is tall).
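
For concreteness, here is that calculation under the standard min/max rules (a sketch using the values just given):

    # Standard truth-functional rules: min for conjunction, 1 - v for negation.
    def tf_not(v):
        return 1 - v

    def tf_and(v1, v2):
        return min(v1, v2)

    v_a_tall = 0.5
    v_b_tall = 0.45

    # "b is tall and a is not tall" should intuitively be (close to) false,
    # yet the min rule makes it exactly as true as "b is tall" alone.
    print(tf_and(v_b_tall, tf_not(v_a_tall)))  # 0.45
    print(v_b_tall)                            # 0.45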

Edgington’s definitions fare somewhat better. Appropriately, for example, the verity of a disjunction is at least as high as the verity of each disjunct and the verity of a conjunction is no higher than the verity of each conjunct. The orderings that are not shared by the standard definitions depend on conditional verity values and some of these results are indeed desirable. For example, v(A&B) = v(B) when v(A given B) = 1 such as in the case of “a is tall and b is tall” when a is taller than b. I take up two worries, though.

Some of the orderings delivered by the definitions of the connectives are, at best, questionable. Suppose b is taller than a and both are borderline tall and quite far from being clearly tall, say v(a is tall) = 0.3 and v(b is tall) = 0.4. Then v(a is tall and b is tall) = 0.3 (because v(b is tall given a is tall) = 1). But, suppose b is quite close to being clearly thin: v(b is thin) = 0.8. Then v(a is tall and b is thin) = 0.3 × 0.8 = 0.24, i.e. less true than the claim that they are both tall. That seems wrong to me. Intuitively, they are better described as a tall man and a thin man than as two tall men. And this is exactly the kind of argument that Edgington herself employs in rejecting truth-functional definitions of the connectives (see, e.g., 1996, p. 304). When considering what the comparative relations between compound sentences should be, the methods that degree theorists and their opponents use to determine the answer are very varied and often not made explicit. This makes it hard to settle the correct orderings, and it is common for theorists to bite the bullet here and accept the orderings that their definitions yield. So, the above argument is not presented as conclusive. But, at best, Edgington’s definition of conjunction commits her to questionable orderings that cast doubt on the claim that her model reveals the structure of the phenomenon rather than imposing such a structure. The problems with negation are more conclusive, however.
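
Making the two computations explicit may help (a sketch; the independence of “a is tall” and “b is thin”, implicit in the 0.3 × 0.8 step above, is taken as given):

    # Conjunctions computed via conditional verity, as in the text:
    # v(a tall & b tall) = v(b tall given a tall) * v(a tall), and so on.
    v_a_tall, v_b_tall, v_b_thin = 0.3, 0.4, 0.8

    v_b_tall_given_a_tall = 1.0       # b is taller than a
    v_b_thin_given_a_tall = v_b_thin  # independent predications

    both_tall     = v_b_tall_given_a_tall * v_a_tall  # 0.3
    tall_and_thin = v_b_thin_given_a_tall * v_a_tall  # 0.24

    print(both_tall > tall_and_thin)  # True: the questionable ordering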

Recall that v(not-p) = 1 − v(p). When p is around half way between clear truth and clear falsity, it will have a verity value of around half and, intuitively with the approach in question, the decision on whether to model it with a value just above or just below 0.5 will be insignificant. The aim is to avoid commitment to some particular exact value for “a is bald” (when a is borderline bald) and that includes cases where we might think its verity is around 0.5. Yet the decision whether to give it a value less than, equal to or greater than 0.5 affects the ordering between not-p and its subsentence p. If the verity of p is greater than 0.5, it will be greater than the verity of its negation, but if it is smaller than 0.5 it will be smaller than its negation. And yet, the grounds for wanting a flexibility between the verities of “a is bald” and “a is not bald” (when v(a is bald) is around 0.5) are as good as those for requiring flexibility in relation to “Amy is tall” and “Betty is thin”.
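
The flip is easy to see numerically (illustrative values either side of 0.5):

    # With v(not-p) = 1 - v(p), the ordering of p and not-p flips around 0.5.
    for v_p in (0.49, 0.51):
        v_not_p = 1 - v_p
        print(v_p > v_not_p)  # False for 0.49, True for 0.51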

It may be tempting to reply that the reason the ordering of A and not-A is not to be taken seriously in this case, is because it is not an ordering shared by other acceptable assignments to the atomic sentences. Maybe the only orderings between compound sentences and their subsentences that the assignment can be assumed to get right are the ones that don’t rely on accidental features of the assignment. But can we make sense of that response?

We might then try stating Cook’s claim as “if the differences in values between the assignments given to atomic statements are representational, then the differences in value between the assignments given to compounds will be as well”.8 The response to my problem case with negation will then be to say that in that case the antecedent is not satisfied: there’s no representative difference between assigning p 0.49 and assigning it 0.51, so we should ignore the effects on the negation that the difference between those assignments yield. But when does the difference between assignments (e.g. v(p) as 0.49 or as 0.51) count as representative? It is single models that are supposed to represent and that have features that are representational or artefactual. So, although we can understand differences between sentences on an assignment as representational or not (e.g. the difference between “Amy is tall” and “Betty is thin” is not), this does not carry over to differences across assignments.

If we understand the difference across assignments as not representational because both assignments are acceptable, then the difference between assignments will typically not be representative. Maybe the difference between .89 and .21 is typically representative because a sentence can never have both those numbers as acceptable assignments. But will small differences ever be representative when considered across assignments?9 This renders questionable the antecedent of the proposed conditional—“if the differences in values between the assignments given to atomic statements are representational”.

It might then be responded that in a case such as a disjunction, the ordering—that the disjunction is at least as true as one of its disjuncts—is preserved across all acceptable models. Perhaps this is why the differences between the assignments in that case can count as representational. But then the claim about how the rules for the connectives are representational threatens to become trivial. For if a difference between assignments counts as non-representational whenever it delivers some significant difference in ordering (as in my problem case with negation), then it is trivial to claim that representational differences between assignments to atomic sentences deliver representational differences for the compounds.

So, it seems that not all orderings between a compound sentence and its subsentence are representational. This leaves unanswered the crucial question of what is representational about the definitions of the connectives and how they are to be understood. The option of maintaining that the definitions of the connectives are mere artefacts after all is unattractive, for if we are not to take even them seriously, then it is far from clear what we should take away from the theory. Moreover, Edgington and Cook both rely on the fact that orderings among compounds and their components are representational in arguing that Edgington’s is a genuine solution to the sorites paradox (see, e.g., Cook 2002, pp. 245–246).

The attempt to provide a numerical model of vagueness while denying that ordering is to be taken seriously has proved highly problematic. In the next two sections, I consider two more approaches that seek to avoid the commitment to facts about the ordering among all vague sentences without taking the same modelling approach as Edgington. The first is a different approach that acknowledges a range of acceptable numerical models and the second employs a model with a structure more complex than the numerical ones considered above.

6 Smith’s plurivaluationism

Nicholas J.J. Smith (2008) defends a theory of vagueness employing degrees of truth. His response to the problem he calls “the problem of artificial precision”—one problem we took the appeal to artefacts to be addressing—is to maintain that there is no unique intended model. There are multiple acceptable models and nothing to choose between them; in particular, our practice doesn’t single out just one. His “Fuzzy Plurivaluationism” doesn’t then quantify over the acceptable interpretations as a supervaluationist story would, however. To say that there are all these interpretations is the end of the story. If they agree on something (e.g. that v(p) = 0.3), we can talk as if that is the case, but that does not reflect a “super-interpretation” (see, e.g., 2008, p. 287).

Smith’s view does not appeal to the idea of artefacts, so let’s first explain its relation to my criticisms in this paper. Smith advocates ignoring features of a model that aren’t common to all the acceptable alternatives to that model, and this looks like the attitude taken by modellers who appeal to artefacts. But, note that Smith also wants to distance himself from commitment to the truth of certain things that are common to all the relevant models (e.g. the truth of the negation of the inductive premise of a sorites or the truth of penumbral connections), saying that we can only talk as if they are true. This is akin to theorists who appeal to artefacts of their models to meet certain objections, even in relation to something in their account that is common to all models of the relevant type—an attitude I sought to leave room for in Sect. 2. Plurivaluationism could be a way of cashing out this sort of approach.

According to Smith, we can’t do better than merely talk as if a sentence is true to degree 0.3 even if all acceptable assignments agree on that valuation: agreement across valuations does not deliver truth. Similarly, though, with a sentence that all assignments agree is completely true. We can only “talk as if” “Bertie is bald” is true, even if he has no hairs on his head, and we can only “talk as if” “he is not both bald and not bald” is true.

On this picture, then, we can never genuinely report truth simpliciter: we cannot get beyond truth in models. But we should not settle for a story in which there is no truth simpliciter. If, in order to avoid committing ourselves to certain problematic cases, we are left unable to attribute truth to even the clearest cases, then the price is too high.

Moreover, since we interpret the language all at once, rather than sentence by sentence, the multiplicity of interpretations due to the vagueness of some terms will even affect the semantic status of precise sentences involved in the interpretation. We can talk as if “two is an even number” is true, but this too must be “mere talk”, since there is nothing beyond the multiplicity of interpretations of this sentence along with the vague ones. The vagueness of parts of our language robs unproblematic definite cases and even precise sentences of truth simpliciter too.10 Smith has no way to hive off the unproblematic cases where you have truth on all interpretations—clear cases of a predicate and precise sentences—calling them true, while denying the truth simpliciter of, say, the negation of the sorites premise when it too is true on all acceptable valuations.

For Smith, vagueness enters when meaning and facts fail to determine whether something (e.g. “Bob is bald”) is true, rendering it indeterminate and so in need of being modelled by a degree of truth strictly between 0 and 1. In definite positive and negative cases of vague predicates, by contrast, the meaning and facts do succeed in determining that, for example, Bertie is bald. Why is that not enough for genuine truth (rather than just allowing us to “talk as if” it is true)? Surely, if agreement on all valuations is what counts for something to be determined to be the case, then it is sufficient for truth. And the natural way to go would then be to take a supervaluationist approach—where the valuations quantified over are all degree theorist’s models—calling sentences true (false) if they are true (false) on all the valuations.11

Consider the analogue with subjective probability. It might be thought that there is a range of degrees of belief, which, in itself, is successfully modelled by the real numbers between 0 and 1, but where people’s actual beliefs don’t sit exactly on the scale. Van Fraassen (1984, p. 251) models someone’s degrees of belief by what he calls (confusingly for our discussion here) her “representor”, which is the set of probability functions that are compatible with her judgements about, e.g., comparative likelihoods. Every function in the representor must respect what the subject takes to be orderings of likelihood, but if there is indeterminacy as to which way she would order a pair of things, then different functions in the set can order the pair in different ways. In “vague probability theory”, we can assign a range of values to a proposition according to the range of values assigned to that proposition across the members of the representor. What is true to say about someone’s beliefs is what is true of all members of their representor. This, then, corresponds to the supervaluationist alternative to Smith’s theory.12 On the straight analogue of Smith’s plurivaluationism, by contrast, the theorist would have to say that we cannot truly report on a subject’s subjective probability, just on what that probability is according to the various acceptable models. Again, that seems to be to say too little about what we are interested in, and the prohibition on committing us to what is true on all the acceptable models is surely unreasonable.

7 Non-linear structures

Some of the problems we have been focusing on concern the fact that the linear structure of the real numbers does not correspond well with the phenomenon to be modelled. Perhaps we can provide a better model by offering a different, non-linear structure, which could more accurately reflect the structure of the phenomenon, meaning that the model would not deliver the kind of results about ordering that we need to ignore. Brian Weatherson considers such a view, and proposes modelling the intermediate degrees with a lattice structure (Weatherson 2005; see also Zardini 2008).13 According to such a picture, the value of “a is tall” can be modelled as strictly greater than “b is tall” when a is taller than b, but with “Amy is tall” and “Betty is thin”, these sentences can be assigned values such that neither is greater than the other, nor are they equal, for the values are incomparable.

Whereas the traditional degree theorist’s values can be modelled as a linear array of values arranged vertically between the points representing 0 and 1, this alternative approach can be thought of in terms of an array of different parallel lines of points for different predicates, each between 0 and 1. Simplifying to just two predicates and only two intermediate cases of each, with a downward connecting line representing truer than, part of the structure would be:

[Fig. a: part of the lattice for two predicates F and G, each with two intermediate cases, arranged on parallel chains between the points for 1 and 0 and connected downwards by the truer-than relation. Image: https://static-content.springer.com/image/art%3A10.1007%2Fs11098-011-9750-1/MediaObjects/11098_2011_9750_Figa_HTML.gif]

This is only part of the structure, even given the simplifications, for the negations of the sentences will not coincide with any of these points, and the conjunction of, say, Fx1 and Gy1 will be a different point again, below the points for both Fx1 and Gy1, but coinciding with neither of those points, nor with 0. For Weatherson notes that if F and G are independent predicates (e.g. “tall” and “bald”), then borderline cases of F and G are never comparable, and, similarly, borderline predications are never comparable with borderline negations of the same predicate (2005, pp. 67–68). So, given the full stock of predicates, the structure of the lattice will be hugely complex.
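
One can get a feel for this kind of structure from a toy implementation (illustrative only, and far simpler than Weatherson’s full lattice, since negations and conjunctions are not given their own points here). Intermediate values live on per-predicate chains between 0 and 1, and values on different chains are incomparable:

    from typing import Optional

    BOTTOM, TOP = "0", "1"  # the classical values

    def truer_than(v, w) -> Optional[bool]:
        """True/False where the values are comparable; None where they are not.
        Intermediate values are pairs (predicate, rank) on that predicate's chain."""
        if v == w:
            return False
        if v == TOP or w == BOTTOM:
            return True
        if v == BOTTOM or w == TOP:
            return False
        if v[0] == w[0]:  # same chain: compare positions on it
            return v[1] > w[1]
        return None       # different predicates: incomparable

    Fx1, Fx2 = ("F", 2), ("F", 1)  # two borderline cases of F, x1 the clearer one
    Gy1 = ("G", 2)                 # a borderline case of an independent predicate G

    print(truer_than(Fx1, Fx2))  # True
    print(truer_than(Fx1, Gy1))  # None: never comparable, as on Weatherson's view
    print(truer_than(TOP, Gy1))  # True: clear truth is truer than any borderline value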

As Weatherson recognises, consequences of his views summarised above are not clearly the results that we’d intuitively want about instances of the truer than relation (a relation which Weatherson claims we do pre-theoretically grasp, p. 51). The fact that borderline cases of independent predicates are never comparable goes against the thought that “Bob is tall” is truer than “Bill is bald”, when Bob is tall for a borderline case of tallness and Bill is towards the hairiest end of the borderline bald people. And the consequence that borderline predications are never comparable with negations of the same predicate conflicts with the intuition that “Bob is tall” is truer than “Bob is not tall” when, again, Bob is at the tall end of the borderline cases.

Weatherson is prepared to bite the bullet on these consequences, claiming that the intuitions they come up against “turn on an underlying intuition that truer should be a linear relation”: “once we drop the idea that truer is linear, I think the plausibility of these claims fall away” (p. 68). This diagnosis of the intuitions is implausible, however. Linearity would ensure that there is always comparability, but the intuition appealed to above does not rely on that claim; it is straightforwardly an intuition about one sentence being truer than another. If we can’t rely on those intuitions, then, first, the claim that we grasp “truer than” comes under threat,14 and, second, in throwing out unwanted comparisons that linearity committed us to, we have lost compelling ones that Edgington, for example, sought to model (recall her desire to treat large differences in verity value as representational).

The motivation to drop linearity is that some predications are not comparable whereas others are. The phenomenon is akin to the non-linearity of “more intelligent than”. There is sometimes no fact of the matter as to which of two people is more intelligent, when they are both fairly intelligent but in different ways, but nonetheless there is a fact of the matter that neither of them is as intelligent as some other, very intelligent character: they are both comparable with the latter, even though different aspects of intelligence are involved. It would misrepresent our notion to limit comparisons of intelligence so that A is more intelligent than B iff A is more intelligent in all the relevant ways, or more intelligent in some and at least as intelligent in all the others. This would be to deny any comparability across dimensions and would suggest rejecting the coherence of the multi-dimensional “intelligent” and “more intelligent than” that we actually use. Rather than embracing non-linearity, this seems to be to acknowledge multiple linear dimensions and reject any classifications that cross-cut them.15 This is what Weatherson’s treatment of truer than is like, however. His truer than relation is, at best, highly artificial. And, yet, truer than is at the very centre of Weatherson’s story; he writes, “I claim that the concept truer, and the associated concept as true as, are the only theoretical tools we need to provide a complete theory of vagueness.” (p. 52)

Might a lattice-based theory nonetheless succeed in solving the sorites paradox and providing a logic of vagueness? Weatherson addresses the paradox, but acknowledges that the central premise is false and so he cannot avail himself of the degree theorist’s alleged solution appealing to near truth (pp. 61–65). His response to the paradox appeals to a “determinately” operator and generalisations involving it that resemble the inductive premise. It is a familiar response that does not invoke the truer than relation and is independent of the distinctive features of his theory. The logic that the theory yields is classical (it being a Boolean Algebra interpreted in the standard way, with conjunction and disjunction treated as meet and join). And his constraints on the truer than relation are trivially satisfied in a bivalent system (for example, consider “existential quantification is a least upper bound with respect to ≥T”). Is the concept truer than then doing any substantive work in revealing the logic of vague language? At best we can say that if we were compelled to accept intermediate sentences standing in “truer than” relations, then we have been given principles that show how logic can remain classical; but that is not to regard the truer than relation as the tool “we need to provide a complete theory of vagueness”.16

Theories appealing to a numerical structure face problems in imposing a structure that does not fit the phenomenon. In Sects. 1, 2, 3, 4, and 5, I considered an approach that involves ignoring the elements that don’t fit, but this had no successful story to tell about what was left. By contrast, the type of view considered in this last section appeals to a different structure lacking some of the undesirable features and thus seems to avoid the questions about what to take seriously. But if the previous approach over-shoots on its consequences, this one under-shoots and fails to respect key instances of the relation it tries to model. For example, recall (Sect. 5 above) the objections to degree theories with the standard definition of negation, that they always dictate an ordering between a sentence and its negation. The alternative considered here cannot allow that there is ever such an ordering (when borderline cases are involved). Neither approach offers a middle ground for which in some (borderline) cases a sentence is comparable with its negation and in others it is not. This is the middle ground needed if we are to model orderings with respect to truth among intermediate sentences. I suggest we abandon such comparisons altogether and employ a theory of vagueness that does not rely on the unmodellable truer than relation and which does not require a numerical structure or other such structures that fail to match the phenomenon.

Footnotes
1

See, for example, Machina (1976) and Smith (2008); for discussion of these views, see also Williamson (1994), chapter 4, and Keefe (2000), chapter 4.

 
2

See, e.g., Williamson (1994, p. 137), Keefe (2000, p. 96) and Edgington (1996, p. 304). See Sect. 5 below for some discussion of definitions of the connectives.

 
3

On this familiar worry, see, e.g., Tye (1994) and Keefe (2000, p. 113ff).

 
4

In Keefe (2000), chapter 2, I talk about “the modelling approach”, while Cook (2002) and Shapiro (2006) discuss a “logic as modelling” view. See also, e.g., MacFarlane (2010).

 
5

Cook (2002, pp. 245–246).

 
6

Shapiro suggests a more substantive role for sharpenings where the interpretations that are quantified over “represent a possible state of a conversation among competent speakers with vague predicates” (2006, p. 69). This rests on a substantive and controversial philosophical contextualist theory about how vague predicates function; I claim that no such role is needed for precisifications.

 
7

There is no difference in verity so small that it is never representative, however, as Cook shows (p. 244). So, for example, small differences in verity of the predication of “tall” to consecutive members of a sorites series will always be representative, though we can make the difference in height between them as small as we like.

 
8

Thanks to an anonymous referee for this suggestion.

 
9

The thought that small differences within an assignment can be representative does not help here: on any model, “Tek is tall” must get a higher value than “Tim is tall” if Tek is slightly taller than Tim, but a range of different assignments preserving that relation will be acceptable.

 
10

The problem also affects the very claims that Smith uses to motivate his appeal to a degree theory. According to “the closeness picture of vague predicates”, “closeness of x and y in F-relevant respects makes for closeness of ‘Fx’ and ‘Fy’ in respect of truth.” (2008, p. 146). But if such closeness in truth-value in all the acceptable assignments only means that we can “talk as if” they are close in respect of truth, then we cannot truly say that the condition is satisfied or that vagueness is successfully captured even in his own terms.

 
11

A many-valued supervaluationist view of this type was defended in Sanford (1993).

 
12

I would argue that the many-valued supervaluationist theory is inferior to the classical supervaluationist view, but I will not pursue that argument here. This paper is specifically concerned with the appeal to artefacts in modelling vagueness and this issue is unlikely to be crucial to the comparison between these two theories.

 
13

Zardini’s theory involves certain other very striking features, such as the denial of the transitivity of validity. I won’t enter into these issues here.

 
14

Weatherson maintains that the truer than relation is implicitly defined by its role in the false theory according to which it is linear (i.e. a standard degree theory, which he calls M). We can understand that theory, so, he claims, we grasp the truer than relation. Implicit definition by a false theory is fine, he observes, since “we know what phlogiston and ether mean because of their role in some false theories” (p. 51). The analogy does not work, however, since we understand phlogiston but know that it applies to nothing. If we were to admit that the theory was false but still seek phlogiston in the real world, we might be guided by the false theory up to a point, but we could not take it as defined by that theory since nothing can fulfil the required criteria. If there is a question as to whether we understand a non-linear truer than, it won’t do to point out that we can understand a different relation without the questionable feature.

 
15

I use “more intelligent than” as a non-linear comparative for the purposes of comparison with “truer than”. We might also wonder whether Weatherson’s framework can successfully model truer than relations between two sentences predicating a multi-dimensional predicate such as “intelligent”. If the different dimensions of variation represented by different predicates block comparability, might the different dimensions relevant to a single predicate also block it? If so, the account would only represent “a is intelligent” as more true than “b is intelligent” in a very limited range of the cases where a and b are borderline intelligent and a is intuitively more intelligent than b.

 
16

This is not yet to rule out the possibility of a very different sort of theory that also adopts a non-linear structure but that is unlike Weatherson’s in various respects. It is far from clear how such a theory could work, however, and there is no space to consider the options here.

 

Acknowledgments

This paper began as a short reply to a paper by Dorothy Edgington at the Princeton Conference on Philosophical Logic, organised by Delia Graff Fara; many thanks to Dorothy and Delia for this great opportunity. I gave later versions of the longer paper at a Barcelona Workshop on Vagueness and Metaphysics, at a Workshop on Vagueness and Self-reference in Lisbon, at the 4th Cambridge Graduate Conference on the Philosophy of Maths and Logic, at the colloquium of the Logic & Language research group of the ILLC at the University of Amsterdam and at a departmental seminar at the University of Nottingham. I am very grateful to audiences on all these occasions. I am also very grateful to Dominic Gregory and a very helpful referee for Philosophical Studies for objections, questions and suggestions. And I acknowledge the ‘Borderlineness and Tolerance’ project, of which I am a part (ref. FFI2010-16984, MICINN, funded by the Government of Spain).

Copyright information

© Springer Science+Business Media B.V. 2011