Philosophical Studies, Volume 158, Issue 1, pp 131–148

Implicit definition and the application of logic

Authors

    • Thomas Kroedel, Department of Philosophy, Humboldt University of Berlin
Article

DOI: 10.1007/s11098-010-9675-0

Cite this article as:
Kroedel, T. Philos Stud (2012) 158: 131. doi:10.1007/s11098-010-9675-0

Abstract

The paper argues that the theory of Implicit Definition cannot give an account of knowledge of logical principles. According to this theory, the meanings of certain expressions are determined such that they make certain principles containing them true; this is supposed to explain our knowledge of the principles as derived from our knowledge of what the expressions mean. The paper argues that this explanation succeeds only if Implicit Definition can account for our understanding of the logical constants, and that fully understanding a logical constant in turn requires the ability to apply it correctly in particular cases. It is shown, however, that Implicit Definition cannot account for this ability, even if it draws on introduction rules for the logical constants. In particular, Implicit Definition cannot account for our ability to apply negation in particular cases. Owing to constraints relating to the unique characterisation of logical constants, invoking the notion of rejection does not remedy the situation. Given its failure to explain knowledge of logic, the prospects of Implicit Definition to explain other kinds of a priori knowledge are even worse.

Keywords

A priori knowledge · Knowledge of logic · Implicit definition · Understanding · Negation · Rejection

1 Introduction

These days, few people endorse the logical positivists’ doctrine that principles knowable a priori are true purely by virtue of meaning.1 However, the general approach to a priori knowledge from the meanings of words is still popular. One of the best worked-out accounts in this spirit is the so-called theory of Implicit Definition, which has been advocated by Paul Boghossian (1996, 1997). According to this theory, our knowledge of certain principles can be explained from the fact that they implicitly define certain expressions: the expressions mean whatever makes the principles true or valid. Prima facie, Implicit Definition is particularly plausible with respect to logical principles, and there is a long-standing tradition of holding that logical constants are implicitly defined by certain logical principles containing them.2 I shall argue, however, that Implicit Definition fails to explain our knowledge of logical principles because it fails to account for our ability to apply logical expressions in particular cases.3

2 Implicit Definition

By the theory of Implicit Definition with respect to logic, I shall understand the claim that the meanings of the logical constants are determined by certain logical principles in which they feature: specifically, their meanings are such that the principles come out true.4 In the following, I shall also use the formulation that the constants mean ‘whatever makes the principles true’. This is not supposed to indicate that the constants somehow refer to the truthmakers of the principles (if there are such things); rather, it is intended to be an equivalent way of saying that the constants have meanings that make the principles come out true. (For simplicity I shall use ‘true’ when talking about principles in general, although some of them might be inferences and/or schemata, which should properly be called ‘valid’ instead.) For instance, an Implicit Definition theorist might say that ‘is identical with’ means whatever makes ‘Everything is identical with itself’ and certain further principles true.

Implicit Definition is supposed to provide the resources for explaining our knowledge of logic, such as our knowledge that everything is identical with itself. In a nutshell, Implicit Definition claims to be able to explain our knowledge of logic as follows: if a given logical constant means whatever makes certain principles containing it true, then these principles are automatically true; knowing how the meanings of logical constants are determined therefore yields knowledge that these principles are true. In more detail, the following argument-schema has been suggested as a starting point for this explanation5:
  (1) If logical constant C means whatever makes principle P(C) true, then P(C) is true.

  (2) C means whatever makes P(C) true.

  Therefore,

  (3) P(C) is true.

     
In this argument, C and P(C) stand for linguistic entities; for instance, C may stand for ‘is identical with’ and P(C) may stand for ‘Everything is identical with itself’, so that we get the following argument:
  (4) If ‘is identical with’ means whatever makes ‘Everything is identical with itself’ true, then ‘Everything is identical with itself’ is true.

  (5) ‘Is identical with’ means whatever makes ‘Everything is identical with itself’ true.

  Therefore,

  (6) ‘Everything is identical with itself’ is true.

     
Note that the conclusion of this argument is not a statement of the principle that everything is identical with itself; rather, it is a metalinguistic statement about the truth of ‘Everything is identical with itself’. This point will be crucial for later arguments.

Knowledge of the premises of such an argument is somehow supposed to explain knowledge of its conclusion, which is in turn supposed to explain our knowledge of logic. For instance, knowledge of (4) and (5) is somehow supposed to explain knowledge of (6),6 which is in turn supposed to explain our knowledge that everything is identical with itself. This suggestion gives rise to two prima facie problems. First, it seems that some logical expressions already have to have a meaning before we can come to know premises of forms (1) and (2). This seems clear for premises of form (1), which contain the conditional. A proponent of Implicit Definition might hope that the conditional, like all other logical constants, acquires its meaning via an instance of (2). However, instances of (2) themselves can only be known if some logical constants already have a meaning. For as Quine (1936, 1960) has emphasised, the implicit definitions of logical constants have to be general. They can only be sufficiently general if they use logical expressions (along the lines of ‘For all sentences A, if A is of such-and-such a form then A is to be true’). Thus, some logical expressions already have to have a meaning before the project of implicitly defining any logical expressions, and utilising instances of the argument from (1) and (2) to (3), can begin. Second, one might hold that extending knowledge via inference requires knowing that the inference in question is valid. All the inferences that follow the schema ‘(1), (2); therefore, (3)’ have the form of Modus Ponens (mp) and would thus yield knowledge of the conclusion only if the thinker knew that mp was valid. In particular, one might hold, the inference whose conclusion states that mp is valid could only yield knowledge of this conclusion if we already knew that mp was valid. This leaves unexplained how we know that mp is valid. I shall not pursue these prima facie problems, however.7 I shall grant the proponent of Implicit Definition that we can somehow come to know the premises of his explanatory argument and transmit this knowledge to its conclusion. However, this still does not suffice to explain genuine knowledge of logic. Or so I shall argue.
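To illustrate Quine’s generality point with an example of my own (not one given in the text): a fully general stipulation for ‘and’ would have to take roughly the following form, where the corner quotes enclose a schematic sentence:

\[
\forall A\,\forall B\ \bigl(\ulcorner A \text{ and } B\urcorner \text{ is to be true} \leftrightarrow (A \text{ is true} \wedge B \text{ is true})\bigr)
\]

Grasping such a stipulation already requires understanding ‘for all’, the biconditional, and indeed conjunction itself, which is precisely the circularity at issue.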

3 Understanding

If we know instances of (3), we have metalinguistic knowledge, such as knowledge that the sentence ‘Everything is identical with itself’ is true. However, we can know that a certain sentence is true without having knowledge of what this sentence says (that is, knowledge of the proposition expressed by the sentence). For instance, reading a trustworthy Finnish newspaper, I come to know that ‘Yhtiö on itse keskeyttänyt lentotoiminnan ja hakenut velkasaneeraukseen’ is true. But I have no idea what this sentence means; I do not understand it and therefore do not have any knowledge of the proposition it expresses. As things stand, the situation with Implicit Definition might be similar. Implicit Definition will only explain our knowledge that certain logical principles are true but not genuine knowledge of logic (that is, knowledge of the propositions expressed by these principles) unless we understand these principles. For instance, unless we understand ‘Everything is identical with itself’, Implicit Definition will not be able to explain our knowledge that everything is identical with itself, but merely our knowledge that the sentence ‘Everything is identical with itself’ is true. Not understanding some constituent in a sentence is sufficient for not understanding the sentence.8 Thus, unless we understand the logical constants, Implicit Definition will only explain certain metalinguistic knowledge, but not genuine knowledge of logic.9

The Implicit Definition theorist might be unimpressed by this possibility. He might concede that it is possible that someone should not understand the constants and hence at best acquire metalinguistic knowledge of logical principles. He might hold, however, that such a situation will not arise if someone knows the premises of his argument, particularly instances of (2). For if someone knows instances of (2), she knows that a certain logical constant means whatever makes certain principles containing it true, and this suffices for understanding the constant. In fact, it seems that this connection between knowing a constant’s implicit definition and understanding it is not just an optional feature of Implicit Definition, but something the Implicit Definition theorist has to endorse. He holds that the essence of a logical constant’s meaning is captured by his claim that it means whatever makes certain principles containing it true. If this is all there is to the meaning of a logical constant, it seems that one cannot (at least not in the absence of psychological defects or highly unusual circumstances) know that it means whatever makes certain principles containing it true without understanding it. Moreover, even if bestowing a meaning on the logical expressions by explicit stipulation might turn out to be impossible, the Implicit Definition theorist will still hold that the situation is as if the logical constants were newly introduced expressions that owed their meaning exclusively to this stipulation. Thus, he would have to hold that we will understand the logical constants even if we merely know that they mean whatever makes certain principles containing them true. The Implicit Definition theorist, then, has to endorse something like the following principle:
understanding

If someone knows that C means whatever makes P(C) true, then ceteris paribus she understands C.

The issue of understanding and metalinguistic knowledge that led to the understanding principle is not unique to Implicit Definition with respect to logic. We can reconstruct Kripke’s examples of reference-fixing so that they resemble the theory of Implicit Definition (see Kripke 1980, p. 54 ff.; Boghossian 1997, p. 350). Suppose that we stipulate that ‘metre’ means whatever makes the sentence ‘The platinum–iridium stick in Paris is one metre long’ true. This will put us in a position to know that the sentence ‘The platinum–iridium stick in Paris is one metre long’ is true. To make sure that the stipulation also puts us in a position to know that the platinum–iridium stick in Paris is one metre long, however, knowledge of this stipulation has to be sufficient for understanding ‘metre’. (Of course we also have to understand the other words in ‘The platinum–iridium stick in Paris is one metre long’ in order to come to know, through the stipulation, that the platinum–iridium stick in Paris is one metre long.) Similarly, suppose that we stipulate that ‘Jack the Ripper’ means whatever makes the sentence ‘Jack the Ripper committed the Whitechapel murders of 1888’ true.10 In order for this stipulation to put us in a position to know that Jack the Ripper committed the Whitechapel murders of 1888, and not merely that ‘Jack the Ripper committed the Whitechapel murders of 1888’ is true, knowledge of the stipulation needs to be sufficient for understanding ‘Jack the Ripper’.

In order to assess whether Implicit Definition can satisfy understanding, we need to find out whether there are any necessary conditions for understanding C that are not satisfied by someone who merely knows that C means whatever makes P(C) true. In the parallel cases of ‘metre’ and ‘Jack the Ripper’, it is notoriously unclear what understanding these expressions requires if they are introduced by implicit definition, particularly whether understanding them requires any epistemic relation to their referents beyond knowing their implicit definitions (see Salmon 1988). As we shall see, however, in the case of the logical constants there is a clear-cut necessary condition for understanding them, namely our ability to apply them in particular cases. This condition will prove fatal to Implicit Definition if merely knowing that the constants mean whatever makes certain principles containing them true should turn out not to account for this ability.

What exactly is meant by the ability to apply logical constants in particular cases? Suppose you go for a walk with someone on a dry and sunny day. Your companion understands the non-logical expressions of English, her senses function normally, she is rational, etc. Then you would expect her to be able to verify a sentence like ‘It is not raining’; that is, you would expect her to be able to come to know that this sentence is true. If she were not able to do this while the other conditions obtained, this would cast serious doubt on her understanding of ‘not’. The rationale behind considerations like these is captured by the following principle:
application

If someone understands a standard logical constant C, then ceteris paribus she is able to verify some simple non-logically true sentence whose principal operator is C.

I am using some of the expressions in application in a technical sense. As mentioned above, by verifying a sentence S, I mean coming to know that S is true. By a simple sentence, I mean one that contains at most one logical expression. For instance, sentences of the form not-A with A atomic are simple, while sentences of the form not-not-A are not, even though sentences of either form may be non-logically true. By a standard logical constant, I mean constants such as ‘not’, ‘and’, ‘or’, ‘every’, ‘some’, and ‘is identical with’, as opposed, say, to the Sheffer stroke (|) or the falsum (⊥). Despite these technicalities in its formulation, the idea expressed in application is simple: the principle says that understanding the logical constants enables one to apply them in particular cases. The ceteris paribus condition is supposed to absorb putative counterexamples, such as irrationality, hallucination, lack of understanding of the non-logical vocabulary, etc. Note that application does not say that understanding a logical constant is the ability to verify such-and-such sentences; it merely says that the ability to verify some such sentence is (ceteris paribus) a necessary condition.11 Further, application does not demand that everyone should be able to verify a certain specific sentence involving, say, ‘not’. All that is required is that someone who understands ‘not’ should be able to verify some simple non-logically true sentence of the form not-A.

Some more remarks on the technical details of application are in order. Why is the principle restricted to standard logical constants? There are two reasons for this. First, for some non-standard constants it is unclear what exactly understanding them requires. It is highly plausible that someone who understands, say, ‘not’, ‘and’, ‘every’, and ‘is (identical with)’ is normally able to verify sentences such as, respectively, ‘It is not raining’, ‘It is raining and it is cold’, ‘Everything is in place’, ‘This is (identical with) John’. By contrast, it is not immediately clear that someone who understands the Sheffer stroke is normally able to verify sentences such as ‘The earth is flat | Grass is green’. If this is a requirement after all, then so be it, but it is better to err on the side of caution. Second, for some other non-standard logical constants, including them in the application principle would have the absurd consequence that no one can understand them. Take ⊥. I concede that it sounds slightly awkward to classify this constant as an operator. But we may conceive of the principal operator of a sentence in a somewhat loose sense as the smallest constituent of that sentence whose scope is the whole sentence; then ⊥ itself is the only simple sentence whose principal operator is ⊥. Since ⊥ is always false by definition, the sentence ⊥ cannot be true; a fortiori it cannot be non-logically true. So there is no simple non-logically true sentence whose principal operator is ⊥; hence no one can verify such a sentence; hence, by application, no one can understand ⊥. In order to avoid this consequence, non-standard logical constants such as ⊥ are better excluded from the scope of application.

Another technical detail of application is that it requires the ability to acquire metalinguistic knowledge in order to understand a given logical constant. In order for someone to understand ‘not’, for instance, application requires her to be able to come to know a proposition such as the proposition that ‘It is not raining’ is true, as opposed to the proposition that it is not raining. The reason for this requirement is that in the following section I shall put forward arguments the presentation of which should remain neutral on whether someone understands the logical expressions; this will be easier to achieve when the requirement is formulated in terms of metalinguistic knowledge. I concede that having metalinguistic knowledge is generally more demanding than having the corresponding non-metalinguistic knowledge. Thus, it might be that someone can come to know that it is not raining without being able to come to know that ‘It is not raining’ is true. However, the cases that are relevant for the ceteris paribus condition in application (and in understanding) may involve somewhat idealised thinkers, for whom metalinguistic knowledge as such should not be problematic.12

understanding and application together yield the claim that knowing the implicit definition of a constant enables one to apply this constant in particular cases. More precisely, we get the following claim:
understanding + application

If C is a standard logical constant and someone knows that C means whatever makes P(C) true, then ceteris paribus she is able to verify some simple non-logically true sentence whose principal operator is C.13

If someone could know the implicit definition of some logical constant yet fail to be able to apply this constant in particular cases (while understanding the non-logical fragment of English, etc.), understanding + application would yield that Implicit Definition is false. In the following section, I shall argue that Implicit Definition does indeed fail for this reason. The crucial question that will be discussed is whether Implicit Definition can account for our ability to apply negation in particular cases; for under any sensible choice of primitive logical constants, negation will have to be one of them.

4 Introduction rules

The previous section raised the problem that Implicit Definition might not be able to account for our ability to apply the standard logical constants in particular cases. It is likely that in response a proponent of Implicit Definition will invoke introduction rules.14 More precisely, he will hold that our ability to verify a simple non-logically true sentence with some standard logical constant as its principal operator is explained by our knowledge that the introduction rule for this operator is valid owing to the rule’s featuring in the implicit definition of the operator. For instance, he will claim that the introduction rule for ‘or’, that is, the rule that allows us to infer ‘A or B’ from A (and from B), is part of the implicit definition of ‘or’: ‘or’ means whatever makes this rule (and further principles) valid. Knowing that this rule is valid and knowing that, say, ‘Grass is green’ is true, the rule allows us to infer that ‘Grass is green or Elvis is alive’ is true and thus explains our knowledge that ‘Grass is green or Elvis is alive’ is true. (As was the case in the previous section, in this explanation the Implicit Definition theorist assumes that knowledge can be transferred via logical inference; again I shall grant that this is feasible without making his explanation circular.) Similarly, the Implicit Definition theorist will hold, the introduction rule for negation will explain our ability to apply negation in particular cases: he will hold that the introduction rule for ‘not’ allows us to verify non-logically true simple sentences whose principal operator is negation, such as the sentence ‘It is not raining’. (Henceforth, I shall call sentences whose principal operator is negation negative sentences.)
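For definiteness, the introduction rule for ‘or’ described in this paragraph may be displayed as follows (a standard presentation; the display is mine):

\[
\frac{A}{A \vee B} \qquad\qquad \frac{B}{A \vee B}
\]

With the left-hand rule, knowledge that ‘Grass is green’ is true licenses the step to the truth of ‘Grass is green or Elvis is alive’.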

The standard introduction rule for negation is Reductio Ad Absurdum (raa)15:
  [A]
   ⋮
   ⊥
  ─────
  not-A

This rule says that if we introduce A as an assumption (which is indicated by the square brackets) and if this, possibly together with further assumptions (called side premises), logically implies ⊥, then we may discharge the assumption of A and infer not-A. Let us assume, for the time being, that ⊥ stands for an arbitrary contradiction.16
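As a purely logical illustration of how raa operates (my example, not one discussed in the text), consider a derivation of not-(A and not-A):

\begin{align*}
&[A \wedge \neg A] && \text{assumption}\\
&A,\ \neg A && \text{from the assumption by } \wedge\text{-elimination}\\
&\bot && \text{since } A \text{ and } \neg A \text{ jointly imply any contradiction}\\
&\neg(A \wedge \neg A) && \text{by raa, discharging the assumption}
\end{align*}

What makes this case unproblematic is that the assumption by itself logically implies ⊥; the following paragraphs show that this is precisely what is unavailable for an empirical sentence such as ‘It is raining’.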

Does knowledge of the validity of raa explain our ability to verify non-logically true simple negative sentences such as ‘It is not raining’? We have to distinguish two kinds of cases, depending on whether or not side premises are involved. Suppose that there are no side premises involved. Then we would have to show that ‘It is raining’ alone logically implies ⊥. But no such implication can hold. A set of premises logically implies a given conclusion just in case there is no interpretation of the non-logical expressions in the premises and conclusion such that the premises are true while the conclusion is false. Being a contradiction, ⊥ is false on any interpretation of the non-logical expressions in ‘It is raining’. But there are lots of interpretations of the non-logical expressions in ‘It is raining’ that make the sentence true even if it is actually false. Thus, there are interpretations of ‘It is raining’ on which ‘It is raining’ is true while ⊥ is false on any interpretation; therefore, the former does not logically imply the latter.

Suppose, then, that there are side premises involved in our derivation. The side premise (or premises) we need in order to derive ⊥ from the assumption of A is something that logically implies ‘It is raining ⊃ ⊥’. This could be (i) ⊥, (ii) ‘It is raining ⊃ ⊥’ itself, or (iii) some other premise (or set of premises) that logically implies ‘It is raining ⊃ ⊥’. The side premises have the role of background assumptions, but unlike the assumption of ‘It is raining’, they are not mere assumptions which are discharged later in the argument. If our knowledge that ‘It is not raining’ is true is inferred via an argument that involves side premises, then the side premises have to be known to be true as well. So a fortiori the side premises of our argument have to be true. This excludes ⊥ as a side premise, since ⊥ is by definition a contradiction; thus (i) is ruled out. If we opt for the side premise ‘It is raining ⊃ ⊥’ mentioned under (ii), we have to know that ‘It is raining ⊃ ⊥’ is true. How can we explain this knowledge in turn? A proponent of introduction rules would probably suggest that we verify ‘It is raining ⊃ ⊥’ by means of the introduction rule for ⊃. However, this would lead us back to our old problem, since the standard introduction rule for ⊃ is
  [A]
   ⋮
   B
  ─────
  A ⊃ B

Thus, in order to explain our knowledge that ‘It is raining ⊃ ⊥’ is true, we would have to show that ‘It is raining’, possibly together with some side premises known to be true, logically implies ⊥. But this derivation is just what we have been trying to explain, so we are back to our original problem. Given these results, it seems plausible to conjecture that objections similar to those just raised against side premises as specified in (i) and (ii) also apply to anything that falls under (iii).

In fact, there is no need to resort to conjecture, as the point against using introduction rules in order to verify non-logically true negative sentences can also be made rigorously in another way. The suggestion of the Implicit Definition theorist is that non-logically true simple negative sentences can be verified via the introduction rule for negation with the help of certain side premises. In order for this to yield knowledge that such-and-such a negative sentence is true, the side premises have to be known to be true as well. If some of the side premises are complex (as they are in case (ii), for instance), this shifts the problem to the question of how these side premises are known to be true. Maintaining the spirit of his reply, the Implicit Definition theorist will respond that our knowledge that the side premises are true can in turn be explained from the introduction rules of their respective principal operators. These rules may again involve side premises; if they are complex, knowledge of their truth has to be explained via introduction rules in turn. In order to avoid an infinite regress, we will ultimately have to draw on atomic side premises. If there is supposed to be a logically valid argument to a non-logically true simple negative sentence from these side premises plus certain assumptions that are discharged in the course of the argument, the side premises have to logically imply the non-logically true simple negative sentence. However, if all the side premises are atomic, they will not logically imply any non-logically true simple negative sentence.17 For any non-logically true simple negative sentence will be of the form not-A with A atomic. Whatever atomic side premises we have, there will always be interpretations of them on which they are all true while the conclusion of the form not-A is false. Thus, logical rules alone will not explain our ability to verify non-logically true simple negative sentences such as ‘It is not raining’.18
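The key claim, that atomic side premises never logically imply a sentence of the form not-A with A atomic, can be made fully explicit with a witness interpretation (my addition, though the observation is standard): consider the interpretation that makes every atomic sentence true. It makes all the atomic side premises true while making not-A false, so

\[
B_1, \dots, B_n \nvDash \neg A \qquad \text{for atomic } B_1, \dots, B_n \text{ and atomic } A.
\]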

In sum, if ⊥ stands for a contradiction, mere knowledge that the introduction rule for ‘not’ is valid cannot explain our ability to verify non-logically true negative sentences like ‘It is not raining’ and hence cannot account for our understanding of ‘not’. What else might ⊥ be? It might be suggested that ⊥ stands for an absurdity, where this absurdity does not have to be a literal contradiction. Then the Implicit Definition theorist might try to explain our ability to verify sentences like ‘It is not raining’ as follows. On a dry day, we can come to know that ‘It is dry’ is true. Suppose that the introduction rule for ‘and’ is unproblematic. Suppose further that we know that ‘It is raining and it is dry’ is an absurd sentence. Then we can reason as follows: Assume that ‘It is raining’ is true. We know that ‘It is dry’ is true. We may then infer that ‘It is raining and it is dry’ is true. But this is absurd. Therefore, ‘It is not raining’ is true. However, this strategy of invoking absurd sentences merely trades one explanandum for another. The Implicit Definition theorist tries to explain how we can come to know that certain sentences of the form not-A are true. What he invokes now is the ability to come to know that certain sentences (such as ‘It is raining and it is dry’) are absurd. But the ability to establish that a sentence is absurd stands in need of explanation as much as the ability to establish that a sentence is false (or that its negation is true) does, especially if we conceive of an absurd sentence as something like a clear falsehood.
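For clarity, the absurdity-based derivation described in this paragraph can be set out in the raa format, with ‘It is dry’ as a side premise (the schematisation is mine):

\begin{align*}
&[\text{`It is raining' is true}] && \text{assumption}\\
&\text{`It is dry' is true} && \text{side premise, established by observation}\\
&\text{`It is raining and it is dry' is true} && \text{by the introduction rule for `and'}\\
&\bot && \text{the conjunction is known to be absurd}\\
&\text{`It is not raining' is true} && \text{by raa, discharging the assumption}
\end{align*}

The step that does the real work is the fourth one, and it is precisely our entitlement to that step, namely our knowledge that ‘It is raining and it is dry’ is absurd, that remains unexplained.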

The Implicit Definition theorist might try to circumvent the problem of establishing non-logically true simple negative sentences by claiming that these sentences are inferred via some non-logical rules. For instance, he might claim that there is a non-logical rule that allows us to infer ‘It is not raining’ from, say, ‘It is dry’, and that using rules such as this one explains our ability to apply negation in particular cases. However, it is far from clear that such a rule is always at hand when we apply negation in particular cases. Even if we ignore this worry and assume that we can always find a non-logical rule to verify the target sentence, problems arise. For what would the status of this rule be? First, it might be an empirical generalisation. This, however, requires us to establish, in particular cases, that it is not raining, or that it is not dry, so our ability to apply negation in particular cases is presupposed rather than explained. Second, it might be that the rule that allows us to infer ‘It is not raining’ from ‘It is dry’ is part of the implicit definition of negation: ‘not’ means not only whatever makes certain purely logical principles true; rather, it means whatever makes certain logical principles plus the rule that allows us to infer ‘It is not raining’ from ‘It is dry’ true. However, this move would deprive Implicit Definition of much of its initial plausibility. Implicit Definition says that the logical constants mean whatever makes certain principles containing them true. This seems plausible only if the principles are logical principles and if there are a small number of principles that define each logical constant. If we allow rules such as the rule that allows us to infer ‘It is not raining’ from ‘It is dry’, both conditions are false: the rule is not a logical one (it contains non-logical terms, and is not logically valid), and it seems that by parity of reasoning we have to allow a number of such rules to feature in the implicit definition of ‘not’ once we introduce some of them. Thus, claiming that rules such as the rule that allows us to infer ‘It is not raining’ from ‘It is dry’ are part of the implicit definition of ‘not’ commits the Implicit Definition theorist to a dose of holism that many would find unpalatable. Third, it might be that the rule that allows us to infer ‘It is not raining’ from ‘It is dry’, while not meaning-constituting for ‘not’, is part of an implicit definition of ‘raining’ or ‘dry’ (or perhaps ‘it is’). Again, generalising from this case will yield an inflation of implicit definition beyond what was originally envisaged. And it will not just yield additional sentences that feature in the implicit definitions of the logical constants; in order to solve the problem of applying the logical constants in particular cases, we would have to admit implicit definitions for a number of (perhaps all) non-logical expressions as well. In sum, invoking a rule that allows us to infer ‘It is not raining’ from ‘It is dry’ will either presuppose the ability to apply negation in particular cases, or it will make the theory of Implicit Definition much more wide-ranging, and much less plausible, than initially envisaged.

Let me summarise the results reached so far. The theory of Implicit Definition with respect to logic says that the meanings of the logical constants are determined such that certain logical principles containing them are true. This is supposed to explain our knowledge of logic as follows: since we know that the logical constants mean whatever makes certain principles true, we know that these principles are true. However, this only explains genuine knowledge of logic if Implicit Definition can account for our understanding of the logical constants. Otherwise, all that Implicit Definition explains is metalinguistic knowledge that certain principles are true, but not knowledge of the propositions expressed by these principles. But Implicit Definition cannot account for our understanding of the logical constants since it cannot explain our ability to apply them in particular cases. Therefore, Implicit Definition cannot explain knowledge of logic.

5 Uniqueness and rejection

In response to the above arguments, a proponent of Implicit Definition might invoke a logic that involves the notion of rejection in order to explain our ability to apply negation in particular cases. I shall return to this response shortly. Before that, I shall discuss the issue of unique characterisations of logical constants, which will not only raise a further problem for Implicit Definition, but will also turn out to be relevant in my reply to the response that invokes the notion of rejection.

By the unique characterisation of an operator by a set of principles, I shall understand the following: principles P1, …, Pn characterise an operator O uniquely if and only if they characterise O up to logical equivalence. That is, if O1 and O2 are operators, and if A(O1) is a sentence with O1 as its principal operator, then P1, …, Pn for O1 and O2 imply that A(O1) is logically equivalent to A(O2) (where A(O2) is the result of replacing O1 in A(O1) with O2).19 All logical constants can be uniquely characterised, respectively, by a small number of logical principles (see Harris 1982). For instance, identity is uniquely characterised by the principles of Reflexivity (∀x x = x) and Leibniz’s Law (∀xy(x = y ⊃ (… x … ⊃ … y …))) (see Quine 1961, p. 326). Negation is uniquely characterised by raa and Ex Falso Quodlibet (efq), that is, the rule that allows us to derive B from A and not-A.20

What is the relation between Implicit Definition and the results about the unique characterisation of the logical constants? The following claim should appeal to a proponent of Implicit Definition:
uniqueness

If principles P1, …, Pn uniquely characterise logical constant C, then they determine a unique meaning for C.

The uniqueness claim is valuable to the Implicit Definition theorist. It is notoriously problematic for someone who holds that meaning is implicitly determined through certain principles and inferences to say exactly which principles and inferences are supposed to be the meaning-constituting ones (see Fodor and Lepore 1991; Glüer 2003). In particular, it seems unclear when we may stop adding items to a set of principles that are supposed to implicitly define a given expression, so that there is a danger of ending up with a holistic determination of the meaning of this expression. The uniqueness claim, however, can solve this problem at least in cases where a unique characterisation of this expression is possible. For it suggests that all we need in order to implicitly define an operator is a set of principles that uniquely characterise this operator. Thus, it suggests the following maxim:
minimalism

The principles that feature in the implicit definition of an operator O should not be more numerous or complex than required to uniquely characterise O.

Often, principles that uniquely characterise a given logical constant logically imply other principles containing this constant. For instance, from Leibniz’s Law and Reflexivity we can derive the symmetry of identity (i.e. the principle ∀xy(x = y ⊃ y = x)). So if someone wished to use the theory of Implicit Definition to explain our knowledge of logic, accepting minimalism would not automatically limit her explanation to knowledge of those principles that feature in the implicit definition of a given logical constant. If those principles allow us to derive a given further principle, knowledge of the latter could be explained as derived from knowledge of the principles that feature in the implicit definition. (As we saw in the previous sections, it is doubtful that Implicit Definition can even explain knowledge of the principles that supposedly implicitly define a given constant, but let us grant that this explanation is feasible for the sake of the argument.) For instance, assuming that our knowledge of Leibniz’s Law and of Reflexivity can be explained by Implicit Definition, our knowledge of the symmetry of identity could be explained as derived from the former knowledge.
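For concreteness, here is a sketch of that derivation (the intermediate steps are my reconstruction; the result is standard): instantiate Leibniz’s Law with ‘… = x’ as the context, discharge the antecedent x = x by Reflexivity, and generalise:

\begin{align*}
&\forall x\,\forall y\,\bigl(x = y \supset (x = x \supset y = x)\bigr) && \text{Leibniz's Law, with } \dots = x \text{ as the context}\\
&\forall x\ x = x && \text{Reflexivity}\\
&\forall x\,\forall y\,(x = y \supset y = x) && \text{by elementary quantificational reasoning}
\end{align*}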

However, there is no guarantee that all logical principles that involve a given constant and that we take ourselves to know can be derived from the principles that feature in the implicit definition of this constant. Indeed, there are counterexamples. Suppose the Implicit Definition theorist claimed that raa and efq implicitly define negation. From raa and efq alone we cannot derive Double Negation Elimination (dne), that is, the rule that allows us to derive A from not-not-A.21 Given that we are adherents of classical logic, we credit ourselves with knowledge of dne. But if he holds that raa and efq implicitly define negation, the Implicit Definition theorist will not be able to explain this knowledge. He will not be able to explain it as derived since these two principles do not allow the derivation of dne, and he will not be able to explain it directly from the implicit definition, since raa and efq already uniquely characterise negation and therefore, by minimalism, dne should not be part of the implicit definition of negation.

The Implicit Definition theorist might try to avoid both this problem and the difficulty about the application of negation discussed in the previous section by opting for a logic that employs rejection (see Smiley 1996; Rumfitt 2000). Such a logic contains, in addition to the standard expressions of propositional logic, the signs ‘+’ and ‘−’ which, if prefixed to a sentence, represent the acceptance and rejection of this sentence, respectively. That is, ‘+A’ stands for ‘Is it the case that A? Yes’, and ‘−A’ stands for ‘Is it the case that A? No’.22 The signs ‘−’ and ‘+’ are linked to negation (which is abbreviated as ‘¬’) by the following rules:
  +-¬-I: −A ⊢ +(¬A)          +-¬-E: +(¬A) ⊢ −A
  −-¬-I: +A ⊢ −(¬A)          −-¬-E: −(¬A) ⊢ +A
While ‘−A’ is thus logically equivalent to ‘+(¬A)’, ‘−’ is not a notational variant of the negation sign. The negation sign can be iterated (as in ‘¬¬A’, ‘¬¬¬A’, etc.), but the rejection sign cannot. For if such an iteration were translated back into the characterisation of ‘−’ as ‘Is it the case that A? No’, we would get ungrammatical constructions such as ‘Is it the case that Is it the case that A? No? No’ (see Rumfitt 2000, p. 803; for the same reason, ‘+’ cannot be iterated either). Thus, the sign ‘−’ “does not contribute to propositional content, but indicates the force with which that content is promulgated” (ibid.).

For a proponent of Implicit Definition who endorses a logic of rejection, it would be congenial to hold that ‘¬’ means whatever makes the four rules +-¬-I, +-¬-E, −-¬-I, and −-¬-E valid.23 Unlike raa plus efq, these rules do imply dne. Formulated in the logic of rejection, dne is the rule that allows us to infer ‘+A’ from ‘+(¬¬A)’. This rule follows from +-¬-E and −-¬-E: given +(¬¬A), we get −(¬A) from +-¬-E, and +A from −-¬-E (ibid.).

This result, however, does not yet solve the problem that dne might not be derivable from the principles that characterise negation uniquely. The original problem was that certain principles characterise negation uniquely without implying dne, in which case minimalism rules out that any further principles are needed for the implicit definition of negation. This problem has not been answered by invoking the logic of rejection, since it might still turn out that a subset of {+-¬-I, +-¬-E, −-¬-I, −-¬-E} already characterises negation uniquely without implying dne. In fact, there is a subset of these rules that uniquely characterises negation. Assume that ¬1 and ¬2 both satisfy the rules +-¬-I and +-¬-E. Then +(¬1A) yields −A by +-¬-E for ¬1, and +-¬-I for ¬2 yields +(¬2A); similarly +(¬2A) yields −A by +-¬-E for ¬2, and +-¬-I for ¬1 yields +(¬1A). In sum, we get +(¬1A) ⊣⊢ +(¬2A). Since +-¬-I and +-¬-E thus characterise negation uniquely, according to minimalism no further principles are required for an implicit definition of negation; in particular, we can dispense with the rules −-¬-I and −-¬-E in order to implicitly define negation. Furthermore, −-¬-I and −-¬-E do not follow from +-¬-I and +-¬-E24; thus, it is left unexplained how we know these rules. So just as the Implicit Definition theorist cannot explain how we know that dne is valid if he endorses standard logic, it seems that he cannot explain how we know that −-¬-I and −-¬-E are valid if he opts for a logic of rejection.

However, while −-¬-I and −-¬-E cannot be derived from +-¬-I and +-¬-E alone, they can be derived with the help of the following structural rules:
  +-−: if +A ⊢ −B, then +B ⊢ −A          −-+: if −A ⊢ +B, then −B ⊢ +A
The rule +-− allows us to get from +-¬-E to −-¬-I, and −-+ allows us to get from +-¬-I to −-¬-E. Thus, if we accepted +-− and −-+, it would not be a problem that our knowledge of −-¬-I and −-¬-E cannot be explained from Implicit Definition, as we could now explain this knowledge as derived from +-¬-I and +-¬-E with the help of +-− and −-+. What would the status of the rules +-− and −-+ themselves be? It would be congenial for the Implicit Definition theorist to hold that they implicitly define, perhaps inter alia, the rejection sign ‘−’. Thus, our knowledge of +-¬-I, +-¬-E, +-−, and −-+ would be explained by Implicit Definition, and our knowledge of −-¬-I and −-¬-E (and, subsequently, of dne) would be explained as derived from the former rules.

While apparently solving the problem of uniqueness, the new rules for negation also seem to solve the problem of applying negation in particular cases. For given the rule +-¬-I, we may accept ‘It is not raining’ if we reject ‘It is raining’. The question is, however, how our capacity to reject sentences is explained. As we just saw, in the course of solving the uniqueness problem by appeal to rejection, it may be held that the rejection sign ‘−’ is itself implicitly defined. But does mere knowledge of the implicit definition of ‘−’ put us in a position to reject particular sentences? Knowledge that ‘−’ means whatever makes +-− and −-+ valid does not seem to provide us with this ability. Now, it may be that further principles feature in the implicit definition of ‘−’. However, it is unclear whether any further principles (at least any purely logical ones) would enable us to apply ‘−’ in particular cases. For instance, it might be claimed that the rules +-¬-I and +-¬-E feature in the implicit definition of ‘−’. But these rules are also supposed to implicitly define ‘¬’. It may not be problematic in principle if some principles implicitly define more than one constant, but in this case this does not help to solve the problem of applying ‘−’ either. If we know that ‘−’ means whatever makes +-¬-I, +-¬-E, +-−, and −-+ valid, and know that ‘¬’ means whatever makes +-¬-I and +-¬-E valid, we know that we may affirm the negation of a sentence if we reject this sentence, and that we may reject a sentence if we affirm its negation, but this does not at all help us to find out what sentences we should reject (or what negative sentences we should affirm) tout court.

If ‘−’ is not implicitly defined, perhaps it is primitive? This may solve the problem of rejecting particular sentences if the primitiveness of ‘−’ brings with it the capacity to apply it to particular sentences. However, this response is somewhat ad hoc, at least if a primitive notion of rejection is introduced merely in order to solve the problem of applying negation. But even if we set this worry aside, a significant problem remains. In order to solve the problem that +-¬-I and +-¬-E uniquely characterise negation without implying −-¬-I and −-¬-E, the Implicit Definition theorist needed to appeal to the structural rules +-− and −-+. If rejection is primitive, we cannot explain our knowledge that these rules are valid from the fact that they implicitly define rejection anymore.25 Still, +-− and −-+ are bona fide logical principles for a logic of rejection, so opting for a primitive notion of rejection forces the Implicit Definition theorist to give up the claim that he can explain all knowledge of logic by Implicit Definition.

In sum, the logic of rejection may solve the problem of uniqueness that beset Implicit Definition with respect to standard logic, but it solves the problem of applying negation only by giving rise to new difficulties.

6 Conclusion

I have argued that the theory of Implicit Definition cannot explain knowledge of logic. I have concentrated on Implicit Definition with respect to logical expressions because it seems that if this theory has a chance of succeeding at all, it will succeed for the subject matter of logic. For the meanings of logical expressions seem to be more tightly constrained by the principles in which they feature than the meanings of expressions from other subject matters. Further, it is comparatively easy to say which logical principles involving a certain logical expression are supposed to be meaning-constituting and which are not, since for each constant one can find a set of principles that uniquely characterise it and that would thus be ideal candidates for implicitly defining this constant.

Since, despite these advantages, Implicit Definition fails to explain knowledge of logic, its prospects for explaining a priori knowledge in most other subject matters are even worse. For instance, consider an attempt to explain knowledge of colour exclusion by the strategy of Implicit Definition. Suppose that ‘red’ means whatever makes the sentence ‘Whatever is red all over is not green’ true, plus perhaps some further principles. This putative implicit definition cannot explain our knowledge that whatever is red all over is not green. For this explanation requires that the implicit definition account for our understanding of ‘red’. And someone who understands ‘red’ ceteris paribus is able to verify some sentences of the form ‘x is red’. But mere knowledge of the truth of ‘Whatever is red all over is not green’ and its kin cannot explain this ability. Unlike in the case of negation, there is no obvious candidate for an introduction rule for the expression ‘red’ that might explain our ability to apply ‘red’ in particular cases. This is not to say that there is no place whatsoever for the strategy of Implicit Definition in epistemological explanations.26 It seems unlikely, however, that it will play a role in explanations of philosophical paradigm cases of a priori knowledge such as knowledge of metaphysical and epistemological principles.

Footnotes
1

Russell (2008) is a notable exception. See Coffa (1991 part II) for a reconstruction of the logical positivists’ position and its development.

 
2

See Prawitz (2006) for a recent discussion.

 
3

In a series of papers following his 1996 and 1997 (Boghossian 2001, 2003a, b), Boghossian develops a meaning-based account of the a priori justification of inferences. While his 2001 still employs the idea that meaning-constituting inferences are truth-preserving by virtue of a kind of implicit definition, viz. “implicit stipulation” (2001, p. 33), 2003a and 2003b differ in spirit by dispensing with this requirement. It is not entirely clear whether Boghossian’s later meaning-based account of the justification of individual a priori inferences supersedes his earlier account of a priori beliefs, since, even if a priori inferences are involved in the production of a priori beliefs, appropriate, and appropriately justified, premises are needed too. In fact, in 2003b (p. 34) he states that the account of a priori inferences can be applied to what he calls the “Implicit Definition Template”, suggesting that the two accounts complement each other, and in the introduction to a collection of papers of his that contains all of those cited above, he professes agnosticism about what a successful meaning-based account of the a priori will ultimately look like (2008, p. 5). For critical discussion of Boghossian’s different accounts and similar approaches by other authors, see Horwich (2005) chapter 4, and Horwich (2010) chapter 10. Williamson (2003) is a direct response to Boghossian (2003a). Irrespective of the dynamics of Boghossian’s views, his original account of Implicit Definition keeps drawing philosophical attention (Ebert 2005, Jenkins 2008 chapter 2, and García-Carpintero and Pérez Otero 2009 are recent examples) and merits discussion in its own right.

 
4

Compare Boghossian (1997): “Implicit definition: It is by arbitrarily stipulating that certain sentences of logic are to be true, or that certain inferences are to be valid, that we attach a meaning to the logical constants. More specifically, a particular constant means that logical object, if any, which would make valid a specified set of sentences and/or inferences involving it” (p. 348).

 
5

See Boghossian (1997, p. 348) where a slightly different formulation is used. Compare also Boghossian (2003b, p. 21).

 
6

In fact, Boghossian makes the stronger claim that we can know the conclusions of arguments such as this one a priori even if we cannot know both premises a priori (1997, p. 357). This claim is criticised in Margolis and Laurence (2001, p. 296).

 
7

For Boghossian’s own responses to these problems, see his 1997 (p. 382) and 2000 respectively. For further critical discussion of Boghossian’s approach, see Harman (1996) and Horwich (1997).

 
8

Unless the constituent is in quotation marks.

 
9

A similar worry is raised, in a different context, by Hale and Wright (2000, p. 294) and Ebert (2005).

 
10

The example is also discussed in Hale and Wright (2000, p. 294).

 
11

This ability might still be constitutive of understanding in the sense of being a prior, that is, more fundamental, necessary condition for it; however, my argument does not require this stronger assumption.

 
12

It might also generally be more demanding to come to know that such-and-such a sentence is true than to form a justified belief that such-and-such a sentence is true. The reason why application requires the ability for the former is also that it facilitates the presentation of later arguments. Like the requirement of meta-linguistic knowledge, this more demanding requirement is harmless given the ceteris paribus condition, which rules out cases of justified belief without knowledge, such as Gettier cases and, plausibly, skeptical scenarios. This is not to say that such cases are irrelevant for the epistemology of logic; on Gettier cases, see Besson (2009). Peacocke (1999, chapter 2) endorses necessary conditions for the possession of concepts that are formulated in terms of knowledge.

 
13

It might be objected that understanding + application does not follow from understanding and application for the following reason. ‘Ceteris paribus’ functions like a modal operator: ‘Ceteris paribus, A’ means that, in all normal circumstances, A is the case. However, what the normal circumstances are varies with the sentence following the ceteris paribus clause and the context of the utterance. Thus, it might be that ‘Ceteris paribus, all Fs are G’ and ‘Ceteris paribus, all Gs are H’ are both true while ‘Ceteris paribus, all Fs are H’ is false owing to different circumstances being relevant for the different sentences. I agree with everything in this objection, except that I deny that the general failure of transitivity of ceteris paribus conditionals (or universally quantified conditionals) affects our case. For there is no reason to suppose that understanding, application and understanding + application give rise to different circumstances for the respective ceteris paribus operators. In all cases, the normal circumstances seem to involve a competent speaker of the non-logical fragment of English (perhaps one with slightly idealised mental powers) who is not distracted or deluded, etc.

 
14

It is typical for proponents of meaning-based accounts of the a priori to hold that a logical constant’s meaning is determined by its introduction and elimination rules; see for instance Boghossian (2003a) and Boghossian (2003b, p. 24).

 
15

I am closely following Gentzen’s (1969) formalism here, but mutatis mutandis the following arguments would also apply to other systems.

 
16

Alternatively, we could conceive of ⊥ as a constantly false sentence. A reading of ⊥ as an absurdity is discussed at the end of this section.

 
17

Dummett (1993, p. 258) anticipates this result when he remarks that “it would be difficult to provide for the derivation of ‘¬A’ with A atomic by means of a purely logical rule”. For a similar formal result see Milne (1994), who also draws pessimistic conclusions about the prospect of defining negation by its introduction rule.

 
18

Note that this result is very general: it does not require that the logical rules be introduction rules or that they involve only a single logical constant each.

 
19

One might be attracted to a more restrictive definition of unique characterisation according to which any pair of sentences such that one sentence is the result of replacing O1 with O2 in the other—irrespective of whether or not O1/O2 is the principal operator—are logically equivalent given that O1 and O2 satisfy P1, …, Pn. However, satisfaction of this more restrictive definition will follow from satisfaction of the definition endorsed here given that the logic in question has the so-called congruentiality property, according to which the logical equivalence of sentences B1 and B2 implies the logical equivalence of χ(B1) and χ(B2) for all contexts χ. Classical and intuitionistic propositional logic are congruential; see, for instance, Humberstone (2010).

 
20

See Williamson (1988, pp. 111–112), who also draws attention to the consequences for the dispute about Double Negation Elimination that is discussed below.

 
21

For raa and efq are valid in intuitionistic logic while dne is not; see Dummett (1977, p. 26).

 
22

The characterisations of ‘−’ and ‘+’ in this paragraph follow Rumfitt (2000, pp. 800–803).

 
23

Calling a formula beginning with ‘+’ or ‘−’ ‘true’ might be inadmissible, since “‘+A’ is true” would translate into the ungrammatical “‘Is it the case that A? Yes’ is true” according to the definition of ‘+’ (similarly for ‘−’). Instead, we might call a formula of the form ‘+A’ or ‘−A’ correct if and only if ‘Yes’ or ‘No’ would be, respectively, the correct answer to the question ‘Is it the case that A?’. Consequently, the validity of rules such as +-¬-I, +-¬-E, −-¬-I, and −-¬-E should be understood as correctness-preservation as opposed to truth-preservation.

 
24

Proof. To show that −-¬-I cannot be derived from +-¬-I and +-¬-E, evaluate +A as true relative to an assignment of truth-values to atomic sentences if and only if A is true or atomic (where A is evaluated in the usual way if complex), and evaluate −A as true if and only if A (evaluated as before) is false. Then +-¬-I and +-¬-E are truth-preserving relative to all assignments, but −-¬-I is not (let A be false and atomic). Similarly, to show that −-¬-E cannot be derived from +-¬-I and +-¬-E, evaluate +A as true relative to an assignment if and only if A is true and non-atomic, and evaluate −A as true if and only if A is false. Then +-¬-I and +-¬-E are truth-preserving relative to all assignments, but −-¬-E is not (let A be true and atomic).

 
25

At least if we assume further that acceptance (‘+’) is primitive as well.

 
26

The Poincaré-Hilbert approach to geometry might be regarded as a possible niche for Implicit Definition. For a recent discussion, see Ben-Menahem (2006).

 

Acknowledgments

Thanks to Corine Besson, Franz Huber, Wolfgang Künne, Erik Stei, Timothy Williamson, and an anonymous referee of Philosophical Studies for very helpful comments and suggestions.

Copyright information

© Springer Science+Business Media B.V. 2010