1 The Prawitz-Tennant Analysis of Paradoxes

In Appendix B to his monograph on natural deduction, Prawitz [1] considered a calculus for naive set theory, which we will refer to as \(\texttt{NI}^{\mathbin {\supset }\bot \in }\), obtained by extending \(\texttt{NI}^{\mathbin {\supset }\bot }\) (the \(\{\supset ,\bot \}\)-fragment of \(\texttt{NI}\)) with an introduction and an elimination rule for formulas of the form \(t \in \{x: A\}\), expressing set-theoretical comprehension:

\[
\dfrac{A(t/x)}{t \in \{x: A\}}\;{\in}\mathrm{I}
\qquad\qquad
\dfrac{t \in \{x: A\}}{A(t/x)}\;{\in}\mathrm{E}
\]

where A(t/x) is the result of substituting t for x in A.
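For concreteness, the substitution operation can be mirrored in a small executable sketch. The tuple encoding below is our own illustrative representation, not the book's formalism, and variable capture is deliberately ignored:

```python
# Formulas and terms encoded as nested tuples; substitution A(t/x) is then
# a structural recursion. Purely illustrative encoding; variable capture
# is ignored for simplicity.

def substitute(formula, x, t):
    """Return the result of replacing every occurrence of variable x in formula by t."""
    if formula == x:
        return t
    if isinstance(formula, tuple):
        return tuple(substitute(part, x, t) for part in formula)
    return formula  # other atoms stay unchanged

# A = not(x in x), the formula used in Russell's paradox
A = ('not', ('in', 'x', 'x'))
# t = {x : not(x in x)}, the Russell set term
t = ('set', 'x', A)

# A(t/x) = not(t in t)
print(substitute(A, 'x', t) == ('not', ('in', t, t)))  # True
```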

The rules certainly satisfy the informal statements of harmony (given in Sects. 1.2, 1.3 and 3.4) and thus it seems quite uncontroversial to regard them as being in harmony1: a reduction and an expansion can straightforwardly be defined as follows:

\[
\begin{array}{c}\mathscr{D}\\ A(t/x)\\ \hline t \in \{x: A\}\\ \hline A(t/x)\end{array}
\;\rightsquigarrow_{\beta}\;
\begin{array}{c}\mathscr{D}\\ A(t/x)\end{array}
\qquad\qquad
\begin{array}{c}\mathscr{D}\\ t \in \{x: A\}\end{array}
\;\rightsquigarrow_{\eta}\;
\begin{array}{c}\mathscr{D}\\ t \in \{x: A\}\\ \hline A(t/x)\\ \hline t \in \{x: A\}\end{array}
\]

Using \(\lnot A\) to abbreviate \(A \mathbin {\supset }\bot \), choosing \(\lnot (x \in x)\) for A and \(\{x: \lnot (x \in x)\}\) for t we have that

\[
t \in \{x: \lnot(x \in x)\} \;=\; t \in t \;=:\; \rho
\qquad\qquad
A(t/x) \;=\; \lnot(t \in t) \;=\; \lnot\rho
\]

and hence we obtain the following instances of \(\in \)I, \(\in \)E and \(\mathord {\in }\beta \) (we indicate these instances as \(\rho \)I, \(\rho \)E and \(\mathord {\rho }\beta \) respectively):

\[
\dfrac{\lnot\rho}{\rho}\;\rho\mathrm{I}
\qquad\quad
\dfrac{\rho}{\lnot\rho}\;\rho\mathrm{E}
\qquad\quad
\begin{array}{c}\mathscr{D}\\ \lnot\rho\\ \hline \rho\\ \hline \lnot\rho\end{array}
\;\rightsquigarrow_{\beta}\;
\begin{array}{c}\mathscr{D}\\ \lnot\rho\end{array}
\]

Using them it is very easy to construct a closed derivation, which we call \(\lnot \mathscr {R}\), of \(\lnot \rho \):

\[
\lnot\mathscr{R} \quad = \quad
\dfrac{\dfrac{\dfrac{[\rho]^{1}}{\lnot\rho}\,\rho\mathrm{E} \qquad [\rho]^{1}}{\bot}\,{\supset}\mathrm{E}}{\lnot\rho}\,{\supset}\mathrm{I}^{1}
\]

By a further application of the instance \(\rho \)I of \(\in \)I, one obtains a closed derivation \(\mathscr {R}\) of \(\rho \) as well:

\[
\mathscr{R} \quad = \quad
\dfrac{\begin{array}{c}\lnot\mathscr{R}\\ \lnot\rho\end{array}}{\rho}\,\rho\mathrm{I}
\]

and combining these two derivations with an application of \(\mathbin {\supset }\)E yields Russell’s paradox in the form of the following derivation:

\[
\textbf{R} \quad = \quad
\dfrac{\begin{array}{c}\lnot\mathscr{R}\\ \lnot\rho\end{array} \qquad \begin{array}{c}\mathscr{R}\\ \rho\end{array}}{\bot}\,{\supset}\mathrm{E}
\]

This situation confirms the paradoxical nature of \(\rho \): We can produce two closed derivations of \(\rho \) and its negation respectively. This obviously leads to contradiction, i.e. to a closed derivation of \(\bot \).

Observe however that, since the occurrence of \(\lnot \rho \) concluded by \(\mathbin {\supset }\)I and serving as major premise of the final application of \(\mathbin {\supset }\)E is a maximal formula occurrence, this derivation is not \(\beta \)-normal. By applying the implication reduction \(\mathbin {\supset }\beta \) we obtain the following derivation:

\[
\dfrac{\dfrac{\begin{array}{c}\mathscr{R}\\ \rho\end{array}}{\lnot\rho}\,\rho\mathrm{E} \qquad \begin{array}{c}\mathscr{R}\\ \rho\end{array}}{\bot}\,{\supset}\mathrm{E}
\]

Here the occurrence of \(\rho \) concluded by \(\rho \)I and serving as premise of \(\rho \)E is a maximal formula occurrence. By an application of \(\rho \beta \) we obtain the derivation \(\textbf{R}\) we started with.

At each of the two steps there was only a single way of reducing the derivation; hence no \(\beta \)-reduction sequence starting from \(\textbf{R}\) ends with a \(\beta \)-normal derivation. That is, weak normalization does not hold for \(\beta \)-reduction in \(\texttt{NI}^{\mathbin {\supset }\bot \in }\), since the process of reduction of \(\textbf{R}\) gets stuck in what Tennant [2] called an “oscillating loop”. Prawitz proposed this to be the distinctive feature of Russell’s paradox.2

Tennant [2] considers a rather rich variety of both semantic and set-theoretic paradoxes besides Russell’s—the Liar and some of its relatives, Grelling’s, and Tarski’s quotational paradox—and shows that once the assumptions required for their formulation are spelled out in terms of natural deduction rules, they all generate derivations of \(\bot \) (or, as in the case of Curry’s paradox, of an arbitrary atomic proposition) which do not normalize.3\(^,\)4

The steps playing the role of \(\in \)I and \(\in \)E are called id est inferences, as they result from extra-logical principles: In the case of Russell’s paradox, from set-theoretic comprehension. In the case of the liar paradox, to take another example, analogous id est inferences would be based on the observation that a certain sentence says of itself that it is not true. Here, “observation” is not necessarily empirical inspection, but may result from some arithmetical referencing mechanism.

Schroeder–Heister and Tranchini [3] dubbed the ‘Prawitz-Tennant analysis of paradox’ the thesis that a paradoxical derivation is a derivation of an “unwanted” sentence (such as ‘Santa Claus exists’ in the case of Curry’s paradox, or \(\bot \)) that fails to normalize.5

2 A Simplified Presentation

In the sequel we will mainly focus on a simplified formulation of the paradoxes, obtained by assuming \(\rho \) to be a nullary connective governed by the rules \(\rho \)I and \(\rho \)E. We refer to the extension of \(\texttt{NI}^{\mathbin {\supset }\bot }\) with the rules for \(\rho \) as \(\texttt{NI}^{\mathbin {\supset }\bot \rho }\). The derivations of the previous section will therefore be viewed both as derivations of \(\texttt{NI}^{\mathbin {\supset }\bot \in }\) and as derivations of \(\texttt{NI}^{\mathbin {\supset }\bot \rho }\), the context always making clear which is meant.6

When \(\rho \) is treated as primitive, the labels introduction and elimination in the case of \(\rho \)I and \(\rho \)E may appear odd at first, since \(\rho \) figures both in the premise and in the conclusion of the rule. The labels are however justified, as \(\rho \) is the main operator of the conclusion of the introduction rule and the main operator of the premise of the elimination rule. (In particular they are instances of the general form of introduction and elimination rules given in the Appendix).

Being a nullary connective, \(\rho \) behaves like a proposition which is interderivable with its own negation. As is well known, a proposition like \(\rho \) is definable not only in the language of naive set theory but in other settings as well. Examples are languages which allow reference to their own expressions—as arithmetic does by means of Gödel numbering, or natural language does by means of quotes—and which contain a transparent truth predicate, that is, a predicate T governed by the following inference rules:

\[
\dfrac{A}{T\ulcorner A \urcorner}\;T\mathrm{I}
\qquad\qquad
\dfrac{T\ulcorner A \urcorner}{A}\;T\mathrm{E}
\]

where \(\ulcorner A \urcorner \) is a name of the sentence A.
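To see how these rules generate a \(\rho \)-like proposition, one may sketch the Liar instance. The sentence \(\lambda \) below is our own hypothetical example, assumed (as diagonalization or quotation would deliver) to be identical to \(\lnot T\ulcorner \lambda \urcorner \); under that assumption the instances of the truth rules exactly parallel \(\rho \)I and \(\rho \)E:

```latex
% Assuming a sentence \lambda with \lambda = \lnot T\ulcorner\lambda\urcorner,
% the formula T\ulcorner\lambda\urcorner plays the role of \rho:
\[
\dfrac{\lnot T\ulcorner\lambda\urcorner}{T\ulcorner\lambda\urcorner}\;T\mathrm{I}
\qquad\qquad
\dfrac{T\ulcorner\lambda\urcorner}{\lnot T\ulcorner\lambda\urcorner}\;T\mathrm{E}
\]
```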

The brute stipulation that \(\rho \) is governed by the two rules and by the reduction above permits one to disregard the exact conditions under which a sentence like it can be defined, and to focus on what is essential for the analysis of paradoxes to be developed in this part of the present work.

Different simplified versions of paradoxes such as the one we will consider have been discussed against the background of certain extensions of both natural deduction and sequent calculus. Schroeder–Heister [4] considers extensions of both sequent calculi and natural deduction by means of clausal definitions. In this context, paradoxes arise as typical examples of partial inductive definitions [5], such as the following one, in which an atom R (a mere notational variant of \(\rho \)) is defined through its own negation:

\[
\mathcal{D} \;=\; \bigl\{\; R \Leftarrow \lnot R \;\bigr\}
\]

Definitions are ‘put in actions’ by inference rules, which are justified by a principle of definitional closure (yielding introduction rules in natural deduction and right rules in sequent calculus) and a principle of definitional reflection (yielding elimination rules and left rules respectively). The natural deduction rules putting definition \(\mathcal {D}\) in action are just the rules for \(\rho \).

A different but related approach is deduction modulo [6], where a given set of rewriting rules (essentially corresponding to a definition in the sense of Schroeder–Heister) is viewed as inducing a congruence relation on propositions, modulo which deductive reasoning takes place.7

The analysis of paradoxes to be developed here is by no means in contrast with the one arising from these settings, and it is meant to be applicable, at least in principle, to them as well as to settings in which paradoxes are defined by the usual subtler means. This is, however, a task left for further work.

3 Which Background Logic?

All paradoxes investigated by Tennant yield a non-normalizing derivation of \(\bot \) using only intuitionistically acceptable inference rules (beyond those specific to the paradox). He therefore concludes his analysis by stressing that

it appears to me an open question whether every paradoxical set of sentences [...] can be shown to be paradoxical by means of an intuitionistic proof with a looping reduction sequence.

(Tennant [2], p. 285)

The focus on intuitionistic logic of the Prawitz-Tennant analysis of paradoxes has been criticized by Rogerson [7], who considers a formulation of Curry’s paradox in classical logic and observes that the derivation fails to display the loopy feature called for by the Prawitz-Tennant analysis. We consider a slight variation of Rogerson’s proof based on Russell’s rather than Curry’s paradox.

In the presence of the classical rule of reductio ad absurdum

\[
\dfrac{\begin{array}{c}[\lnot A]^{n}\\ \vdots\\ \bot\end{array}}{A}\;\mathrm{RAA}^{n}
\]

the derivation of Russell’s paradox \(\textbf{R}\) can be recast in a more symmetric fashion8:

\[
\textbf{R}_{cl} \quad = \quad
\dfrac{
\dfrac{\dfrac{\dfrac{[\rho]^{2}}{\lnot\rho}\,\rho\mathrm{E} \quad [\rho]^{2}}{\bot}\,{\supset}\mathrm{E}}{\lnot\rho}\,{\supset}\mathrm{I}^{2}
\qquad
\dfrac{\dfrac{[\lnot\rho]^{1} \quad \dfrac{[\lnot\rho]^{1}}{\rho}\,\rho\mathrm{I}}{\bot}\,{\supset}\mathrm{E}}{\rho}\,\mathrm{RAA}^{1}
}{\bot}\,{\supset}\mathrm{E}
\]

This derivation can be reduced by an application of \(\mathbin {\supset }\beta \)-reduction to the following:

\[
\dfrac{
\dfrac{\dfrac{\dfrac{[\lnot\rho]^{1} \quad \dfrac{[\lnot\rho]^{1}}{\rho}\,\rho\mathrm{I}}{\bot}\,{\supset}\mathrm{E}}{\rho}\,\mathrm{RAA}^{1}}{\lnot\rho}\,\rho\mathrm{E}
\qquad
\dfrac{\dfrac{[\lnot\rho]^{2} \quad \dfrac{[\lnot\rho]^{2}}{\rho}\,\rho\mathrm{I}}{\bot}\,{\supset}\mathrm{E}}{\rho}\,\mathrm{RAA}^{2}
}{\bot}\,{\supset}\mathrm{E}
\]

According to Rogerson, this derivation cannot be further reduced9:

No standard reduction steps given by Prawitz [1] straightforwardly apply in this case as the use of the [set-forming] operator insulates the formulas from the normalization process. It seems plausible to conclude that this proof does not reduce to a normal form and does not generate a non-terminating reduction sequence in the sense of Tennant [2, 8]. Thus, Tennant’s criterion for paradoxicality does not apply here. (This is not to say that it is inconceivable that someone might be able to define a reduction step applicable in this case that would induce a non-terminating reduction sequence.)

(Rogerson [7], p. 174)

Although it is true that no standard reduction step given by Prawitz [1] applies to this derivation, it is also well known that the normalization strategy for classical logic devised there applies only to language fragments in which the application of RAA can be restricted to atomic conclusions. In richer languages, for example in languages containing disjunction and existential quantification, the conclusion of RAA cannot be restricted to atomic formulas without loss of generality, and in order for normal derivations to enjoy the subformula property a further (family of) reduction(s) has to be considered. This new reduction is based on the idea that the conclusion of an application of RAA which is also the major premise of an elimination rule counts as a redundancy to be eliminated. The reduction, proposed by Stålmarck [9], can be depicted schematically as follows:

\[
\dfrac{\dfrac{\begin{array}{c}[\lnot A]^{1}\\ \mathscr{D}\\ \bot\end{array}}{A}\,\mathrm{RAA}^{1} \qquad \langle\mathscr{D}\rangle}{C}\,\dagger\mathrm{E}
\qquad\rightsquigarrow\qquad
\dfrac{\begin{array}{c}[\lnot C]^{1}\\ \mathscr{D}^{*}\\ \bot\end{array}}{C}\,\mathrm{RAA}^{1}
\]

in which \(\mathscr{D}^{*}\) results from \(\mathscr{D}\) by replacing every application of \({\supset}\)E having \([\lnot A]^{1}\) as major premise and a derivation \(\mathscr{E}\) of \(A\) as minor premise:

\[
\dfrac{[\lnot A]^{1} \qquad \begin{array}{c}\mathscr{E}\\ A\end{array}}{\bot}\,{\supset}\mathrm{E}
\qquad\rightsquigarrow\qquad
\dfrac{[\lnot C]^{1} \qquad \dfrac{\begin{array}{c}\mathscr{E}\\ A\end{array} \qquad \langle\mathscr{D}\rangle}{C}\,\dagger\mathrm{E}}{\bot}\,{\supset}\mathrm{E}
\]

where \(\dagger \)E stands for an application of an elimination rule for some connective \(\dagger \) belonging to the language fragment under consideration and \(\langle \mathscr {D}\rangle \) stands for the (possibly empty) list of derivations of the minor premises of the application of \(\dagger \)E.

In the language of naive set theory, the presence of the operators for the formation of set terms jeopardizes the notion of atomic sentence. Thus, a redundant conclusion of RAA is not always a non-atomic formula but, more generally, any formula which can act as the major premise of an elimination rule. This makes it plausible to let \(\dagger \)E, in the reduction scheme proposed by Stålmarck, range over \(\rho \)E as well. Once this is done, Rogerson’s derivation can be further reduced, and after some steps it reduces back to itself.10

Tennant formulated his criterion for paradoxicality with an emphasis on intuitionistic logic, claiming that a paradoxical sentence is one governed by id est inferences such that, in the extension of intuitionistic logic obtained by their addition, there are derivations of \(\bot \) that fail to normalize. As observed by Rogerson, the choice of intuitionistic logic is certainly motivated by the aim of showing that non-constructive principles of reasoning play no significant role in the phenomenon of paradoxes. However, part of the reason for this choice is also the fact that the criterion for paradoxicality is formulated in terms of normalization, and intuitionistic logic (in its usual formulation at least) is well-behaved with respect to normalization. Given the crucial role played by normalization (not only from the formal, but also from the conceptual point of view), the ‘base’ calculus relative to which the non-normalizability effects of id est inferences are to be tested must enjoy normalization.

Tennant may be wrong in restricting attention to intuitionistic logic, but we do not believe that extending the criterion beyond this logic is as problematic as Rogerson claims. In the case of classical logic, the above observations are sufficient to show that, on a proper account of normalization for classical logic, Russell’s paradox (and Curry’s as well) displays the looping effect called for by the Prawitz-Tennant analysis. Rogerson hints at other possible counterexamples, but, provided the logical frameworks in the background can be given a proper proof-theoretic presentation, Tennant’s criterion should always be applicable.

In fact, the notion of harmonious calculus used to establish Fact 3 (see Sect. 1.6 above) provides some sufficient conditions for what should be meant by “a proper proof-theoretic presentation”. Whenever the id est inferences display the kind of harmony needed for the resulting calculus to qualify as harmonious, Fact 3 warrants that every closed \(\beta \)-normal derivation ends with an introduction rule. Hence, if in a certain calculus a proposition has no introduction rules (as is the case for \(\bot \) in \(\texttt{NI}^{\mathbin {\supset }\bot \rho }\)), there cannot be closed \(\beta \)-normal derivations of it. By contraposition, if the calculus allows one to construct a closed derivation of a proposition without introduction rules, this derivation cannot be brought into \(\beta \)-normal form.

4 A Substructural Analysis

Current philosophical discussions of paradoxes have highlighted the key role played by “structural” properties of logical consequence (such as reflexivity, transitivity, monotonicity and contraction11) in the derivations of contradictions within paradoxical languages, and have proposed solutions to the paradoxes consisting in the rejection or restriction of one of these properties (see e.g. Zardini [10] for an overview of the different directions of investigation on the topic). The term ‘structural’ is used for these properties since the rules expressing them in sequent calculus (the identity initial sequents for reflexivity, the cut rule for transitivity, weakening for monotonicity) are commonly referred to as ‘structural rules’.12 Structural rules (like the properties they express) make no reference to any particular piece of logical vocabulary, in contrast to the “operational” rules, each of which governs a particular logical connective.

Given the correspondence between normalizability in natural deduction and the admissibility of the cut rule in sequent calculus, the Prawitz-Tennant analysis of paradoxes is closely connected to the analysis that focuses on the role played by the cut rule in sequent calculus. The latter has been advanced in the last decade by several authors (notably Ripley [11], who adapted ideas going back to Girard [12, 13] to a language equipped with a transparent truth predicate).

Whereas the connection between cut and transitivity of logical consequence is obvious, the one between transitivity and normalization might require a short explanation. The transitivity of derivability is hard-wired into the natural deduction setting: in (almost)13 any calculus of natural deduction, given a derivation \(\mathscr {D}\) of B from A and another one \(\mathscr {D}'\) of C from B, there is obviously one of C from A, namely the one resulting from plugging \(\mathscr {D}\) in place of the open assumptions of \(\mathscr {D}'\). However, it is the normalization theorem which warrants transitivity at the level of normal derivability in a given calculus: given a \(\beta \)-normal derivation \(\mathscr {D}\) of B from A and another one \(\mathscr {D}'\) of C from B, the derivation resulting from plugging \(\mathscr {D}\) in place of the open assumptions of \(\mathscr {D}'\) is not necessarily normal (in the resulting derivation one or more occurrences of B could be maximal). It is only if normalization holds that we have the warrant that there is also a \(\beta \)-normal derivation of C from A. When normalization fails, as in the calculus \(\texttt{NI}^{\mathbin {\supset }\bot \rho }\), there is no such warrant. In fact, a counterexample to the transitivity of \(\beta \)-normal derivability is easily obtained by considering the derivation \(\lnot \mathscr {R}\) together with the one resulting from “removing” from \(\textbf{R}\) the two occurrences of the subderivation \(\lnot \mathscr {R}\):

\[
\dfrac{\lnot\rho \qquad \dfrac{\lnot\rho}{\rho}\,\rho\mathrm{I}}{\bot}\,{\supset}\mathrm{E}
\]

This derivation of \(\bot \) from \(\lnot \rho \) is \(\beta \)-normal, and so is the derivation \(\lnot \mathscr {R}\). However, by combining the two derivations one obtains a derivation of \(\bot \) from no undischarged assumption (viz. \({\textbf{R}}\)) which is not \(\beta \)-normal and which does not reduce to a \(\beta \)-normal derivation. In fact, as observed above, there cannot be any closed \(\beta \)-normal derivation of \(\bot \) in \(\texttt{NI}^{\mathbin {\supset }\bot \rho }\), as a consequence of Fact 3.

Another structural rule of sequent calculus which plays an essential role in connection with paradoxes is contraction.14 In the natural deduction setting, contraction corresponds to the possibility of discharging more than one copy of an assumption at once. Dropping contraction in sequent calculus thus corresponds, in natural deduction, to restricting \(\mathbin {\supset }\)I so that at most one copy of an assumption may be discharged at a time. In the modified calculus, the non-normalizing derivation \(\textbf{R}\) is blocked, since it is impossible to derive \(\bot \) using the rules for \(\rho \) and the ‘restricted’ implication rules; in fact, it would not even be possible to derive either of \(\rho \) or \(\lnot \rho \). Moreover, all derivations would normalize, i.e. transitivity would be fully restored: without contraction, both \(\mathbin {\supset }\beta \) and \(\rho \beta \) decrease the size of a derivation (i.e. the number of applications of inference rules in it), so normalization can be shown to terminate by induction on size rather than on the number of maximal formulas of maximal degree.

That contraction is an essential ingredient in triggering Russell’s paradox was already observed by Fitch [14], who initiated the investigation of contraction-free logical settings. Formal attempts at developing naive set theory on a contraction-free basis are due to Grišin [15], Girard [16] and Petersen [17]. More recently, Zardini [18] investigated the possibility of developing a theory of truth in a contraction-free setting (thereby addressing semantic paradoxes as well).

Contraction-free approaches to paradoxes are of great interest, but they will not be pursued in the present work. Given the semantic role of reduction in PTS, it seems that approaches to paradoxes focusing on normalization failure have the best prospects of being integrated into the PTS picture developed in Chap. 2.

Notes to This Chapter

  1.

    For some authors, in order for a collection of rules to qualify as harmonious, it is essential that the introduction rules satisfy Dummett’s complexity condition (see the remarks in Sect. 2.10 following Definition 2.2 in Sect. 2.9). Hence they would deny the harmony of these rules on these grounds. Note, however, that not only “problematic” rules such as \(\in \)I and \(\in \)E, but also the rules for e.g. the second-order existential quantifier (see Troelstra and Schwichtenberg [19], Chap. 11) would fail to qualify as harmonious on these grounds.

  2.

    Observe that the above analysis, as well as the considerations to be developed below, applies if we replace the notions of \(\beta \)-reduction and \(\beta \)-normal derivation with those of weak \(\beta \)-reduction and \(\beta _w\)-normal derivation, discussed in Sect. 2.7.

  3.

    It is worth stressing that the nullary connective \(\bot \) counts as a contradiction only if some sort of explosion principle is associated with it. This is the case in \(\texttt{NI}^{\mathbin {\supset }\bot \in }\), where \(\bot \)E ensures that \(\bot \) is indeed something ‘bad’.

  4.

    Instead of looping reduction sequences one can, more generally, consider non-terminating reduction sequences, which cover paradoxes such as Yablo’s (see Tennant [8]). In the following, we will throughout speak of the looping feature of paradoxical derivations, keeping in mind that “non-termination” of reduction sequences is the appropriate, more general term.

  5.

    In more recent work, Tennant [20,21,22] has proposed a different proof-theoretic analysis of Russell’s paradox. On the revised analysis (which will be presented in some detail in Sect. 6.5 of Chap. 6), instead of a non-normalizing derivation of \(\bot \) from no assumption, one obtains a normal derivation of \(\bot \) from the assumption that Russell’s term \(\{x:\lnot x\in x\}\) possesses a denotation, i.e. a disproof of the existence of the set of all sets that do not belong to themselves. On these grounds, Tennant argues that the proof-theoretic analysis of paradoxes aligns with Ramsey’s famous distinction between semantic and set-theoretic paradoxes. On Tennant’s revised view only semantic paradoxes give rise to non-normalizing derivations of \(\bot \), whereas set-theoretic ‘paradoxes’ merely yield disproofs of inconsistent assumptions, and hence do not qualify as paradoxes at all.

  6.

    In the context of constructive type theory, we may introduce \(\rho \) directly by stipulating the following formation, introduction, elimination and equality rules (where ! stands for the operation associated with the introduction rule, and \(\text {!`}\) for the inverse operation to be associated with the elimination rule, whose meaning will be informally explained in Sect. 5.8):

    \[
    \dfrac{}{\rho\ \mathrm{set}}
    \qquad
    \dfrac{t : \lnot\rho}{\mathord{!}t : \rho}
    \qquad
    \dfrac{t : \rho}{\text{!`}t : \lnot\rho}
    \qquad
    \dfrac{t : \lnot\rho}{\text{!`}\mathord{!}t = t : \lnot\rho}
    \]

    The derivations \(\lnot \mathscr {R}\) and \(\mathscr {R}\) of \(\lnot \rho \) and of \(\rho \) are decorated (respectively) as follows:

    \[
    \dfrac{\dfrac{\dfrac{[x : \rho]^{1}}{\text{!`}x : \lnot\rho} \qquad [x : \rho]^{1}}{\texttt{app}(\text{!`}x, x) : \bot}}{\lambda x. \texttt{app}(\text{!`}x, x) : \rho \supset \bot}
    \qquad\qquad
    \dfrac{\dfrac{\dfrac{\dfrac{[x : \rho]^{1}}{\text{!`}x : \lnot\rho} \qquad [x : \rho]^{1}}{\texttt{app}(\text{!`}x, x) : \bot}}{\lambda x. \texttt{app}(\text{!`}x, x) : \rho \supset \bot}}{\mathord{!}\lambda x. \texttt{app}(\text{!`}x, x) : \rho}
    \]

    That is, the two derivations \(\lnot \mathscr {R}\) and \(\mathscr {R}\) correspond to the two closed \(\beta \)-normal terms decorating their respective conclusions:

    $$\lambda x. \texttt{app} (\text {!`}x, x): \rho \supset \bot \qquad \qquad !\lambda x. \texttt{app} (\text {!`}x, x): \rho $$

    and the derivation \(\textbf{R}\) corresponds to the following term:

    $$\texttt{app}(\lambda x. \texttt{app}(\text{!`}x, x),\; \mathord{!}\lambda x. \texttt{app}(\text{!`}x, x)) : \bot $$

    whose loopy reduction can be written as:

    $$\texttt{app}(\lambda x. \texttt{app}(\text{!`}x, x),\; \mathord{!}\lambda x. \texttt{app}(\text{!`}x, x)) \;\rightleftarrows\; \texttt{app}(\text{!`}\,\mathord{!}\lambda x. \texttt{app}(\text{!`}x, x),\; \mathord{!}\lambda x. \texttt{app}(\text{!`}x, x))$$

    The reader familiar with untyped \(\lambda \)-calculus will easily recognize that \(\textbf{R}\) is just a typed version of the well-known loopy combinator \((\lambda x.xx)(\lambda x.xx)\) (cf. also Schroeder–Heister [4, Sect. 4] and Note 7 below).
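The oscillation can also be checked mechanically. The following Python sketch encodes proof-terms as nested tuples, with 'bang' and 'ibang' standing for ! and the inverse operation; the encoding and function names are ours, purely for illustration:

```python
# Proof-terms as nested tuples, with one-step reduction implementing the
# two reductions of the text: the implication reduction (a beta-step on
# lam/app) and the rho-reduction (ibang cancelling bang).

def subst(term, var, arg):
    """Naive substitution of arg for var (no capture issues arise below)."""
    if term == var:
        return arg
    if isinstance(term, tuple):
        return tuple(subst(t, var, arg) for t in term)
    return term

def step(term):
    """Perform one reduction step, preferring the root, else the leftmost subterm."""
    if term[0] == 'app' and isinstance(term[1], tuple) and term[1][0] == 'lam':
        _, (_, x, body), arg = term          # beta: app(lam x. M, N) -> M[N/x]
        return subst(body, x, arg)
    if term[0] == 'ibang' and isinstance(term[1], tuple) and term[1][0] == 'bang':
        return term[1][1]                    # rho-beta: ibang(bang(M)) -> M
    for i, t in enumerate(term):             # otherwise reduce leftmost redex below
        if isinstance(t, tuple):
            r = step(t)
            if r != t:
                return term[:i] + (r,) + term[i + 1:]
    return term

L = ('lam', 'x', ('app', ('ibang', 'x'), 'x'))   # lam x. app(ibang x, x)
R = ('app', L, ('bang', L))                      # the term decorating R

t1 = step(R)        # beta-step:     app(ibang(bang L), bang L)
t2 = step(t1)       # rho-beta-step: back to app(L, bang L)
print(t2 == R)      # True: the reduction oscillates
```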

  7.

    The two approaches are closely related, and their main difference is manifested when one considers the type theories that can be associated with them: In definitional reflection a type-constructor—yielding a term typed by the definiendum (the head of the clause) when applied to terms typed by the definiens (the body of the clause)—is associated with each clause of the definition, together with an inverse operation of type annihilation for the given defined atom. In deduction modulo, on the other hand, the types of the definiendum and of the definiens are simply identified modulo the congruence induced by the set of rewriting rules; in this case there is thus no explicit operation on terms corresponding to the definitional steps. In both settings the proof-terms associated with the above derivation \(\textbf{R}\) of \(\bot \) do not normalize. In deduction modulo, this term is just the non-normalizing term \((\lambda x.xx)(\lambda x.xx)\) well known from untyped \(\lambda \)-calculus. In the definitional setting the term has the somewhat richer structure described in Note 6, due to the extra term-constructors associated with the definitional steps.

  8.

    By ‘symmetric’ we mean that the two immediate subderivations of \(\textbf{R}_{cl}\) can be obtained from each other by replacing occurrences of \(\rho \) with occurrences of \(\lnot \rho \) (and vice versa) and by switching the order of the premises of (\(\mathbin {\supset }\)E).

  9.

    Although Rogerson speaks of a derivation based on Curry’s paradox, the derivation we discuss can be viewed as obtained from the last derivation on p. 174 of Rogerson [7] by replacing \(a\in a\) with \(\rho \) and p with \(\bot \), and moreover by (i) removing from both main subderivations redundant applications of RAA, i.e. applications allowing one to pass from \(\bot \) to \(\bot \) with no discharge; and (ii) \(\eta \)-reducing in both subderivations to \(\lnot \rho \). The considerations in this section apply exactly to Rogerson’s original derivation as well.

  10.

    Provided that, as usual, one also reduces redundant applications of RAA (see Note 9 above). Otherwise, one ends up with the more general kind of non-termination mentioned in Note 4.

  11.

    When logical consequence is assumed (as it usually is) to hold between sets of formulas, the rule of contraction does not correspond to any distinguished property of logical consequence. Not so when consequence is understood as holding between multi-sets of formulas, in which case contraction (if desired) has to be explicitly formulated.

  12.

    The term ‘structural’ is sometimes used only for the rules of contraction and weakening but we will follow here the nowadays more common usage (in the philosophical community) of referring also to cut and identity initial sequents as structural rules.

  13.

    A notable exception is Tennant’s calculus for Core Logic, in which only derivations in normal form are admitted (see Note 16 to Chap. 3). As a result of this restriction (and of other peculiarities of Core Logic), transitivity does not hold in general: given a derivation of B from \(\Gamma \) and one of C from \(B,\Delta \), there might not be a derivation of C from \(\Gamma ,\Delta \) (although there is one of either C or \(\bot \) from a subset of \(\Gamma ,\Delta \)).

  14.

    This is true when sequents are defined as pairs of multi-sets or of sequences of formulas (whereby, in a single-conclusion setting, the second element of a sequent is just a single formula). When sequents are taken to be pairs of sets of formulas, contraction becomes an implicit feature of the sequent calculus, and thus plays no explicit role in paradoxical derivations.