1 Introduction: harmony and inversion

According to Dummett (1981, 1991), an acceptable theory of meaning should strike a balance between the two fundamental aspects of the practice of assertion, namely the conditions that must be fulfilled for the assertion of a sentence to be correct, and the consequences that can be drawn from its assertion. Such a balance is referred to as harmony.

The most important context of use of logically complex sentences is deductive reasoning. In the natural deduction setting (Gentzen 1935; Prawitz 1965), the two aspects of the practice of assertion are captured by the two types of rules which are distinctive of natural deduction: introduction and elimination rules. The introduction rules for a logical constant \(\dagger \) are those which allow one to establish a complex sentence having \(\dagger \) as main operator, and thus they specify the conditions of its correct assertion; in the elimination rules for \(\dagger \), a complex sentence having \(\dagger \) as main operator acts as the main premise of the rules, and these rules thus specify how to draw consequences from its assertion.

Hence, in natural deduction, the requirement of harmony as applying to logically complex sentences becomes a condition that should be satisfied by the collections of introduction and elimination rules:

Definition 1

(Harmony: Informal statement) What can be inferred from a logically complex sentence by means of the elimination rules for its main connective is no more and no less than what has to be established in order to infer that very logically complex sentence using the introduction rules for its main connective.Footnote 1

A typical example of rules which fail to be in harmony is given by those for Prior’s (1960) \(\mathbin {\mathtt {tonk}}\):
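In their usual formulation, and labelled here for ease of reference, the rules are:

$$\frac{A}{A\mathbin {\mathtt {tonk}}B}\ \mathtt {tonk}\text {I} \qquad \qquad \frac{A\mathbin {\mathtt {tonk}}B}{B}\ \mathtt {tonk}\text {E}$$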

These rules display no match between what can be inferred using the elimination rule and what is needed to establish the premise of the elimination rule using the introduction rule.Footnote 2 Thus, even if A and B are meaningful statements, their “contonktion” \(A\mathbin {\mathtt {tonk}}B\) is nonsense, since the rules governing \(\mathbin {\mathtt {tonk}}\) are ill-formed.Footnote 3

In contrast with the rules governing \(\mathbin {\mathtt {tonk}}\), the standard rules for conjunction:
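With the labels by which they will be referred to below, these are:

$$\frac{A\qquad B}{A\wedge B}\ \wedge \text {I} \qquad \qquad \frac{A\wedge B}{A}\ \wedge \text {E}_1 \qquad \qquad \frac{A\wedge B}{B}\ \wedge \text {E}_2$$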

These rules display a perfect match between what can be inferred by means of the elimination rules and what has to be established in order to apply the introduction rule.

In general, when the rules for a connective are in harmony, two kinds of deductive patterns can be exhibited.

Patterns of the first kind have been described as “hillocks” (von Plato 2008) or complexity peaks (Dummett 1991); they consist of an application of an introduction rule immediately followed by an application of a corresponding elimination rule. The possibility of “levelling” these complexity peaks (the levelling operations being usually referred to as reductions) amounts to the fact that harmonious elimination rules allow one to infer no more than what has to be established to infer their major premise by introduction. The rules for conjunction above yield two such patterns, which can be got rid of as follows:
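A sketch of the two reductions, in the notation for derivations used throughout (\(\mathscr {D}_1\) and \(\mathscr {D}_2\) stand for arbitrary derivations, and \(\leadsto \) marks the reduction step):

$$\dfrac{\dfrac{\begin{array}{c}\mathscr {D}_1\\ A\end{array}\qquad \begin{array}{c}\mathscr {D}_2\\ B\end{array}}{A\wedge B}\ \wedge \text {I}}{A}\ \wedge \text {E}_1\ \ \leadsto \ \ \begin{array}{c}\mathscr {D}_1\\ A\end{array}\qquad \qquad \dfrac{\dfrac{\begin{array}{c}\mathscr {D}_1\\ A\end{array}\qquad \begin{array}{c}\mathscr {D}_2\\ B\end{array}}{A\wedge B}\ \wedge \text {I}}{B}\ \wedge \text {E}_2\ \ \leadsto \ \ \begin{array}{c}\mathscr {D}_2\\ B\end{array}$$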

Patterns of the other kind, which could be described as complexity valleys, result when one infers a complex sentence from itself by first eliminating and then reintroducing its main connective. The possibility of expanding a deduction via a complexity valley amounts to the fact that harmonious elimination rules allow one to infer no less than what is needed to infer their major premise by introduction:Footnote 4
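For conjunction, the expansion takes the standard form (again a sketch, with \(\leadsto \) now marking the expansion step):

$$\begin{array}{c}\mathscr {D}\\ A\wedge B\end{array}\ \ \leadsto \ \ \dfrac{\dfrac{\begin{array}{c}\mathscr {D}\\ A\wedge B\end{array}}{A}\ \wedge \text {E}_1\qquad \dfrac{\begin{array}{c}\mathscr {D}\\ A\wedge B\end{array}}{B}\ \wedge \text {E}_2}{A\wedge B}\ \wedge \text {I}$$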

In the case of implication, whose rules are:Footnote 5
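(in their usual formulation, with square brackets indicating the assumptions discharged by \(\supset \)I)

$$\frac{\begin{array}{c}[A]\\ \vdots \\ B\end{array}}{A\supset B}\ \supset \text {I}\qquad \qquad \frac{A\supset B\qquad A}{B}\ \supset \text {E}$$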

we have the following reduction and expansion:
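A sketch of these, under the same conventions as above (the right-hand side of the reduction is obtained by replacing the assumptions discharged by \(\supset \)I with the derivation \(\mathscr {D}_2\) of the minor premise):

$$\dfrac{\dfrac{\begin{array}{c}[A]\\ \mathscr {D}_1\\ B\end{array}}{A\supset B}\ \supset \text {I}\qquad \begin{array}{c}\mathscr {D}_2\\ A\end{array}}{B}\ \supset \text {E}\ \ \leadsto \ \ \begin{array}{c}\mathscr {D}_2\\ A\\ \mathscr {D}_1\\ B\end{array}\qquad \qquad \begin{array}{c}\mathscr {D}\\ A\supset B\end{array}\ \ \leadsto \ \ \dfrac{\dfrac{\begin{array}{c}\mathscr {D}\\ A\supset B\end{array}\qquad [A]}{B}\ \supset \text {E}}{A\supset B}\ \supset \text {I}$$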

Footnote 6

Prawitz (1979) first proposed a general procedure to associate to any arbitrary collection of introduction rules a specific collection of elimination rules which is in harmony with the given collection of introduction rules. We will refer to such procedures as inversion principles.Footnote 7

Prawitz’s procedure was later refined by Schroeder-Heister (1981, 1984, 2014a) in his calculus of higher-level rules, a deductive framework which generalizes standard natural deduction rules by allowing not only formulas but also rules themselves to be assumed and subsequently discharged in the course of a derivation.Footnote 8

Before presenting the calculus of higher-level rules in Sect. 3, we will introduce in Sect. 2 a distinction between extensional and intensional accounts of harmony, and we will show in which sense the approach to harmony sketched in this section is distinctively intensional.

We will then show in Sect. 4 that reductions and expansions can be defined for any connective governed by an arbitrary collection of introduction rules and by the collection of elimination rules obtained by the Prawitz and Schroeder-Heister inversion principle (henceforth PSH-inversion). Whereas the reductions associated with these connectives have already been given in the literature (although, in fact, only by Schroeder-Heister 1981, and in German), no formulation of the expansions for arbitrary connectives has been given so far. Prawitz (1971) specified expansions for the standard intuitionistic connectives, but implicit in remarks by Dummett (1991) is a difficulty affecting Prawitz’s formulation of the expansion for disjunction which threatens the very idea of using expansions as a way to cash out the “no less” aspect of harmony. We will propose a pattern for expansions which is more general than Prawitz’s, we will show that it provides a solution to Dummett’s difficulty, and from it we will obtain a pattern for expansions for arbitrary connectives.

In Sect. 5 we will restate an observation of Schroeder-Heister to the effect that an exhaustive account of harmony cannot consist just in the specification of an inversion principle. We will argue that most authors (sometimes implicitly and sometimes in a more explicit manner) assume that a thorough account of harmony can be attained by coupling the inversion principle with a notion of equivalence between (collections of) rules, and that all seem to agree that the relevant notion of equivalence is interderivability. In Sect. 6, we will however show that the account of harmony obtained by coupling inversion with interderivability fails to qualify as intensional. Section 7 briefly summarizes the results of the paper.

2 Harmony: intensional vs extensional accounts

The account of harmony sketched in the previous section differs from the account of harmony stemming from Belnap (1962), who cashed out the “no more” and “no less” aspects of the informal definition of harmony in terms of conservativity and uniqueness respectively.Footnote 9

Following Dummett (who refers to conservativity as “global” harmony and to the availability of reductions as “intrinsic” harmony), for some authors (e.g. Schroeder-Heister 2014a, pp. 1204–1205) the distinctive feature of Belnap’s conditions is their being “global”, in contrast with other “local” ways of rendering the informal definition of harmony, such as the one sketched in the previous section.Footnote 10

In our opinion, however, what crucially distinguishes the account of harmony sketched in the previous section from the one of Belnap is something else: Both conservativity and uniqueness are defined in terms of derivability (i.e. of what can be derived by means of the rules for a connective) and not in terms of properties involving the internal structure of derivations (i.e. of how something can be derived). We propose to refer to accounts of harmony based on derivability as extensional, while those making explicit reference to the internal structure of derivations will be referred to as intensional.

To fully appreciate the value of the distinction, as well as the choice of the terminology, we first observe that reductions and expansions induce what is called a notion of identity of proofs.

According to intuitionists, proofs are the result of an activity of mental construction performed by an idealized mathematician. On this conception, it is natural to ask what the relationship is between proofs (as abstract entities) and derivations (as formal objects). A proposal which goes back at least to Prawitz (1971) is that of viewing formal derivations as linguistic representations of proofs, and the relation between derivations and proofs as analogous to the one between singular terms and their denotations. A further question which immediately arises is that of when two derivations represent the same proof. For Prawitz, reductions and expansions are transformations on derivations which preserve the identity of the proof represented. This conception draws on an analogy with arithmetic, where the rules for calculating the values of complex expressions are naturally understood as preserving the identity of the numbers which are denoted by the numerical expressions one operates with: When we transform \(`(3\times 4)-5\)’ into \(`12-5\)’, we pass from a more complex to a simpler representation of the number seven.Footnote 11

We can thus take the symmetric, reflexive and transitive closure of the relation induced by reductions and expansions as yielding an equivalence relation on derivations: Two derivations belong to the same equivalence class if and only if there is a chain of applications of the reductions and expansions (and of their inverse operations) connecting the two derivations. Two derivations belonging to the same equivalence class will be said to denote, or represent, the same proof.Footnote 12

From now on, adopting the standard terminology of the lambda calculus, we will call the equations displaying complexity peaks \(\beta \)-equations, and the equations displaying complexity valleys \(\eta \)-equations. In the case of conjunction we thus have:
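In sketch form (rule labels omitted; the first two are the \(\beta \)-equations, the third the \(\eta \)-equation):

$$\dfrac{\dfrac{\begin{array}{c}\mathscr {D}_1\\ A\end{array}\quad \begin{array}{c}\mathscr {D}_2\\ B\end{array}}{A\wedge B}}{A}\ =_\beta \ \begin{array}{c}\mathscr {D}_1\\ A\end{array}\qquad \dfrac{\dfrac{\begin{array}{c}\mathscr {D}_1\\ A\end{array}\quad \begin{array}{c}\mathscr {D}_2\\ B\end{array}}{A\wedge B}}{B}\ =_\beta \ \begin{array}{c}\mathscr {D}_2\\ B\end{array}\qquad \begin{array}{c}\mathscr {D}\\ A\wedge B\end{array}\ =_\eta \ \dfrac{\dfrac{\begin{array}{c}\mathscr {D}\\ A\wedge B\end{array}}{A}\quad \dfrac{\begin{array}{c}\mathscr {D}\\ A\wedge B\end{array}}{B}}{A\wedge B}$$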

By “reduction” and “expansion” we will understand the left-to-right and right-to-left orientations of these equations respectively.

When inference rules are equipped with reductions and expansions, and thus a notion of identity of proofs is available, we are no longer dealing with an extensional notion of derivability, but with an intensional one: besides being able to tell whether a sentence is derivable or not (possibly from other sentences), we can discriminate between different ways in which a sentence is derivable (possibly from other sentences).Footnote 13

Moreover, on the basis of the notion of identity of proofs, it is possible to introduce an equivalence relation on sentences which is stricter than interderivability, and which in category theory and the theory of lambda calculus is referred to as isomorphism:Footnote 14

Definition 2

(Isomorphism) Two sentences A and B are isomorphic if and only if there are two derivations \(\mathscr {D}_1\) and \(\mathscr {D}_2\) of B from A and of A from B respectively such that the two derivations obtained by composing \(\mathscr {D}_1\) and \(\mathscr {D}_2\) are \(\beta \eta \)-equivalent to the assumptions of A and B respectively:
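Schematically, and writing the composition of two derivations as the result of plugging the one on top of the other, the condition is the following:

$$\begin{array}{c}A\\ \mathscr {D}_1\\ B\\ \mathscr {D}_2\\ A\end{array}\ =_{\beta \eta }\ A\qquad \qquad \begin{array}{c}B\\ \mathscr {D}_2\\ A\\ \mathscr {D}_1\\ B\end{array}\ =_{\beta \eta }\ B$$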

Why is this notion called isomorphism? Intuitively, if a derivation in which all assumptions are discharged represents a proof of its conclusion, a derivation in which the conclusion depends on undischarged assumptions can be viewed as representing a function from proofs of the assumptions to proofs of the conclusion: By replacing the assumptions of a derivation with closed derivations for them (i.e. by feeding the function with proofs of the assumptions) one obtains a closed derivation of the conclusion of the derivation (i.e. a proof of the conclusion). The limit case of a derivation consisting just of the assumption of a sentence A represents the identity function on the set of proofs of A. Given this, according to the definition above, two sentences A and B are isomorphic if and only if there are two functions from the set of proofs of A to the set of proofs of B and vice versa, whose two compositions are equal (modulo the notion of identity of proofs) to the identity functions on the two sets respectively.

Typical examples of pairs of isomorphic sentences are constituted by sentences of the form \(A\wedge B\) and \(B\wedge A\), while typical examples of pairs of interderivable but non-isomorphic sentences are constituted by sentences of the form \(A\wedge A\) and A. Whereas in order to establish that two sentences of a specific form are isomorphic one can proceed syntactically (i.e. by presenting two derivations satisfying the condition required in Definition 2), to show that two sentences of a given form may not be isomorphic one usually argues semantically. One has to find a suitable mathematical structure such that (i) the \(\beta \)- and \(\eta \)-equations governing the connectives involved in the sentences in question are satisfied in the structure; (ii) the interpretations of two sentences of the form in question are not isomorphic in the structure. In the case of a pair of sentences of the form \(A\wedge A\) and A, it is enough to take proofs of a conjunction to be pairs of proofs of its conjuncts. Under this interpretation (which can easily be seen to validate both \(\beta \)- and \(\eta \)-equations), if the set of proofs of A has cardinality \(\kappa \), the set of proofs of \(A\wedge A\) has cardinality \({\kappa }^2\). Thus, whenever A is interpreted as a finite set of proofs of cardinality \(\kappa > 1\), the sets of proofs of A and of \(A\wedge A\) have different cardinalities, and thus there cannot be an isomorphism between the two.

Another example of pairs of interderivable but not necessarily isomorphic sentences, which is of relevance for the results to be presented below in Sect. 6, is constituted by pairs of sentences of the form \(((A\mathbin {\supset }B)\wedge (B\mathbin {\supset }A))\wedge A\) and \(((A\mathbin {\supset }B)\wedge (B\mathbin {\supset }A))\wedge B\). To see that pairs of sentences of these forms need not be isomorphic, it is enough to take the proofs of an implication \(A\mathbin {\supset }B\) to be the functions from proofs of A to proofs of B.Footnote 15 Whenever A and B are interpreted on sets of proofs of different cardinalities, the sets of proofs of the two sentences will also have different cardinalities.

In the proof-theoretic semantic literature, the notion of isomorphism has been proposed as a formal explicans of the notion of synonymy or of identity of meaning. As remarked by Došen,

That two sentences are isomorphic means that they behave exactly in the same manner in proofs: by composing, we can always extend proofs involving one of them, either as assumption or as conclusion, to proofs involving the other, so that nothing is lost, nor gained. There is always a way back. By composing further with the inverses, we return to the original proofs.     (Došen 2003, p. 498)

Moreover, the fact that the relationship of isomorphism is stricter than that of mere interderivability makes isomorphism more apt than interderivability to characterize the intuitive notion of synonymy in an inferentialist setting. For instance, whereas on an account of synonymy as interderivability all provable sentences are synonymous, this can be safely denied on an account of synonymy as isomorphism (consider e.g. \(A\mathbin {\supset }A\) and \(B\mathbin {\supset }B\), for A and B interpreted on sets of proofs of distinct cardinalities greater than 1, and interpret \(\mathbin {\supset }\) as above). We may therefore say that when synonymy is explained via isomorphism, we attain a truly intensional account of meaning.

Since Belnap accounts for harmony using the notions of conservativity and uniqueness, which are defined merely in terms of derivability rather than by appealing to any property of the structure of derivations, his criteria do not yield (nor require) any notion of identity of proofs analogous to the one induced by reductions and expansions, and thus they can be formulated against the background of a purely extensional understanding of consequence relations. In turn, no notion of isomorphism is in general available for sentences which are governed by harmonious rules in Belnap’s sense. Thus, on Belnap’s approach to harmony, the most natural account of synonymy is in terms of interderivability, and this again vindicates the claim that his approach to harmony is merely extensional.

Although the notion of isomorphism has been investigated only in the context of languages containing standard connectives, it can be naturally applied to the case of connectives characterized by arbitrary rules, provided that the rules are equipped with reductions and expansions and thus that a notion of equivalence between derivations is defined.

In the next two sections, we will present a systematic way of achieving this goal.

It should however be kept in mind that there are no restrictions of principle to the possibility of applying the intensional account of harmony beyond the realm of logical constants. For example (see, for details, Martin-Löf 1971; Prawitz 1971, § 3.8.8 and § III.4 respectively), in a first-order setting one may consider a predicate N for the property of being a natural number governed by the following inference rules (we use \('\) in postfix notation as a unary function symbol for the successor function, and with \(A\llbracket t/x\rrbracket \) we indicate the capture-avoiding substitution of t for x in A):

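One standard formulation of these rules (assumed here for concreteness, with y an eigenvariable subject to the usual restrictions) is the following:

$$\frac{}{N0}\ N\text {I}_1\qquad \frac{Nt}{Nt'}\ N\text {I}_2\qquad \frac{Nt\qquad A\llbracket 0/x\rrbracket \qquad \begin{array}{c}[Ny]\ \ [A\llbracket y/x\rrbracket ]\\ \vdots \\ A\llbracket y'/x\rrbracket \end{array}}{A\llbracket t/x\rrbracket }\ N\text {E}$$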
Footnote 16

Whenever t is a numeral (i.e. a term of the form \(0^{\prime \ldots \prime }\)) a reduction is readily defined, and for any t an expansion is defined as well (as pointed out by one of the referees, by applying the elimination rule taking A to be Nx). The equations associated with the reduction and expansion induce a notion of identity of proofs and would thus offer the basis for defining a notion of sentence isomorphism for the language of arithmetic.

In the following we will however restrict ourselves to rules governing propositional connectives, thereby disregarding the exact nature of the non-logical vocabulary. The isomorphisms we will be discussing are therefore schematic, i.e. they are warranted by the logical form of the sentences in question. To stress this, in the remaining part of the paper we will speak of formulas rather than of sentences, thereby highlighting that the considerations we are going to develop are independent of the choice of the language. The precise application of the ideas put forward in the paper to particular languages, such as that of arithmetic, is of the greatest interest but requires further investigation.

3 The calculus of higher-level rules

The calculus of higher-level rules introduced by Schroeder-Heister (1981, 1984) is a proof-theoretic framework which generalizes the natural deduction systems of Gentzen (1935) and Prawitz (1965) in two respects: (i) not only formulas but also rules can be assumed in the course of derivations; (ii) when applying a rule in a derivation, not only formulas but also (previously assumed) rules can be discharged.

This yields a hierarchy of different rule-levels at the base of which we have the limit case of formulas (rules of level 0), and production rules (rules of level 1, such as \(\wedge \)I, \(\wedge \)E\(_1\), \(\wedge \)E\(_2\) and \(\supset \)E above).

A typical example of a rule of level 2 is \(\supset \)I, which allows the discharge of formulas (i.e. of rules of level 0). Informally, the content of this rule is that in order to establish \(A\supset B\) one need not be able to infer B outright, but it is enough to be able to infer B from A. As this possibility is exactly what is expressed by the rule allowing the inference of B from A, we adopt the terminological convention that the premise of \(\supset \)I is not B, but rather the rule allowing one to pass from A to B (for the role played by B in \(\supset \)I we will use the term “immediate premise”).

In general, the premises of rules of level \(l\ge 1\) will be rules of level \(l-1\) and for each level \(l\ge 2\), the application of a rule of level l in a derivation will allow the discharge of rules of level \(l-2\).

We consider a propositional language \({\mathcal {L}}\) whose formulas are built from denumerably many atomic formulas \(\alpha _1, \alpha _2, \ldots \) using denumerably many connectives of different arities, among which we have the standard intuitionistic ones (\(\wedge ,\supset , \vee ,\ldots \)) as well as less standard ones (such as \(\mathbin {\mathtt {tonk}}\) above and others to be introduced below). We will use \(\dagger \) (possibly with primes) as a metavariable for connectives of arbitrary arity. Capital letters \(A, B, \ldots \) (possibly with subscripts) will be used as metavariables for formulas of \({\mathcal {L}}\), and will be referred to as schematic letters. We further assume the metalanguage to be an extension of the object language \({\mathcal {L}}\), and the metalinguistic expressions obtained from the application of the connectives of \({\mathcal {L}}\) (and of the metavariables for them) to schematic letters will be called schematic formulas.

As we did so far, we will use \(\mathscr {D}\) (possibly with subscripts and primes) as a metavariable for derivations. By a schematic derivation we will understand the result of replacing in derivations formulas with schematic formulas and portions of derivations with metavariables for derivations (for notational conventions governing schematic derivation see also footnote 1).

It is important to remark that concrete (as opposed to schematic) derivations in standard natural deduction systems depend on object-language formulas, and not on metalinguistic formula schemata. In the same way, as will be made clear in Definitions 4 and 5, derivations in the calculus of higher-level rules will not properly speaking depend on rules (understood as metalinguistic schemata) but rather on the object-language instances of these rules, which we will call concrete rules. (On the other hand, introduction and elimination rules will be taken, as we implicitly did so far, as metalinguistic schemata.)

Following Schroeder-Heister (1984) we adopt a tree-like notation for concrete rules (and thus, in the metalanguage, for rules as well). We will use \(R, R_1, \ldots \) as metavariables for concrete rules:

Definition 3

(Concrete rule, consequence, (immediate) premise)

  • Every formula A is a concrete rule of level 0.

  • If A is a formula and \(R_1, \ldots , R_n\) are concrete rules of maximum level l, then the tree obtained by placing \(R_1, \ldots , R_n\) above A (displayed below) is a concrete rule of level \(l+1\).
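In the tree-like notation adopted above, such a concrete rule is displayed as:

$$\frac{R_1\quad \cdots \quad R_n}{A}$$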

The root of the tree constituting a concrete rule R is called the consequence of R.

Let R be the concrete rule of level \(\ge 1\):

\(R_1,\ldots , R_n\) are called the premises of R. The consequences of \(R_1,\ldots , R_n\) are called the immediate premises of R.

The premises of each premise of a concrete rule R of level \(l+2\) (if any) are concrete rules of level l. As the following definition will make clear, these can be discharged by applications of R in a derivation. To make explicit which concrete rules can be discharged by the applications of a concrete rule R, we use a “bracketed” notation for concrete rules, so that a concrete rule R of the form indicated to the left below is written as on the right below (if R has n premises and the ith premise has in turn \(m_i\) premises, for all \(1\le i\le n\) and \(1\le j\le m_i\), we indicate the jth premise of the ith premise of R with \(R_{ij}\)):

We also use a linear notation for concrete rules, so that a concrete rule of level \(l+1\) is written \((R_1;\ldots ;R_n\Rightarrow A)\), where the outermost brackets will be often omitted.

The handling of discharge follows Troelstra and Schwichtenberg (1996) rather than Prawitz (1965), in that we treat assumptions as partitioned into classes. This is achieved by associating to each assumption a (not necessarily distinct) number. In a given derivation, the undischarged assumptions of the same form R which are marked with the same number u belong to the same class \(\mathop {[R]}\limits ^{u}\). Assumptions belonging to the same class are discharged together. For simplicity, and without loss of generality, we assume that the labels of distinct assumption classes of the same formula in a derivation are always distinct. For readability, and as is usually done, we will omit (except in definitions) the numbers above the undischarged assumptions of derivations.

We use \(\varGamma \) and \(\Delta \) (possibly with subscripts and primes) as metavariables for multi-sets, \(\cup \) and \(\setminus \) for multi-set union and difference. With \(\varGamma ,\Delta \) and \(\varGamma , R\) we abbreviate respectively \(\varGamma \cup \Delta \) and \(\varGamma \cup \{R\}\). For convenience, within definitions we will use \(\mathop {[R]}\limits ^{u}\) to indicate the multi-set containing as many copies of R as there are occurrences of R belonging to the assumption class labelled u.

Definition 4

(Structural derivations)

  • For any formula A of \({\mathcal {L}}\) and natural number u, \({\mathop {A}\limits ^{u}}\) is a derivation of A depending on the multi-set \(\{A\}\).

  • If R is the rule

    u is a natural number and, for all \(1 \le i\le n\), \(\mathscr {D}_i\) is a derivation of conclusion \(B_i\) depending on the multi-set of assumptions \(\varGamma _i\), then the following:

    is a derivation of conclusion A depending on

To avoid misunderstanding we repeat that, for readability and as is usually done, the numeric labels above the undischarged assumptions will always be omitted (except in definitions); that is, in all examples below only discharged assumptions will be explicitly numbered.

Example 1

The following structural derivation

is a derivation of conclusion \(\alpha _6\) depending on the multi-set of concrete rules (of level 2 and 3 respectively) \(\{(\alpha _1; ((\alpha _2 \wedge \alpha _3)\Rightarrow \alpha _4)\Rightarrow \alpha _5)\ , \ ((\alpha _1; ((\alpha _2 \wedge \alpha _3)\Rightarrow \alpha _4)\Rightarrow \alpha _5)\Rightarrow \alpha _6)\}\). In tree-like (bracketed) notation, the two concrete rules look respectively as follows:

As anticipated, inference rules governing connectives cannot be identified with concrete rules. The reason is twofold and is analogous to the reason why, in a natural deduction formulation of some theory, axiom schemata are fundamentally different from assumptions. Whereas an assumption is always the assumption of a specific sentence, an axiom schema is used in a derivation by instantiating it on some (object-language) sentence which, by analogy with concrete rules, one may call a “concrete” axiom. Moreover, although both concrete axioms and assumptions represent the starting point of derivations, the conclusions of the derivations depend only on the latter and not on the former. Analogously, the rules governing logical connectives are schemata whose instances are concrete rules. For example, the two (distinct) concrete rules:

are instances of the same rule (schema), namely:

The rule \(\wedge \)I can thus be identified with the (metalinguistic) schema \(A,B\Rightarrow A\wedge B\), all of whose instances are (different) concrete rules.Footnote 17 Moreover, in contrast to arbitrary concrete rules, concrete rules which are instances of \(\wedge \)I are to be considered as primitive, and thus, as for concrete axioms, the conclusion of a derivation should not depend on them.

These remarks can be made precise by defining the notion of derivation in a calculus \({\mathbb {K}}\), where a calculus is a list of rule schemata whose instances are to be taken as primitive in the construction of derivations. Since the rules of \({\mathbb {K}}\) are metalinguistic schemata, the following definition is to be understood as given in the meta-metalanguage of \({\mathcal {L}}\), which we assume to be an extension of the metalanguage, and in which we use capital bold letters \(\varvec{A, B, C, \ldots }\) (resp. \({\varvec{R}}_{{\varvec{1}}}, {\varvec{R}}_{{\varvec{2}}}, \varvec{\ldots }\), and \(\varvec{\varGamma , \Delta }\)) as meta-metalinguistic variables for metalinguistic schematic formulas (resp. rules, and multi-sets of rules).Footnote 18

Definition 5

(\({\mathbb {K}}\) -derivation)

  • All structural derivations are \({\mathbb {K}}\)-derivations;

  • if the concrete rule

is an instance of a primitive rule \(\varvec{R}\) of \({\mathbb {K}}\) and if, for all \(1 \le i\le n\), \(\mathscr {D}_i\) is a \({\mathbb {K}}\)-derivation of conclusion \( B_i\), depending on \(\varGamma _i\), then the following:

is a \({\mathbb {K}}\)-derivation of conclusion A depending on

According to the definition of structural (respectively \({\mathbb {K}}\)-)derivation, the conclusion of a structural (resp. \({\mathbb {K}}\)-)derivation is always a formula (i.e. a concrete rule of level 0). We can however introduce, as a metalinguistic abbreviation, the following notions:

Definition 6

(Derivation and derivability of rules)

If \(R = (R_1; \ldots ; R_n \Rightarrow A)\) then a structural (respectively \({\mathbb {K}}\)-)derivation of A depending on \(\varGamma , R_1, \ldots , R_n\) will be said to be a structural (resp. \({\mathbb {K}}\)-)derivation of R depending on \(\varGamma \). Such structural (resp. \({\mathbb {K}}\)-)derivations will be sometimes written in either of the following two ways:

We say that a concrete rule R is structurally (resp. \({\mathbb {K}}\)-)derivable from \(\varGamma \) (notation \(\varGamma \vdash _{({\mathbb {K}})} R\)) iff there is a structural (resp. \({\mathbb {K}}\)-)derivation of R depending on \(\Delta \subseteq \varGamma \). We write \(\vdash R\) for \(\emptyset \vdash R\).

We say that a rule \(\varvec{R}\) is structurally (resp. \({\mathbb {K}}\)-)derivable from \({\varvec{\varGamma }}\) (notation \({\varvec{\varGamma }}\vdash _{({\mathbb {K}})} \varvec{R}\)) iff for all instances R of \(\varvec{R}\), R is structurally (resp. \({\mathbb {K}}\)-)derivable from instances of the rules in \({\varvec{\varGamma }}\).

We conclude this section by stating three results (Schroeder-Heister 1981, 1984) whose proofs (given in the Appendix at the end of the paper) will be needed to present the inversion principle in the next section.

Lemma 1

(Reflexivity) For all R, \(R\vdash R\)

Corollary 1

If R is an instance of a primitive rule of \({\mathbb {K}}\), then \(\vdash _{{\mathbb {K}}} R\).

Lemma 2

(Transitivity) If \(\varGamma \vdash R\) and \(R, \Delta \vdash A\) then \(\varGamma , \Delta \vdash A\)

4 PSH-inversion and harmony

Assuming \(\dagger \) to be an n-ary connective, we say that:

Definition 7

(Introduction and elimination rules) A rule of the form

$$\begin{aligned} \varvec{R}_{1};\ \ldots ;\ \varvec{R}_{m}\ \Rightarrow \ \dagger (A_1,\ldots ,A_n) \end{aligned}$$

is an introduction rule for \(\dagger \) provided that all schematic letters occurring in the rules \(\varvec{R}_{j}\) (\(1 \le j\le m\)) are among the \(A_i\)s (\(1\le i\le n\)).

An elimination rule for \(\dagger \) is any rule of the form

$$\begin{aligned} \dagger (A_1,\ldots ,A_n);\ \varvec{R}_{1};\ \ldots ;\ \varvec{R}_{p}\ \Rightarrow \ C \end{aligned}$$

(This time no restriction is imposed on the schematic letters occurring in the rule.) The first premise \(\dagger (A_1,\ldots ,A_n)\) of an elimination rule is called its major premise.Footnote 19

Particular collections of introduction (respectively elimination) rules for some connective \(\dagger \) will be indicated with \(\dagger \varvec{I}\) (resp. \(\dagger \varvec{E}\)), possibly with primes.

As anticipated, by an inversion principle we understand a recipe to associate to any given collection of introduction rules a specific collection of elimination rules which is in harmony with it. In the context of the calculus of higher-level rules, PSH-inversion can be formulated as follows:

Definition 8

(PSH -inversion) Given a collection of introduction rules \(\dagger \varvec{I}\) and a collection of elimination rules \(\dagger \varvec{E}\) for \(\dagger \), we will say that \(\dagger \varvec{I}\) and \(\dagger \varvec{E}\) obey PSH-inversion if and only if \(\dagger \varvec{E}\) consists only of the following rule:

$$\begin{aligned} \dagger (A_1,\ldots ,A_n);\ (\varvec{R}_{11};\ldots ;\varvec{R}_{1m_1}\Rightarrow C);\ \ldots ;\ (\varvec{R}_{r1};\ldots ;\varvec{R}_{rm_r}\Rightarrow C)\ \Rightarrow \ C \end{aligned}$$

in which C is a schematic letter different from all \(A_i\) (for all \(1\le i\le n\)), and each of the minor premises corresponds to one of the introduction rules of \(\dagger \), in the sense that the jth premise of the kth introduction rule \(\varvec{R}_{kj}\) (with \(1\le k\le r\), where r is the number of introduction rules; and \(1\le j\le m_k\) where \(m_k\) is the number of premises of the kth introduction rule) is identical to the jth premise of the kth minor premise of the elimination rule.

Given a collection of introduction rules \(\dagger \varvec{I}\), we indicate the collection \(\dagger \varvec{E}\) associated to it by PSH-inversion with \(\text {\texttt {PSH}}(\dagger \varvec{I})\).

Example 2

If \(\mathord {\vee }\varvec{I}\) consists of the two rules on the left hand side below, then \(\text {\texttt {PSH}}(\vee \varvec{I})\) consists of the rule on the right hand side below:
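In the linear notation for rules, and as Definition 8 dictates (introduction rules to the left, \(\text {\texttt {PSH}}(\mathord {\vee }\varvec{I})\) to the right):

$$\begin{aligned} A\Rightarrow A\vee B \qquad B\Rightarrow A\vee B \qquad \qquad \qquad A\vee B;\ (A\Rightarrow C);\ (B\Rightarrow C)\ \Rightarrow \ C \end{aligned}$$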

Example 3

If \(\wedge \varvec{I}\) consists of the rule on the left hand side below, then \(\text {\texttt {PSH}}(\wedge \varvec{I})\) consists of the rule on the right hand side below:
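Again in linear notation (the elimination rule being the one referred to as \(\wedge \)E\(_\text {\texttt {PSH}}\) in Sect. 5):

$$\begin{aligned} A;\ B\ \Rightarrow \ A\wedge B \qquad \qquad \qquad A\wedge B;\ (A; B\Rightarrow C)\ \Rightarrow \ C \end{aligned}$$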

Example 4

If \(\mathord {\supset }\varvec{I}\) consists of the rule on the left hand side below, then \(\text {\texttt {PSH}}(\mathord {\supset }\varvec{I})\) consists of the rule on the right hand side below:
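In the same notation, and again as dictated by Definition 8:

$$\begin{aligned} (A\Rightarrow B)\ \Rightarrow \ A\supset B \qquad \qquad \qquad A\supset B;\ ((A\Rightarrow B)\Rightarrow C)\ \Rightarrow \ C \end{aligned}$$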

Let \({\mathbb {K}}\) be a calculus consisting of primitive rules all of which have either the form of an introduction or of an elimination rule, and in which moreover the collections \(\dagger \varvec{I}\) and \( \dagger \varvec{E}\) of all introduction and elimination rules of \(\dagger \) which are primitive in \({\mathbb {K}}\) obey PSH-inversion. We will now show that the account of harmony sketched in Sect. 1 naturally generalizes to the rules of \(\dagger \) in \({\mathbb {K}}\).

The “no more” aspect of the informal statement of harmony can be cashed out by specifying the following \(\beta \)-equations for \({\mathbb {K}}\)-derivations. Their left-to-right orientations are reductions which make it possible to level the complexity peaks constituted by an application of one of the introduction rules for \(\dagger \) belonging to \({\mathbb {K}}\) followed by an application of the elimination rule of \(\dagger \) belonging to \({\mathbb {K}}\) (Schroeder-Heister 1981):

where the reduced derivation is defined as in the proof of Lemma 2 in the Appendix at the end of the paper.

To cash out the “no less” aspect of harmony, we now turn to the definition of expansions. The earliest formulation of expansions for standard intuitionistic connectives is that of Prawitz (1971, § II.3.3.3). The elimination rule constituting \(\dagger \varvec{E}\) is shaped after the pattern of the elimination rule for disjunction \(\vee \)E. One could therefore obtain a pattern for expansions for any such collection of elimination rules by generalizing Prawitz’s pattern:

in the following manner (where we abbreviate \(\dagger (A_1,\ldots ,A_n)\) with \(\dagger \), and where the derivation of the kth minor premise in the expanded derivation (with \(1\le k\le r\)) is defined as in the proof of Corollary 1):

There is however a well-known argument, implicit in some remarks of Dummett (1991), against the thesis that the expansion for disjunction fully grasps the “no less” aspect of harmony for this connective.

Dummett considers a restriction (motivated by considerations about quantum logic) on the elimination rule \(\vee \)E. The restriction consists in allowing the rule to be applied only if its immediate premises C depend on no other assumptions than those of the form A and B that get discharged by the application of \(\vee \)E. The restricted elimination rule is weaker than the unrestricted one, in that it limits what can be inferred from a logically complex formula having disjunction as its main operator. Under the assumption that \(\vee \varvec{E}\) (consisting only of the unrestricted elimination rule) is in harmony with \(\vee \varvec{I}\) (consisting of \(\vee \)I\(_1\) and \(\vee \)I\(_2\)), we expect \(\vee \varvec{E}'\) (consisting only of the restricted rule) not to be in harmony with \(\vee \varvec{I}\). In particular, since using the restricted rule one can derive less than what one can derive using the unrestricted one, we expect it not to satisfy the “no less” aspect of harmony.

However, if the “no less” aspect of harmony is cashed out in terms of expansions, the expansion pattern for \(\vee \varvec{E}\) should not work for \(\vee \varvec{E}'\). Unfortunately, in Prawitz’s pattern, the application of the elimination rule for disjunction is perfectly compatible with the quantum restriction. Therefore it looks as if the availability of an expansion is too weak a condition for the rules of a connective to satisfy the “no less” aspect of harmony.

One may think that restrictions of this kind weaken the rules in too subtle a way for the availability of expansions to work as a criterion capable of ruling out the resulting disharmony. On the contrary, it is not hard to find a connection between this kind of disharmony and expansions. Let us consider a restriction on the introduction rule for implication \(\supset \)I analogous to the one imposed on the elimination rule of quantum disjunction (i.e. we allow the rule to be applied only if the result of applying the restricted introduction rule is a derivation in which all assumptions are discharged). This time the restriction strengthens the rule, since it sets higher standards for introducing \(A\supset B\). Assuming that the collection of elimination rules consisting of modus ponens (i.e. \(\supset \)E) is in harmony with the unrestricted introduction rule, we expect it to fail to be in harmony with the restricted introduction rule. In particular, we expect it to fail to meet the “no less” aspect of harmony. That this is in fact the case is shown by the impossibility of shaping the expansion for the restricted rule after the model of that for the unrestricted rule (see Sect. 1 above), as the application of the introduction rule would violate the restriction.

We take this as a reason to consider an alternative pattern for the expansion of disjunction, one capable of detecting the disharmony of the restricted \(\vee \)E rule. The idea behind the alternative pattern is that an expansion operates on a formula which is not, in general, the conclusion of a derivation, but one that occurs “within” a derivation:

Observe that all instances of Prawitz’s expansion pattern are instances of the alternative pattern in which the derivation \(\mathscr {D}'\) just consists of the assumption of \(A\vee B\). Moreover, if in the derivation \(\mathscr {D}'\) the conclusion C depends on more assumptions than just those of the form \(A\vee B\) indicated in the schema, the application of \(\vee \)E in the expanded derivation violates the quantum restriction, and therefore the pattern does not work in general for quantum disjunction.

The proposed pattern readily generalizes to arbitrary connectives whose rules obey PSH-inversion. That is, let \({\mathbb {K}}\) be a calculus consisting of rules obeying PSH-inversion, the “no less” aspect of harmony can be expressed by the following \(\eta \)-equation for \({\mathbb {K}}\)-derivations (we abbreviate again \(\dagger (A_1,\ldots ,A_n)\) with \(\dagger \)):

We have thereby shown that the rules of \(\dagger \) are in harmony in any calculus in which the primitive rules involving \(\dagger \) obey PSH-inversion.

We conclude this section with a few remarks.

First, using PSH-inversion we can find a collection of elimination rules which is in harmony with any collection of introduction rules, provided no rule in the collection involves any restriction of the kind we discussed in connection with expansions. In fact, it is not clear what the collection of elimination rules matching the collection of introduction rules consisting only of the restricted \(\supset \)I rule discussed above would be.

Second, Schroeder-Heister (1981) established a normalization theorem for a particular calculus \({\mathbb {K}}\) comprising only rules obeying PSH-inversion, and such that the occurrence of \(\dagger \) in the consequence (respectively major premise) of the rules is the only occurrence of a connective in the introduction (resp. elimination) rules. The proof of the result uses the reductions stemming from \(\beta \) equations as well as permutations, which generalize the one for disjunction already considered by Prawitz (1965):

in which \(\langle \mathscr {D}\rangle \) indicates the possible presence of minor premises in \(\dagger \)E.

Although the analysis of harmony we proposed is based on the possibility of performing local transformations on derivations, and not on the possibility of globally transforming any derivation into normal form, normalization is much more tightly tied to the inversion principle than has recently been argued by e.g. Read (2010, p. 575) and Schroeder-Heister (2014a, p. 1207).

In particular, the adoption of the expansion pattern that we proposed makes it possible to simulate the permutations, by first expanding and then reducing the derivation on the left hand side of the permutation (on the connection between expansions and permutations see also Tranchini et al. under review). Thus, normalization is a consequence of inversion whenever introduction and elimination rule schemata are allowed to contain at most one occurrence of one connective. Conversely, when the rules of a calculus obey PSH-inversion, failure of normalization is essentially tied to the presence of more than one occurrence of a connective in the introduction and elimination rules (in particular to the presence of negative occurrences of the connective, see Dyckhoff 2016, §2), this being the feature that enables the formulation of paradoxical connectives (see also Tranchini 2015, 2016).

5 Beyond inversion

A fact which has only recently been observed (Olkhovikov and Schroeder-Heister 2014; Schroeder-Heister 2014a, b) is that even a fully precise account of a universally applicable inversion principle would not constitute an exhaustive characterization of harmony.

To see why, it suffices to consider the rules of the connective \(\wedge \) so far discussed. The collection of elimination rules \(\wedge \varvec{E}\) consisting of \(\wedge \)E\(_1\) and \(\wedge \)E\(_2\) is not the one obtained by PSH-inversion from the collection of introduction rules \(\wedge \varvec{I}\) consisting only of \(\wedge \)I. However, neither Prawitz nor Schroeder-Heister are willing to deny that \(\wedge \varvec{E}\) is in harmony with \(\wedge \varvec{I}\).

Thus, the collection of elimination rules generated by inversion from a given collection of introduction rules is not, in general, the only one which is in harmony with it.

This situation squares with the plurality of inversion principles available in the literature. In recent work, Read (2010, 2015) has suggested an alternative inversion principle (i.e. an alternative way of generating a collection of elimination rules from a given collection of introduction rules), which we will refer to as R-inversion.

Definition 9

(R -inversion) A collection \(\dagger \varvec{I}\) consisting of r introduction rules for \(\dagger \) and a collection of elimination rules \(\dagger \varvec{E}\) obey R-inversion if and only if \(\dagger \varvec{E}\) consists of \(\prod _{k=1}^r m_k\) rules (where \(m_k\), for \(1\le k\le r\), is the number of premises of the kth introduction rule), each of which has the following form:

$$\begin{aligned} \dagger (A_1,\ldots ,A_n);\ (\varvec{R}_{1f_h(1)};\ \ldots ;\ \varvec{R}_{rf_h(r)}\Rightarrow C)\ \Rightarrow \ C \end{aligned}$$

where \(f_h\) (\(1\le h \le \prod _{k=1}^r m_k)\) is a choice function which selects one of the premises of each of the r introduction rules of \(\dagger \) (i.e. for each \(1\le k\le r\), \(1\le f_h(k)\le m_k\)).

Given a collection of introduction rules \(\dagger \varvec{I}\), we indicate the collection \(\dagger \varvec{E}\) associated to it by R-inversion with \(\text {\texttt {R}}(\dagger \varvec{I})\).

Example 5

If \(\wedge \varvec{I}\) is the collection of introduction rules consisting only of \(\wedge \)I, \(\text {\texttt {R}}(\wedge \varvec{I})\) consists of the following two rules:
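In linear notation, and as Definition 9 dictates (these being the rules referred to as \(\wedge \)E\(_{\text {\texttt {R}}1}\) and \(\wedge \)E\(_{\text {\texttt {R}}2}\) below):

$$\begin{aligned} A\wedge B;\ (A\Rightarrow C)\ \Rightarrow \ C \qquad \qquad A\wedge B;\ (B\Rightarrow C)\ \Rightarrow \ C \end{aligned}$$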

Like Prawitz and Schroeder-Heister, Read does not deny the possibility that different collections of elimination rules could be in harmony with the same collection of introduction rules, and actually he himself discusses some examples (typically that of conjunction).

What all the mentioned authors explicitly observe is that the alternative collections of elimination rules are interderivable with each other. For instance, both \(\wedge \)E\(_1\) and \(\wedge \)E\(_2\) (resp. \(\wedge \)E\(_{\text {\texttt {R}}1}\) and \(\wedge \)E\(_{\text {\texttt {R}}2}\)) are structurally derivable from \(\wedge \)E\(_\text {\texttt {PSH}}\), and conversely the latter rule is structurally derivable from the former ones. Similarly, both \(\wedge \)E\(_{\text {\texttt {R}}1}\) and \(\wedge \)E\(_{\text {\texttt {R}}2}\) are structurally derivable from \(\wedge \)E\(_1\) and \(\wedge \)E\(_2\), and vice versa.

Although not stated in an explicit manner, this seems to be the reason why the rules of \(\wedge \) presented in Sect. 1 are considered as much in harmony as those obtained by PSH- (or, for that matter, R-) inversion.

More generally, we may introduce the following notion:

Definition 10

(Interderivability of collections of rules) Two collections of elimination rules \(\dagger \varvec{E}\) and \(\dagger \varvec{E}'\) are interderivable, notation \(\dagger \varvec{E}\dashv \vdash \dagger \varvec{E}'\), if and only if each rule in \(\dagger \varvec{E}\) is structurally derivable from the rules in \(\dagger \varvec{E}'\) and vice versa.

Using this notion, the conception of harmony implicitly defended by all authors considered can be made explicit as follows:

Definition 11

(PSH -harmony via interderivability) Given two collections \(\dagger \varvec{I}\) and \(\dagger \varvec{E}\) of introduction and elimination rules for a connective \(\dagger \), we say that \(\dagger \varvec{I}\) and \(\dagger \varvec{E}\) are in PSH-harmony via interderivability if and only if

$$\begin{aligned} \dagger \varvec{E}\dashv \vdash \text {\texttt {PSH}}(\dagger \varvec{I}) \end{aligned}$$

Definition 12

(R -harmony via interderivability) Given two collections \(\dagger \varvec{I}\) and \(\dagger \varvec{E}\) of introduction and elimination rules for a connective \(\dagger \), we say that \(\dagger \varvec{I}\) and \(\dagger \varvec{E}\) are in R-harmony via interderivability if and only if

$$\begin{aligned} \dagger \varvec{E}\dashv \vdash \text {\texttt {R}}(\dagger \varvec{I}) \end{aligned}$$

In fact, for any collection of introduction rules \(\dagger \varvec{I}\), \(\text {\texttt {PSH}}(\dagger \varvec{I})\dashv \vdash \text {\texttt {R}}(\dagger \varvec{I})\).Footnote 20 Thus the same collections of rules qualify as harmonious according to the two definitions.

The invariance of harmony with respect to the choice of the inversion principle has been taken by Schroeder-Heister as a reason for defining harmony without making reference to any inversion principle at all. In fact, Schroeder-Heister (2014a, b) proposed two different accounts of harmony, on the basis of which he then demonstrated that the rules obeying PSH-inversion satisfy the proposed condition for harmony. Both notions of harmony are equivalent to each other, and moreover they are equivalent to those resulting from Definitions 11 and 12.

We will say that the rules satisfying these notions of harmony are in harmony by interderivability.

We fully agree with Schroeder-Heister on the need for a notion of harmony going beyond the specification of an inversion principle. However, it is doubtful whether rules which are in harmony by interderivability can, in general, be equipped with plausible reductions and expansions. In other words, it is doubtful whether the account of harmony obtained by coupling inversion with interderivability can still qualify as intensional.

In the next section we will present an example justifying this claim.

6 Harmony by interderivability is not intensional

In this section we will consider yet a further inversion principle, which however can be applied only to the very restricted case of a collection of introduction rules consisting of just one rule, and which for this reason will be referred to as the toy inversion principle (henceforth T-inversion). In spite of its limited range of applicability, it will suffice for the goals of the present section.

Definition 13

(T -inversion) A collection \(\dagger \varvec{I}\) consisting of just one introduction rule for \(\dagger \) and a collection of elimination rules \(\dagger \varvec{E}\) for \(\dagger \) obey T-inversion if and only if \(\dagger \varvec{E}\) consists of m rules (where m is the number of premises of the rule in \(\dagger \varvec{I}\)), each of which has the following form:

$$\begin{aligned} \dagger (A_1,\ldots ,A_n);\ \varvec{R}_{j1};\ \ldots ;\ \varvec{R}_{jp_j}\ \Rightarrow \ C_j \qquad (1\le j\le m) \end{aligned}$$

in which the consequence \(C_j\) of the jth elimination rule is identical to the consequence of the jth premise of the introduction rule in \(\dagger \varvec{I}\), and the kth minor premise \(\varvec{R}_{jk}\) of the jth rule (if any) is equal to the kth premise (\(1\le k \le p_j\)) of the jth premise of the (only) introduction rule in \(\dagger \varvec{I}\).

Given an introduction rule \(\dagger \)I we indicate with \(\text {\texttt {T}}(\dagger \varvec{I})\) the collection \(\dagger \varvec{E}\) associated by T-inversion to the collection of introduction rules \(\dagger \varvec{I}\) consisting only of \(\dagger \)I.

Two examples of collections of rules obeying T-inversion are those for \(\wedge \) and \(\supset \) discussed in Sect. 1.

Connectives whose rules obey T-inversion satisfy the informal statement of harmony, as is shown by the possibility of formulating the \(\beta \)- and \(\eta \)-equations displayed in Table 1 for \({\mathbb {K}}\)-derivations in a calculus \({\mathbb {K}}\) consisting only of introduction and elimination rules and in which the collections of introduction and elimination rules for \(\dagger \) obey T-inversion (in the equations we abbreviate \(\dagger (A_1,\ldots ,A_n)\) with \(\dagger \)).

We will now present two collections of rules obeying T-inversion for two connectives \(\sharp \) and \(\flat \) such that, in the calculus consisting of both collections of rules as primitive, \(A\sharp B \dashv \vdash A\flat B\) and \(A\sharp B \not \simeq A\flat B\). We will then consider the collection of rules for a third connective, \(\natural \), having the same collection of introduction rules as \(\sharp \) and the same collection of elimination rules as \(\flat \). We will show that the rules of \(\natural \) are in harmony by interderivability. However, although it is possible to define reductions and expansions for \(\natural \), the most obvious candidates for these equations trivialize the notion of isomorphism.

Let’s first consider the following collections of rules \(\mathord \sharp \varvec{I}\) and \(\mathord \sharp \varvec{E}\) for \( \sharp \):
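Spelled out in the linear notation for rules, and with the labels used in what follows (the ordering of the premises is the one suggested by the isomorphism with \(((A\supset B)\wedge (B\supset A))\wedge A\) noted below), these are:

$$\begin{aligned} \sharp \text {I}:\quad (A\Rightarrow B);\ (B\Rightarrow A);\ A\ \Rightarrow \ A\sharp B \end{aligned}$$

$$\begin{aligned} \sharp \text {E}_1:\ A\sharp B;\ A\Rightarrow B \qquad \quad \sharp \text {E}_2:\ A\sharp B;\ B\Rightarrow A \qquad \quad \sharp \text {E}_3:\ A\sharp B\Rightarrow A \end{aligned}$$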

Clearly, \(\mathord \sharp \varvec{E}=\text {\texttt {T}} (\mathord \sharp \varvec{I})\), and the harmonious nature of the rules is displayed by the \(\beta \)- and \(\eta \)-equations of Table 2, which are obtained by suitably instantiating the general patterns of Table 1. Using them, it is easy to show that \(A\sharp B\) is isomorphic to \(((A\supset B)\wedge (B\supset A))\wedge A\) in the calculus consisting of \(\mathord \sharp \varvec{I}\), \(\mathord \sharp \varvec{E}\) and of the rules for \(\wedge \) and \(\supset \) of Sect. 1, which obey T-inversion.

Table 1 \(\beta \)- and \(\eta \)-equations for connectives obeying T-inversion
Table 2 \(\beta \)- and \(\eta \)-equations for \(\sharp \)

Consider now the collections of rules \(\mathord \flat \varvec{I}\) and \(\mathord \flat \varvec{E}\) for the connective \(\flat \). These two collections of rules differ from those of \(\sharp \) in having B instead of A as the third premise of the only introduction rule, and, correspondingly, in having B instead of A as the consequence of the third elimination rule:
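That is, in the same notation:

$$\begin{aligned} \flat \text {I}:\quad (A\Rightarrow B);\ (B\Rightarrow A);\ B\ \Rightarrow \ A\flat B \end{aligned}$$

$$\begin{aligned} \flat \text {E}_1:\ A\flat B;\ A\Rightarrow B \qquad \quad \flat \text {E}_2:\ A\flat B;\ B\Rightarrow A \qquad \quad \flat \text {E}_3:\ A\flat B\Rightarrow B \end{aligned}$$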

These two collections of rules also obey T-inversion, and thus \(\beta \)- and \(\eta \)-equations that follow the same pattern as those of \(\sharp \) are available. Using them, it is easy to show that \(A\flat B\) is isomorphic to \(((A\supset B)\wedge (B\supset A))\wedge B\) in the calculus consisting of \(\mathord \flat \varvec{I}\), \(\mathord \flat \varvec{E}\) and of the rules for \(\wedge \) and \(\supset \) of Sect. 1.

It is moreover easy to see that in the calculus consisting of \(\mathord \sharp \varvec{I},\sharp \varvec{E},\flat \varvec{I},\flat \varvec{E}\) we have \(A\sharp B\dashv \vdash A\flat B\) and \(A\sharp B\not \simeq A\flat B\). To establish the latter fact, interpret proofs of \(A\sharp B\) and of \(A\flat B\) as triples whose first two members are functions from proofs of A to proofs of B and vice versa, and whose third members are proofs of A and of B respectively. Whenever A and B are interpreted on sets of proofs of different cardinalities, so are \(A\sharp B\) and \(A\flat B\).

To show the limit of harmony by interderivability we now consider a collection of rules for a third connective, we call it \(\natural \), which is obtained by “crossing over” the collections of rules of \(\sharp \) and \(\flat \): The collection \(\mathord \natural \varvec{I}\) consists of the introduction rule obtained by replacing \(\sharp \) with \(\natural \) in \(\sharp \)I; and the collection \(\mathord \natural \varvec{E}\) consists of the elimination rules obtained by replacing \(\flat \) with \(\natural \) in \(\flat \)E\(_1\), \(\flat \)E\(_2\) and \(\flat \)E\(_3\).

Clearly, \(\mathord \natural \varvec{I}\) and \(\mathord \natural \varvec{E}\) do not obey T-inversion, due to the mismatch between the third premise A of the introduction rule and the consequence B of the third elimination rule. We list all of \(\mathord \natural \varvec{I}\), \(\mathord \natural \varvec{E}\), and \(\text {\texttt {T}}(\mathord \natural \varvec{I})\):
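In linear notation, and with the labels used in the proof of Lemma 3 below:

$$\begin{aligned} \mathord \natural \varvec{I}:\quad \natural \text {I}:\ (A\Rightarrow B);\ (B\Rightarrow A);\ A\ \Rightarrow \ A\natural B \end{aligned}$$

$$\begin{aligned} \mathord \natural \varvec{E}:\quad \natural \text {E}_1:\ A\natural B;\ A\Rightarrow B \qquad \natural \text {E}_2:\ A\natural B;\ B\Rightarrow A \qquad \natural \text {E}_3:\ A\natural B\Rightarrow B \end{aligned}$$

$$\begin{aligned} \text {\texttt {T}}(\mathord \natural \varvec{I}):\quad \natural \text {E}_1 \qquad \natural \text {E}_2 \qquad \natural \text {E}_3^*:\ A\natural B\Rightarrow A \end{aligned}$$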

Although \(\mathord \natural \varvec{E} \ne \text {\texttt {T}}(\mathord \natural \varvec{I})\), the following holds:

Lemma 3 \(\mathord \natural \varvec{E}\dashv \vdash \text {\texttt {T}}(\mathord \natural \varvec{I})\)

Proof

Since both collections of rules share \(\natural \)E\(_1\) and \(\natural \)E\(_2\), to show their interderivability it is enough to show that any instance of \(\natural \)E\(_3\) is structurally derivable from some instances of \(\natural \)E\(_1\), \(\natural \)E\(_2\) and \(\natural \)E\(_3^*\) and that any instance of \(\natural \)E\(_3^*\) is structurally derivable from some instances of \(\natural \)E\(_1\), \(\natural \)E\(_2\) and \(\natural \)E\(_3\):
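One way of obtaining the required derivations is the following (a sketch, with each application labelled by the rule it instantiates):

$$\dfrac{A\natural B\qquad \dfrac{A\natural B}{A}\ \natural \text {E}_3^*}{B}\ \natural \text {E}_1\qquad \qquad \dfrac{A\natural B\qquad \dfrac{A\natural B}{B}\ \natural \text {E}_3}{A}\ \natural \text {E}_2$$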

\(\square \)

Thus the two collections of rules \(\mathord \natural \varvec{I}\) and \(\mathord \natural \varvec{E}\) do qualify as in harmony by interderivability (in spite of the fact that they do not obey T-inversion).

The question that we want to address now is the following: Can we define appropriate \(\beta \)- and \(\eta \)-equations for \({\mathbb {K}}\)-derivations in the calculus \({\mathbb {K}}\) consisting of \(\mathord \natural \varvec{I}\) and \(\mathord \natural \varvec{E}\)?

Whereas the \(\beta \)-equations involving complexity peaks generated by \(\natural \)I and \(\natural \)E\(_1\) and \(\natural \)E\(_2\) follow the pattern of those of \(\sharp \) and \(\flat \), one may doubt the possibility of finding a reduction for the peak generated by \(\natural \)I and \(\natural \)E\(_3\). A moment's reflection, however, dispels the doubt, since one can come up with the following equation:

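A sketch of the obvious candidate (with \(\mathscr {D}_1\), \(\mathscr {D}_2\) and \(\mathscr {D}_3\) the derivations of the three premises of \(\natural \)I; the right-hand side is obtained by plugging \(\mathscr {D}_3\) on top of the assumptions discharged in \(\mathscr {D}_1\)):

$$\dfrac{\dfrac{\begin{array}{c}[A]\\ \mathscr {D}_1\\ B\end{array}\qquad \begin{array}{c}[B]\\ \mathscr {D}_2\\ A\end{array}\qquad \begin{array}{c}\mathscr {D}_3\\ A\end{array}}{A\natural B}\ \natural \text {I}}{B}\ \natural \text {E}_3\quad =_\beta \quad \begin{array}{c}\mathscr {D}_3\\ A\\ \mathscr {D}_1\\ B\end{array}$$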
the left-to-right direction of which provides a reduction showing that, in spite of the mismatch between the third premise of \(\natural \)I and the consequence of \(\natural \)E\(_3\), this elimination rule allows one to derive no more than what is needed in order to infer its premise by the introduction rule.

Similarly, although the expansion pattern cannot simply be constituted by applications of the three elimination rules followed by an application of the introduction rule, the following \(\eta \)-equation shows that what one gets from \(A\natural B\) using the elimination rules is no less than what is needed to reintroduce \(A\natural B\) by means of its introduction rule:

In spite of the fact that these equations show that the rules for \(\natural \) satisfy the informal statement of harmony of Definition 1, they are inadmissible from the viewpoint of the intensional approach to inferentialism that we advocated.

Table 3 The expansion of an arbitrary derivation \(\mathscr {D}\) ending with \(\natural \)I

To see why, consider the derivation, depicted in Table 3, obtained by expanding a given \({\mathbb {K}}\)-derivation \(\mathscr {D}\) of \(A\natural B\) ending with an introduction rule. In such a derivation all occurrences of \(A\natural B\) (apart from the conclusion) constitute complexity peaks. By reducing them we do not obtain the derivation \(\mathscr {D}\) of which the derivation considered is an expansion, but instead the following:

By symmetry and transitivity of the equivalence relation induced by \(\beta \)- and \(\eta \)-equations we thus have the following equivalence:

This means that all instances of these two derivation schemata (obtained by replacing \(\mathscr {D}_1\), \(\mathscr {D}_2\) and \(\mathscr {D}_3\) with actual derivations) pairwise belong to the same equivalence classes induced by \(\beta \)- and \(\eta \)-equations. This is problematic since all derivations of the following form:

will belong to the same equivalence classes, as these are obtained by appending an application of \(\natural \)E\(_3\) to the conclusions of the previous ones.

Derivations of this form reduce by \(\beta \)-reduction as follows:

and in the limit case in which \(\mathscr {D}_1\) and \(\mathscr {D}_3\) simply consist of the assumption of some formula (in which case, \(A = B = C\) for some C) these schemata boil down to the following:

This means that in the presence of the equations for \(\natural \), for any formula C, any derivation from C to C is equated with the derivation consisting only of the assumption of C (i.e. the identity function on the set of proofs of C). But this means that in the presence of \(\natural \) any two interderivable formulas are also isomorphic, since the compositions of the proofs establishing that each is derivable from the other are equated to the identity functions.

In other words, the addition of \(\natural \) to any calculus \({\mathbb {K}}\), even one containing interderivable but not isomorphic formulas, has the result of making formula isomorphism collapse onto interderivability.

This shows that, on an intensional account of harmony, in order for a collection of elimination rules to qualify as in harmony with a certain collection of introduction rules, one should require more than just its interderivability with the collection of elimination rules generated by inversion from the given collection of introduction rules.

According to Dummett (1981, 1991), Belnap’s criterion of conservativity is appealing because it amounts to the requirement that the addition of an expression to a language \({\mathcal {L}}\) should not modify the meaning of the expressions of \({\mathcal {L}}\).

From the intensional standpoint we have been advocating, however, the requirement of not modifying the meaning of the expressions of the language to which a connective is added should not be understood as “conservativity over derivability”, but rather as “conservativity over proofs”. That is, the addition of a new expression to the language should not modify the preexisting relationships holding between proofs. The addition of the connective \(\natural \) to a given language has exactly this effect, as it forces the identification of any two derivations of a formula from itself. Thus its addition does modify the meaning of the expressions of the languages to which it is added. As a result of its addition, previously non-synonymous expressions may “become”, unjustifiably, synonymous. The rules for \(\natural \) are therefore inadmissible in any language with a non-trivial notion of isomorphism.

7 Outlook

In this paper we have argued that when harmony is based on reductions and expansions, the inferentialist account of meaning can be understood as having an intensional character, in the sense that a notion of synonymy stricter than interderivability can be defined using the notion of isomorphism. Moreover, we have shown that such an account can be applied to complex sentences formed by means of connectives whose collections of rules satisfy the inversion principle. In particular, the novel account of expansions we provided solves the difficulty pointed out by Dummett concerning the “no less” aspect of harmony (what Dummett sometimes refers to as “stability”).

However, the specification of an inversion principle does not provide an exhaustive account of harmony, as there are more collections of elimination rules which we are willing to acknowledge as being in harmony with a given collection of introduction rules than the one generated by inversion.

Contrary to what several authors more or less implicitly acknowledge, interderivability with the collection of elimination rules generated by inversion is too weak a condition for a collection of elimination rules to be in harmony with a collection of introduction rules, at least if the intensional standpoint we advocated is not to collapse onto the extensional standpoint arising from the account of harmony put forward by Belnap.

Though \(\mathord \natural \varvec{I}\) and \(\mathord \natural \varvec{E}\) are in harmony by interderivability (and in fact in Belnap’s sense as well), the notion of isomorphism in any language containing \(\natural \) is trivial, i.e. it collapses onto interderivability.

How harmony is to be exactly defined, and in particular how to characterize the relationship between the different collections of elimination rules which we are willing to acknowledge as being in harmony with the same collection of introduction rules, is left as an open question. We hope to have made clear that in the quest for a proper account of harmony, the notion of formula isomorphism will have to play a more prominent role than the one it has been accorded so far in proof-theoretic investigation on meaning.