Proof-theoretic harmony: towards an intensional account

In this paper we argue that an account of proof-theoretic harmony based on reductions and expansions delivers an inferentialist picture of meaning which should be regarded as intensional, as opposed to other approaches to harmony that will be dubbed extensional. We show how the intensional account applies to any connective whose rules obey the inversion principle first proposed by Prawitz and Schroeder-Heister. In particular, by improving previous formulations of expansions, we solve a problem with quantum-disjunction first posed by Dummett. As recently observed by Schroeder-Heister, however, the specification of an inversion principle cannot yield an exhaustive account of harmony. The reason is that there are more collections of elimination rules than just the one obtained by inversion which we are willing to acknowledge as being in harmony with a given collection of introduction rules. Several authors more or less implicitly suggest that what is common to all alternative harmonious collections of rules is their being interderivable with each other. On the basis of considerations about identity of proofs and formula isomorphism, we show that this is too weak a condition for a given collection of elimination rules to be in harmony with a collection of introduction rules, at least if the intensional picture of meaning we advocate is not to collapse into an extensional one.

According to the inferentialist conception of meaning underlying this paper, there are two aspects to the use of a sentence which ought to be kept in balance: the conditions that must be fulfilled for the assertion of a sentence to be correct, and the consequences that can be drawn from its assertion. Such a balance is referred to as harmony.
The most important context of use of logically complex sentences is deductive reasoning. In the natural deduction setting (Gentzen 1935; Prawitz 1965), the two aspects of the practice of assertion are captured by the two types of rules which are distinctive of natural deduction: introduction and elimination rules. The introduction rules for a logical constant † are those which allow one to establish a complex sentence having † as main operator, and thus they specify the conditions of its correct assertion; in the elimination rules for †, a complex sentence having † as main operator acts as the main premise of the rules, and these rules thus specify how to draw consequences from its assertion.
Hence, in natural deduction, the requirement of harmony as applying to logically complex sentences becomes a condition that should be satisfied by the collections of introduction and elimination rules: Definition 1 (Harmony: Informal statement) What can be inferred from a logically complex sentence by means of the elimination rules for its main connective is no more and no less than what has to be established in order to infer that very logically complex sentence using the introduction rules for its main connective. 1
A typical example of rules which fail to be in harmony are those for Prior's (1960) tonk: which display no match between what can be inferred using the elimination rule and what is needed to establish the premise of the elimination rule using the introduction rule. 2 Thus, even if A and B are meaningful statements, their "contonktion" A tonk B is non-sense since the rules governing tonk are ill-formed. 3 1 Terminologically, Dummett uses "harmony" sometimes to refer to both aspects of this condition (e.g. 1991, p. 217), sometimes to refer only to the "no more" aspect (e.g. 1991, pp. 247-248), and he refers to the "no less" aspect as stability (e.g. 1991, p. 287). We stick to the former convention. Although harmony can be applied to the rules of logical constants of any kind, in this paper we will restrict our attention to propositional connectives. On related terminological issues, see also footnote 7 below. It should also be noted that although Dummett stresses that harmony is a two-fold condition, the proof-theoretic semantic literature has mostly been concerned with the "no more" aspect of it (but see Naibo and Petrolo 2015, for a notable exception), thus making our informal characterization of harmony, to some extent, non-standard. 2 Whereas tonk's rules fail to meet both the "no less" and the "no more" aspect of the informal characterization of harmony, there are connectives which fail to satisfy only one of the two. For a connective failing to satisfy the "no less" aspect, but satisfying the "no more" aspect, see Naibo and Petrolo (2015, pp. 157-158). For a connective satisfying the "no less" aspect but failing to satisfy the "no more" aspect one may consider a variant of tonk with two introduction rules (corresponding to both introduction rules for disjunction). In this case using the elimination rule one would obtain "no less" than what is needed to introduce the connective again using the second introduction rule.
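For reference, Prior's well-known pair of rules for tonk can be rendered as follows (our rendering): the introduction rule requires only the first component, while the elimination rule delivers the second, so any B becomes derivable from any A.

```latex
\frac{A}{A \ \mathrm{tonk}\ B}\ \mathrm{tonk\,I}
\qquad\qquad
\frac{A \ \mathrm{tonk}\ B}{B}\ \mathrm{tonk\,E}
```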
3 One of the referees objects that from our diagnosis of A tonk B as non-sense it looks "as if harmony was a criterion for meaningfulness, although perhaps it is best interpreted as a criterion for logicality (in line with In contrast with the rules governing tonk, the standard rules for conjunction: display a perfect match. In general, when the rules for a connective are in harmony, two kinds of deductive patterns can be exhibited. Patterns of the first kind have been described as "hillocks" (von Plato 2008) or complexity peaks (Dummett 1991) and they are constituted by an application of an introduction rule followed immediately by one of a corresponding elimination rule. The possibility of "levelling" these complexity peaks (the levelling operations being usually referred to as reductions) amounts to the fact that harmonious elimination rules allow one to infer no more than what has to be established to infer their major premise by introduction. The rules for conjunction above yield two such patterns, which can be got rid of as follows: Patterns of the other kind, which could be described as complexity valleys, result when one infers a complex sentence from itself by first eliminating and then reintroducing its main connective. The possibility of expanding a deduction via a complexity valley amounts to the fact that harmonious elimination rules allow one to infer no less than what is needed to infer their major premise by introduction: 4 In the case of implication, whose rules are: 5 Footnote 3 continued Dummett's own admission that it cannot be reasonably asked for all the expressions of the language)." The objection is very reasonable and a full evaluation of it, though of the utmost importance for the current debates on proof-theoretic semantics, goes beyond the scope of the present paper. We remark however that: (i) In spite of Dummett's own admissions, it is undeniable that he is at least strongly sympathetic to the equation between harmony and meaningfulness.
(One of) Dummett's (1991) aim(s) is to recast Brouwer's criticism of classical mathematics (namely that of being incomprehensible, viz. meaningless) by showing that the rules for the logical constants in classical logic are not harmonious. (Thereby we do not want to commit ourselves either to the cogency of Dummett's arguments or to the tenability of Brouwer's views.) (ii) Even if harmony was not a criterion for meaningfulness, its applicability certainly goes beyond that of logical expressions. An example is provided by the rules for the predicate "x is a natural number" (which we take to be a non-logical expression) which we briefly discuss at the end of Sect. 2. 4 The idea that expansions express the "no less" aspect of harmony was first explicitly formulated by Pfenning and Davies (2001, §2). 5 We indicate discharge in actual (respectively schematic) derivations with numbers (resp. possibly indexed letters) placed above the discharged assumptions and to the left of the inference line at which the assumptions we have the following reduction and expansion: Prawitz (1979) first proposed a general procedure to associate to any arbitrary collection of introduction rules a specific collection of elimination rules which is in harmony with the given collection of introduction rules. We will refer to such procedures as inversion principles. 7 Prawitz's procedure was later refined by Schroeder-Heister (1981, 2014a) in his calculus of higher-level rules, a deductive framework which generalizes standard natural deduction rules by allowing not only formulas but also rules themselves to be assumed and subsequently discharged in the course of a derivation. 8 Before presenting the calculus of higher-level rules in Sect. 3, we will introduce in Sect. 2 a distinction between extensional and intensional accounts of harmony, and we will show in which sense the approach to harmony sketched in this section is distinctively intensional. We will then show in Sect.
4 that reductions and expansions can be defined for any connective governed by an arbitrary collection of introduction rules and by the collection of elimination rules obtained by Prawitz and Schroeder-Heister's inversion principle (henceforth PSH-inversion). Whereas the reductions associated to these Footnote 5 continued are discharged. In schematic derivations, a formula in square brackets indicates an arbitrary number (≥ 0) of occurrences of that formula, if the formula is in assumption position, or of the whole sub-derivation having the formula in brackets as conclusion. Square brackets are also used in rule schemata to indicate the form of the assumptions that can be discharged by rule applications. 6 By this we mean that the application of ⊃I in the expanded derivation discharges no assumptions of the form A in D. 7 Although Prawitz (1965, Chap. 2) actually uses this term to refer to the "no more" aspect of the informal characterization of harmony given above in Definition 1, our way of using the term is certainly in the spirit of Lorenzen (1955), who coined this term to refer to a particular principle of reasoning (whose role corresponds roughly to that of an elimination rule) which he obtained by "inverting" a certain collection of defining conditions for an expression (whose role corresponds roughly to that of introduction rules). For more details, see Moriconi and Tesconi (2008). 8 Informally, the inversion principle of Prawitz and Schroeder-Heister generates a unique elimination rule shaped in accordance with the following slogan (Negri and von Plato 2001, p. 6): "Whatever follows from the direct grounds for deriving a proposition must follow from that proposition". It should be observed that Negri and von Plato use the principle only in the context of standard connectives, whereas Prawitz and Schroeder-Heister formulate the inversion principle in full generality for connectives governed by an arbitrary collection of introduction rules.
Moreover, Negri and von Plato do not consider rules of higher-level. Though this is unproblematic in the case of standard connectives, when dealing with arbitrary connectives the use of higher-level rules is essential to obtain harmonious rules via inversion (Olkhovikov and Schroeder-Heister 2014; Read 2015; Dyckhoff 2016). connectives have already been given in the literature (even if, in fact, only by Schroeder-Heister 1981, in German), no formulation of the expansions for arbitrary connectives has been given so far. Prawitz (1971) specified expansions for standard intuitionistic connectives, but implicit in remarks by Dummett (1991) is a difficulty affecting Prawitz's formulation of the expansion for disjunction which threatens the very idea of using expansions as a way to cash out the "no less" aspect of harmony. We will propose a pattern for expansions which is more general than Prawitz's, we will show that it provides a solution to Dummett's difficulty, and from it we will obtain a pattern for expansions for arbitrary connectives.
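For reference, the standard rules for conjunction and implication, together with the reduction and expansion patterns just described, can be sketched in the usual natural deduction notation (the rendering below is ours):

```latex
% Conjunction: introduction and eliminations
\frac{A \quad B}{A \wedge B}\;{\wedge}\mathrm{I}
\qquad
\frac{A \wedge B}{A}\;{\wedge}\mathrm{E}_1
\qquad
\frac{A \wedge B}{B}\;{\wedge}\mathrm{E}_2

% Levelling a complexity peak (reduction), for i = 1, 2,
% where D_1 and D_2 are derivations of A_1 and A_2:
\cfrac{\cfrac{\mathcal{D}_1 \qquad \mathcal{D}_2}{A_1 \wedge A_2}\;{\wedge}\mathrm{I}}{A_i}\;{\wedge}\mathrm{E}_i
\;\rightsquigarrow\;
\mathcal{D}_i

% Expanding via a complexity valley, where D is a derivation of A ∧ B:
\mathcal{D}
\;\rightsquigarrow\;
\cfrac{\cfrac{\mathcal{D}}{A}\;{\wedge}\mathrm{E}_1 \qquad \cfrac{\mathcal{D}}{B}\;{\wedge}\mathrm{E}_2}{A \wedge B}\;{\wedge}\mathrm{I}

% Implication: introduction (discharging A) and elimination
\frac{\begin{array}{c}[A]^{u}\\ \vdots\\ B\end{array}}{A \supset B}\;{\supset}\mathrm{I}^{u}
\qquad
\frac{A \supset B \qquad A}{B}\;{\supset}\mathrm{E}
```

The expansion for implication proceeds analogously: ⊃E is applied to the given derivation of A ⊃ B and a fresh assumption of A, which is then discharged by ⊃I (cf. footnote 6).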
In Sect. 5 we will restate an observation of Schroeder-Heister to the effect that an exhaustive account of harmony cannot consist just in the specification of an inversion principle. We will argue that most authors (sometimes implicitly and sometimes in a more explicit manner) assume that a thorough account of harmony can be attained by coupling the inversion principle with a notion of equivalence between (collections of) rules, and that all seem to agree that the relevant notion of equivalence is interderivability. In Sect. 6, we will however show that the account of harmony obtained by coupling inversion with interderivability fails to qualify as intensional. Section 7 briefly summarizes the results of the paper.

Harmony: intensional vs extensional accounts
The account of harmony sketched in the previous section differs from the account of harmony stemming from Belnap (1962), who cashed out the "no more" and "no less" aspects of the informal definition of harmony in terms of conservativity and uniqueness respectively. 9 Following Dummett (who refers to conservativity as "global" harmony and to the availability of reductions as "intrinsic" harmony), for some authors (e.g. Schroeder-Heister 2014a, pp. 1204-1205) the distinctive feature of Belnap's conditions is their being "global", in contrast with other "local" ways of rendering the informal definition of harmony, such as the one sketched in the previous section. 10 In our opinion, however, what crucially distinguishes the account of harmony sketched in the previous section from the one of Belnap is something else: Both conservativity and uniqueness are defined in terms of derivability (i.e. of what can be derived by means of the rules for a connective) and not in terms of properties involving the internal structure of derivations (i.e. of how something can be derived). We propose to refer to accounts of harmony based on derivability as extensional, while those making explicit reference to the internal structure of derivations will be referred to as intensional. 9 The fact that uniqueness is a way of rendering the "no less" aspect of harmony may not be obvious at first, but see Schroeder-Heister (2014a, pp. 1204-1205). Observe moreover that Belnap's aim is that of providing conditions that a collection of rules has to satisfy in order to be able to qualify as implicit definitions of a connective, rather than that of defining harmony. 10 For a contrasting opinion on the globality of uniqueness, however, see Naibo and Petrolo (2015, p. 151).
To fully appreciate the value of the distinction, as well as the choice of the terminology, we first observe that reductions and expansions induce what is called a notion of identity of proofs.
According to intuitionists, proofs are the result of an activity of mental construction performed by an idealized mathematician. On this conception, it is natural to ask what the relationship is between proofs (as abstract entities) and derivations (as formal objects). A proposal which goes back at least to Prawitz (1971) is that of viewing formal derivations as linguistic representations of proofs, and the relation between derivations and proofs as analogous to the one between singular terms and their denotations. A further question which immediately arises is when two derivations represent the same proof. For Prawitz, reductions and expansions are transformations on derivations which preserve the identity of the proof represented. This conception draws on an analogy with arithmetic, where the rules for calculating the values of complex expressions are naturally understood as preserving the identity of the numbers which are denoted by the numerical expressions one operates with: When we transform '(3 × 4) − 5' into '12 − 5', we pass over from a more complex to a simpler representation of the number seven. 11 We can thus take the symmetric, reflexive and transitive closure of the relation induced by reductions and expansions as yielding an equivalence relation on derivations: Two derivations belong to the same equivalence class if and only if there is a chain of applications of the reductions and expansions (and of their inverse operations) connecting the two derivations. Two derivations belonging to the same equivalence class will be said to denote, or represent, the same proof. 12 From now on, adopting the standard terminology of the lambda calculus, we will call the equations displaying complexity peaks β-equations, and the equations displaying complexity valleys η-equations. In the case of conjunction we thus have: By "reduction" and "expansion" we will understand the left-to-right and right-to-left orientations of these equations respectively.
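In lambda-calculus term notation (a presentational choice of ours: pairing ⟨·,·⟩ for ∧I and projections π₁, π₂ for the eliminations), the equations for conjunction read:

```latex
(\beta_{\wedge})\qquad \pi_i\,\langle t_1,\ t_2\rangle \;=\; t_i \qquad (i = 1, 2)

(\eta_{\wedge})\qquad \langle \pi_1\,t,\ \pi_2\,t\rangle \;=\; t \qquad (t \,:\, A \wedge B)
```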
When inference rules are equipped with reductions and expansions, and thus a notion of identity of proofs is available, we are no longer dealing with an extensional notion of derivability, but with an intensional one: besides being able to tell whether a sentence is derivable or not (possibly from other sentences) we can discriminate between different ways in which a sentence is derivable (possibly from other sentences). 13 Moreover, on the basis of the notion of identity of proofs, it is possible to introduce an equivalence relation on sentences which is stricter than interderivability, and which in category theory and the theory of lambda calculus is referred to as isomorphism: 14 Definition 2 (Isomorphism) Two sentences A and B are isomorphic if and only if there are two derivations D 1 and D 2 of B from A and of A from B respectively such that the two derivations obtained by composing D 1 and D 2 are βη-equivalent to the assumptions of A and B respectively: Why is this notion called isomorphism? Intuitively, if a derivation in which all assumptions are discharged represents a proof of its conclusion, a derivation in which the conclusion depends on undischarged assumptions can be viewed as representing a function from proofs of the assumptions to proofs of the conclusion: By replacing the assumptions of a derivation with closed derivations for them (i.e. by feeding the function with proofs of the assumptions) one obtains a closed derivation of the conclusion of the derivation (i.e. a proof of the conclusion). The limit case of a derivation consisting just of the assumption of a sentence A represents the identity function on the set of proofs of A. Given this, according to the definition above, two sentences A and B are isomorphic if and only if there are two functions from the set of proofs of A to the set of proofs of B and vice versa, whose two compositions are equal (modulo the notion of identity of proofs) to the identity functions on the two sets respectively.
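Under a toy proofs-as-pairs reading (a modelling assumption of ours, not the paper's official semantics), the isomorphism condition of Definition 2 can be checked concretely for A ∧ B and B ∧ A: the two derivations become swap functions whose compositions are the identity.

```python
# Toy model: a proof of a conjunction is a pair of proofs of its conjuncts.
# The derivations D1 (from A ∧ B to B ∧ A) and D2 (back again) of
# Definition 2 become the two swap functions below; their compositions
# are the identity, which is exactly the isomorphism condition.

def d1(p):
    """From a proof of A ∧ B to a proof of B ∧ A."""
    a, b = p
    return (b, a)

def d2(q):
    """From a proof of B ∧ A to a proof of A ∧ B."""
    b, a = q
    return (a, b)

p = ("proof-of-A", "proof-of-B")  # hypothetical stand-in proofs
assert d2(d1(p)) == p             # D2 after D1 is the identity on proofs of A ∧ B
assert d1(d2(("proof-of-B", "proof-of-A"))) == ("proof-of-B", "proof-of-A")
```
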
Typical examples of pairs of isomorphic sentences are constituted by sentences of the form A ∧ B and B ∧ A, while typical examples of pairs of interderivable but non-isomorphic sentences are constituted by sentences of the form A ∧ A and A. Whereas in order to establish that two sentences of a specific form are isomorphic one can proceed syntactically (i.e. by presenting two derivations satisfying the condition required in Definition 2), to show that two sentences of a given form may not be isomorphic one usually argues semantically. One has to find a suitable mathematical structure such that (i) the β- and η-equations governing the connectives involved in the sentences in question are satisfied in the structure; (ii) the interpretations of two sentences of the form in question are not isomorphic in the structure. In the case of a pair of sentences of the form A ∧ A and A, it is enough to take proofs of a conjunction to be pairs of proofs of its conjuncts. Under this interpretation (which can easily be seen to validate both β- and η-equations), the cardinality of the set of proofs of A ∧ A is κ², where κ is the cardinality of the set of proofs of A. Thus, whenever A is interpreted as a finite set of proofs of cardinality κ > 1, the sets of proofs of A and of A ∧ A have different cardinalities, and thus there cannot be an isomorphism between the two.
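The cardinality argument can be replayed concretely (a toy check of ours, with proofs modelled as elements of a finite set):

```python
from itertools import product

# Interpret the proofs of A as a finite set of cardinality k > 1.
proofs_of_A = {"p1", "p2", "p3"}                 # k = 3
# A proof of A ∧ A is a pair of proofs of A, i.e. the Cartesian product.
proofs_of_A_and_A = set(product(proofs_of_A, repeat=2))

k = len(proofs_of_A)
assert len(proofs_of_A_and_A) == k ** 2          # k² = 9
# Since k² ≠ k for k > 1, there is no bijection between the two sets,
# hence no isomorphism, although A ∧ A and A are interderivable.
assert len(proofs_of_A_and_A) != k
```
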
Another example of pairs of interderivable but not necessarily isomorphic sentences, which is of relevance for the results to be presented below in Sect. 6, is constituted by pairs of sentences of the form To see that pairs of sentences of these forms need not be isomorphic, it is enough to take the proofs of an implication A ⊃ B to be the functions from proofs of A to proofs of B. 15 Whenever A and B are interpreted on sets of proofs of different cardinalities, the sets of proofs of the two sentences will also have different cardinalities.
In the proof-theoretic semantic literature, the notion of isomorphism has been proposed as a formal explicans of the notion of synonymy or of identity of meaning. As remarked by Došen: That two sentences are isomorphic means that they behave exactly in the same manner in proofs: by composing, we can always extend proofs involving one of them, either as assumption or as conclusion, to proofs involving the other, so that nothing is lost, nor gained. There is always a way back. By composing further with the inverses, we return to the original proofs. (Došen 2003, p. 498) On Belnap's approach to harmony, the most natural account of synonymy is in terms of interderivability, and this again vindicates the claim that his approach to harmony is merely extensional. Although the notion of isomorphism has been investigated only in the context of languages containing standard connectives, it can be naturally applied to the case of connectives characterized by arbitrary rules, provided that the rules are equipped with reductions and expansions and thus that a notion of equivalence between derivations is defined.
In the next two sections, we will present a systematic way of achieving this goal. It should however be kept in mind that there are no restrictions in principle on the possibility of applying the intensional account of harmony beyond the realm of logical constants. For example (see, for details, Martin-Löf 1971; Prawitz 1971, § 3.8.8 and § III.4 respectively), in a first-order setting one may consider a predicate N for the property of being a natural number governed by the following inference rules (we use in postfix notation as a unary function symbol for the successor function, and with A[t/x] we indicate capture-avoiding substitution of t for x in A): Whenever t is a numeral (i.e. a term of the form 0 ... ) a reduction is readily defined, and for any t an expansion is defined as well (as pointed out by one of the referees, by applying the elimination rule taking A to be N x). The equations associated to reduction and expansion induce a notion of identity of proofs and thus would offer the basis for defining a notion of sentence isomorphism for the language of arithmetic.
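One standard way of displaying these rules (writing the successor postfix as ′, a notational choice of ours, with the minor derivation of NE discharging the induction hypothesis A) is:

```latex
\frac{}{N\,0}\;N\mathrm{I}_0
\qquad
\frac{N\,t}{N\,t\,'}\;N\mathrm{I}_s
\qquad
\frac{N\,t \qquad A[0/x] \qquad \begin{array}{c}[A]\\ \vdots\\ A[x\,'/x]\end{array}}{A[t/x]}\;N\mathrm{E}
```

When t is a numeral, a peak consisting of NI applications followed by NE reduces by unwinding the minor derivations the corresponding number of times; the expansion applies NE with A taken to be N x, as noted above.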
In the following we will however restrict ourselves to rules governing propositional connectives, thereby disregarding the exact nature of the non-logical vocabulary. The isomorphisms we will be discussing are therefore schematic, i.e. they are warranted by the logical form of the sentences in question. To stress this, in the remaining part of the paper we will speak of formulas rather than of sentences, thereby highlighting that the considerations we are going to develop are independent of the choice of the language. The precise application of the ideas put forward in the paper to particular languages, such as the one of arithmetic, is of the greatest interest but requires further investigation.

The calculus of higher-level rules
The calculus of higher-level rules introduced by Schroeder-Heister (1981, 1984) is a proof-theoretic framework which generalizes the natural deduction systems of Gentzen (1935) and Prawitz (1965) in two respects: (i) not only formulas but also rules can be assumed in the course of derivations; (ii) when applying a rule in a derivation, not only formulas but also (previously assumed) rules can be discharged.
This yields a hierarchy of different rule-levels at the base of which we have the limit case of formulas (rules of level 0), and production rules (rules of level 1, such as ∧I, ∧E 1 , ∧E 2 and ⊃E above).
A typical example of a rule of level 2 is ⊃I, which allows the discharge of formulas (i.e. of rules of level 0). Informally, the content of this rule is that in order to establish A ⊃ B one need not be able to infer B outright, but it is enough to be able to infer B from A. As this possibility is exactly what is expressed by the rule allowing the inference of B from A, we adopt the terminological convention that the premise of ⊃I is not B, but rather the rule allowing one to pass over from A to B (for the role played by B in ⊃I we will use the term "immediate premise").
In general, the premises of rules of level l ≥ 1 will be rules of level l − 1 and for each level l ≥ 2, the application of a rule of level l in a derivation will allow the discharge of rules of level l − 2.
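Using the linear notation for rules adopted later in this section, the hierarchy can be illustrated as follows (the level-3 example is a hypothetical rule of ours, not one of the paper's connectives):

```latex
\begin{array}{ll}
\text{level 1 (production rule, discharges nothing):} & A;\ B \Rightarrow A \wedge B \quad ({\wedge}\mathrm{I})\\[2pt]
\text{level 2 (discharges formulas, i.e. rules of level 0):} & (A \Rightarrow B) \Rightarrow A \supset B \quad ({\supset}\mathrm{I})\\[2pt]
\text{level 3 (discharges rules of level 1):} & ((A \Rightarrow B) \Rightarrow C) \Rightarrow D
\end{array}
```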
We consider a propositional language L whose formulas are built from denumerably many atomic formulas α 1 , α 2 , . . . using denumerably many connectives of different arities, among which we have the standard intuitionistic ones (∧, ⊃, ∨, . . .) as well as less standard ones (such as tonk above, and others to be introduced below). We will use † (possibly with primes) as a metavariable for connectives of arbitrary arity. Capital letters A, B, . . . (possibly with subscripts) will be used as metavariables for formulas of L, and will be referred to as schematic letters. We further assume the metalanguage to be an extension of the object language L and the metalinguistic expressions obtained from the application of the connectives of L (and of the metavariables for them) to schematic letters will be called schematic formulas.
As we did so far, we will use D (possibly with subscripts and primes) as a metavariable for derivations. By a schematic derivation we will understand the result of replacing in derivations formulas with schematic formulas and portions of derivations with metavariables for derivations (for notational conventions governing schematic derivation see also footnote 1).
It is important to remark that concrete (as opposed to schematic) derivations in standard natural deduction systems depend on object-language formulas, and not on metalinguistic formula schemata. In the same way, as it will be made clear in Definitions 4 and 5, derivations in the calculus of higher-level rules will not properly speaking depend on rules (understood as metalinguistic schemata) but rather on the object-language instances of these rules, which we will call concrete rules. (On the other hand, introduction and elimination rules will be taken, as we implicitly did so far, as metalinguistic schemata.) Following Schroeder-Heister (1984) we adopt a tree-like notation for concrete rules (and thus, in the metalanguage, for rules as well). We will use R, R 1 , . . . as metavariables for concrete rules: The root of the tree constituting a concrete rule R is called the consequence of R. Let R be the concrete rule of level ≥ 1: The premises of each premise of a concrete rule R of level l +2 (if any) are concrete rules of level l. As the following definition will make clear, these can be discharged by applications of R in a derivation. To make explicit which concrete rules can be discharged by the applications of a concrete rule R, we use a "bracketed" notation for concrete rules, so that a concrete rule R of the form indicated to the left below is written as on the right below (if R has n premises and the ith premise has in turn m i premises, for all 1 ≤ i ≤ n and 1 ≤ j ≤ m i , we indicate the jth premise of the ith premise of R with R i j ):

We also use a linear notation for concrete rules, so that a concrete rule of level l + 1 is written (R 1 ; . . . ; R n ⇒ A), where the outermost brackets will often be omitted.
The handling of discharge follows Troelstra and Schwichtenberg (1996) rather than Prawitz (1965), in that we treat assumptions as partitioned in classes. This is achieved by associating to each assumption a (not necessarily distinct) number. In a given derivation, the undischarged assumptions of the same form R which are marked with the same number u belong to the same class u [R]. Assumptions belonging to the same class are discharged together. For simplicity, and without loss of generality, we assume that the labels of distinct assumption classes of the same formula in a derivation are always distinct. For readability, and as it is usually done, we will omit (except in definitions) the numbers above the undischarged assumptions of derivations.
We use Γ and Δ (possibly with subscripts and primes) as metavariables for multisets, ∪ and \ for multi-set union and difference. With Γ, Δ and Γ, R we abbreviate respectively Γ ∪ Δ and the multi-set obtained by adding R to Γ . For convenience, within definitions we will use u [R] to indicate a multi-set containing as many copies of R as the number of the spatially located occurrences of R belonging to the assumption class.

Definition 4 (Structural derivations)
-For any formula A of L and natural number u, the assumption of A labelled with u is a structural derivation of conclusion A depending on the multi-set of assumptions u [A]; -if R = (B 1 ; . . . ; B n ⇒ A) is a concrete rule, u is a natural number and, for all 1 ≤ i ≤ n, D i is a derivation of conclusion B i depending on the multi-set of assumptions Γ i , then the following: To avoid misunderstanding we repeat that for readability, and as it is usually done, the numeric labels above the undischarged assumptions will always be omitted (except in definitions), that is in all examples below only discharged assumptions will be explicitly numbered.

Example 1 The following structural derivation
is a derivation of conclusion α 6 depending on the multi-set of concrete rules (of level 2 and 3 respectively) In tree-like (bracketed) notation, the two concrete rules look respectively as follows: As anticipated, inference rules governing connectives cannot be identified with concrete rules. The reason is twofold and is analogous to the reason why, in a natural deduction formulation of some theory, axiom schemata are fundamentally different from assumptions. Whereas an assumption is always the assumption of a specific sentence, an axiom schema is used in a derivation by instantiating it on some (object-language) sentence which, by analogy with concrete rules, one may call "concrete" axiom. Moreover, although both concrete axioms and assumptions represent the starting point of derivations, the conclusions of the derivations depend only on the former and not on the latter. Analogously, the rules governing logical connectives are schemata whose instances are concrete rules. For example, the two (distinct) concrete rules: are instances of the same rule (schema), namely: The rule ∧I can thus be identified with the (metalinguistic) schema A, B ⇒ A ∧ B all of whose instances are (different) concrete rules. 17 Moreover, contrary to arbitrary concrete rules, concrete rules which are instances of ∧I are to be considered as primitive, and thus, as for concrete axioms, the conclusion of a derivation should not depend on them.
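By way of illustration (the particular formulas are an arbitrary choice of ours), two distinct concrete rules instantiating ∧I are, in linear notation:

```latex
\alpha_1;\ \alpha_2 \;\Rightarrow\; \alpha_1 \wedge \alpha_2
\qquad\qquad
(\alpha_1 \supset \alpha_2);\ \alpha_3 \;\Rightarrow\; (\alpha_1 \supset \alpha_2) \wedge \alpha_3
```

Both are object-language instances of the metalinguistic schema A, B ⇒ A ∧ B.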
These remarks can be made precise by defining the notion of derivation in a calculus K, where a calculus is a list of rule schemata whose instances are to be taken as primitive in the construction of derivations. Since the rules of K are metalinguistic schemata, the following definition is to be understood as given in the meta-metalanguage of L, which we assume to be an extension of the meta-language in which we use capital bold letters A, B, C, . . . (resp. R 1 , R 2 , . . ., and Γ, Δ) as meta-metalinguistic variables for metalinguistic schematic formulas (resp. rules, and multi-sets of rules). 18 Definition 5 (K-derivation) -All structural derivations are K-derivations; -if the concrete rule The informal remarks at the beginning of this section should therefore be understood in the light of these observations. For instance, when we said that ⊃I, being a rule of level 2, discharges rules of level 0, we should have said that the concrete rules which are instances of the (metalinguistic) rule (schema) are concrete rules of level 2 and these discharge concrete rules of level 0 (which are object-language formulas). 18 In spite of this, K-derivations are defined as object-language entities. To achieve this, the indication of which primitive rule (schema) is applied at a certain point in a K-derivation is not part of the derivation itself (this is in fact the standard way of presenting any natural deduction system). We will however always add rule labels for readability (this is also standard practice in presentations of natural deduction), except in further definitions and proofs, where we rigorously follow the "official" definition.
is an instance of a primitive rule R of K and if, for all 1 ≤ i ≤ n, D i is a K-derivation of conclusion B i , depending on Γ i , then the following: According to the definition of structural (respectively K-)derivation, the conclusion of a structural (resp. K-)derivation is always a formula (i.e. a concrete rule of level 0). We can however introduce, as a metalinguistic abbreviation, the following notions:

Definition 6 (Derivation and derivability of rules)
If R = (R 1 ; . . . ; R n ⇒ A), then a structural (respectively K-)derivation of A depending on Γ, R 1 , . . . , R n will be said to be a structural (resp. K-)derivation of R depending on Γ. Such structural (resp. K-)derivations will sometimes be written in either of the following two ways: We say that a concrete rule R is structurally (resp. K-)derivable from Γ (notation Γ ⊢ (K) R) iff there is a structural (resp. K-)derivation of R depending on a sub-multi-set of Γ. We write ⊢ R for ∅ ⊢ R.
We say that a rule R is structurally (resp. K-)derivable from Γ (notation Γ ⊢ (K) R) iff every instance of R is structurally (resp. K-)derivable from instances of the rules in Γ.
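The distinction between a rule schema and its concrete instances can be illustrated with a minimal sketch. The encoding below is ours, not the paper's: schematic letters are plain strings, complex formulas are tuples headed by a connective name, and instantiation is substitution of concrete formulas for schematic letters.

```python
def instantiate(formula, assignment):
    """Replace schematic letters (plain strings) by concrete formulas;
    complex formulas are tuples whose first element is the connective."""
    if isinstance(formula, str):
        return assignment.get(formula, formula)
    return tuple(instantiate(part, assignment) for part in formula)

# The schema A, B => A ^ B, rendered as a pair (premises, conclusion).
and_intro = (("A", "B"), ("and", "A", "B"))

# Two distinct concrete rules, instances of the same schema:
r1 = instantiate(and_intro, {"A": "p", "B": "q"})
r2 = instantiate(and_intro, {"A": ("and", "p", "q"), "B": "r"})
assert r1 == (("p", "q"), ("and", "p", "q"))
assert r1 != r2
```

The point of Definition 5 is then that it is the schema, not any single concrete rule, that is listed in the calculus K, while derivations are built out of the concrete instances.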
We conclude this section by stating three results (Schroeder-Heister 1981) whose proofs (given in the Appendix at the end of the paper) will be needed to present the inversion principle in the next section.

Lemma 1 (Reflexivity) For all R, R ⊢ R.
Corollary 1 If R is an instance of a primitive rule of K, then ⊢ K R.

PSH-inversion and harmony
Assuming † to be an n-ary connective, we say that:

Definition 7 (Introduction and elimination rules) A rule of the form
is an introduction rule for † provided that all schematic letters occurring in the rules An elimination rule for † is any rule of the form †(A 1 , . . . , (This time no restriction is imposed on the schematic letters occurring in the rule.) The first premise †(A 1 , . . . , A n ) of the elimination rules is called major premise. 19 Particular collections of introduction (respectively elimination) rules for some connective † will be indicated with †I (resp. †E), possibly with primes.
As anticipated, by an inversion principle we understand a recipe for associating to any given collection of introduction rules a specific collection of elimination rules which is in harmony with it. In the context of the calculus of higher-level rules, PSH-inversion can be formulated as follows:

Definition 8 (PSH-inversion) Given a collection of introduction rules †I and a collection of elimination rules †E for †, we will say that †I and †E obey PSH-inversion
if and only if †E consists only of the following rule, with major premise †(A 1 , . . . , A n ), in which C is a schematic letter different from all the A i (for 1 ≤ i ≤ n), and each of the minor premises corresponds to one of the introduction rules of †, in the sense that the jth premise R k j of the kth introduction rule (with 1 ≤ k ≤ r, where r is the number of introduction rules, and 1 ≤ j ≤ m k , where m k is the number of premises of the kth introduction rule) is identical to the jth premise of the kth minor premise of the elimination rule.

Given a collection of introduction rules †I, we indicate the collection †E associated to it by PSH-inversion with PSH( †I).
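Definition 8 is mechanical enough to be computed. The following is a minimal sketch under our own encoding (not the paper's): an introduction rule is a list of premises, and a premise is a pair (discharged, conclusion), so that higher-level premises are representable.

```python
def psh_inversion(major, intro_rules):
    """Return the single PSH elimination rule for a connective: its
    major premise is the complex formula itself, and it has one minor
    premise per introduction rule, discharging that rule's premises and
    concluding the fresh schematic letter C."""
    minor = [(list(premises), "C") for premises in intro_rules]
    return {"major": major, "minor": minor, "conclusion": "C"}

# Disjunction: two introductions with one undischarged premise each.
or_elim = psh_inversion("A v B", [[([], "A")], [([], "B")]])
assert or_elim["minor"] == [([([], "A")], "C"), ([([], "B")], "C")]

# Implication: one introduction whose premise discharges A; the minor
# premise of the elimination is then a rule of level 2: (A => B) => C.
imp_elim = psh_inversion("A > B", [[(["A"], "B")]])
assert imp_elim["minor"] == [([(["A"], "B")], "C")]
```

The two assertions reproduce Examples 2 and 4 below: the familiar ∨-elimination, and the higher-level elimination for ⊃.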
Example 2 If ∨I consists of the two rules A ⇒ A ∨ B (∨I 1 ) and B ⇒ A ∨ B (∨I 2 ), then PSH(∨I) consists of the rule (A ∨ B; (A ⇒ C); (B ⇒ C)) ⇒ C (∨E PSH ). 19

Example 3 If ∧I consists of the rule A; B ⇒ A ∧ B (∧I), then PSH(∧I) consists of the rule (A ∧ B; ((A; B) ⇒ C)) ⇒ C (∧E PSH ).

Example 4 If ⊃I consists of the rule (A ⇒ B) ⇒ A ⊃ B (⊃I), then PSH(⊃I) consists of the rule (A ⊃ B; ((A ⇒ B) ⇒ C)) ⇒ C (⊃E PSH ).

19 Sometimes it is required that in any introduction (respectively elimination) rule the occurrence of † in the consequence (resp. major premise) is the only occurrence of a connective figuring in the rule. The requirement can however be lifted, thereby allowing the introduction and elimination rules of a certain connective † to "make reference" to other connectives, or even to itself. This possibility, envisaged already by Schroeder-Heister (1984), is typically needed in giving the rules for negation, which usually make reference either to ⊥ or to the connective itself, and for characterizing "paradoxical connectives" (Schroeder-Heister 2012; Tranchini 2016).

Let K be a calculus consisting of primitive rules all of which have either the form of an introduction or of an elimination rule, and in which moreover the collections †I and †E of all introduction and elimination rules of † which are primitive in K obey PSH-inversion. We will now show that the account of harmony sketched in Sect. 1 naturally generalizes to the rules of † in K.
The "no more" aspect of the informal statement of harmony can be cashed out by specifying the following β-equations for K-derivations. Their left-to-right orientations are reductions which make it possible to level the complexity peaks constituted by an application of one of the introduction rules for † belonging to K immediately followed by an application of the elimination rule of † belonging to K (Schroeder-Heister 1981), where the reduced derivation is defined as in the proof of Lemma 2 in the Appendix at the end of the paper.
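Under the Curry-Howard correspondence these β-equations become computation rules on proof terms. The following sketch uses our own encoding, not the paper's: proofs of a disjunction as tagged values, proofs of a conjunction as pairs, the PSH eliminations as case analysis and a "split" handing both components to the minor premise at once.

```python
def inl(a): return ("inl", a)
def inr(b): return ("inr", b)

def or_elim(e, on_left, on_right):
    """vE_PSH: case analysis on a disjunction proof."""
    tag, x = e
    return on_left(x) if tag == "inl" else on_right(x)

def pair(a, b): return ("pair", a, b)

def and_elim(e, k):
    """^E_PSH: split a conjunction proof, passing both halves to k."""
    _, a, b = e
    return k(a, b)

# beta for v: eliminating inl(5) hands 5 straight to the left branch.
assert or_elim(inl(5), lambda a: a + 1, lambda b: b - 1) == 6
# beta for ^: splitting a pair recovers the two premises of the intro.
assert and_elim(pair(3, 4), lambda a, b: a * b) == 12
```

In each assertion an introduction immediately followed by the matching elimination (a complexity peak) computes to what the direct derivation would have given.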
To cash out the "no less" aspect of harmony, we now turn to the definition of expansions. The earliest formulation of expansions for the standard intuitionistic connectives is that of Prawitz (1971, § II.3.3.3). The elimination rule constituting †E is shaped after the pattern of the elimination rule for disjunction ∨E. One could therefore obtain a pattern of expansions for any such collection of elimination rules by generalizing Prawitz's pattern (with u 1 and u 2 fresh for D) in the following manner (where we abbreviate †(A 1 , . . . , A n ) with †, and where in the expanded derivation I(†I k ) (with 1 ≤ k ≤ r) is defined as in the proof of Corollary 1), with u 11 , . . . , u rm r fresh for D. There is however a well-known argument, implicit in some remarks of Dummett (1991), against the thesis that the expansion for disjunction fully grasps the "no less" aspect of harmony for this connective.
Dummett considers a restriction (motivated by considerations about quantum logic) on the elimination rule ∨E. The restriction consists in allowing the rule to be applied only if its minor premises C depend on no other assumptions than those of the form A and B that get discharged by the application of ∨E. The restricted elimination rule is weaker than the unrestricted one, in that it limits what can be inferred from a logically complex formula having disjunction as its main operator. Under the assumption that ∨E (consisting only of the unrestricted elimination rule) is in harmony with ∨I (consisting of ∨I 1 and ∨I 2 ), we expect ∨E′ (consisting only of the restricted rule) not to be in harmony with ∨I. In particular, since using the restricted rule one can derive less than what one can derive using the unrestricted one, we expect it not to satisfy the "no less" aspect of harmony.
However, if the "no less" aspect of harmony is cashed out in terms of expansions, the expansion pattern for ∨E should not work for ∨E′. Unfortunately, in Prawitz's pattern, the application of the elimination rule for disjunction is perfectly compatible with the quantum restriction. Therefore it looks as if the availability of an expansion is too weak a condition for the rules of a connective to satisfy the "no less" aspect of harmony.
One may think that restrictions of this kind weaken the rules in too subtle a way for the availability of expansions to work as a criterion capable of ruling out the resulting disharmony. On the contrary, it is not hard to find a connection between this kind of disharmony and expansions. Let's consider a restriction on the introduction rule for implication ⊃I analogous to the one imposed on the elimination rule of quantum disjunction (i.e. we allow the rule to be applied only if the result of applying the restricted introduction rule is a derivation in which all assumptions are discharged). This time the restriction strengthens the rule, since it sets higher standards for introducing A ⊃ B. Assuming that the collection of elimination rules consisting of modus ponens (i.e. ⊃E) is in harmony with the unrestricted introduction rule, we expect it to fail to be in harmony with the restricted one. In particular, we expect it to fail to meet the "no less" aspect of harmony. That this is in fact the case is shown by the impossibility of shaping the expansion for the restricted rule after the model of that for the unrestricted rule (see Sect. 1 above), as the application of the introduction rule would violate the restriction.
We take this as a reason to consider an alternative pattern for the expansion of disjunction, one capable of detecting the disharmony of the restricted ∨E rule. The idea behind the alternative pattern is that an expansion operates on a formula which is not, in general, the conclusion of a derivation, but one that occurs "within" a derivation: Observe that all instances of Prawitz's expansion pattern are instances of the alternative pattern in which the derivation D just consists of the assumption of A ∨ B. Moreover, if in the derivation D the conclusion C depends on more assumptions than just those of the form A ∨ B indicated in the schema, the application of ∨E in the expanded derivation violates the quantum restriction, and therefore the pattern does not work in general for quantum disjunction.
The proposed pattern readily generalizes to arbitrary connectives whose rules obey PSH-inversion. That is, if K is a calculus consisting of rules obeying PSH-inversion, the "no less" aspect of harmony can be expressed by the following η-equation for K-derivations (we abbreviate again †(A 1 , . . . , A n ) with †), with u 11 , . . . , u rm r fresh for D. We have thereby shown that the rules of † are in harmony in any calculus in which the primitive rules involving † obey PSH-inversion.
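In proof-term form, the η-equation says that eliminating a formula and immediately re-introducing it denotes the same proof. A minimal sketch for disjunction, under the same tagged-value encoding as above (ours, not the paper's):

```python
def inl(a): return ("inl", a)
def inr(b): return ("inr", b)

def or_elim(e, on_left, on_right):
    """vE_PSH: case analysis on a disjunction proof."""
    tag, x = e
    return on_left(x) if tag == "inl" else on_right(x)

def expand(e):
    """Eta-expansion: eliminate, then re-introduce via inl/inr."""
    return or_elim(e, inl, inr)

# The expansion denotes the same proof, whichever introduction built it.
for proof in (inl("a"), inr("b")):
    assert expand(proof) == proof
```

The generalized pattern of the text differs from Prawitz's in that the term `e` being expanded may sit anywhere inside a larger derivation, not only at its conclusion; it is this extra generality that makes the pattern sensitive to the quantum restriction, since the branches of the case analysis may then occur in a context carrying additional assumptions.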
We conclude this section with a few remarks. First, using PSH-inversion we can find a collection of elimination rules which is in harmony with any collection of introduction rules, provided no rule in the collection involves restrictions of the kind we discussed in connection with expansions. In fact, it is not clear which collection of elimination rules would match the collection of introduction rules consisting only of the restricted ⊃I rule discussed above.
Second, Schroeder-Heister (1981) established a normalization theorem for a particular calculus K comprising only rules obeying PSH-inversion, and such that the occurrence of † in the consequence (respectively major premise) of the rules is the only occurrence of a connective in the introduction (resp. elimination) rules. The proof of the result uses the reductions stemming from the β-equations as well as permutations, which generalize the one for disjunction already considered by Prawitz (1965). Although the analysis of harmony we proposed is based on the possibility of performing local transformations on derivations, and not on the possibility of globally transforming any derivation into normal form, normalization is much more tightly tied to the inversion principle than recently argued by e.g. Read (2010, p. 575) and Schroeder-Heister (2014a, p. 1207).
In particular, the adoption of the expansion pattern that we proposed makes it possible to simulate the permutations, by first expanding and then reducing the derivation on the left hand side of the permutation (on the connection between expansions and permutations see also Tranchini et al. under review). Thus, normalization is a consequence of inversion whenever introduction and elimination rule schemata are allowed to contain at most one occurrence of one connective. Conversely, when the rules of a calculus obey PSH-inversion, failure of normalization is essentially tied to the presence of more than one occurrence of a connective in the introduction and elimination rules (in particular to the presence of negative occurrences of the connective, see Dyckhoff 2016, § 2), this being the feature that enables the formulation of paradoxical connectives (see also Tranchini 2015, 2016).

Beyond inversion
A fact which has only recently been observed (Olkhovikov and Schroeder-Heister 2014; Schroeder-Heister 2014a, b) is that even a fully precise account of a universally applicable inversion principle would not constitute an exhaustive characterization of harmony.
To see why, it suffices to consider the rules of the connective ∧ so far discussed. The collection of elimination rules ∧E consisting of ∧E 1 and ∧E 2 is not the one obtained by PSH-inversion from the collection of introduction rules ∧I consisting only of ∧I. However, neither Prawitz nor Schroeder-Heister is willing to deny that ∧E is in harmony with ∧I.
Thus, the collection of elimination rules generated by inversion from a given collection of introduction rules is not, in general, the only one which is in harmony with it.
This situation squares with the plurality of inversion principles available in the literature. In recent work, Read (2010, 2015) has suggested an alternative inversion principle (i.e. an alternative way of generating a collection of elimination rules from a given collection of introduction rules), which we will refer to as R-inversion.
Definition 9 (R-inversion) A collection †I consisting of r introduction rules for † and a collection of elimination rules †E obey R-inversion if and only if †E consists of m 1 × · · · × m r rules (where m k , for 1 ≤ k ≤ r, is the number of premises of the kth introduction rule), each of which has the following form: the major premise is †(A 1 , . . . , A n ), and the minor premises are determined by a choice function f h which selects one of the premises of each of the r introduction rules of † (i.e. for each 1 ≤ k ≤ r, 1 ≤ f h (k) ≤ m k ).
Given a collection of introduction rules †I, we indicate the collection †E associated to it by R-inversion with R( †I).
Example 5 If ∧I is the collection of introduction rules consisting only of ∧I, R(∧I) consists of the two rules (A ∧ B; (A ⇒ C)) ⇒ C (∧E R1 ) and (A ∧ B; (B ⇒ C)) ⇒ C (∧E R2 ). Like Prawitz and Schroeder-Heister, Read does not deny the possibility that different collections of elimination rules could be in harmony with the same collection of introduction rules, and indeed he himself discusses some examples (typically that of conjunction).
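R-inversion, too, can be computed, since the elimination rules are indexed by choice functions. A sketch under the same (discharged, conclusion) encoding of premises as before; the representation is ours, not Read's.

```python
from itertools import product

def r_inversion(intro_rules):
    """One elimination rule per choice function selecting a premise from
    each introduction rule: m_1 x ... x m_r rules in all. Each returned
    list gives, per introduction rule, the premise discharged by the
    corresponding minor premise (every minor premise concludes C)."""
    return [list(choice) for choice in product(*intro_rules)]

# Conjunction (one intro rule, premises A and B): the two projections
# ^E_R1 and ^E_R2, i.e. two rules discharging A and B respectively.
assert r_inversion([[([], "A"), ([], "B")]]) == [[([], "A")], [([], "B")]]

# Disjunction (two intro rules, one premise each): a single rule whose
# minor premises discharge A and B, i.e. the usual vE.
assert r_inversion([[([], "A")], [([], "B")]]) == [[([], "A"), ([], "B")]]
```

Note how PSH-inversion and R-inversion agree on disjunction but differ on conjunction, exactly as in Examples 3 and 5.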
What all the mentioned authors explicitly observe is that the alternative collections of elimination rules are interderivable with each other. For instance, both ∧E 1 and ∧E 2 (resp. ∧E R1 and ∧E R2 ) are structurally derivable from ∧E PSH , and conversely the latter rule is structurally derivable from the former ones. Similarly, both ∧E R1 and ∧E R2 are structurally derivable from ∧E 1 and ∧E 2 , and vice versa.
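The interderivability of the two packages of ∧-eliminations has a direct proof-term rendering: each can be defined from the other. A sketch in our pair encoding (the function names are ours):

```python
def split(p, k):          # ^E_PSH: hand both components to k
    _, a, b = p
    return k(a, b)

def fst(p):               # ^E1 defined from split
    return split(p, lambda a, b: a)

def snd(p):               # ^E2 defined from split
    return split(p, lambda a, b: b)

def split_from_projections(p, k):   # ^E_PSH defined from ^E1 and ^E2
    return k(fst(p), snd(p))

p = ("pair", 1, 2)
assert (fst(p), snd(p)) == (1, 2)
assert split_from_projections(p, lambda a, b: a + b) == split(p, lambda a, b: a + b) == 3
```

The assertions only check that the derived rules yield the same results, i.e. interderivability; whether the derived and primitive rules denote the *same proofs* is precisely the further, intensional question the following sections take up.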
Although not stated explicitly, this seems to be the reason why the rules of ∧ presented in Sect. 1 are considered as much in harmony as those obtained by PSH-(or, for that matter, R-)inversion.
More generally, we may introduce the following notion: Definition 10 (Interderivability of collections of rules) Two collections of elimination rules †E and †E′ are interderivable, notation †E ⊣⊢ †E′, if and only if each rule in †E is structurally derivable from the rules in †E′ and vice versa.
Using this notion, the conception of harmony implicitly defended by all the authors considered can be made explicit as follows: Definition 11 (PSH-harmony via interderivability) Given two collections †I and †E of introduction and elimination rules for a connective †, we say that †I and †E are in PSH-harmony via interderivability if and only if †E ⊣⊢ PSH(†I). Definition 12 (R-harmony via interderivability) Given two collections †I and †E of introduction and elimination rules for a connective †, we say that †I and †E are in R-harmony via interderivability if and only if †E ⊣⊢ R(†I). In fact, for any collection of introduction rules †I, PSH(†I) ⊣⊢ R(†I). 20 Thus the same collections of rules qualify as harmonious according to the two definitions.
The invariance of harmony with respect to the choice of the inversion principle has been taken by Schroeder-Heister as a reason for defining harmony without making reference to any inversion principle at all. In fact, Schroeder-Heister (2014a, b) proposed two different accounts of harmony, on the basis of which he then demonstrated that rules obeying PSH-inversion satisfy the proposed conditions for harmony. The two notions of harmony are equivalent to each other, and moreover they are equivalent to those resulting from Definitions 11 and 12.
We will say that the rules satisfying these notions of harmony are in harmony by interderivability.
We fully agree with Schroeder-Heister on the need for a notion of harmony going beyond the specification of an inversion principle. However, it is doubtful whether rules which are in harmony by interderivability can, in general, be equipped with plausible reductions and expansions. In other words, it is doubtful whether the account of harmony obtained by coupling inversion with interderivability can still qualify as intensional.
In the next section we will present an example justifying this claim.

Harmony by interderivability is not intensional
In this section we will consider yet another inversion principle, which however can be applied only in the very restricted case of a collection of introduction rules consisting of just one rule, and which for this reason will be referred to as a toy inversion principle (henceforth T-inversion). In spite of its limited range of applicability, it will suffice for the goals of the present section.
Definition 13 (T-inversion) A collection †I consisting of just one introduction rule for † and a collection of elimination rules †E for † obey T-inversion if and only if †E consists of m rules (where m is the number of premises of the rule in †I), each of which has the following form: the consequence of the jth elimination rule (1 ≤ j ≤ m) is identical to the consequence of the jth premise of the introduction rule in †I, and the kth minor premise of the jth elimination rule (if any) is identical to the kth premise (1 ≤ k ≤ p j ) of the jth premise of the (only) introduction rule in †I.
Given an introduction rule †I, we indicate with T(†I) the collection †E associated by T-inversion to the collection of introduction rules †I consisting only of †I.
Two examples of collections of rules obeying T-inversion are those for ∧ and ⊃ discussed in Sect. 1.
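T-inversion is simple enough to be rendered as a one-line computation. A sketch under our (discharged, conclusion) encoding of premises; the two assertions reproduce the rules for ∧ and ⊃ of Sect. 1.

```python
def t_inversion(intro_rule):
    """For a single introduction rule (a list of premises, each a pair
    (discharged, conclusion)), return one elimination rule per premise:
    the jth elimination takes the jth premise's discharged formulas as
    minor premises and concludes that premise's conclusion."""
    return [{"minor": list(discharged), "conclusion": c}
            for (discharged, c) in intro_rule]

# Conjunction: premises A and B, nothing discharged -> the projections.
assert t_inversion([([], "A"), ([], "B")]) == [
    {"minor": [], "conclusion": "A"},
    {"minor": [], "conclusion": "B"}]

# Implication: one premise discharging A, concluding B -> modus ponens.
assert t_inversion([(["A"], "B")]) == [{"minor": ["A"], "conclusion": "B"}]
```

Unlike PSH- and R-inversion, this recipe has nothing sensible to say when there are two or more introduction rules, which is why it is only a toy.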
Connectives whose rules obey T-inversion satisfy the informal statement of harmony, as is shown by the possibility of formulating the β- and η-equations displayed in Table 1 for K-derivations in a calculus K consisting only of introduction and elimination rules and in which the collections of introduction and elimination rules for † obey T-inversion (in the equations we abbreviate †(A 1 , . . . , A n ) with †).
We will now present two collections of rules obeying T-inversion for two connectives • and ◦ such that, in the calculus having both collections of rules as primitive, A • B ⊣⊢ A ◦ B and yet A • B ≇ A ◦ B. We will then consider the collection of rules for a third connective ⋆ having the same collection of introduction rules as • and the same collection of elimination rules as ◦. We will show that the rules of ⋆ are in harmony by interderivability. However, although it is possible to define reductions and expansions for ⋆, the most obvious candidates for these equations trivialize the notion of isomorphism.
Let's first consider the following collections of rules •I and •E for •: Clearly, •E = T(•I), and the harmonious nature of the rules is displayed by the β- and η-equations of Table 2, which are obtained by suitably instantiating the general patterns of Table 1. Using them, it is easy to show that A • B is isomorphic to ((A ⊃ B) ∧ (B ⊃ A)) ∧ A in the calculus consisting of •I, •E and of the rules for ∧ and ⊃ obeying T-inversion of Sect. 1.
Consider now the collections of rules ◦I and ◦E for the connective ◦. These two collections of rules differ from those of • in having B instead of A as the third premise of the only introduction rule, and, correspondingly, in having B instead of A as the consequence of the third elimination rule: These two collections of rules also obey T-inversion, and thus β- and η-equations following the same pattern as those of • are available. Using them, it is easy to show that A ◦ B is isomorphic to ((A ⊃ B) ∧ (B ⊃ A)) ∧ B in the calculus consisting of ◦I, ◦E and of the rules for ∧ and ⊃ of Sect. 1.
It is moreover easy to see that in the calculus consisting of •I, •E, ◦I, ◦E we have A • B ⊣⊢ A ◦ B and A • B ≇ A ◦ B. To establish the latter fact, interpret proofs of A • B and of A ◦ B as triples whose first two members are functions from proofs of A to proofs of B and vice versa, and whose third members are proofs of A and of B respectively. Whenever A and B are interpreted by sets of proofs of different cardinalities, so are A • B and A ◦ B.
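The cardinality argument can be checked exhaustively on small finite interpretations. A sketch assuming the triple interpretation of the text (functions are encoded as tuples of values indexed by domain elements; the set sizes are our choice):

```python
from itertools import product

def functions(dom, cod):
    """All functions from dom to cod, as tuples of values."""
    return list(product(cod, repeat=len(dom)))

A, B = [0], [0, 1]          # interpret A by one proof, B by two

# Proofs of the first connective: triples (f: A->B, g: B->A, a in A).
first  = [(f, g, a) for f in functions(A, B) for g in functions(B, A) for a in A]
# Proofs of the second: the same, except the third member is a proof of B.
second = [(f, g, b) for f in functions(A, B) for g in functions(B, A) for b in B]

assert len(first) == 2 and len(second) == 4
assert len(first) != len(second)   # no bijection, hence no isomorphism
```

Since the first two components contribute the same factor to both counts, the totals differ exactly when |A| differs from |B|, which is all the non-isomorphism claim needs.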
To show the limits of harmony by interderivability we now consider a collection of rules for a third connective, call it ⋆, which is obtained by "crossing over" the collections of rules of • and ◦: the collection ⋆I consists of the introduction rule obtained by replacing • with ⋆ in •I; and the collection ⋆E consists of the elimination rules obtained by replacing ◦ with ⋆ in ◦E 1 , ◦E 2 and ◦E 3 .
Clearly, ⋆I and ⋆E do not obey T-inversion, due to the mismatch between the third premise A of the introduction rule and the consequence B of the third elimination rule. We list all of ⋆I, ⋆E, and T(⋆I) (with u and v fresh for D). Although ⋆E ≠ T(⋆I), the following holds:

Lemma 3 ⋆E ⊣⊢ T(⋆I)
Proof Since both collections of rules share the first two elimination rules, to show their interderivability it is enough to show that any instance of ⋆E 3 is structurally derivable from instances of ⋆E 1 , ⋆E 2 and ⋆E* 3 , and that any instance of ⋆E* 3 is structurally derivable from instances of ⋆E 1 , ⋆E 2 and ⋆E 3 : Thus the two collections of rules ⋆I and ⋆E do qualify as in harmony by interderivability (in spite of the fact that they do not obey T-inversion).
The question that we want to address now is the following: can we define appropriate β- and η-equations for K-derivations in the calculus K consisting of ⋆I and ⋆E?
Whereas the β-equations involving complexity peaks generated by ⋆I together with ⋆E 1 or ⋆E 2 follow the pattern of those of • and ◦, one may doubt the possibility of finding a reduction for the peak generated by ⋆I and ⋆E 3 . A moment's reflection however dispels the doubt, since one can come up with the following equation: its left-to-right direction provides a reduction showing that, in spite of the mismatch between the third premise of ⋆I and the consequence of ⋆E 3 , this elimination rule allows one to derive no more than what is needed in order to infer its major premise by the introduction rule.
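In proof-term form the point is that the mismatched elimination can still be realized: from a triple (f, g, a) one obtains a proof of B as f(a). A sketch under the triple interpretation (the function names e1, e2, e3 are our labels for the three eliminations):

```python
# Proofs of the crossed connective as triples (f: A->B, g: B->A, a: A).
# The third elimination concludes B although the third premise of the
# introduction is a proof of A; the reduction realizes it as f(a).

def intro(f, g, a):
    return (f, g, a)

def e1(t): return t[0]           # yields f : A -> B
def e2(t): return t[1]           # yields g : B -> A
def e3(t):                       # yields a proof of B
    f, g, a = t
    return f(a)                  # the beta-reduction, in term form

t = intro(lambda a: a + 10, lambda b: b - 10, 5)
assert e3(t) == 15               # e3(intro(f, g, a)) reduces to f(a)
```

So a reduction exists; the trouble, as the text goes on to show, lies not in its existence but in what it forces once combined with the expansion.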
Similarly, although the expansion pattern cannot simply consist of applications of the three elimination rules followed by an application of the introduction rule, the following η-equation shows that what one gets from A ⋆ B using the elimination rules is no less than what is needed to reintroduce A ⋆ B by means of its introduction rule: In spite of the fact that these equations show that the rules for ⋆ satisfy the informal statement of harmony of Definition 1, they are inadmissible from the viewpoint of the intensional approach to inferentialism that we have advocated.
To see why, consider the derivation, depicted in Table 3, obtained by expanding a given K-derivation D of A ⋆ B ending with an application of the introduction rule. In such a derivation all occurrences of A ⋆ B (apart from the conclusion) constitute complexity peaks. By reducing them we do not obtain the derivation D of which the derivation considered is an expansion, but instead the following: By symmetry and transitivity of the equivalence relation induced by the β- and η-equations we thus have the following equivalence (Table 3 depicts the expansion of an arbitrary derivation D ending with ⋆I). This means that all instances of these two derivation schemata (obtained by replacing D 1 , D 2 and D 3 with actual derivations) pairwise belong to the same equivalence classes induced by the β- and η-equations. This is problematic, since all derivations of the following form will also belong to the same equivalence classes, as these are obtained by appending an application of ⋆E 3 to the conclusions of the previous ones:
Derivations of this form reduce by ⋆-β 3 as follows: and in the limit case in which D 1 and D 3 simply consist of the assumption of some formula (in which case A = B = C for some C) these schemata boil down to the following: This means that in the presence of the equations for ⋆, for any formula C, any derivation from C to C is equated with the derivation consisting only of the assumption of C (i.e. the identity function on the set of proofs of C). But this means that in the presence of ⋆ any two interderivable formulas are also isomorphic, since the compositions of the proofs establishing that each is derivable from the other are equated with the identity functions.
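The collapse can be exhibited concretely. Under the set-theoretic interpretation of proofs, the ⋆-equations would force some pair f: A → B, g: B → A to compose to the identities, i.e. force an isomorphism; an exhaustive search over small finite interpretations (our choice of sets) shows no such pair exists when the cardinalities differ, so no sensible semantics of proofs validates the equations.

```python
from itertools import product

def functions(dom, cod):
    """All functions from dom to cod, as tuples of values."""
    return list(product(cod, repeat=len(dom)))

# A and B are "interderivable" (maps exist both ways) but of different size.
A, B = [0], [0, 1]

# Search for f: A->B and g: B->A composing to the two identities:
iso_pairs = [(f, g)
             for f in functions(A, B) for g in functions(B, A)
             if all(g[f[a]] == a for a in A)        # g after f = id_A
             and all(f[g[b]] == b for b in B)]      # f after g = id_B
assert iso_pairs == []   # interderivable, yet provably non-isomorphic
```

Since the equations for ⋆ nonetheless equate both composites with identity derivations, they identify proofs that no interpretation can identify, which is the precise sense in which they trivialize isomorphism.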
In other words, the addition of ⋆ to any calculus K, even one containing interderivable but non-isomorphic formulas, has the result of making formula isomorphism collapse onto interderivability.
This shows that, on an intensional account of harmony, in order for a collection of elimination rules to qualify as in harmony with a certain collection of introduction rules, one should require more than just its interderivability with the collection of elimination rules generated by inversion from the given collection of introduction rules.
According to Dummett (1981, 1991), Belnap's criterion of conservativity is appealing because it amounts to the requirement that the addition of an expression to a language L should not modify the meaning of the expressions of L.
From the intensional standpoint we have been advocating, however, the requirement that the meaning of the expressions of the language not be modified should not be understood as "conservativity over derivability", but rather as "conservativity over proofs". That is, the addition of a new expression to the language should not modify the preexisting relationships holding between proofs. The addition of the connective ⋆ to a given language has exactly this effect, as it forces the identification of any two derivations of a formula from itself. Thus its addition does modify the meaning of the expressions of the languages to which it is added. As a result of its addition, previously non-synonymous expressions may "become", unjustifiably, synonymous. The rules for ⋆ are therefore inadmissible in any language with a non-trivial notion of isomorphism.

Outlook
In this paper we have argued that when harmony is based on reductions and expansions, the inferentialist account of meaning can be understood as having an intensional character, in the sense that a notion of synonymy stricter than interderivability can be defined using the notion of isomorphism. Moreover, we have shown that such an account applies to complex sentences formed by means of connectives whose collections of rules satisfy the inversion principle. In particular, the novel account of expansions we provided solves the difficulty pointed out by Dummett concerning the "no less" aspect of harmony (what Dummett sometimes refers to as "stability").
However, the specification of an inversion principle does not provide an exhaustive account of harmony, as there are more collections of elimination rules which we are willing to acknowledge as being in harmony with a given collection of introduction rules than the one generated by inversion.
Contrary to what several authors more or less implicitly acknowledge, interderivability with the collection of elimination rules generated by inversion is too weak a condition for a collection of elimination rules to be in harmony with a collection of introduction rules, at least if the intensional standpoint we have advocated is not to collapse onto the extensional standpoint arising from the account of harmony put forward by Belnap.
Though ⋆I and ⋆E are in harmony by interderivability (and in fact in Belnap's sense as well), the notion of isomorphism in any language containing ⋆ is trivial, i.e. it collapses onto interderivability.
Exactly how harmony is to be defined, and in particular how to characterize the relationship between the different collections of elimination rules which we are willing to acknowledge as being in harmony with the same collection of introduction rules, is left as an open question. We hope to have made clear that in the quest for a proper account of harmony, the notion of formula isomorphism will have to play a more prominent role than the one it has been accorded so far in proof-theoretic investigations of meaning.
Proof of Lemma 2 In order to prove the lemma, given a derivation D of A from Δ ∪ u[R], and given a derivation D′ of R from Γ, we define a derivation of A depending on a sub-multi-set of Γ, Δ, to be referred to as the composition of D and D′. The composition of D and D′ will be notated as cmp(D, D′) or, graphically, as follows. We call D* the result of composing D with D 1 , then of composing the obtained derivation with D 2 , and so on until one composes the result of the previous series of compositions with D n . The derivation D* is (by induction hypothesis) a derivation of B depending on a sub-multi-set of Γ, Δ, and thus not depending on any of the R i , for 1 ≤ i ≤ n: By composing D* with this derivation one gets (again by induction hypothesis) a derivation D** of A depending on the union of that sub-multi-set of Γ, Δ and of a multi-set containing r − 1 copies of R. We define cmp(D, D′) to be cmp(D**, D′).
More briefly, but in a less readable fashion:
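The core of cmp is a substitution operation on derivation trees. A toy rendering under our own encoding (derivations as nested tuples whose leaves are assumptions; the names are ours, not the paper's):

```python
# Composing cmp(D, u, D'): replace each assumption of u in D by the
# derivation D', so the result no longer depends on u.

def assumption(u):
    return ("assume", u)

def rule(name, *subderivations):
    return ("rule", name) + subderivations

def cmp(d, u, d_prime):
    """Replace every leaf assuming u in d by the derivation d_prime."""
    if d[0] == "assume":
        return d_prime if d[1] == u else d
    return d[:2] + tuple(cmp(s, u, d_prime) for s in d[2:])

d  = rule("andI", assumption("A"), assumption("B"))
d2 = rule("andE1", assumption("A & C"))
composed = cmp(d, "A", d2)
assert composed == rule("andI", d2, assumption("B"))
```

The inductive clauses of the official proof correspond to the two branches of cmp: the base case handles assumptions, the recursive case pushes the substitution through an application of a rule.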