Advances in Proof-Theoretic Semantics, pp. 79–93

# Some Remarks on Proof-Theoretic Semantics

## Abstract

This is a tripartite work. The first part is a brief discussion of what it is to be a logical constant, rejecting a view that allows a particular self-referential “constant” \(\bullet \) to be such a thing in favour of a view that leads to strong normalisation results. The second part is a commentary on the flattened version of Modus Ponens, and its relationship with rules of type theory. The third part is a commentary on work (joint with Nissim Francez) on “general elimination rules” and harmony, with a retraction of one of the main ideas of that work, i.e. the use of “flattened” general elimination rules for situations with discharge of assumptions. We begin with some general background on general elimination rules.

### Keywords

General elimination rules · Harmony · Strong normalisation

## 1 Background on General Elimination Rules

Standard natural deduction rules for **Int** (intuitionistic predicate logic) in the style of Gentzen [9] and Prawitz [24] are presumed to be familiar. The theory of cut-elimination for sequent calculus rules is very clear: whether a derivation in a sequent calculus is cut-free or not is easily defined, according to the presence or absence of instances of the *Cut* rule. For natural deduction, normality is a less clear concept: there are several inequivalent definitions (including variations such as “full normality”) in the literature. For implicational logic it is easy; but rules such as the elimination rule for disjunction cause minor problems with the notion of “maximal formula occurrence” (should one include or not include the permutative conversions?), and more problems when minor premisses have vacuous discharge of assumptions.

One proposed solution, albeit partial, is the uniform use of *general elimination rules*, i.e. *GE-rules*. These can be motivated in terms of *Prawitz’s inversion principle*^{1}: “the conclusion obtained by an elimination does not state anything more than what must have already been obtained if the major premiss of the elimination was inferred by an introduction” [25, p. 246]. Normality is now the simple idea [39] that the major premiss of each elimination step should be an assumption; see also [13, 36].

The familiar rules \(\vee E\) and \(\exists E\) already have this general form:

\[
\frac{A \vee B \qquad \begin{matrix}[A]\\ \vdots \\ C\end{matrix} \qquad \begin{matrix}[B]\\ \vdots \\ C\end{matrix}}{C}\;\vee E
\qquad\qquad
\frac{\exists x A(x) \qquad \begin{matrix}[A(y)]\\ \vdots \\ C\end{matrix}}{C}\;\exists E
\]

(*y* fresh in \(\exists E\)), and the same pattern was proposed (as a GE-rule) in the early 1980s for conjunction

\[
\frac{A \wedge B \qquad \begin{matrix}[A,\; B]\\ \vdots \\ C\end{matrix}}{C}\;\wedge GE
\]

by various authors, notably Prawitz [26, 27], Martin-Löf [15] and Schroeder-Heister [30], inspired in part by type theory (where conjunction is a special case of the \(\Sigma \)-type constructor, with \(A \wedge B =_{\textit{def}} \Sigma (A,B)\) whenever *B*(*x*) is independent of *x*) and (perhaps) in part by linear logic [10] (where conjunction appears in two flavours: multiplicative \(\otimes \) and additive &).
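The degenerate-\(\Sigma \) reading of conjunction just mentioned can be sketched in Lean syntax (our own illustration; the names `Sigma'` and `And'` are hypothetical):

```lean
-- A Sigma-type: a pair whose second component's type may depend on the first.
structure Sigma' (A : Type) (B : A → Type) : Type where
  fst : A
  snd : B fst

-- A ∧ B =def Σ(A, B) when B does not in fact depend on x : A:
def And' (A B : Type) : Type := Sigma' A (fun _ => B)
```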

The same approach yields flattened GE-rules for implication^{2} and universal quantification:

\[
\frac{A \supset B \qquad A \qquad \begin{matrix}[B]\\ \vdots \\ C\end{matrix}}{C}\;\supset \! GE
\qquad\qquad
\frac{\forall x A(x) \qquad \begin{matrix}[A(t)]\\ \vdots \\ C\end{matrix}}{C}\;\forall GE
\]

Rules of this kind are conveniently called “flattened” [29] (in comparison with Schroeder-Heister’s “higher-level” rules, for which see [30, 32]). López-Escobar [13] distinguishes between the premiss *A* of \(\supset \!\! GE\) as a “minor” premiss and that of *C* (assuming *B*) as a “transfer” premiss.^{3}

Uniform use of such GE-rules yields a calculus equivalent to the standard natural deduction calculus for **Int**; such calculi, and their normalisation results, have been studied by von Plato [39], by López-Escobar [13] and by Tennant [36]. With the definition (given above) that a deduction is *normal* iff the major premiss of every elimination step is an assumption, the main results are:

- 1.
Weak Normalisation (WN): every deduction can be replaced by a normal deduction of the same conclusion from the same assumptions [13, 20, 36, 39].

- 2.
Strong Normalisation (SN), for the implicational fragment: an obvious set of rules for reducing non-normal deductions is strongly normalising, i.e. every reduction sequence terminates [12, 13, 36, 37].

- 3.
SN, for the full language: a straightforward extension of the proof of [37] for implication^{4}; also, the proofs for implication “directly carry over” [12] to a system with conjunctions and disjunctions. An argument (using the ordinary elimination rule for implication) is given in [35] for the rules for implication and existential quantification, with the virtue of illustrating in detail how to handle GE rules where the Tait–Martin-Löf method of induction on types familiar from [11] is not available. See also [13].

- 4.
Some straightforward arguments for normalisation (by induction on the structure of the deduction) [40].

- 5.
- 6.
Some interpolation properties [17].

- 7.
Extension of the normalisation results to classical logic [41].

Against these advantages may be set some drawbacks:

- 1.
Poor generalisation of the GE rule for implication to the type-theoretic constant \(\varPi \), of which \(\supset \) can be treated as a special case [15]: details below in Sect. 3.

- 2.
Too many deductions, as in sequent calculus: sequent calculus has (for each derivable sequent) rather too many derivations, in comparison to natural deduction, since derivations often have many permutations, each of which is, when translated to ordinary natural deduction, replaced by an identity of deductions; focused (aka “permutation-free”) sequent calculi [5, 6] avoid this. The GE-rules have the same feature, which interferes with rather than assists in root-first proof search.

- 3.
(For some complex constants, if one adopts the methodology beyond the basic intuitionistic ones) a “disharmonious mess” [4]: details below in Sect. 4.4.

- 4.
No SN results (yet, in general) for GE-rules for arbitrarily complex constants.

## 2 Is Bullet a Logical Constant?

Read [28] (with the paradoxical propositions of Ekman^{5} [7] in mind) proposed as a logical constant a nullary operator \(\bullet \) (aka *R*, for “Russell”) with the single (but impure) introduction rule

\[
\frac{\begin{matrix}[\bullet]\\ \vdots \\ \bot\end{matrix}}{\bullet}\;\bullet I
\]

The GE-rule justified by this (along the same lines as for implication) is then

\[
\frac{\bullet \qquad \bullet \qquad \begin{matrix}[\bot]\\ \vdots \\ C\end{matrix}}{C}\;\bullet E
\]

which, given the usual \(\bot E\) rule and the unnecessary duplication of premisses, can be simplified to

\[
\frac{\bullet}{\bot}\;\bullet E
\]

So, by this \(\bullet E\) rule, the premiss of the \(\bullet I\) rule is deducible, hence \(\bullet \) is deducible, hence \(\bot \) is deducible.

There is however a weakness (other than just that it leads to inconsistency) in the alleged justification of \(\bullet \) as a logical constant: it is a circularity. We follow Martin-Löf [15, 16] and Dummett [2] in accepting that we understand a proposition when we understand what it means to have a *canonical* proof of it, i.e. what forms a canonical proof can take. In the case of \(\bullet \), there is a circularity: the introduction rule gives us a canonical proof only once we have a proof of \(\bot \) from the assumption of \(\bullet \), i.e. have a method for transforming *arbitrary* proofs of \(\bullet \) into proofs of \(\bot \). The reference here to “arbitrary proofs of \(\bullet \)” is the circularity.

There are similar ideas about type formers, and it is instructive to consider another case, an apparent circularity: the formation rule (in [15]) for the type *N* of natural numbers. That is a type that we understand when we know what its canonical elements are; these are 0 and, when we have an element *n* of *N*, the term *s*(*n*). The reference back to “an element *n* of *N*” looks like a circularity of the same kind; but it is rather different—we don’t need to grasp *all* elements of *N* to construct a canonical element by means of the rule, just *one* of them, namely *n*.
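The definition of *N* can be written out as an inductive type; a sketch in Lean syntax (the formulation is ours):

```lean
-- The type N of natural numbers: a strictly positive inductive definition.
-- To form the canonical element `succ n` we need only the single element
-- `n`, not a prior grasp of all of N.
inductive N : Type where
  | zero : N
  | succ : N → N

-- e.g. the canonical element corresponding to 2:
def two : N := N.succ (N.succ N.zero)
```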

A formal treatment of this issue has long been available in the type theory literature, e.g. Mendler [18], Luo [14], *Coq* [1]. We will try to give a simplified version of the ideas. With the convention that propositions are interpreted as types (of their proofs), we take type theory as a generalisation of logic, with ideas and restrictions in the former being applicable to the latter. The simplest recursive case (*N*) has just been considered and the recursion explained as harmless (despite Dummett’s reservations expressed as his “complexity condition” [2]). What about more general definitions?

The definition of the type *N* can be expressed as saying that *N* is the least fixed point of the operator \(\varPhi _N =_{\textit{def}} \lambda X .(1 + X)\), i.e. \(N =_{\textit{def}} \mu X .(1+X)\). Similarly, the type of lists of natural numbers is \(\mu L .(1 + N \times L)\), and the type of binary trees with leaves in *A* and node labels in *B* is \(\mu T.A + (T \times B \times T)\). A unary operator definition \(\varPhi =_{\textit{def}} \lambda X .\ldots \) is said to be *positive* iff the only occurrences of the type variable *X* in the body \(\ldots \) are positive, where an occurrence of *X* in the expression \(A \rightarrow B \) is *positive* (resp. *negative*) iff it is a positive (resp. negative) occurrence in *B* or a negative (resp. positive) occurrence in *A*; a variable occurs *positively* in itself, and occurs *positively* (resp. negatively) in \(A+B\) and in \(A \times B\) just where it occurs positively (resp. negatively) in *A* or in *B*. A definition of a type as the least fixed point of an operator is then *positive* iff the operator definition is positive.

Read’s \(\bullet \), then, is defined as \(\mu X .(X \rightarrow \bot )\). This is not a positive definition; the negativity of the occurrence of *X* in the body \(X \rightarrow \bot \) is a symptom of the circular idea that \(\bullet \) can be grasped once we already have a full grasp of what the proofs of \(\bullet \) might be.

In practice, a stronger requirement is imposed, that the definition be strictly positive, i.e. the only occurrences of the type variable *X* in the body \(\ldots \) are strictly positive, where an occurrence of *X* in the expression \(A \rightarrow B \) is *strictly positive* iff it is a strictly positive occurrence in *B*; a variable occurs *strictly positively* in itself, and occurs *strictly positively* in \(A+B\) and in \(A \times B\) just where it occurs strictly positively in *A* or in *B*. A definition of a type as the least fixed point of an operator is then *strictly positive* iff the operator definition is strictly positive.
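The positivity and strict-positivity checks just described go by recursion on the structure of the operator body. The following Python sketch is entirely our own illustration (the tuple encoding of type expressions is an assumption, not from the text):

```python
# Type-operator bodies over a single variable X are encoded as tuples:
# ("var",) for X itself, ("const",) for a type not mentioning X,
# ("arrow", a, b), ("sum", a, b), ("prod", a, b).

def positive(t, sign=True):
    """True iff every occurrence of X in t is positive, given that t
    itself sits in a position of polarity `sign`."""
    tag = t[0]
    if tag == "var":
        return sign                      # an occurrence of X: must be positive
    if tag == "const":
        return True                      # no occurrence of X at all
    if tag == "arrow":
        # occurrences in the domain of an arrow flip polarity
        return positive(t[1], not sign) and positive(t[2], sign)
    # sum and prod: polarity is unchanged in both components
    return positive(t[1], sign) and positive(t[2], sign)

def no_occurrence(t):
    """True iff X does not occur in t at all."""
    return t[0] == "const" or (t[0] != "var" and
                               all(no_occurrence(s) for s in t[1:]))

def strictly_positive(t):
    """True iff every occurrence of X in t is strictly positive,
    i.e. X never occurs in the domain of an arrow."""
    tag = t[0]
    if tag in ("var", "const"):
        return True
    if tag == "arrow":
        return no_occurrence(t[1]) and strictly_positive(t[2])
    return strictly_positive(t[1]) and strictly_positive(t[2])

# N = mu X. 1 + X : positive, and indeed strictly positive
nat_body = ("sum", ("const",), ("var",))
# bullet = mu X. X -> bot : not positive (X occurs negatively)
bullet_body = ("arrow", ("var",), ("const",))
```

For example, `positive(nat_body)` holds while `positive(bullet_body)` fails, reflecting the negative occurrence of *X* in \(X \rightarrow \bot \).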

With such definitions, it can be shown that strong normalisation (of a suitable set of reductions) holds [18, Chap. 3]; similar accounts appear in [1, 14].

## 3 The GE-rule for Implication and the Type-Theoretic Dependent Product Type

The GE-rule for \(\Sigma \), whose^{6} semantics is \(split((a,b), c) \rightarrow c(a,b)\), is a generalisation of the rule

\[
\frac{p : A \wedge B \qquad \begin{matrix}[x : A,\; y : B]\\ \vdots \\ c(x,y) : C\end{matrix}}{split(p,c) : C}\;\wedge GE
\]

Now, ordinary (but with witnesses) *Modus Ponens*

\[
\frac{f : A \supset B \qquad a : A}{ap(f,a) : B}
\]

has, in the dependently-typed context, the generalisation (in which \(ap2(f,a)\) is often just written as \( f\ a\) or \((f\ a)\)):

\[
\frac{f : \varPi (A,B) \qquad a : A}{ap2(f,a) : B(a)}
\]

(with \(\varPi (A,B)\) written as \(A\! \supset \! B\) whenever *B*(*x*) is independent of *x*); but the “flattened” GE rule

\[
\frac{f : A \supset B \qquad a : A \qquad \begin{matrix}[y : B]\\ \vdots \\ c(y) : C\end{matrix}}{ap3(f,a,c) : C}
\]

with semantics \(ap3(\lambda (g), a,c) \rightarrow c(g(a))\) doesn’t appear to generalise:

\[
\frac{f : \varPi (A,B) \qquad a : A \qquad \begin{matrix}[y : B(a)]\\ \vdots \\ c(y) : C(\,?\,)\end{matrix}}{ap3(f,a,c) : C(f)}
\]

in which, note the question-mark—what should go there? In the context *y* : *B*(*a*), the only ingredient is *y*, which won’t do—it has the wrong type. Addition of an assumption such as *x* : *A* (and making *c* depend on it, as in *c*(*x*, *y*)) doesn’t help.

One solution is the system of higher-level rules of Schroeder-Heister [30]. Our own preference, to be advocated after a closer look at flattened GE-rules, is for implication (and universal quantification) to be taken as primitive, with *Modus Ponens* and the \(\varPi E\) rule taken as their elimination rules, with justifications as in [15].
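That *Modus Ponens* generalises smoothly to \(\varPi E\) can be seen in any dependently-typed language; a Lean sketch (ours, with `ap2` named after the rule in the text):

```lean
-- Dependent application: the ΠE rule.  Given f : Π(A,B) and a : A,
-- the result has the type B a, instantiated at the argument.
def ap2 {A : Type} {B : A → Type} (f : (x : A) → B x) (a : A) : B a := f a

-- The non-dependent special case is ordinary Modus Ponens:
def ap {A B : Type} (f : A → B) (a : A) : B := f a
```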

## 4 GE-Rules in General

The wide-spread idea that the “grounds for asserting a proposition” collectively form some kind of structure which can be used to construct the assumptions in the minor premiss(es)^{7} of a GE-rule is attractive, as illustrated by the idea that, where two formulae *A*, *B* are used as the grounds for asserting \(A \wedge B\), one may make the pair *A*, *B* the assumptions of the minor premiss of \(\wedge GE\). An example of this is López-Escobar’s [13], which gives I-rules and then GE-rules for implication^{8} and disjunction, with the observation [13, p. 417] that:

> Had the corresponding I-rule had three “options” with say 2, 3 and 5 premises respectively, then there would have been \(2 \times 3 \times 5\) E-rules corresponding to that logical atom.^{9} Also had there been an indirect^{10} premise, say \(\nabla \mathfrak {D} / \mathfrak {E}\), in one of the options then it would contribute a minor premise with conclusion \(\mathfrak {E}\) and a transfer premise with discharged sentence \(\mathfrak {D}\) to the appropriate^{11} E-rule.

To enumerate the possibilities illustrated by such examples:

- 1.
a logical constant, such as \(\bot \), \(\wedge \), \(\vee \), \(\equiv \) or \(\oplus \) (exclusive or), can be introduced by zero or more rules;

- 2.
each of these rules can have zero or more premisses, e.g. \(\top I\) has zero, \(\supset \! I\) and each \(\vee I_i\) have one, \(\wedge I\) has two;

- 3.
each such premiss may discharge zero or more assumptions (as in \(\supset \! I\));

- 4.
each such premiss may abstract over one or more variables, as in \(\forall I\);

- 5.
and a premiss may take a term as a parameter (as in \(\exists I\)).

It is not suggested that this list is exhaustive: conventions such as those of substructural logic about avoiding multiple or vacuous discharge will extend it, as would recursion; but it is long enough to illustrate the explosion. The paper [8] attempted^{12} to deal with all these possibilities and carry out a programme of mechanically generating GE-rules from a set of I-rules with results about harmony.
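López-Escobar's combinatorial observation in the quotation above can be made concrete: the number of flattened E-rules is the product of the premiss counts of the I-rule “options”. A small Python illustration (ours):

```python
from math import prod

def num_flattened_e_rules(option_premiss_counts):
    """Number of flattened E-rules for a constant whose I-rule 'options'
    have the given numbers of premisses (Lopez-Escobar's observation)."""
    return prod(option_premiss_counts)

# three options with 2, 3 and 5 premisses, as in the quotation: 30 E-rules
print(num_flattened_e_rules([2, 3, 5]))
# two options of one premiss each: just one E-rule (since 1 x 1 = 1)
print(num_flattened_e_rules([1, 1]))
```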

### 4.1 Several I-Rules

Where a constant has several alternative I-^{13}rules, one can formulate an appropriate GE-rule as having several minor^{14} premisses, one for each of the I-rules, giving a case analysis. This is very familiar from the case of \(\vee \) and the usual \(\vee E\) rule:

\[
\frac{A \vee B \qquad \begin{matrix}[A]\\ \vdots \\ C\end{matrix} \qquad \begin{matrix}[B]\\ \vdots \\ C\end{matrix}}{C}\;\vee E
\]

so an appropriate generalisation for \(n \ge 0\) alternative I-rules is to ensure that “the GE-rule” has *n* minor premisses. This works well for \(\bot \), with no I-rules: the \(\bot E\)-rule, as in [9, 24], has no minor premisses.^{15}

### 4.2 I-Rule Has Several Premisses

### 4.3 Premiss of I-Rule Discharges Some Assumptions

Consider \(\equiv \), with a single I-rule whose two premisses are *B* (discharging the assumption *A*) and *A* (discharging the assumption *B*). A single flattened GE-rule would need a transfer premiss of *C* with the assumption *B*, *A* being discharged and some device to ensure that there are other premisses with *A* and *B* as conclusions. There seems to be no way of doing this coherently, i.e. with *A* somehow tied to the discharge of *B* and vice-versa. The alternative is to have *two* GE-rules, along the lines discussed above for \( \wedge \), and these are clearly

\[
\frac{A \equiv B \qquad A \qquad \begin{matrix}[B]\\ \vdots \\ C\end{matrix}}{C}\;\equiv \! E_1
\qquad\qquad
\frac{A \equiv B \qquad B \qquad \begin{matrix}[A]\\ \vdots \\ C\end{matrix}}{C}\;\equiv \! E_2
\]

by means of which it is clear that, from the assumption of \(A \equiv B\), one can construct a proof of \(A \equiv B\) using the introduction rule as the last step, implying the “local completeness” of this set of rules in a sense explored by Pfenning and Davies [22]. We are thus committed in general to the use of the second rather than the first possibility of GE-rules—the use of two such rules rather than one—when there are two premisses in an I-rule.
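The two GE-rules for \(\equiv \) and the local-completeness expansion can be checked in a proof assistant; a sketch in Lean syntax (the names `Equiv'`, `equivGE₁`, `equivGE₂` are ours):

```lean
-- The I-rule for ≡, packaged as a single constructor with two premisses:
inductive Equiv' (A B : Prop) : Prop where
  | intro : (A → B) → (B → A) → Equiv' A B

-- First GE-rule: from A ≡ B and A, conclude C from the assumption of B.
theorem equivGE₁ {A B C : Prop} (h : Equiv' A B) (a : A) (k : B → C) : C :=
  match h with
  | .intro f _ => k (f a)

-- Second GE-rule: from A ≡ B and B, conclude C from the assumption of A.
theorem equivGE₂ {A B C : Prop} (h : Equiv' A B) (b : B) (k : A → C) : C :=
  match h with
  | .intro _ g => k (g b)

-- Local completeness: from A ≡ B, rebuild A ≡ B with the I-rule last.
theorem expand {A B : Prop} (h : Equiv' A B) : Equiv' A B :=
  .intro (fun a => equivGE₁ h a id) (fun b => equivGE₂ h b id)
```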

### 4.4 GE Harmony: A Counter-Example

Francez and the present author [8]^{16} developed these ideas (looking also at the analogues of universal and existential quantification) by defining the notion of “GE-harmony” (E-rules are GE-rules obtained according to a formal procedure, of which parts are as described above) and showing that it implied “local intrinsic harmony” (local soundness, i.e. reductions, and local completeness, as illustrated above for \(\equiv \)). The classification in [8] corresponds roughly but not exactly to the different possibilities enumerated above (\(1 \ldots 5\)): “non-combining” (zero or one premiss[es]) or “combining” (more than one premiss) corresponds to possibility 2; “hypothetical” (a premiss with assumption discharge) or “categorical” (no such discharge) corresponds to possibility 3; “parametrized” (a premiss depends on a free variable) corresponds roughly to a mixture of 4 and 5; “conditional” (e.g. there is a freshness condition) corresponds roughly to 4.

Consider, though, a constant \(\odot \) with two I-rules: one inferring \(A \odot B\) from a premiss *B* discharging the assumption *A*, the other inferring \(A \odot B\) from a premiss *A* discharging the assumption *B*. If there were a single GE-rule,^{17} it would be something like

\[
\frac{A \odot B \qquad A \qquad \begin{matrix}[B]\\ \vdots \\ C\end{matrix} \qquad B \qquad \begin{matrix}[A]\\ \vdots \\ C\end{matrix}}{C}
\]

which is weird; with only the second and last of these premisses, already *C* can be deduced. The meaning of \(A \odot B\) is thus surely not being captured, whether we go for two GE-rules or just one.

A similar example was given in 1968 by von Kutschera [38, p. 15], with two I-rules for an operator *F* based on the informal definition \(F(A,B,C) \equiv (A \supset B) \vee (C \supset B)\) but the flattened E-rule failing to capture the definition adequately.

### 4.5 Another [Counter-]Example

Olkhovikov and Schroeder-Heister [21] consider a further example, a ternary constant \(\star (A,B,C)\); the single flattened GE-rule^{18} thereby justified is clearly wrong, there being nothing to distinguish it from \(\star (A,C,B)\). Their main point is to show by a semantic argument that there is no non-obvious GE rule for \(\star \), thus defending the “idea of higher-level rules” [30].

### 4.6 In Other Words

In other words, where the premisses of I-rules discharge assumptions, the flattening methodology leads

- 1.
to a number \(({>} 1)\) of GE rules, none of which on its own suffices, and

- 2.
to a “disharmonious mess”, i.e. a failure to capture the correct meaning.

Already there are enough problems, before we start considering the cases where the premiss abstracts over several variables, instantiates a variable as a term or recurses on the constant being defined.

Our preferred solution is that adopted in type theory as implemented in *Coq* [1]: to reject the idea that the rule for handling implication (and other situations where assumptions are discharged) be treated as illustrated above and instead to take implication (and its generalisation, universal quantification), together with an inductive definition mechanism, as primitive, with traditional “special” elimination rules (e.g. *Modus Ponens*) but to allow GE rules elsewhere (e.g. for \(\wedge \) and its generalisations \(\Sigma \) and \(\exists \)). This deals with \(\equiv \); likewise, it deals with \(\odot \) as if it were \((A \supset B) \vee (B \supset A)\). More precisely, we note that with the introduction rule for \(\equiv \) given in *Coq* by an inductive definition, we obtain the corresponding elimination rule as a theorem; similarly, for \(\odot \) we have an inductive definition (one constructor for each I-rule) and can obtain the corresponding elimination rule as a theorem. Indeed, *Coq* will calculate them (and several variants) from the definitions automatically. Further details of this approach can be found in [23]. For example, existential quantification can be defined by an inductive definition, with the corresponding elimination rule obtained as a theorem. Short shrift is given to \(\bullet \): its attempted inductive definition is rejected, the occurrence of \(\bullet \) in the premiss of its own introduction rule being non-strictly-positive.

This pushes the problem (of constructing and justifying elimination rules given a set of introduction rules, and establishing properties like harmony, local completeness and stability) elsewhere: into the same problem for a mechanism of inductive definitions and for the rules regarded as primitive: introduction and (non-general) elimination rules for implication and universal quantification. Apart from the issue of stability, we regard the latter as unproblematic, and the former as relatively straightforward (once we can base the syntax on implication and universal quantification).

To a large extent this approach may be regarded as just expressing first-order connectives using second-order logic, and not very different from Schroeder-Heister’s higher-level rules. The important point is that there are difficulties (we think unsurmountable) with trying to do it all without such higher-order mechanisms.

## 5 Conclusion

The main conclusion is this: although the idea that the “grounds for asserting a proposition” are easily collected together as a unit is attractive, the different ways in which it can be done (disjunctive, conjunctive, with assumption discharge, with variable abstraction or parameterisation, ..., recursion) generate (if the GE rules pattern is followed) many problems for the programme of mechanically generating one (or more) elimination rules for a logical constant, other than in simple cases. There are difficulties with the mechanical approach in [8]; there are similar difficulties in [13]. Without success of such a programme, it is hard to see what “GE harmony” can amount to, except as carried out in (e.g.) *Coq* [1] where strictly positive inductive type definitions lead automatically to rules for reasoning by induction and case analysis over objects of the types thus defined, and with strong normalisation results. A similar conclusion is to be found in [33].

## Footnotes

- 1.
- 2.
Reference [3] has an early occurrence of this.

- 3.
- 4.
Personal communication from Jan von Plato, May 2009.

- 5.
See [34] for a recent discussion.

- 6.
The notation *c* is used to abbreviate \(\lambda xy.c(x,y)\). Similar abbreviations are used below. *c*(*a*, *b*) is then just *c* applied to *a* and then to *b*.

- 7.
In the sense of “all but the major premiss”.

- 8.
He also gives primitive rules for negation; in our view this is best treated as a defined notion, since even its I-rule is impure, i.e. mentions the constant \(\bot \).

- 9.
In our terminology, “logical constant”.

- 10.
\(\nabla \mathfrak {D} / \mathfrak {E}\) is the notation of [13] for “\(\mathfrak {E}\) [assuming \(\mathfrak {D}\)]”.

- 11.
Surely, in this, \(\mathfrak {D}\) and \(\mathfrak {E}\) are the wrong way round.

- 12.
In ignorance, alas, of [13].

- 13.
López-Escobar [13] calls these “options”.

- 14.
“Transfer” premises in the terminology of [13].

- 15.
López-Escobar [13] does it differently.

- 16.
Written in 2007–9, several years before publication.

- 17.
As the formula in [13] implies, since \(1 \times 1 = 1\).

- 18.
Again, [13] implies there is just one GE rule.

### References

- 1. Bertot, Y., Casteran, P.: Interactive Theorem Proving and Program Development. Springer, Heidelberg (2004)
- 2. Dummett, M.A.E.: The Logical Basis of Metaphysics. Duckworth, London (1991)
- 3. Dyckhoff, R.: Implementing a simple proof assistant. In: Derrick, J., Lewis, H.A. (eds.) Workshop on Programming for Logic Teaching, Proceedings, vol. 23.88, pp. 49–59. Centre for Theoretical Computer Science, University of Leeds (1987). http://rd.host.cs.st-andrews.ac.uk/publications/LeedsPaper.pdf
- 4. Dyckhoff, R.: Generalised elimination rules and harmony. In: Arché Seminar, St Andrews, 26 May 2009. http://rd.host.cs.st-andrews.ac.uk/talks/2009/GE.pdf
- 5. Dyckhoff, R., Pinto, L.: Cut-elimination and a permutation-free sequent calculus for intuitionistic logic. Studia Logica **60**, 107–118 (1998)
- 6. Dyckhoff, R., Pinto, L.: Proof search in constructive logics. In: Cooper, S.B., Truss, J.K. (eds.) Proceedings of Logic Colloquium 97, i.e. Sets and Proofs, pp. 53–65. CUP (1999)
- 7. Ekman, J.: Propositions in propositional logic provable only by indirect proofs. Math. Log. Q. **44**, 69–91 (1998)
- 8. Francez, N., Dyckhoff, R.: A note on harmony. J. Philos. Log. **41**, 613–628 (2012)
- 9. Gentzen, G.: Untersuchungen über das logische Schließen. Math. Zeitschrift **39**, 176–210 (1934)
- 10. Girard, J.Y.: Linear logic. Theor. Comput. Sci. **50**, 1–102 (1987)
- 11. Girard, J.Y., Lafont, Y., Taylor, P.: Proofs and Types. Cambridge University Press, Cambridge (1989)
- 12. Joachimski, F., Matthes, R.: Short proofs of normalization for the simply-typed lambda-calculus, permutative conversions and Gödel’s T. Arch. Math. Log. **42**, 59–87 (2003)
- 13. López-Escobar, E.G.K.: Standardizing the N systems of Gentzen. Models, Algebras, and Proofs, pp. 411–434. Dekker, New York (1999)
- 14. Luo, Z.H.: Computation and Reasoning. Oxford University Press, New York (1994)
- 15. Martin-Löf, P.: Intuitionistic Type Theory. Bibliopolis, Naples (1986)
- 16. Martin-Löf, P.: On the meanings of the logical constants and the justifications of the logical laws (the Siena Lectures). Nord. J. Philos. Log. **1**, 11–60 (1996)
- 17. Matthes, R.: Interpolation for natural deduction with generalized eliminations. In: Kahle, R., Schroeder-Heister, P., Stärk, R. (eds.) Proof Theory in Computer Science. LNCS, vol. 2183, pp. 153–169. Springer, New York (2001)
- 18. Mendler, P.F. (better known as Nax P. Mendler): Inductive definition in type theory. Ph.D. thesis, Cornell (1987). http://www.nuprl.org/documents/Mendler/InductiveDefinition.pdf
- 19. Moriconi, E., Tesconi, L.: On inversion principles. Hist. Philos. Log. **29**, 103–113 (2008)
- 20. Negri, S., von Plato, J.: Structural Proof Theory. CUP, Cambridge (2000)
- 21. Olkhovikov, G.K., Schroeder-Heister, P.: On flattening elimination rules. Rev. Symb. Log. (2014). http://dx.doi.org/10.1017/S1755020313000385
- 22. Pfenning, F., Davies, R.: A judgmental reconstruction of modal logic. Math. Struct. Comput. Sci. **11**, 511–540 (2001)
- 23. Pierce, B., et al.: Software Foundations. http://www.cis.upenn.edu/~bcpierce/sf/Logic.html, 28 July 2013. Accessed 27 Jan 2014
- 24. Prawitz, D.: Natural Deduction. Almqvist and Wiksell, Stockholm (1965)
- 25. Prawitz, D.: Ideas and results in proof theory. In: Fenstad, J.E. (ed.) Second Scandinavian Logic Symposium Proceedings, pp. 235–307. North-Holland, Amsterdam (1971)
- 26. Prawitz, D.: Proofs and the meaning and completeness of the logical constants. Essays on Mathematical and Philosophical Logic. Reidel, Dordrecht (1979)
- 27. Prawitz, D.: Beweise und die Bedeutung und Vollständigkeit der logischen Konstanten. Conceptus **16**, 31–44 (1982) (translation and revision of [26])
- 28. Read, S.L.: Harmony and autonomy in classical logic. J. Philos. Log. **29**, 123–154 (2000)
- 29. Read, S.L.: General-elimination harmony and higher-level rules. In: Wansing, H. (ed.) Dag Prawitz on Proofs and Meaning, pp. 293–312. Springer, New York (2015)
- 30. Schroeder-Heister, P.: A natural extension of natural deduction. J. Symb. Log. **49**, 1284–1300 (1984)
- 31. Schroeder-Heister, P.: Generalized definitional reflection and the inversion principle. Log. Univ. **1**, 355–376 (2007)
- 32. Schroeder-Heister, P.: Generalized elimination inferences, higher-level rules, and the implications-as-rules interpretation of the sequent calculus. In: Pereira, L.C., Haeusler, E.H., de Paiva, V. (eds.) Advances in Natural Deduction: A Celebration of Dag Prawitz’s Work, pp. 1–29. Springer, New York (2014)
- 33. Schroeder-Heister, P.: Harmony in proof-theoretic semantics: a reductive analysis. In: Wansing, H. (ed.) Dag Prawitz on Proofs and Meaning, pp. 329–358. Springer, New York (2015)
- 34. Schroeder-Heister, P., Tranchini, L.: Ekman’s paradox (2014). Notre Dame Journal of Formal Logic (submitted)
- 35. Schwichtenberg, H., Wainer, S.: Proofs and Computations. Cambridge University Press, Cambridge (2012)
- 36. Tennant, N.: Ultimate normal forms for parallelized natural deductions. Log. J. IGPL **10**, 299–337 (2002)
- 37. Tesconi, L.: Strong normalization for natural deduction with general elimination rules. Ph.D. thesis, Pisa (2004)
- 38. von Kutschera, F.: Die Vollständigkeit des Operatorensystems für die intuitionistische Aussagenlogik im Rahmen der Gentzensemantik. Archiv für mathematische Logik und Grundlagenforschung **11**, 3–16 (1968)
- 39. von Plato, J.: Natural deduction with general elimination rules. Arch. Math. Log. **40**, 541–567 (2001)
- 40. von Plato, J.: Explicit composition and its application in proofs of normalization (2015) (submitted)
- 41. von Plato, J., Siders, A.: Normal derivability in classical natural deduction. Rev. Symb. Log. **5**, 205–211 (2012)
- 42. Zucker, J., Tragesser, R.S.: The adequacy problem for inferential logic. J. Philos. Log. **7**, 501–516 (1978)

## Copyright information

**Open Access** This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.