1 Introduction: Axiomatic Thinking and Hilbert’s Programme

In his essay Axiomatic Thinking David Hilbert argues that it is necessary “to study the essence of mathematical proof itself if one wishes to answer such questions as the one about decidability in a finite number of operations” [12, p. 414 (orig.), p. 1115 (transl.)]. “We must [\(\ldots \)] make the concept of specifically mathematical proof itself into an object of investigation” (415/1115). Taking into account his later conception of ‘proof theory’ and what has afterwards been called “Hilbert’s programme”, this can be read as the claim that we should conceive of proofs as formal proofs within a formal system, of which we can then, by manipulating them as formal objects, hopefully demonstrate that they never generate formal contradictions. A strong method of proving such consistency is to reduce the system in question to other systems whose consistency has already been established. In fact, the significance of such a reductive approach is already emphasised in Axiomatic Thinking, which mentions the potential reduction of arithmetic and set theory to logic [12, p. 412 (orig.), p. 1113 (transl.)]. In this sense Axiomatic Thinking can be read as the starting point of ‘reductive’ [24] proof theory, whose programme is to establish advanced systems as conservative (and thus consistent) extensions of more elementary systems. This is incorporated in the finitist programme of justifying infinitist ways of reasoning as extensions of finite ways of reasoning, such that at least the consistency of these systems can be proved by finite means, even if they are not conservative extensions [13].

As is well known, the original form of Hilbert’s programme failed due to Gödel’s second incompleteness theorem, according to which the inference methods codified in an elementary system such as Peano arithmetic do not suffice, for reasons of principle, to demonstrate the consistency of the system in question. Hilbert’s programme nevertheless initiated the development of mathematical proof theory, which investigates, among many other issues, the strength of formal systems and their relative reducibility, as well as the expressive power of such systems, including what can be reached by the various forms of induction principles incorporated in them. As regards consistency proofs, Gerhard Gentzen’s work [11] constituted the first pioneering achievement, above all his consistency proofs for arithmetic using transfinite induction.

2 Axiomatic Thinking, General Proof Theory and Proof-Theoretic Semantics

In parallel to the development of proof theory in the spirit of Hilbert’s programme, so-called general proof theory gained ground. General proof theory is interested in proofs as fundamental entities used in the deductive sciences. Here the problem of consistency, which was the starting point of Hilbert’s programme, is not at the centre of interest. Of course, consistency is essential for proofs. But it is simply not the leading point of view from which proofs are looked at. One could say that in general proof theory we are not primarily interested in the result of proofs, that is, in the assertions that are proved or can be proved in a proof system, but in the form of proofs as representing arguments. Philosophically speaking, general proof theory deals with intensional aspects of proofs, while proof theory in the spirit of Hilbert’s programme, which is interested in the logical power of proof systems, deals with their extensional aspects.

In fact, the initial quotation above from Axiomatic Thinking shows that the programme of general proof theory is already present in Hilbert. He explicitly speaks of the “essence” of proofs and the “concept of [\(\ldots \)] proof itself”, which is exactly what general proof theory is all about. And in the paragraphs preceding this quotation he discusses in detail the problem of entirely different methods of proving the same geometric claim [12, pp. 413–414 (orig.), pp. 1114–1115 (transl.)], which means that the idea of different proofs of a mathematical theorem, and thus the problem of the identity and difference of proofs, is on his agenda. In this sense it would be wrong to claim that general proof theory and the interest in proofs in themselves is something totally different from what Hilbert had in mind when creating his proof theory. Even though consistency-oriented proof theory strongly dominated Hilbert’s later writings, general proof theory was always in the background, and in Axiomatic Thinking it is still on an equal level with reductive proof theory.

Note that the view of proofs as formal proofs, that is, as proofs in a formal system, versus proofs as arguments, that is, as entities conveying epistemic force, is not the dividing line between reductive or consistency-oriented and general proof theory. It is certainly true that when studying consistency or the reduction of theories, we are studying syntactic properties of proofs,\(^{1}\) while when considering proofs as arguments, we are studying epistemic and semantic properties going beyond the syntactic level. However, even in the second case we are still considering formal proofs, as these epistemic and semantic properties are properties of formal proofs, namely of formal proofs as representations of arguments. So ontologically it is the same sort of entities that are discussed in reductive and general proof theory. This is analogous to the situation we have in model theory, where we look at syntactically specified formulas and theories from a semantic perspective (in the sense of a denotational semantics).

The interdependency of consistency-oriented proof theory and general proof theory is fully clear in the work of Gentzen, who is at once the exponent of consistency proofs and the one who laid the grounds for general proof theory. The latter is due to the fact that in his seminal Investigations into Logical Deduction [10] Gentzen created the calculus of natural deduction as a formal system that is very close to actual reasoning, in particular to reasoning in mathematical practice. In the same work he developed the calculus of sequents, which is very well suited for certain proof-theoretic investigations. Gentzen’s formal systems, as well as the results he obtained for them, are highly significant both for reductive and for general proof theory. This holds in particular for his method of cut elimination for sequent systems, which is fundamental for reductive proof theory and likewise for general proof theory.

The term “general proof theory”, as well as its explicit proclamation as a research programme, is due to Dag Prawitz [24, 26], after he had already, in his 1965 monograph Natural Deduction, provided the first systematic investigation of Gentzen’s calculus of natural deduction [23]. At the same time, and with a similar aim, Georg Kreisel [16] had proposed a modification of Hilbert’s programme towards the study of proofs as genuine objects, and not only as tools for the investigation of derivability and consequence. On the philosophical side, Michael Dummett [7] was outlining his programme of a verificationist theory of meaning, which developed in parallel with Prawitz’s notion of proof-theoretic validity [27, 29]. Roughly at the same time Per Martin-Löf’s type theory emerged [19, 33], which built on closely related logical foundations, and which laid a new foundation of mathematics as an alternative to set theory and to Frege’s and Russell’s type-theoretic conceptions.

For these and related approaches the author proposed the term “proof-theoretic semantics” [30, 31]. The reason for choosing this term was to emphasise that such investigations belong to philosophical semantics, and that therefore the term “semantics” should not be left to denotational semantics alone. Philosophically, general proof theory and proof-theoretic semantics belong to the theory of meaning as use [39] and, more specifically, to an inferentialist theory of meaning [2], though with many additional inspirations from ideas and results in proof theory [22].

General proof theory is a proof theory based on philosophical interests. This does not mean that no mathematical methods can enter when these interests are pursued. On the contrary, the application of mathematical methods to syntactically coded proofs delivers basic philosophical insights. These insights concern in particular the problem of the identity of proofs, which is the main topic of this paper. Identity of proofs is currently not the central theme of general proof theory. However, it should be at the centre of interest, because it is immediately connected to the question of the essence of proofs.

In fact, in his proclamation of general proof theory, Prawitz pointed out that one of the basic topics of this discipline is the identity of proofs, namely the question of when syntactically different proofs of the same theorem should be considered ‘essentially’ identical and when they should be considered ‘essentially’ different [24, p. 237].\(^{2}\) This coincides with Hilbert’s discussion in Axiomatic Thinking of different proofs of the same result. Hilbert’s emphasis was on conceptually different proofs, in the sense that these proofs use different methods or even come from different branches of mathematics, that is, proofs using different proof ideas.

We are still far from being able to formally elucidate what a proof idea is. However, as a first step we will discuss, at the level of natural deduction proofs and using its very elementary conjunctive fragment, what identity of proofs can mean and which problems are connected with it. This is at least in the spirit of what Hilbert meant by “making the concept of specifically mathematical proof itself into an object of investigation”, and of what Prawitz had in mind when putting forward the idea of a general proof theory. What we are going to say will be aporetic in many respects. Even in the context of our tiny fragment of natural deduction, considerable problems show up. However, we hope to convince the reader of the fundamental fact that there is something on the intensional side of proofs, in addition to what is being proved, something that Hilbert called the “essence” of proofs. As a prominent example, we discuss the redundancy criterion for proof identity, according to which proofs are identical when they only differ by adding or removing redundancies, and point to problems associated with this criterion. As an important side product, we conclude that the annotations of proofs should be considered ingredients of the proofs themselves. That is, the explicit specification of which step we want to apply at a certain place, especially if the shape of the step leaves this open, is more than a metalinguistic comment on a proof; it belongs to the proof itself. This we see as an indication that the notion of intension is related to the notion of intention even in the area of formal reasoning.

3 Identity of Proofs

Quine’s slogan “no entity without identity” is one of the cornerstones of ontological reasoning in the philosophy of language [28]. It is based on the claim that, in order to refer to an individual entity, we need a criterion that tells us of (purported) entities a and b whether they are different, or whether they are perhaps a single entity referred to in different ways. If we apply this idea to mathematical proofs, this means that a mathematical proof can only be individuated as an entity if we have a criterion that tells us of syntactically different proofs \({\mathcal D}\) and \({\mathcal D}'\) whether, with respect to their content, they should be considered the same proof or not.

Quite independently of the philosophical problem of individuation, according to which without an identity criterion we cannot speak of an individual entity, it is simply mathematically interesting to know whether two proofs, which prima facie look different, are nevertheless ‘essentially’ the same proof. Working mathematicians often have quite strong intuitions about whether two proofs of the same theorem are based on the same proof idea, and they often agree with respect to these intuitions.

As the concept of proof idea is not capable of a precise rendering, at least not with the current conceptual tools of mathematical or philosophical logic, we confine ourselves to extremely simple proofs, formulated in a very small fragment of elementary logic. More precisely, we consider formal proofs which are formulated by means of logical conjunction \(\wedge \) alone. By that we mean proofs in which only the conjunctive composition of sentences is made explicit. For such proofs, we have three proof rules: one introduction rule and two elimination rules.

The introduction rule

$$\begin{aligned} \frac{A \qquad B}{A \mathbin {\wedge }B}\;\wedge I \end{aligned}$$
(8.1)

allows us to generate, from proofs \({\mathcal D}_1\) of A and \({\mathcal D}_2\) of B, a proof

$$\begin{aligned} \frac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ B \end{matrix}}{A \mathbin {\wedge }B}\;\wedge I \end{aligned}$$

of \(A \mathbin {\wedge }B\). The expression to the right of the inference line denotes the rule being applied (“I” for “introduction”). The elimination rules for conjunction are

$$\begin{aligned} \frac{A \mathbin {\wedge }B}{A}\;\wedge E_1 \qquad \frac{A \mathbin {\wedge }B}{B}\;\wedge E_2 \end{aligned}$$
(8.2)

They allow us to recover, from a proof \({\mathcal D}\) of \(A \mathbin {\wedge }B\), proofs

$$\begin{aligned} \frac{\begin{matrix} {\mathcal D}\\ A \mathbin {\wedge }B \end{matrix}}{A}\;\wedge E_1 \qquad \frac{\begin{matrix} {\mathcal D}\\ A \mathbin {\wedge }B \end{matrix}}{B}\;\wedge E_2 \end{aligned}$$

of A and of B. As we will see below, it is important to distinguish the two \(\wedge \)-elimination rules (“E” for “elimination”) terminologically by an index (“1” and “2”, respectively). The rule with index 1 picks the left argument of conjunction, the rule with index 2 the right one. Mathematically, we can consider the introduction rule for conjunction as the formation of a pair of proofs, and the elimination rules as the projections of such a pair on its left or right component. This very elementary framework of conjunction logic is already sufficient to point to basic problems, results and difficulties in connection with the problem of identity of proofs.
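The functional reading just mentioned can be made concrete in a proof assistant. The following is a minimal sketch in Lean 4 (our illustration, not part of the formal framework of this paper), rendering \(\wedge \)-introduction as pair formation and the two eliminations as left and right projection:

```lean
-- Minimal sketch (Lean 4): the three rules of conjunction logic.
-- `And.intro` plays the role of ∧I, `.left`/`.right` of ∧E₁ and ∧E₂.
variable (A B : Prop)

example (d₁ : A) (d₂ : B) : A ∧ B := And.intro d₁ d₂  -- ∧I: forming a pair
example (d : A ∧ B) : A := d.left                      -- ∧E₁: left projection
example (d : A ∧ B) : B := d.right                     -- ∧E₂: right projection
```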

There are two opposite extremes in answering the question concerning the identity of proofs, which are equally inappropriate and both trivialise the idea of identity. One extreme consists in considering proofs \({\mathcal D}\) and \({\mathcal D}'\) to be identical if they are identical as syntactic objects. This criterion is too narrow, as any syntactic modification of a proof, however tiny and minor it may be, would result in a different proof, though the ‘content’ of the proof has not changed at all. Two syntactically different proofs of a proposition A could never be identical. The other extreme consists in considering proofs \({\mathcal D}\) and \({\mathcal D}'\) to be identical if they are proofs of the same proposition A. This criterion is too wide. Because all syntactically different proofs of a provable proposition A could be identified, every provable proposition A would have only one single proof. In fact, in many areas we are solely interested in whether a proposition A is provable or not—for example whether in a theory a contradiction “C and not-C” is provable. Whether there are potentially different proofs of a proposition would then be irrelevant. However, in general proof theory we pursue the idea that the study of proofs goes beyond the study of provability. This means that in principle, though perhaps not in every single case, there can be different proofs of a provable proposition A.

Thus we need to define a plausible equivalence relation on the class of syntactically specified proofs of a proposition A, which is neither syntactic identity (every syntactic proof of A constitutes a singleton equivalence class) nor the universal relation (all syntactic proofs of A belong to the same equivalence class). If \({\mathcal D}\) and \({\mathcal D}'\) are proofs of A, we would like to define a nontrivial equivalence relation \({\mathcal D}= {\mathcal D}'\), which comes as near as possible to our intuitive idea that \({\mathcal D}\) and \({\mathcal D}'\) represent the same proof of A.

As to our terminology: When we talk of the identity of proofs \({\mathcal D}\) and \({\mathcal D}'\), and express this as \({\mathcal D}= {\mathcal D}'\) by means of the identity sign “=”, then we always mean the equivalence relation to be explicated. When we talk of the syntactic identity of proofs, we always say this explicitly, but never use the identity sign for it. If \({\mathcal D}\) is a proof of A, we often write \(\begin{matrix}{\mathcal D}\\ A\end{matrix}\). This expression then denotes the same as \({\mathcal D}\)—the A below \({\mathcal D}\) only serves to mention the proposition being proved and is not an extension of \({\mathcal D}\).

3.1 The Redundancy Criterion

One possibility for defining identity between proofs is to point out certain redundancies in proofs and to specify procedures for removing these redundancies. A proof \({\mathcal D}\) would then have to be considered identical to a proof \({\mathcal D}'\) if \({\mathcal D}'\) results from \({\mathcal D}\) by such a removal of redundancies. In natural deduction, a prominent case of that kind is the introduction of a proposition immediately followed by its elimination. This situation can be clarified by analogy with arithmetical operations.

In algebra we often deal with structures in which an inverse operation is associated with a given operation, as in the case of groups. If, for example, we add the integer b to an integer a and immediately afterwards subtract it, we obtain the very same integer a back:

$$\begin{aligned} (a + b) - b = a\end{aligned}$$

At the level of proofs in natural deduction we have a similar situation, as the elimination rules are inverses of the introduction rules. In the fragment considered here, the calculus for conjunction, these are the rules (8.1) and (8.2).

Analogously to the example of addition and subtraction, introduction and elimination rules cancel each other out. Consider the introduction of a conjunction followed by its elimination, passing from given proofs \({\mathcal D}_1\) and \({\mathcal D}_2\) for A and B by \(\wedge \)-introduction to their conjunction and going back to A by \(\wedge \)-elimination:

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ B \end{matrix}}{A \mathbin {\wedge }B}\;\wedge I}{A}\;\wedge E_1 \end{aligned}$$
(8.3)

Then this is obviously a redundancy, since we had already proved the proposition A before engaging in these two inference steps, namely as the left premiss of the first step. According to the redundancy criterion we want to identify two proofs, one of which is nothing but a redundant form of the other one. Therefore we postulate the following identity:

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ B \end{matrix}}{A \mathbin {\wedge }B}\;\wedge I}{A}\;\wedge E_1 \quad = \quad \begin{matrix} {\mathcal D}_1\\ A \end{matrix} \end{aligned}$$
(8.4)

Correspondingly we postulate the following identity, in which the first projection is replaced with the second projection:

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ B \end{matrix}}{A \mathbin {\wedge }B}\;\wedge I}{B}\;\wedge E_2 \quad = \quad \begin{matrix} {\mathcal D}_2\\ B \end{matrix} \end{aligned}$$
(8.5)

In accordance with Prawitz [23] such identities are also called “reductions”, as they reduce the redundancy in a proof. We also speak of “redundancy reductions”. Since in the theory of natural deduction (8.4) and (8.5) are always postulated, we call these identities standard reductions for conjunction (later we will consider a further standard reduction). Corresponding standard reductions can be given for all other logical signs and also for non-logical operations.

These reductions can also be formulated algebraically, if we consider proof rules as functions \(I, E_1, E_2\) transforming given proofs into new proofs. Then the \(\wedge \)-introduction rule generates from two proofs \({\mathcal D}_1\) and \({\mathcal D}_2\) for A and B, respectively, a new proof \(I({\mathcal D}_1,{\mathcal D}_2)\) of their conjunction, and the elimination rules generate from a proof \({\mathcal D}\) of a conjunction \(A \mathbin {\wedge }B\) proofs \(E_1({\mathcal D})\) and \(E_2({\mathcal D})\) of A and B, respectively. The standard reductions (8.4) and (8.5) then become the identities

$$\begin{aligned} E_1(I({\mathcal D}_1,{\mathcal D}_2)) = {\mathcal D}_1 \qquad E_2(I({\mathcal D}_1,{\mathcal D}_2)) = {\mathcal D}_2 \end{aligned}$$
(8.6)
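If one takes this functional formulation literally, the identities (8.6) become computation rules on pairs. A hedged Lean 4 sketch (using the type-level product as a stand-in for the pairing of proofs, which is our assumption, not part of the paper's formalism):

```lean
variable {A B : Type}

-- The standard reductions (8.6) as definitional equalities on pairs:
-- E₁(I(d₁,d₂)) = d₁ and E₂(I(d₁,d₂)) = d₂.
example (d₁ : A) (d₂ : B) : (d₁, d₂).fst = d₁ := rfl
example (d₁ : A) (d₂ : B) : (d₁, d₂).snd = d₂ := rfl
```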

The theory of natural deduction based on standard reductions was developed by Dag Prawitz in his groundbreaking monograph Natural Deduction [23], as was the idea of defining the identity of natural deduction proofs by reference to these reductions: “Two derivations represent the same proof if and only if they are equivalent” [24, p. 257], where equivalence is established by applying standard reduction steps.\(^{3}\) Being redundancy reductions, the standard reductions can be generalised as follows.

Obviously, the standard reductions follow this general pattern:

$$\begin{aligned} \begin{matrix} {\mathcal D}\\ A\\ \vdots \\ A \end{matrix} \quad = \quad \begin{matrix} {\mathcal D}\\ A \end{matrix} \end{aligned}$$
(8.7)

For the case of (8.4), the proof \({\mathcal D}\) in (8.7) corresponds to the proof \({\mathcal D}_1\), and for the case of (8.5) the proposition A corresponds to B and the proof \({\mathcal D}\) corresponds to \({\mathcal D}_2\). All other parts of these proofs are represented in (8.7) by dots.

The idea behind (8.7) is the following: We disregard the potential proof steps between the upper and the lower A and only focus on the situation in which we start with a proof of A and then return to A in a way not further specified. As the steps leading from \(\begin{matrix}{\mathcal D}\\ A\end{matrix}\) to the lower A are redundant, we can identify the extended proof of A with the initial proof of A. We call this identification the general redundancy reduction. As just explained, the standard reductions (8.4) and (8.5) are two particular cases of it, in which a specific form of redundancy, namely introduction immediately followed by elimination, is considered. What we call “general redundancy reduction” is discussed in Ekman [8].

Unfortunately, the general redundancy reduction has unwanted consequences. Consider the following situation, in which any two given proofs \({\mathcal D}_1\) and \({\mathcal D}_2\) of a proposition A are used in the following extended proof of A:

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ A \end{matrix}}{A \mathbin {\wedge }A}\;\wedge I}{A}\;\wedge E \end{aligned}$$
(8.8)

Here it does not make a difference whether \(\wedge \)-elimination is conceived as left or right projection. In (8.8), there are obviously two possibilities to apply the general redundancy reduction. If we identify the lower A with the left upper A, we obtain the identity

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ A \end{matrix}}{A \mathbin {\wedge }A}\;\wedge I}{A}\;\wedge E \quad = \quad \begin{matrix} {\mathcal D}_1\\ A \end{matrix} \end{aligned}$$
(8.9)

If we identify the lower A with the right upper A, we obtain

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ A \end{matrix}}{A \mathbin {\wedge }A}\;\wedge I}{A}\;\wedge E \quad = \quad \begin{matrix} {\mathcal D}_2\\ A \end{matrix} \end{aligned}$$
(8.10)

The identities (8.9) and (8.10) immediately give us

$$\begin{aligned} {\mathcal D}_1 = {\mathcal D}_2 \end{aligned}$$

Since \({\mathcal D}_1\) and \({\mathcal D}_2\) are arbitrary proofs of A, the general redundancy reduction allows the identification of arbitrary proofs of the same proposition A.

In this way the identity of proofs becomes the universal relation, which means that the equivalence relation of identity is trivialised in one of the two ways discussed above. Thus the redundancy criterion for identity fails. Note that this result does not depend on how exactly the standard reductions are formulated. Depending on whether the elimination step in (8.8) is conceived as left or right projection, either (8.9) or (8.10) is a standard reduction in the sense above. However, this fact plays no role as the standard reductions are instances of the general redundancy reduction. Therefore, if we assume the introduction and elimination rules (8.1) and (8.2) as rules governing conjunction, then the general redundancy reduction trivialises the identity of proofs (see also [32]).
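The logical core of this trivialisation can be condensed into a one-line sketch (Lean 4; the hypotheses h₁ and h₂ are hypothetical premisses standing for the identifications (8.9) and (8.10)):

```lean
-- If the proof e of (8.8) is identified both with d₁ (via (8.9)) and with d₂
-- (via (8.10)), then the arbitrary proofs d₁ and d₂ of A are identified:
example {A : Type} (e d₁ d₂ : A) (h₁ : e = d₁) (h₂ : e = d₂) : d₁ = d₂ :=
  h₁.symm.trans h₂
```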

3.2 An Example from Mathematics

Considering conjunctions of the form \(A \wedge A\) may appear artificial. To meet this objection we consider as a concrete example Euclid’s theorem, according to which there are infinitely many prime numbers, and denote it by \(P_\infty \). Furthermore, we consider two proofs of this theorem that rely on completely different concepts, for example the number-theoretic proof by Euclid himself, here denoted as \({\mathcal D}_{Euclid }\), and the proof by Euler which uses elementary calculus, here denoted as \({\mathcal D}_{Euler }\) (see, e.g., [1]). If we combine these two proofs conjunctively,

$$\begin{aligned} \frac{\begin{matrix} {\mathcal D}_{Euclid }\\ P_\infty \end{matrix}\qquad \begin{matrix} {\mathcal D}_{Euler }\\ P_\infty \end{matrix}}{P_\infty \mathbin {\wedge }P_\infty }\;\wedge I \end{aligned}$$

we have a duplication of the theorem \(P_\infty \), but at the same time keep the information of both Euclid’s and Euler’s proof. From the two proofs we form a pair of proofs that comprises both. No information contained in either of these proofs is lost.

From this pair of proofs we can recover the respective proof by means of left or right projection: by left projection Euclid’s proof

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_{Euclid }\\ P_\infty \end{matrix}\qquad \begin{matrix} {\mathcal D}_{Euler }\\ P_\infty \end{matrix}}{P_\infty \mathbin {\wedge }P_\infty }\;\wedge I}{P_\infty }\;\wedge E_1 \quad = \quad \begin{matrix} {\mathcal D}_{Euclid }\\ P_\infty \end{matrix} \end{aligned}$$

and by right projection Euler’s proof

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_{Euclid }\\ P_\infty \end{matrix}\qquad \begin{matrix} {\mathcal D}_{Euler }\\ P_\infty \end{matrix}}{P_\infty \mathbin {\wedge }P_\infty }\;\wedge I}{P_\infty }\;\wedge E_2 \quad = \quad \begin{matrix} {\mathcal D}_{Euler }\\ P_\infty \end{matrix} \end{aligned}$$

The kind of projection (left or right) tells us which proof we get back. We can, of course, ignore the kind of projection and thus discard proof information. That is, we can consider the proof

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_{Euclid }\\ P_\infty \end{matrix}\qquad \begin{matrix} {\mathcal D}_{Euler }\\ P_\infty \end{matrix}}{P_\infty \mathbin {\wedge }P_\infty }\;\wedge I}{P_\infty } \end{aligned}$$
(8.11)

simply as a structure leading to \(P_\infty \), in whatever way we have obtained the conjunction standing above \(P_\infty \). Nothing speaks against this way of proceeding. We must only be content with the fact that the proof achieved proves the same, namely \(P_\infty \), but that neither the proof information from \({\mathcal D}_{Euclid }\) nor that from \({\mathcal D}_{Euler }\) is available any more, after we refrained from labelling the last step of (8.11) either as left or as right projection. Based on our proof, we continue to have the right to assert \(P_\infty \), because our proof ends with this proposition. However, we can neither identify this proof with \({\mathcal D}_{Euclid }\) nor with \({\mathcal D}_{Euler }\), which was still possible when the step to \(P_\infty \) was considered a projection. In the case of (8.11), we have, so to speak, in the course of the detour via \(P_\infty \mathbin {\wedge }P_\infty \), thrown away our ‘luggage’ in the form of proof information, even though the legitimacy of the claim \(P_\infty \) is not affected. By means of the detour we have not simply created redundancy in the sense of additional unnecessary information, but conversely destroyed the information which would allow us to identify the proof reached with one of the proofs we started with.

3.3 Harmony Instead of Reduction of Redundancy

The standard reductions alone do not trivialise the identity of proofs, as can be seen relatively easily.\(^{4}\) This suggests referring, in the definition of identity of proofs, only to the standard reductions, rather than considering the general redundancy reduction. This poses the philosophical task of elucidating what the distinguishing characteristic of the standard reductions is, beyond the fact that they are cases of the general redundancy reduction, that is, that they reduce redundancy in proofs.

Here the concept of harmony comes into play, by means of which the relationship between introduction and elimination rules is frequently characterised (the term goes back to Dummett, see [31, 37]). Consider again the case of conjunction, where we have the situation that the conditions of the introduction match the consequences of the eliminations. The condition of the introduction of \(A \wedge B\) is the pair consisting of A and B, and the consequences of the eliminations are again this pair, obtained by left and right projection.

If, according to (8.3), one moves from the conditions of the introduction rule to the consequences of the elimination rules, by first introducing a conjunction and immediately afterwards eliminating it, then this is a step from a proposition A to its harmonious counterpart, that is, from a part of the condition of the introduction rule to a part of the conclusion of the elimination rules. That one does not gain any new information is not only due to the fact that in both cases we deal with the proposition A, but to the fact that one is using the complementary steps of introduction and elimination, which cancel each other out due to the harmony between these rules.

In this way we even obtain a sequence of steps dual to the one considered. That the conditions of the introduction rules match the consequences of the elimination rules also means that one does not lose anything when applying an elimination rule. This means that from applications of the elimination rules to \(A \wedge B\), that is, from the consequences of \(A \wedge B\), we can, by means of the introduction rule, go back to \(A \wedge B\). This corresponds to the reduction

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}\\ A \mathbin {\wedge }B \end{matrix}}{A}\;\wedge E_1 \qquad \dfrac{\begin{matrix} {\mathcal D}\\ A \mathbin {\wedge }B \end{matrix}}{B}\;\wedge E_2}{A \mathbin {\wedge }B}\;\wedge I \quad = \quad \begin{matrix} {\mathcal D}\\ A \mathbin {\wedge }B \end{matrix} \end{aligned}$$
(8.12)

which is here also considered a standard reduction.\(^{5}\) Algebraically this corresponds to the equation

$$\begin{aligned} I(E_1({\mathcal D}),E_2({\mathcal D})) = {\mathcal D}\end{aligned}$$
(8.13)

Therefore the idea behind this approach is that, due to the matching of the conditions of introduction with the consequences of elimination, one has pairs of completely symmetric inference steps, which represent a specific form of redundancy reduction. The standard reductions for conjunction (including (8.12)) express the complementarity of the steps introduction-elimination or elimination-introduction, and it is this specific form of redundancy reduction which makes the standard reductions non-trivial. This is opposed to the general redundancy reduction (8.7), where between the occurrences of A which are identified there may lie a non-specified proof section rather than just a pair of complementary rule applications. It is possible to show that the standard reductions (8.4), (8.5), (8.12) are maximal in the sense that no further identities may be postulated without trivialising the notion of identity [6]. This maximality result is often considered the distinguishing feature of the standard reductions, turning them into a proper basis for proof identity.
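As an aside, the further standard reduction (8.12)/(8.13) is the law known as surjective pairing. In the Lean 4 sketch with type-level pairs used above (again our stand-in, not the paper's formalism), it holds definitionally, owing to eta for structures:

```lean
-- Surjective pairing, i.e. (8.13): I(E₁(d), E₂(d)) = d.
example {A B : Type} (d : A × B) : (d.fst, d.snd) = d := rfl
```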

Of course, it is not a philosophical necessity to base the notion of identity of proofs on the notion of harmony, that is, on symmetries between introduction and elimination rules for logical signs. Even the maximality result just mentioned does not force us to that conclusion. It cannot be excluded that there are different postulated sets of identities, which are likewise maximal. However, currently the harmony principle appears to be the only plausible way to motivate the standard reductions as sensibly restricting the general redundancy reduction, which, as we have seen, goes too far.

For further discussion of the identity of proofs from the logical and philosophical point of view see [3, 4]. For harmony in relation to identity of proofs see [37, 38].

3.4 The Annotation of Proofs

When carrying out a proof, one justifies one’s steps by indicating which inference step one is performing. In our simple case of conjunction we have written the designation of the rule used next to the inference line. Very frequently one encounters the opinion that these annotations are nothing but metalinguistic comments, which only serve to explicate what one is doing, without adding anything to the proof step itself.

From the point of view of identity of proofs this view is misguided, at least in its general form. In most cases it is obvious which rule has been applied in an inference step, simply because, due to the syntactic form of the propositions involved, only one single rule fits the step. For the \(\wedge \)-introduction rule this is always the case, because a constellation

$$\begin{aligned} \frac{A \qquad B}{A \mathbin {\wedge }B} \end{aligned}$$

must be an application of \(\wedge \)-introduction, whatever form A and B have. For the elimination rules this is not always the case. If we apply an elimination rule to the proposition \(A \wedge A\):

$$\begin{aligned} \frac{A \mathbin {\wedge }A}{A} \end{aligned}$$

then, since the right and left components of \(A \wedge A\) are identical, this can be an application of the left projection \(\wedge E_1\) as well as of the right projection \(\wedge E_2\). To disambiguate the situation, we write either

$$\begin{aligned} \frac{A \mathbin {\wedge }A}{A}\;\wedge E_1 \end{aligned}$$

or

$$\begin{aligned} \frac{A \mathbin {\wedge }A}{A}\;\wedge E_2 \end{aligned}$$

This means that the annotation (“\(\wedge E_1\)” or “\(\wedge E_2\)”) is part of the proof, as it gives information needed to understand it. We cannot refrain from deciding between \(\wedge E_1\) and \(\wedge E_2\). Otherwise we would have to accept both

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ A \end{matrix}}{A \mathbin {\wedge }A}\;\wedge I}{A} \quad = \quad \begin{matrix} {\mathcal D}_1\\ A \end{matrix} \end{aligned}$$

and

$$\begin{aligned} \frac{\dfrac{\begin{matrix} {\mathcal D}_1\\ A \end{matrix}\qquad \begin{matrix} {\mathcal D}_2\\ A \end{matrix}}{A \mathbin {\wedge }A}\;\wedge I}{A} \quad = \quad \begin{matrix} {\mathcal D}_2\\ A \end{matrix} \end{aligned}$$

as valid identities, and therefore the identification of arbitrary proofs \({\mathcal D}_1\) and \({\mathcal D}_2\) of A. This was exactly the situation found with the general redundancy reduction, in which it played no role which elimination rule was applied.
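A sketch of the disambiguation in Lean 4 (the names projL and projR are ours, chosen for illustration): on a ‘duplicated’ pair the two projections are different functions, even though each of them yields an A.

```lean
variable {A : Type}

def projL (d : A × A) : A := d.fst  -- the step annotated ∧E₁
def projR (d : A × A) : A := d.snd  -- the step annotated ∧E₂

-- Both prove 'the same thing', but they return different components:
example (d₁ d₂ : A) : projL (d₁, d₂) = d₁ := rfl
example (d₁ d₂ : A) : projR (d₁, d₂) = d₂ := rfl
```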

After realising that the annotation of the rule being used is part of the proof itself, we can modify the notion of proof by turning the annotation into what is proved. The proof step

$$\begin{aligned} \frac{A \mathbin {\wedge }A}{A}\;\wedge E_1 \end{aligned}$$

would have to be written as

$$\begin{aligned} \frac{A \mathbin {\wedge }A}{E_1 : A} \end{aligned}$$

As the premiss \(A\wedge A\) would itself be annotated with an annotation t, one would write:

$$\begin{aligned} \frac{t : A \mathbin {\wedge }A}{E_1(t) : A} \end{aligned}$$

Since in this way an annotation contains all annotations of the steps above, the annotation of a proven proposition codes the proof of this proposition. Thus the necessity of considering the annotations of proof steps as parts of the proof leads to the idea of associating with a proven proposition the coding of its proof. That is, what is actually proven is not the proposition A, but the judgement (claim) t : A, where t stands for the proof itself.
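This is, in effect, how proof assistants present judgements. A small Lean 4 illustration (with Lean's ∧ as a rendering of our conjunction): the same proposition receives two distinct proof terms, depending on the annotation chosen.

```lean
-- What is proved is the judgement t : A; on A ∧ A the two annotated
-- eliminations yield distinct proof terms for the same proposition:
example (A : Prop) (t : A ∧ A) : A := t.left    -- term E₁(t)
example (A : Prop) (t : A ∧ A) : A := t.right   -- term E₂(t)
```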

Here we can bring to bear the functional view of proof steps mentioned in Sect. 8.3.1 and consider the annotation \(E_1(t)\) to be a function applied to t, and postulate certain equations corresponding to the standard reductions. In our case these are the equations (8.6) and (8.13).

This leads to the basic idea of constructive type theories, since the judgement t : A, which is now the ‘proper’ claim in a proof, in contradistinction to the proposition A alone, is structurally related to the assertion that an object t has the type A. This relationship cannot be discussed here. It underlies in particular Martin-Löf’s type theory, which in the last two decades has gained strong ground in mathematics through Voevodsky’s homotopy-theoretic interpretation [33, 34]. The motivation for this conception is normally quite different from what we have presented here. Our philosophical motivation was that annotations of proofs belong to the claims to be proved, so that codes of proofs become a natural ingredient of what is proved.

The idea that the ‘proper’ structure of a proved proposition A is t : A, where t is the code of the proof of A, is often viewed as an argument for or against the identification of certain proofs. However, this is only partially conclusive. The standard reductions cannot be justified that way. The situation (8.3), in which elimination follows introduction, would now be displayed as

$$\begin{aligned} \frac{\dfrac{t_1 : A \qquad t_2 : B}{I(t_1,t_2) : A \mathbin {\wedge }B}\;\wedge I}{E_1(I(t_1,t_2)) : A}\;\wedge E_1 \end{aligned}$$

In order to identify \(t_1:A\) and \(E_1(I(t_1,t_2)):A\), we would need to presuppose the identity

$$\begin{aligned} E_1(I(t_1,t_2)) = t_1 \end{aligned}$$

and thus one of the identities (8.6), which are motivated by the standard reductions. However, even though we cannot obtain a justification of the standard reductions, we obtain a refutation of the general redundancy reduction (8.7). This general reduction would require that in the situation

$$\begin{aligned} \begin{matrix} t : A\\ \vdots \\ t' : A \end{matrix} \end{aligned}$$

the judgements t : A and \(t':A\) can be identified, which is not possible if there is no reason to assume \(t = t'\). Such a reason is not available for unspecified t and \(t'\). The universal assumption \(t = t'\) expresses that arbitrary proofs of A can be identified, which is the trivialisation of proof identity we do not intend. Therefore, if we accept the idea of proof annotations as parts of proofs, we have an argument against the general redundancy reduction, which, unlike the argument in Sect. 8.3.1, does not refer to the reductions themselves, but only to the identifiability of assertions. As far as the justification of the standard reductions as a basis of proof identity is concerned, the harmony of introduction and elimination rules continues to be the starting point. A suitable annotation and decoration discipline certainly helps to avoid unwanted identifications of proofs, but does not by itself provide the intended identifications of proofs inherent in harmony principles.
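The role of the universal assumption \(t = t'\) can again be isolated in a sketch (Lean 4; the premiss triv is the hypothetical universal identification of annotations): assuming it amounts precisely to the trivialisation of proof identity.

```lean
-- Universally identifying annotations t = t' collapses all proofs of A:
example {A : Type} (triv : ∀ t t' : A, t = t') (d₁ d₂ : A) : d₁ = d₂ :=
  triv d₁ d₂
```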

4 Conclusion

Why is our result not satisfactory in every respect? For the very restricted context of conjunction logic we have shown that general redundancy reduction is not suitable as an identity criterion for proofs, as it leads to a trivialisation of the concept of proof identity. As the logic of conjunction is normally available in any logical system, this result has wide consequences. The non-suitability of the general redundancy criterion holds for virtually any system.

The advantage of the general redundancy criterion would have been that its formulation is independent of which logical framework is used and which proof rules are available. To admit redundancy reduction only in connection with the harmony of introduction and elimination rules means a very significant restriction: this identity criterion is only available for proof systems which build exclusively on harmonious rules. This is the case in (constructive) propositional and predicate logic. Even in constructive type theories one tries to carry these harmony principles through all the rules. But is it necessary that proof systems are always structured that way? Already bivalent classical logic falls outside this framework. Does this mean that it does not make sense to speak of proof identity in classical logic? Should it not be possible to develop proper and non-trivial proof identity criteria for logics which are not based on a constructive conception of proof-theoretic harmony? Answering these questions seems to us to be a central desideratum of a proof-theoretic semantics of non-constructive logics.

The aporetic character of our considerations also shows how far proof theory still is from the treatment of ‘proper’ proofs in mathematics, and how much further it is from the explication of the ‘idea’ behind a given mathematical proof. It is proof ideas in which mathematicians are basically interested when they compare proofs, as Hilbert does in Axiomatic Thinking. Mathematical and philosophical proof theory is only slowly progressing towards this problem.\(^{6}\) On the other hand, we must concede to proof theory that it has developed a precise syntactic concept of ‘proof’ which allows one to formulate concrete mathematical proofs in it. Recent proof-theoretic research on the foundations of mathematical concept formation and reasoning shows an increased collaboration between philosophy, mathematical logic and mathematical practice, which gives hope for progress.\(^{7}\)

The discussion of identity of proofs demonstrates that intensional considerations in proof-theoretic semantics are needed if we are not only interested in what can be proved in a given system, but also, and perhaps primarily, in what a proof is and how we carry out proofs. A proof theory that deserves its name should be more than a tool in a theory of provability.

What we have left out of the picture drawn here is the relation between proofs and algorithms,\(^{8}\) which is very close, in particular with respect to the decoration of proofs by means of certain annotations. As these annotations can be viewed as proof terms, they can motivate a functional view of proof reductions and identity along the lines of the Curry-Howard correspondence. However, while this would essentially be a re-iteration of the discussion of redundancy reductions using term equations, it might be more interesting to compare intensional proof theory with the algorithmic view of intensions in the spirit of Moschovakis [21], by relating his abstract concept of an algorithm to the functional concepts used in type-theoretic proofs. This would also allow us to link up with the debate about intensions in natural-language semantics, which is the field where the problem of intensions first showed up, and where it is still most prominent.

Our plea for an intensional proof-theoretic semantics may be rounded off with a short philosophical remark on the notion of ‘intention’, which in the philosophical tradition has sometimes been related to the notion of ‘intension’ (e.g., Hintikka [15]). We have argued that the annotations of a proof belong to the proof itself, giving the example of a step of conjunction elimination where, without annotation, the left and right projections are not distinguishable. The fact that such a step is to be considered, for example, as a right projection can be viewed as my intention when carrying out the proof. That this step is a right projection is how I want it to be understood when giving my argument, and that this understanding can play a significant proof-theoretic role is precisely what we tried to show in the previous section. This demonstrates that ‘deep’ philosophical questions of the theory of intentions and actions are not far from general considerations concerning the meaning of proofs. Semantics and action are interrelated even in logic.\(^{9}\)