Logic is commonly said to be the science of correct reasoning. This definition, however, is only apparently simple. According to a long-standing tradition stemming from Descartes, a piece of reasoning is a chain of steps leading from hypotheses to conclusions. Indeed, Descartes says in the Regulae that a proof is

a continuous and uninterrupted movement of thought in which each individual proposition is clearly intuited. (in Cottingham et al. 1985, p. 15).

Assuming that we can accept the English translation of the Latin “singula” by “individual proposition”, Descartes thus seems to endorse the common idea that the steps in a proof are inferences, i.e. transitions from certain premises to certain conclusions.

In this respect, one may claim that the concept of inference necessarily varies according to what we use it for. It can range from a rather broad idea of automatic and involuntary passages from one piece of information to another, of whatever kind, to the more restricted demand that agents be aware of moving from and to propositions, sentences, beliefs, judgements or assertions. In particular, there is no definite answer to the question of what transitions, premises and conclusions might be. This also applies to those inferences that are usually taken to epistemically compel one towards a conclusion, under the assumption that the premises are justified. Here, however, one could claim that inferences of this kind must in the end amount to conscious acts, involving reflection and knowledge—epistemic compulsion is something that, in a sense, we experience, and of which we are aware. This standpoint seems all the more adequate when compulsion depends, or is claimed to depend, on the meaning of the involved components—the so-called analytic inferences.

Moreover, there seems to be no precise criterion even in the more crucial case of logically correct inferences. Since Aristotle, logicians have been concerned with the question of when and why deductive reasoning is logically correct; Aristotle himself is often referred to as the first who aimed at singling out forms of speech in which

certain things being laid down, something follows of necessity from them. (in Ross 1949, p. 287).

It is precisely thanks to such a force—referred to by Aristotle through the modal word “necessity”—that proofs play a fundamental role in the construction of knowledge, especially of scientific knowledge.

When applied to formalized and uninterpreted languages, model theory offers notions of truth and (logical) consequence through which formal theories are standardly justified. However, it is much debated whether such a semantic setup captures Aristotle’s modality. The model-theoretic framework might suit an interpretation of modality in terms of a possible-worlds reading, but it seems doomed to fail when epistemic issues are brought in (see Prawitz 1985, 2005, 2013). Proofs, as well as the valid inferences they are made up of, yield conclusive knowledge. By carrying them out, we experience epistemic constraints towards the propositional or sentential contents. We become aware of the fact that truths are truths, and must accept them on pain of irrationality. Although it may be doubted that this phenomenon actually falls within the field of logic, many authors—including at least the intuitionists, as well as Hilbert, Gödel and others—have considered epistemic evidence to be a core topic.
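
In the standard Tarskian setting, for instance, logical consequence is defined by quantifying over all interpretations, with no reference to any epistemic act:

\[
\Gamma \models A \quad\Longleftrightarrow\quad \text{for every model } \mathcal{M}\text{, if } \mathcal{M} \models B \text{ for all } B \in \Gamma \text{, then } \mathcal{M} \models A.
\]

Nothing in this definition tells us how, in drawing a consequence, we come to know that it holds.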

When we take into account the compulsion experienced in correct deduction, we must focus on the mental activity of believing or judging propositions to be true, and/or on the linguistic practice of asserting sentences. Hence, it is likely that this compulsion is linked to (one’s knowledge of) the meaning of propositions or sentences, and the question reduces to how this (knowledge of) meaning ought to be explained. A suggestive answer stems here from the intuitionists’ rejection of bivalence, as well as from their explanation of meaning (see e.g. Brouwer 1924/2002; Heyting 1931, 1934)—a tradition that, together with some other sources, inspired Dummett’s well-known anti-realistic arguments (Dummett 1978, 1993), which in turn led to a variety of more or less verificationist theories of meaning. In addition, the conception according to which meaning must be given in terms of proofs not only relates meaning to use, in line with Wittgenstein’s claim (Wittgenstein 1953); it also places the inquiry within constructive setups like Martin-Löf’s intuitionistic type theory (Martin-Löf 1984), mainly influenced by the λ-calculus, or Prawitz’s proof-theoretic semantics (Prawitz 1973, 1977), mainly influenced by Gerhard Gentzen’s investigations.

This Topoi special issue on “Inferences and proofs” contains contributions to the epistemic analysis of proofs, mostly—although not exclusively—in the tradition of intuitionism. It aims at offering a panorama of the current debates and questions within this tradition, as well as some tools for developing further research and connections with other traditions.

The idea of this work arose after a workshop—also entitled “Inferences and proofs”—held in Marseille from May 31 to June 1, 2016. Organized by the editors Gabriella Crocco and Antonio Piccolomini d’Aragona, it was funded, and hence made possible, by Aix-Marseille University, particularly by one of its institutes, the CEPERC (now Centre Gilles Gaston Granger), as well as by the French National Center for Scientific Research (CNRS) and by the A*MIDEX foundation. Many of the contributors to this issue also took part in the workshop as lecturers. Although some speakers (in particular Kosta Došen, Per Martin-Löf and Peter Schroeder-Heister) did not submit papers for this issue, their work and talks have influenced, at least indirectly, its content.

The papers can be divided into three main groups, as the table of contents shows: theoretical, historical and technical. This subdivision should not be understood as exclusive; it is only meant to suggest a difference in the emphasis of the proposed analyses.

The first group concerns the specific problems raised by the idea that proofs should be explained through valid inferences. Prawitz’s, Usberti’s, Cozzo’s and Piccolomini d’Aragona’s papers belong to this group.

The problem is very clearly stated by Dag Prawitz. We may try to explain proofs as chains of valid inferences. On this view, and on pain of totally trivialising the notion of proof itself, valid inferences cannot be conceived as mere truth-preserving transitions from some premises to a certain conclusion. Indeed, a one-step truth-preserving inference from the conjunction of the axioms of a theory to one of its theorems would also be a one-step proof of the theorem; yet such a proof, of course, has nothing to do with what we usually call a proof. Valid inferences must relate to evidence, or knowledge: thanks to them, the agent knows that the conclusion is true provided that he or she knows that the premises are true.
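
Schematically, if $A_1, \ldots, A_n$ are the axioms of a theory and $T$ is any of its theorems, the one-step rule

\[
\frac{A_1 \wedge \dots \wedge A_n}{T}
\]

is truth-preserving by definition; counting it as a proof of $T$, however, would empty the notion of proof of any epistemic content.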

What does it mean to know? An answer could be: to know that a proposition is true is to know a proof of the proposition. The resulting notion of valid inference would then be such that an inference is valid when it leads from proofs of the premises to a proof of the conclusion. Since we required proofs to be chains of valid inferences, though, this strategy brings our analysis back to a circular interdependence of the two concepts. In this respect, there would seem to be at least two ways out: (1) to explain the validity of inferences without referring to proofs, or (2) to explain proofs without referring to valid inferences. Prawitz says that “the second alternative is […] to put the natural conceptual order upside down. So, the first alternative seems to me preferable”. In the second part of his paper, after a comparative and critical analysis of some intuitionistic solutions that endorse and develop the second alternative, he explains the concept of valid inference, and hence of proof, through a theoretical notion of ground, inspired by Heyting’s constructions (Heyting 1931, 1934). Constructions for propositions/sentences (such as observations, calculations, protocols of construction processes) may be taken as grounds for judging the corresponding proposition to be true, or for asserting the corresponding sentence. Given constructions for atomic propositions or sentences, and operations that fix the meaning of the logical constants, new constructions and operations can be defined, so as to open up the possibility of a non-circular account of valid inferences and proofs. Compared to his previous ground-theoretic writings (Prawitz 2009, 2012, 2013, 2015), the paper in this volume focuses specifically on the afore-mentioned circularity, as well as on the role that Heyting’s ideas play in Prawitz’s solution. Furthermore, in the closing remarks the author discusses a recognizability problem that he also raised in some previous works (Prawitz 1973, 1977, 2015), although here he suggests new ideas about how to frame it. The question is whether it is recognizable, and in what sense, that a term of Prawitz’s formal setup for grounds actually denotes a ground, so that the validity of inferences is luminous.
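
To give the flavour of this account, the meaning-constitutive operation for conjunction may, in a schematic rendering of our own, be put as follows:

\[
\text{if } g_1 \text{ is a ground for asserting } A \text{ and } g_2 \text{ is a ground for asserting } B\text{, then } \wedge\mathrm{I}(g_1, g_2) \text{ is a ground for asserting } A \wedge B.
\]

Valid inferences are then operations of this kind on grounds, and proofs are chains of such operations; no prior notion of proof is presupposed.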

Although they too refer to the idea that proofs should be explained through valid inferences, Cozzo and Usberti suggest strategies that differ from Prawitz’s.

Cesare Cozzo analyses what he calls cogent inferences in a way that involves pragmatics and context of use. This is an original approach, not only with respect to Cozzo’s previous works (Cozzo 1994)—with the exception of Cozzo (2015, 2016), where the notion of epistemic context was already taken into account—but also, more generally, in that it tries to reconcile the constructivist analysis of valid inferences and proofs with so-called virtue epistemology. Instead of Heyting’s constructions, Cozzo bases his explanation of cogent inferences on speech acts performed by epistemologically virtuous agents in public contexts of intersubjective practices. Cogency is defined as epistemic compulsion, and a cogent inference is understood as carried out within a truth-seeking intersubjective context; as such, cogent inferences may be refined in two senses, according to the epistemic contexts in which they succeed, and according to the strength of that success. Proofs are defined as chains of trans-contextually cogent inferences, which induces a universal quantification over “all new epistemic contexts [...] that are generated by [a given one]”; more specifically, the inferences in a proof are valid, i.e. they remain cogent in all the epistemic contexts generated by a given one. This also paves the way for the interesting question of how the dynamic development of successive epistemic contexts should be described.

Gabriele Usberti focuses on epistemic transparency, a notion with respect to which the possession of evidence is characterized as follows: “the possession of evidence E for a sentence A is epistemically transparent if, and only if, it cannot happen that one is in possession of E without being in a position to know that one is”. He also proposes a distinction between transparency of the possession of evidence, on the one hand, and transparency of a notion standing for evidence, on the other—suggesting relations between them, e.g. that a negation of the latter implies a negation of the former. He then examines Prawitz’s proof-theoretic semantics and theory of grounds, concluding that both of them lead to non-transparency. According to Usberti, Prawitz’s approach is unsatisfactory, as it reifies evidence by equating (the possession of) it with (the possession of) abstract objects like grounds. The overall analysis is guided by the tenet that intuitive evidence must be transparent, because “only an intuitive notion of evidence whose possession is transparent is capable to play the role Prawitz assigns to evidence in his explanation of inference”—i.e. only such a notion could explain how deductively correct inferences yield justification. Usberti’s critical discussion of Prawitz’s proof-theoretic semantics and theory of grounds is furthermore—and quite surprisingly—based on the BHK setup, so that it relies on the very theoretical context that Prawitz himself uses, as said above, to substantiate his standpoint. The author finally proposes a conception where evidence is expressed in terms of cognitive states (also developed elsewhere, e.g. in Usberti 2015)—or, more precisely, of classes of cognitive states. The resulting approach therefore involves a quantification over all possible cognitive states.
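
In epistemic-modal notation, and as a schematic paraphrase of our own, the definition might be rendered as

\[
\mathrm{Transparent}(E, A) \;\Longleftrightarrow\; \Box\bigl(\mathrm{Poss}(E) \rightarrow \mathrm{PK}(\mathrm{Poss}(E))\bigr),
\]

where $\mathrm{Poss}(E)$ reads “one possesses evidence $E$ for $A$” and $\mathrm{PK}(\varphi)$ reads “one is in a position to know that $\varphi$”.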

We could say that, with reference to Prawitz’s concerns, the latter two papers suggest a shift from foundation to description. Prawitz starts with the transcendental problem of explaining how a definition of mathematical proofs as chains of deductively valid inferences is possible without circularity, thus adopting a foundationalist point of view; Cozzo’s and Usberti’s papers instead deal with the problem of describing the passage from possibly defeasible evidence—in the context of speech utterances of empirical assertions—to mathematical validity.

Finally, Antonio Piccolomini d’Aragona’s paper, also belonging to this group, compares Prawitz’s earlier proof-theoretic notions of proof and valid inference with their recent ground-theoretic version. To do this, he mostly refers to the proofs-as-chains conception, and to the above-mentioned recognizability issue. Prawitz’s theory of grounds seems to allow for genuine advances, although it still suffers—if less urgently—from recognizability problems. Piccolomini d’Aragona asks how, once algorithmic decidability has been ruled out, the word “recognizability” should be understood. He then proposes a diagnosis based on the degree of generality of the claim, which leads to two different versions of it: one where the order of the quantifiers is ∀∃, and another where the order of the quantifiers is ∃∀. The first version seems to be more plausible than the second, although this standpoint may force a classical understanding of the meta-logical constants.
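
Schematically, writing $D(t)$ for “term $t$ denotes a ground” and $R(m, t)$ for “by epistemic means $m$ one recognizes that $t$ denotes a ground” (both predicates are our own illustrative shorthand), the two versions would read

\[
\forall t\,\bigl(D(t) \rightarrow \exists m\, R(m, t)\bigr)
\qquad\text{versus}\qquad
\exists m\,\forall t\,\bigl(D(t) \rightarrow R(m, t)\bigr),
\]

the latter demanding a single uniform means of recognition for all terms.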

Two papers belong to the historical section of this special issue: the contributions of Göran Sundholm and Gabriella Crocco.

Considering the history of logic as a whole, Göran Sundholm shows how the neglect of epistemic considerations in logic is a relatively recent phenomenon. In today’s logic, the mainstream considers inferences not primarily as acts, but as production steps in the generation of formal derivations involving uninterpreted well-formed formulae. By contrast, up to 1930, every logician of note had followed Frege’s lead when constructing formal calculi, combining his or her formal language with the Aristotelian conception of demonstrative science. The latter organizes a field of knowledge by means of axioms, considered self-evident in terms of primitive concepts, and proceeds to gain novel insights through the application of similarly self-evident rules of inference. For Whitehead and Russell, Ramsey, Leśniewski, the early Carnap, Curry, Church and the early Heyting, systems of logic were interpreted calculi, understood as epistemological tools. Sundholm points out how the pair Hilfssprache/Darlegungssprache, today often misunderstood, played a major role in the projects of constructing auxiliary interpreted formal languages, such as Frege’s Begriffsschrift and the like.

The transformation of formal logic into formalized mathematical logic was propounded by Hilbert’s school and by the Warsaw School. Formal systems no longer fulfilled any epistemological role per se. Instead, strictly speaking, the “well-formed formulae” lack meaning, and as such they do not express anything. They are nothing but mathematical objects; in fact, formally speaking, the metamathematical expressions are elements of freely generated semi-groups of strings. With this shift in the role of the “languages” of logic, epistemic matters are driven further into the background. The logical calculi are no longer used for epistemological purposes. The strange mixture of such a conception of logic with what Sundholm calls the ontological tradition, stemming from Bolzano’s 1837 Wissenschaftslehre, is the root of the equally strange notion of valid inference in terms of truth-preserving (“under all variations”) steps from premises to conclusion—a notion also denounced as totally inadequate in Prawitz’s paper: “[it] is a riddle how this inadequate way of defining the validity of inferences can have come to be so widely accepted, commonly repeated in most textbooks in logic”.

Sundholm affirms that, after Gödel’s work, the attempts to resuscitate the Fregean ideal of logic no longer seemed viable and were abandoned: to maintain classical logic as well as impredicativity, while insisting on explicit meaning-explanations that render axioms and rules of inference self-evident, simply seems to be asking too much. Thus, he says, we may jettison meaning for the full formal language, while maintaining classical logic and impredicativity—the option chosen by Hilbert’s formalism and its more or less conscious followers. As a second option, we may jettison classical logic and Platonist impredicativity, and instead offer meaning explanations for constructivist languages after the now familiar fashion of Heyting. The second part of Sundholm’s paper is devoted to tracking traces of the ontological layer in the epistemic tradition stemming from Heyting’s work and Gentzen’s analysis, and further developed by Per Martin-Löf and Dag Prawitz. Finally, the paper also takes into account the difference between epistemic assumptions and truth-makers, and its relation to demonstrations of judgements and to proofs as objects.

Gabriella Crocco’s paper analyses a significant exception to Sundholm’s claim that, after Gödel’s work, the attempts to resuscitate the Fregean ideal of logic no longer seemed viable and were consequently abandoned. Gödel himself is clearly interested in an epistemic account of logic in continuity with the Aristotelian conception of demonstrative science, yet he firmly upholds impredicativity, and insists on explicit meaning-explanations that could render axioms and rules of inference self-evident. Moreover, Crocco’s explanation of the notions of formal and informal proof, so important in recent debates, shows that Gödel certainly falls under Prawitz’s second alternative, as he tries to explain inferences by proofs and not vice versa. An inference is for him something that, attached to a proof, yields a proof, where a proof, as affirmed in note 20 of version III of Is Mathematics Syntax of Language?, is not “a sequence of expressions satisfying certain formal conditions, but a sequence of thoughts convincing a sound mind” (Gödel 1995, p. 341). Why is the epistemic notion of conviction, and therefore of evidence, not considered by Gödel as a primitive atomic element of proofs? The answer should be sought in the reasons for his contrast between formalized deductions and proofs. Gödel’s first incompleteness theorem tells us that, in any setup containing at least elementary arithmetic, provability in a theory is not reducible to formal provability in a language, i.e. in a calculus. His definition of recursive function provides us with a first complete characterization of the properties that a calculus on natural numbers (and hence formal provability) should have: a step-by-step process, independent of meaning, deterministic and local. How, then, should we conceive of the general properties of a proof, which is not reducible to a calculus? Non-locality seems to be the core of Gödel’s argument. It implies that the acquisition of evidence by human subjects cannot be reduced to elementary steps given once and for all. An inference can become evident when we take into account the global—i.e. non-local—features of what has been proved on the basis of previous evidence. Gödel’s ideas on this subject, detectable in his 1944 paper (Gödel 1990, pp. 119–141), in his conversations with Wang (Wang 1996), and in his philosophical notes (Crocco et al. 2016, 2017), involve a type-free logic of concepts, which should therefore be developed in a frame different from that of the intuitionistic analysis of proofs, and point to the notion of absolute proof. Some recent developments of what is nowadays a widening research area, often called “diagrammatic thinking”, could be related to some aspects of Gödel’s analysis of these problems, although their compatibility with a theory of meaning, a very important aspect in Gödel’s view, is still far from clear. We will come back to this topic when we discuss Mumma’s paper below.
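
In its familiar modern form, the first incompleteness theorem states that for any consistent, recursively axiomatized theory $T$ containing elementary arithmetic there is a sentence $G_T$ such that

\[
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T
\]

(Gödel’s original argument requires ω-consistency for the second unprovability claim; Rosser later weakened this to plain consistency). The theorems of any such calculus therefore fail to exhaust arithmetical truth, which is what sustains Gödel’s contrast between formal provability and proof.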

The last group of papers concerns different specific aspects of the general problem of the relation between inferences and proofs, mostly—although not exclusively—dealt with from the perspective of the intuitionistic tradition, like the aforementioned approaches of Martin-Löf and Prawitz. Klev’s, Tranchini’s, and Petrolo and Pistone’s papers refer to this tradition.

Ansten Klev is concerned with Martin-Löf’s type theory, which famously comes in two different versions, depending on how the elimination (and equality) rules for the identity type are defined. In the extensional type theory, the elimination rule allows one to infer judgemental equality from identity. In the intensional version, the elimination rule is instead a generalized induction principle, following the same pattern as the elimination rules for the unit type or for the type N of natural numbers. In the extensional case we can prove, inside the theory, that any proof of an identity Id(A, a, b) is judgementally equal to the trivial proof Refl(A, a)—hence, identity types are proof-irrelevant. The intensional case is significantly weaker, as judgemental equality is replaced by propositional identity in certain complex types—which allows for a non-trivial theory of proofs of identity types. Klev thus proposes a semantic justification of the identity elimination rule, essentially in line with the semantic justification that Martin-Löf adopts for the other constructions of his theory.
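
In one standard notation, the two elimination rules can be displayed as follows. The extensional theory licenses equality reflection:

\[
\frac{p : \mathrm{Id}(A, a, b)}{a = b : A}
\]

whereas the intensional theory has the induction-like rule J: given a family $C(x, y, z)$ over $x, y : A$ and $z : \mathrm{Id}(A, x, y)$,

\[
\frac{c : \mathrm{Id}(A, a, b) \qquad d(x) : C(x, x, \mathrm{refl}(x)) \;\; (x : A)}{\mathrm{J}(c, d) : C(a, b, c)}
\]

so that proofs of identities are analysed only via the canonical proof $\mathrm{refl}$.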

Tranchini’s and Petrolo and Pistone’s papers are concerned with a proof-theoretic approach to paradoxes. They both refer to Tennant’s characterization of paradoxical derivations as those that, in the setup of Prawitz’s proof-theoretic semantics, induce oscillating reduction loops or non-terminating reduction sequences—thus failing to satisfy “normalization”, or better “full evaluation”, properties.
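
A standard illustration (our rendering of a well-known example) uses a sentence ρ governed by the deliberately paradoxical rules

\[
\frac{\neg\rho}{\rho}\,\rho\mathrm{I}
\qquad\qquad
\frac{\rho}{\neg\rho}\,\rho\mathrm{E}
\]

Let Π be the derivation that assumes ρ, obtains ¬ρ by ρE, applies it to the assumption ρ to get ⊥, and discharges ρ by ¬I, concluding ¬ρ. Applying ¬E to Π and to ρI(Π) then yields a closed derivation of ⊥ containing a maximal occurrence of ¬ρ; reducing that maximal formula produces a derivation of the very same form, so the reduction sequence never terminates.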

As he has already done elsewhere (Tranchini 2014), Luca Tranchini applies the Fregean sense-denotation distinction to valid derivations, reading the latter as linguistic entities that denote BHK proofs. Since validity is based on full evaluation—that is, valid derivations must reduce to a canonical introductory form—and since paradoxical derivations do not satisfy this criterion, he suggests conceiving of paradoxical derivations as non-denoting expressions. Although the proposal may succeed in explaining paradoxical phenomena, it is in the end challenged by the circumstance that the immediate sub-derivations of a derivation of ⊥ turn out to denote proofs of both A and ¬ A. Far from invalidating the basic tenet that denotation corresponds to full-evaluability, Tranchini says, “the provability of both ¬ A and A together with the unprovability of ⊥ forces the view that in presence of paradoxical phenomena, the functions proving an implication must be understood as being sometimes partial”. However, this change in perspective—no small one—requires that the notion of open valid argument be modified too, which the author does through what he calls validity* and validity**. These are in turn obtained via a further modification of the notion of correct inference, differing from Prawitz’s in that it is local—whereas, in Prawitz’s setup, inferences are valid in the global sense of preserving validity throughout the whole structures they belong to. It may be of interest to remark here that Prawitz’s ground-theoretic notion of valid inference is local too, which could allow for a fruitful integration between Prawitz’s recent theory of grounds and Tranchini’s standpoint. Finally, the author investigates the consequences of adopting validity* and validity**, stressing that “both allow introduction rules which do not satisfy a strict complexity condition”.

Mattia Petrolo and Paolo Pistone show how closed derivations of ⊥ that satisfy Tennant’s requirements can be turned into closed normal derivations of either A & ¬ A or (A → A) → ⊥. Derivations of this kind, although patently violating Tennant’s conditions, are for the authors just as paradoxical as the non-normalizing ones, since “in both cases one constructs two closed independent arguments for A and ¬ A”. This moral equivalence is substantiated by looking at paradoxical derivations as untyped graphical proof-objects; in this way, Petrolo and Pistone connect this special issue to another fundamental tradition in proof theory, the one arising from Girard’s Linear Logic and Geometry of Interaction (Girard 1987, 1989, 1990). Moreover, the authors point out that, as regards paradoxes in proof-theoretic semantics, Prawitz’s notion of valid argument involves an ambiguity: its two main articulations—based on introduction or on elimination rules—prove to be perfectly symmetric with regard to validity, but suffer from a strong asymmetry in the paradoxical case. Thus, “it is not clear whether paradoxicality should be interpreted [...] by the failure of some compositional principles [...] or by some notion of partial function”. Finally, Petrolo and Pistone also discuss Tennant’s “shrinking” reductions, and show that they conflict with the necessary prerequisites for identity of proofs—prerequisites fulfilled, on the other hand, by their untyped graphical approach.

In the recent philosophical debate, the epistemology of visual thinking in mathematics is a well-developed domain. John Mumma’s paper clearly belongs to this field of research, but it also suggests that diagramming can broaden Prawitz’s thesis according to which deductive inferences are acts—more specifically, operations on (alleged) grounds for the premises which yield grounds for the conclusions. The paper does not discuss in detail how this broadening might be articulated; instead, it focuses on a specific inference α from the premise [point a is before point b and point b is before point c] to the conclusion [point a is before point c]. Mumma provides an epistemological analysis of α, and shows how diagrams can be associated with this inference in order to explain its evidence. Here, some important questions can be raised: (a) is there any substantial difference between an epistemology of mathematical proofs based on a theory of meaning and one based instead on acts of diagramming? (b) is the act of diagramming essentially non-conceptual? Mumma’s analysis indeed raises relevant issues about the cognitive act of diagramming involved in α, which could be used to compare the two afore-mentioned approaches: for example, they relate to the notion of modality, implying a quantification over all possible arrangements of positions of the points, or to the notion of integration of premises. Finally, Mumma discusses the notion of “seeing” diagrams, which in a sense evokes the idea of perceiving the meaning of concepts through their structural relations—proposed, for example, by Gödel through the notion of absolute proof.
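
Schematically, writing $B(x, y)$ for “$x$ is before $y$” (our illustrative shorthand), the inference α has the form

\[
\frac{B(a, b) \wedge B(b, c)}{B(a, c)}
\]

and its model-theoretic validation would appeal to the truth of $\forall x\,\forall y\,\forall z\,\bigl(B(x, y) \wedge B(y, z) \rightarrow B(x, z)\bigr)$ over all possible arrangements of points, whereas Mumma locates its evidence in the act of diagramming itself.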