
1 Introduction

Gödel-Löb logic \(\textsf{GL}\) extends classical modal logic \(\textsf{K}\) with the Gödel-Löb axiom \(\square (\square \varphi \rightarrow \varphi ) \rightarrow \square \varphi \). \(\textsf{GL}\) is the provability logic of Peano Arithmetic \(\textsf{PA}\), i.e. it consists of all modal formulas that are true under any arithmetical interpretation where \(\square \varphi \) means “\(\varphi \) is provable in \(\textsf{PA}\)” (expressed in the language of \(\textsf{PA}\)).

An intuitionistic version of \(\textsf{GL}\) is \(\textsf{iGL}\) and the intuitionistic counterpart of \(\textsf{PA}\) is Heyting Arithmetic \(\textsf{HA}\). For a long time, the provability logic of \(\textsf{HA}\) was an open problem and was only known to be an extension of \(\textsf{iGL}\). However, Mojtahedi claims to have found a solution in a preprint [34] currently under review.

Several other logics also have provability interpretations, such as modalised Heyting calculus \(\textsf{mHC}\), Kuznetsov-Muravitsky logic \(\textsf{KM}\), and intuitionistic Strong Löb logic \(\textsf{iSL}\) [14, 30, 32, 35]. All these intuitionistic modal logics except \(\textsf{mHC}\) include the Gödel-Löb axiom and all except \(\textsf{iGL}\) contain the so-called completeness axiom \(\varphi \rightarrow \square \varphi \).

It is important to note that these logics are defined over the language with only the \(\square \)-modality and without \(\diamond \). In classical modal logic, \(\diamond \) is dual to \(\square \) and reads as consistency in the provability interpretation. However, for intuitionistic modal logics, in general, \(\diamond \) and \(\square \) are not interdefinable and several choices can be made. Interestingly, intuitionistic modal logics defined over the language with only \(\square \) already reveal an intrinsically intuitionistic character. Important for us is the aforementioned completeness principle, also known as the coreflection principle. It trivializes in a classical setting, but has interesting intuitionistic readings. Indeed, in our setting of provability, \(\varphi \rightarrow \square \varphi \) reads as completeness: “if \(\varphi \) is true then \(\varphi \) is provable” (see [45] for a discussion on the completeness principle in extensions of Heyting Arithmetic). The coreflection principle also appears in intuitionistic epistemic logic and lax logic (for overviews see, e.g., [18, 32]).

Here, we consider \(\textsf{iSL}\), the minimal intuitionistic modal logic with both the Gödel-Löb axiom and the completeness axiom, which can also be axiomatised over intuitionistic modal logic \(\textsf{iK}\) by the Strong Löb axiom \((\square \varphi \rightarrow \varphi ) \rightarrow \varphi \). The logic \(\textsf{iSL}\) is the provability logic of an extension of Heyting Arithmetic with respect to so-called slow provability [46] and plays an important role in the \(\varSigma _1\)-provability logic of \(\textsf{HA}\) [3].

The Gödel-Löb axiom characterises the transitive converse well-founded Kripke frames for \(\textsf{GL}\), and likewise the birelational frames for \(\textsf{iGL}\), \(\textsf{iSL}\), and \(\textsf{KM}\). Interestingly, for \(\textsf{iSL}\), \(\textsf{mHC}\), and \(\textsf{KM}\), the modal relation is contained in the intuitionistic relation. This semantics plays an important role in the study of \(\textsf{iSL}\), e.g. in the characterisation of its admissible rules [19]. A natural deduction system for \(\textsf{iSL}\) can be found in [7]. The proof systems that we focus on here are sequent calculi.

From a proof-theoretic perspective, the “diagonal formula” \(\square \varphi \) in the modal (GLR) rule for \(\textsf{GL}\) causes difficulties for direct cut-elimination because the standard induction on the size of the cut-formula and the height of the derivation fails. Cut-elimination is highly nontrivial as witnessed by decades of unsuccessful attempts and controversies before the proof by Valentini [44] was finally shown to be correct [23].

figure a

In backward proof search, the (GLR) rule causes loops because \(\square \varGamma \) is preserved upwards from conclusion to premise. For (GLR), a simple terminating and complete strategy consists in applying (GLR) only if \(\square \varphi \not \in \square \varGamma \). In sequent calculi for intuitionistic logic, the traditional (\(\rightarrow \!\text {L}_{\text {i}}\)) rule, shown above right, can cause backward proof search to go into loops. For termination without loop check, various authors have independently discovered the sequent calculus \(\textsf{G4ip}\) which replaces the (\(\rightarrow \!\text {L}_{\text {i}}\)) rule with multiple rules, depending on the form of \(\varphi \) [12]. Iemhoff [29] developed \(\textsf{G4}\)-like calculi for several intuitionistic modal logics.

Thus, in a sequent calculus for an intuitionistic provability logic, both the modal rule and the left implication rule have the potential to cause loops and the modal rule can complicate direct cut-elimination! For the logic \(\textsf{iGL}\), van der Giessen and Iemhoff have developed \(\textsf{G3iGL}\) and \(\textsf{G4iGL}\) [20], providing a direct cut-elimination procedure for the former. The initial proof of cut-elimination for \(\textsf{G4iGL}\) was indirect, via \(\textsf{G3iGL}\), but Goré and Shillito later formalised direct cut-elimination using the maximal height of derivations as induction parameter [26].

Recently, van der Giessen and Iemhoff [21] developed two sequent calculi, \(\textsf{G3iSL}\) and \(\textsf{G4iSL}\), for \(\textsf{iSL}\), for which they proved results analogous to those for \(\textsf{G3iGL}\) and \(\textsf{G4iGL}\) mentioned above. In particular, they show that backward proof search in \(\textsf{G4iSL}\) weakly terminates: there exists a terminating (and complete) backward proof search strategy, namely one similar to the strategy described above for the logic \(\textsf{GL}\). However, not all strategies terminate on this calculus: the naive backward proof search strategy, which applies any rule in any order, does not.

Here, we present \(\textsf{G4iSLt}\), which replaces the \(\textsf{G4iSL}\) rules in the top row below by the rules in the bottom row. As suggested by van der Giessen and Iemhoff [21], the new modal rule drops the explicit embedding of transitivity. But crucially, the new left-implication rule drops both transitivity and contraction on \(\square \varphi \rightarrow \psi \) in the left premise. The right premise \(S=\varPhi ,\square \varGamma ,\psi \Rightarrow \chi \) is kept untouched:

figure b

Our results improve on the work of van der Giessen and Iemhoff [21]. First, our new measure ensures that the naive backward proof search strategy for our new calculus terminates. This is unusual for sequent calculi for provability logics, and especially for intuitionistic provability logics. Second, we prove direct cut-elimination for \(\textsf{G4iSLt}\) using a proof technique similar to the mhd proof technique [6, 24]. Third, all our results are formalised in Coq and can be found here: https://ianshil.github.io/G4iSLT. We consequently contribute to the rapidly growing literature on formalised proof theory [1, 8, 9, 15, 17, 24, 26, 39]. We also think that our work sheds light on what one might call proof-theoretic meta-considerations. Namely, it shows the subtle consequences of rule choices on termination and cut-elimination.

In Sect. 2, we introduce the preliminaries of \(\textsf{iSL}\), including our calculus \(\textsf{G4iSLt}\). Section 3 presents the admissibility of structural rules in \(\textsf{G4iSLt}\). In Sect. 4, we prove that backward proof search in \(\textsf{G4iSLt}\) strongly terminates. Finally, in Sect. 5, we directly prove cut-admissibility for \(\textsf{G4iSLt}\) using a proof technique similar to the mhd proof technique [6, 24].

2 Preliminaries

In this section we successively present the syntax, axiomatic system, Kripke semantics and sequent calculus for the logic \(\textsf{iSL}\).

2.1 Syntax

Let \(\mathbb {V}=\{p,q,r,\dots \}\) be a countably infinite set of propositional variables on which equality is decidable, that is, for all \(p , q \in \mathbb {V}\) we can decide whether \(p=q\) or \(p \ne q\). Modal formulae are defined using BNF notation as below:

$$ \varphi ::= p\in \mathbb {V} \mid \bot \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \varphi \rightarrow \varphi \mid \square \varphi $$

We use the Greek letters \(\varphi ,\psi ,\chi ,\delta ,\dots \) for formulae and \(\varGamma ,\varDelta ,\varPhi ,\varPsi ,\dots \) for multisets of formulae. We say that \(\varphi \) is a boxed formula if \(\square \) is its main connective. For a multiset \(\varGamma \), we define the multiset \(\square \varGamma := \{\square \varphi : \varphi \in \varGamma \}\). By the unboxing of a multiset \(\square \varGamma \) we mean the multiset \(\varGamma \).

Following Goré et al. [24, 26], we encode formulae as an inductive type MPropF whose base case encodes \(\mathbb {V}\) as the type nat of natural numbers because nat is countably infinite and equality is decidable on it. A list of such formulae then has the type list MPropF. The usual operations on lists “append” and “cons” are respectively represented by ++ and :: but Coq also allows us to write lists in bracket notation using ; as separator. Thus the terms \(\varphi \)1 :: \(\varphi \)2 :: \(\varphi \)3 :: nil and [\(\varphi \)1] ++ [\(\varphi \)2] ++ [\(\varphi \)3] and [\(\varphi \)1 ; \(\varphi \)2 ; \(\varphi \)3] all encode the list \(\varphi _1, \varphi _2, \varphi _3\).
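To fix notation, a minimal Coq sketch of such an inductive type is given below; the constructor names are illustrative and need not match those of the formalisation.

Inductive MPropF : Type :=
  | Var : nat -> MPropF                 (* propositional variables, encoded as nat *)
  | Bot : MPropF                        (* falsum *)
  | And : MPropF -> MPropF -> MPropF
  | Or  : MPropF -> MPropF -> MPropF
  | Imp : MPropF -> MPropF -> MPropF
  | Box : MPropF -> MPropF.             (* the provability modality *)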

We straightforwardly extend Dyckhoff’s notion of weight of a formula [11], defined for the intuitionistic language, to the modal language.

Definition 1

The weight \(w(\varphi )\) of a formula \(\varphi \) is defined as follows:

$$\begin{array}{rcl} w(\bot )=w(p) & = & 1 \\ w(\psi \vee \chi )=w(\psi \rightarrow \chi ) & = & w(\psi ) + w(\chi ) + 1 \\ w(\psi \wedge \chi ) & = & w(\psi ) + w(\chi ) + 2 \\ w(\square \psi ) & = & w(\psi ) + 1 \end{array}$$

The main motivation behind this weight is to ensure that \(w(\varphi \rightarrow (\psi \rightarrow \chi ))<w((\varphi \wedge \psi )\rightarrow \chi )\), which is crucial to show termination of naive backward proof search on the sequent calculus \(\textsf{G4ip}\) for intuitionistic logic.
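A direct Coq rendering of this weight function, building on the MPropF sketch above (names again indicative), could look as follows.

Fixpoint weight (phi : MPropF) : nat :=
  match phi with
  | Var _ => 1
  | Bot => 1
  | And psi chi => weight psi + weight chi + 2
  | Or psi chi => weight psi + weight chi + 1
  | Imp psi chi => weight psi + weight chi + 1
  | Box psi => weight psi + 1
  end.

(* Motivating inequality: w(phi -> (psi -> chi)) = w(phi) + w(psi) + w(chi) + 2,
   while w((phi /\ psi) -> chi) = w(phi) + w(psi) + w(chi) + 3, so the former
   is strictly smaller. *)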

2.2 Axiomatic Systems as Consequence Relations

Traditional Hilbert calculi are designed to capture logics as sets of theorems, that is sets of the form \(\{\varphi :\;\vdash \varphi \}\). However, when considering logics as consequence relations these systems are inadequate, and notably lead to historical confusions about properties such as the deduction theorem [25, 27].

Generalised Hilbert calculi manipulate expressions \(\varGamma \vdash \varphi \), where \(\varGamma \) is a set of formulae. They clearly distinguish between the notion of deducibility from a set of assumptions, versus theoremhood. They are particularly useful for identifying the appropriate form of deduction theorem holding for a logic [25]. Still, they correspond to traditional Hilbert calculi when restricted to consecutions of the shape \(\emptyset \vdash \varphi \), as we do here. Thus, we can connect the generalised Hilbert calculus here to the traditional Hilbert calculus considered by Ardeshir and Mojtahedi [3].

The generalised Hilbert calculus \(\textsf{iSLH}\) for \(\textsf{iSL}\), shown in Fig. 1, extends the one for intuitionistic modal logic \(\textsf{iK}\) with the Strong Löb axiom \((\square \varphi \rightarrow \varphi )\rightarrow \varphi \). We write \(\varGamma \vdash _{\textsf{iSLH}}\varphi \) if \(\varGamma \vdash \varphi \) is provable in \(\textsf{iSLH}\).

Note that if we replace the premise of the rule (Nec) by \(\varGamma \vdash \varphi \) we obtain an equivalent calculus. This is implied by the completeness axiom \(\varphi \rightarrow \square \varphi \) and the fact that the deduction theorem holds in \(\textsf{iSLH}\) [18].

Fig. 1.
figure 1

Generalised Hilbert calculus \(\textsf{iSLH}\) for \(\textsf{iSL}\)
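As an indicative reconstruction (the authoritative presentation is the one in Fig. 1), a generalised Hilbert calculus of this kind can be expected to consist of an assumption/axiom rule, modus ponens and necessitation:

$$\begin{array}{ll} (\text{Ax}) & \varGamma \vdash \varphi \text{ whenever } \varphi \in \varGamma \text{ or } \varphi \text{ is an instance of an axiom of } \textsf{IPC}, \text{ of } \square (\varphi \rightarrow \psi )\rightarrow (\square \varphi \rightarrow \square \psi ), \text{ or of } (\square \varphi \rightarrow \varphi )\rightarrow \varphi \\ (\text{MP}) & \text{from } \varGamma \vdash \varphi \rightarrow \psi \text{ and } \varGamma \vdash \varphi \text{ infer } \varGamma \vdash \psi \\ (\text{Nec}) & \text{from } \emptyset \vdash \varphi \text{ infer } \varGamma \vdash \square \varphi \end{array}$$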

2.3 Kripke Semantics

We now present the Kripke semantics for \(\textsf{iSL}\) [3, 32], notably to prove soundness of our sequent calculus \(\textsf{G4iSLt}\) and to explain its rules (SLtR) and (\(\square \!\rightarrow \)L).

The Kripke semantics of \(\textsf{iSL}\) is a restriction of the Kripke semantics for intuitionistic modal logics. More precisely, the semantic interpretation of connectives is preserved, but the class of models is restricted. The models for this logic are defined below, where for a set W, we write \(\mathcal {P}(W)\) for the set of all subsets of W.

Definition 2

A Kripke model \(\mathcal M\) for \(\textsf{iSL}\) is a tuple \((W,\le ,R,I)\), where W is a non-empty set (of possible worlds), both \(\le \) (the intuitionistic relation) and R (the modal relation) are subsets of \(W\times W\), and \(I:\mathbb V\rightarrow \mathcal P(W)\), which satisfies the following: \(\le \) is reflexive and transitive; R is transitive and converse well-founded; \((\le \circ R)\,\subseteq \,R\) where “\(\circ \)” is relational composition; \(R\,\subseteq \,\le \); and for all \(p\in \mathbb V\) and \(w,v\in W\), if \(w\le v\) and \(w\in I(p)\) then \(v\in I(p)\).

Note the peculiarity of the models for \(\textsf{iSL}\): \(R\,\subseteq \,\le \), that is the modal relation is a subset of the intuitionistic relation. We recall the standard definition of forcing for intuitionistic modal logics, and show that persistence holds.

Definition 3

Given a Kripke model \(\mathcal M=(W,\le ,R,I)\), we define the forcing relation as follows, where \(v \ge w\) is just \(w \le v\):

figure c
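For reference, the standard clauses take the following form (a sketch; persistence for \(\square \) relies on the frame condition \((\le \circ R)\subseteq R\)):

$$\begin{array}{lcl} \mathcal M,w\Vdash p & \text{ iff } & w\in I(p) \\ \mathcal M,w\Vdash \bot & & \text{never} \\ \mathcal M,w\Vdash \varphi \wedge \psi & \text{ iff } & \mathcal M,w\Vdash \varphi \text{ and } \mathcal M,w\Vdash \psi \\ \mathcal M,w\Vdash \varphi \vee \psi & \text{ iff } & \mathcal M,w\Vdash \varphi \text{ or } \mathcal M,w\Vdash \psi \\ \mathcal M,w\Vdash \varphi \rightarrow \psi & \text{ iff } & \text{for all } v\ge w,\ \mathcal M,v\Vdash \varphi \text{ implies } \mathcal M,v\Vdash \psi \\ \mathcal M,w\Vdash \square \varphi & \text{ iff } & \text{for all } v \text{ with } wRv,\ \mathcal M,v\Vdash \varphi \end{array}$$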

Local consequence is as below where \(\mathcal M, w \Vdash \varGamma \) means \(\forall \varphi \in \varGamma , \mathcal M, w \Vdash \varphi \):

$$ \varGamma \models \varphi \;\;\text{ iff }\;\; \forall \mathcal M.\,\forall w.\,(\mathcal M,w\Vdash \varGamma \;\text{ implies }\;\mathcal M,w\Vdash \varphi ) $$

Lemma 1 (Persistence)

For any model \(\mathcal M=(W,\le ,R,I)\), formula \(\varphi \) and points \(w,v\in W\), if \(w\le v\) and \(\mathcal M,w\Vdash \varphi \) then \(\mathcal M,v\Vdash \varphi \).

Interestingly, as \(\textsf{iSL}\) satisfies the finite model property [46], it can also be characterised by the class of finite frames where R is transitive and irreflexive.

2.4 Sequent Calculus

A sequent is a pair of a finite multiset \(\varGamma \) of formulae and a formula \(\varphi \), denoted \(\varGamma \Rightarrow \varphi \). For a sequent \(\varGamma \Rightarrow \varphi \) we call \(\varGamma \) the antecedent of the sequent and \(\varphi \) the consequent of the sequent. For multisets \(\varGamma \) and \(\varDelta \), the multiset sum \(\varGamma \uplus \varDelta \) is the multiset whose multiplicity (at each formula) is a sum of the multiplicities of \(\varGamma \) and \(\varDelta \). We write \(\varGamma ,\varDelta \) to mean \(\varGamma \uplus \varDelta \). For a formula \(\varphi \), we write \(\varphi ,\varGamma \) and \(\varGamma ,\varphi \) to mean \(\{\varphi \} \uplus \varGamma \). From the formalisation perspective, a pair of a list of formulae (list MPropF) and a formula MPropF has type (list MPropF) * MPropF, using the Coq notation * for forming pairs. The latter is the type we give to sequents in our formalisation, for which we use the macro Seq. Thus the sequent \(\varphi _1, \varphi _2, \varphi _3 \Rightarrow \psi \) is encoded by the term [\(\varphi \)1 ; \(\varphi \)2 ; \(\varphi \)3] * \(\psi \), which itself can also be written as the pair ([\(\varphi \)1 ; \(\varphi \)2 ; \(\varphi \)3], \(\psi \)). Note that [\(\varphi \)1 ; \(\varphi \)2 ; \(\varphi \)3] * \(\psi \) is different from [\(\varphi \)2 ; \(\varphi \)1 ; \(\varphi \)3] * \(\psi \) since the order of the elements is crucial, so our lists do not capture multisets (yet).
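A minimal sketch of this encoding, reusing the MPropF sketch above (here as a plain definition rather than a macro; names are illustrative):

Require Import List.
Import ListNotations.

Definition Seq : Type := ((list MPropF) * MPropF)%type.

(* The sequent  p , box q  =>  p /\ q  is then written as the pair: *)
Example example_seq : Seq := ([Var 0 ; Box (Var 1)], And (Var 0) (Var 1)).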

A sequent calculus consists of a finite set of sequent rule schemas. Each rule schema consists of a conclusion sequent schema and some number of premise sequent schemas. A rule schema with zero premise schemas is called an initial rule. The conclusion and premises are built in the usual way from propositional-variables, formula-variables and multiset-variables. A rule instance is obtained by uniformly instantiating every variable in the rule schema with a concrete object of that type. This is the standard definition from structural proof theory.

Definition 4 (Derivation/Proof)

A derivation of a sequent S in the sequent calculus \(\textsf{C}\) is a finite tree of sequents such that (i) the root node is S; and (ii) each interior node and its direct children are the conclusion and premise(s) of a rule instance in \(\textsf{C}\). A proof is a derivation where every leaf is the conclusion of an instance of an initial rule.

Note that we explicitly define the notion of a derivation as an object rather than define the notion of derivability, as is done in some papers. We do so as we want to create a “deep” embedding of such derivations into Coq [9].

In what follows, it should be clear from context whether the word “proof” refers to the object defined in Definition 4, or to the meta-level notion. We say that a sequent is provable in \(\textsf{G4iSLt}\) if it has a proof in \(\textsf{G4iSLt}\). We elide the details of the encodings of sequent rules and derivations, as these can be found elsewhere [1, 39]. We define a predicate G4iSLt_prv on sequents to encode provability in \(\textsf{G4iSLt}\). Our encodings rely on the type Type, which bears computational content, unlike Prop, and is crucially compatible with the extraction function of Coq.

Before presenting our calculus, we recall standard notions from proof theory.

Definition 5 (Height)

For any derivation \(\delta \), its height \(h(\delta )\) is the maximum number of nodes on a path from root to leaf.

Definition 6 (Admissibility, Invertibility, Height-Preservation)

Let \(\mathsf R\) be a rule schema with premises \(S_0,\dots ,S_n\) and conclusion S. We say that \(\mathsf R\) is:

  • admissible: if for every instance of \(\mathsf R\), the instance of S is provable whenever the instances of \(S_0,\dots ,S_n\) are all provable;

  • invertible: if for every instance of \(\mathsf R\), the instances of \(S_0,\dots ,S_n\) are all provable whenever the instance of S is provable;

  • height-preserving admissible: if for every instance of \(\mathsf R\), if there are proofs \(\pi _0,\dots ,\) \(\pi _n\) of the instances of \(S_0,\dots ,S_n\) then there is a proof \(\pi \) of the instance of S such that \(h(\pi )\le h(\pi _i)\) for some \(0\le i\le n\);

  • height-preserving invertible: if for every instance of \(\mathsf R\), if \(\pi \) is a proof of the instance of S then there are proofs \(\pi _0,\dots ,\pi _n\) of the instances of \(S_0,\dots ,S_n\) such that \(h(\pi _i)\le h(\pi )\) for all \(0\le i\le n\).

The sequent calculus \(\textsf{G4iSLt}\) is given in Fig. 2. When defining rules we put the label naming the rule on the left of the horizontal line, while in instances of rules the label appears on the right of the line.

Fig. 2.
figure 2

The sequent calculus \(\textsf{G4iSLt}\), where \(\varPhi \) contains no boxed formula.

In \({({\textrm{IdP}})}\), a propositional variable instantiating the featured occurrences of p is principal. In a rule instance of (\(\wedge \)R), (\(\wedge \)L), (\(\vee \)R\(_i\)), (\(\vee \)L) or (\(\rightarrow \)R), the principal formula of that instance is defined as usual. In a rule instance of (\(p\!\rightarrow \)L), both a propositional variable instantiating p and the formula instantiating the featured \(p\rightarrow \varphi \) are principal formulae of that instance. In a rule instance of (\(\wedge \!\rightarrow \)L), (\(\vee \!\rightarrow \)L), (\(\rightarrow \rightarrow \)L) or (\(\square \!\rightarrow \)L), the formula instantiating respectively \((\varphi \wedge \psi )\rightarrow \chi \), \((\varphi \vee \psi )\rightarrow \chi \), \((\varphi \rightarrow \psi )\rightarrow \chi \) or \(\square \varphi \rightarrow \psi \) is the principal formula of that instance. In a rule instance of (SLtR) or (\(\square \!\rightarrow \)L), \(\square \varphi \) is called the diagonal formula [38].

The non-modal rules are taken from the calculus for \(\textsf{IPC}\) for which backward proof search strongly terminates [11]. The key point is that the usual intuitionistic left implication rule is replaced by four implication rules, depending on the main connective in the antecedent of the principal formula, in such a way that each premise is less complex than the conclusion. In particular, when considering the rule \({({\rightarrow \rightarrow \text {L}})}\), an application of the “regular” left implication rule yields the more complex left premise \(\varGamma ,(\varphi \rightarrow \psi )\rightarrow \chi \Rightarrow \varphi \rightarrow \psi \), which is (semantically) equivalent to the simpler left premise stated in rule \({({\rightarrow \rightarrow \text {L}})}\).
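For orientation, the rule \({({\rightarrow \rightarrow \text {L}})}\) inherited from \(\textsf{G4ip}\) has the following shape, with the simpler left premise just mentioned:

$$ {({\rightarrow \rightarrow \text {L}})}\;\; \frac{\varGamma ,\psi \rightarrow \chi \Rightarrow \varphi \rightarrow \psi \qquad \varGamma ,\chi \Rightarrow \delta }{\varGamma ,(\varphi \rightarrow \psi )\rightarrow \chi \Rightarrow \delta } $$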

We proceed to give semantic intuitions for the rules (SLtR) and (\(\square \!\rightarrow \)L).

The (SLtR) rule has similarities with the rule (GLR) (shown below) from sequent calculi for provability logics such as \(\textsf{GL}\), but with two major differences: (1) the non-boxed formulae \(\varPhi \) in the antecedent of the sequent are preserved from conclusion to premise in (SLtR), while they are deleted in (GLR); and (2) the formulae in \(\square \varGamma \) are not preserved upwards in (SLtR), while they are in (GLR).

figure d

From a backward proof search perspective, both rules correspond, semantically, to a “modal jump” from a point w which falsifies the conclusion \(\varPhi ,\square \varGamma \Rightarrow \square \varphi \) to a modal successor v which forces \(\varGamma \) but falsifies the succedent \(\varphi \) of the premise. The underlying relation R in both logics is transitive and converse well-founded. Using converse well-foundedness we can assume that v is the last modal successor making \(\varphi \) false, thus v forces \(\square \varphi \) in both logics. Transitivity implies that v forces \(\square \varGamma \) in both logics, so all its successors force \(\varGamma \). But, in \(\textsf{iSL}\), the modal relation R is contained in the intuitionistic relation, so by persistence v also forces \(\varPhi \), which is not the case in \(\textsf{GL}\), thus explaining difference (1). Again by persistence, v forcing \(\varGamma \) implies that all its modal successors force \(\varGamma \), meaning that v forces \(\square \varGamma \) already, thus explaining difference (2).

The two premises of (\(\square \!\rightarrow \)L) capture how \(\square \varphi \rightarrow \psi \) in the antecedent of the conclusion can be true. The simple case is when \(\psi \) is true, which corresponds to the right premise. The more complicated case is when \(\psi \) is not true, implying that \(\square \varphi \) must also be not true. Now, \(\square \varphi \) true semantically means that \(\varphi \) is true in all modal successors, hence \(\square \varphi \) not true means that \(\varphi \) is not true in a modal successor. But converse well-foundedness implies the existence of a last modal successor where \(\varphi \) is not true, with all its modal successors making \(\varphi \) true. The left premise corresponds to this last modal successor, as it encodes that \(\varphi \) is not true but \(\square \varphi \) is true. Moreover, this last modal successor is also an intuitionistic successor as \(R\,\subseteq \,\le \). By persistence, this last successor must also make \(\square \varphi \rightarrow \psi \) true. But then, a simple modus ponens on \(\square \varphi \) and \(\square \varphi \rightarrow \psi \) gives us \(\psi \).

Finally, we show that \(\textsf{G4iSLt}\) indeed captures the set of theorems of \(\textsf{iSL}\).

Theorem 1

For all \(\varphi \) we have: \(\emptyset \vdash _{\textsf{iSLH}}\varphi \) iff \(\Rightarrow \varphi \) is provable in \(\textsf{G4iSLt}\).

Proof

We proved the following two results in Coq.

figure e

The result (1), which relies on the admissibility of cut (Theorem 2), shows that \(\textsf{G4iSLt}\) is (strongly) complete with respect to \(\textsf{iSLH}\) and gives us the left-to-right direction of our theorem. The other direction involves the soundness of \(\textsf{G4iSLt}\) w.r.t. the local consequence shown in (2), as well as the (non-formalised) result of (weak) completeness of \(\textsf{iSLH}\) w.r.t. the local consequence obtained by Ardeshir and Mojtahedi [3]. \(\blacksquare \)

3 Admissible Rules in \(\textsf{G4iSLt}\)

This section aims at showing that the contraction rule is admissible. To do so, it follows the work developed by Goré and Shillito [26] on the sequent calculus \(\textsf{GL4ip}\) for the intuitionistic provability logic \(\textsf{iGL}\), which itself builds on the work of Dyckhoff and Negri [13] on \(\textsf{G4ip}\). Most of the overall structure of the argument is the same as for the case of \(\textsf{GL4ip}\), except for the crucial left-unboxing rule (\(\boxtimes \)), typical of \(\textsf{iSL}\), which is shown to be height-preserving admissible.

Most of the results of this section are proven by inductions on the weight of formulae and/or height of derivations. We omit the Coq encodings for brevity.

Lemma 2 (Height-preserving invertibility of rules)

The rules (\(\wedge \)R), (\(\wedge \)L), (\(\vee \)L), (\(\rightarrow \)R), (\(p\!\rightarrow \)L), (\(\wedge \!\rightarrow \)L), (\(\vee \!\rightarrow \)L) are height-preserving invertible.

We present height-preserving admissible and admissible rules in Fig. 3.

Fig. 3.
figure 3

Height-preserving admissible and admissible rules in \(\textsf{G4iSLt}\).

The structural rules of weakening (Wkn), contraction (Ctr) and exchange (Exc), are all (at least) admissible. The presence of the latter may be surprising, as the sequents we use are based on multisets. However, as mentioned earlier, our formalisation encodes sequents using lists and not multisets. So, the formal proof of the height-preserving admissibility of (Exc) shows that list-sequents of our formalisation mimic multiset-sequents of the pen-and-paper definition. In fact, we designed the formalisation of \(\textsf{G4iSLt}\) so that it admits exchange [26].

The rule (\(\boxtimes \)) is quite typical of the logic \(\textsf{iSL}\), as it reflects one of its theorems: the completeness axiom \(\varphi \rightarrow \square \varphi \). Indeed, this axiom implies that \(\varGamma \) entails \(\square \varGamma \), allowing the replacement of \(\square \varGamma \) by \(\varGamma \) in the antecedent of a provable sequent while preserving provability. The height-preserving admissibility of (\(\boxtimes \)) is crucially used in many places, notably Lemma 2 and the admissibility of cut.

The height-preserving admissibility of (\(\square \!\rightarrow \)LIR) and (\(\rightarrow \rightarrow \)LIR) shows height-preserving invertibility in the right premise of the rules (\(\square \!\rightarrow \)L) and (\(\rightarrow \rightarrow \)L).

The admissible rule (\(\rightarrow \)L) is the traditional left-implication rule. We use this rule to prove the admissibility of (\(\rightarrow \rightarrow \)LIL), resembling the invertibility in the left premise of (\(\rightarrow \rightarrow \)L). In turn, (\(\rightarrow \rightarrow \)LIL) is crucial in the admissibility of (Ctr).

In the following section we introduce a measure on sequents which we use to show that the naive backward proof search strategy for \(\textsf{G4iSLt}\) terminates. This measure could thus be used to derive the notion of maximum height of derivations (mhd) for a sequent, as was done in previous works [24, 26]. There, the mhd measure was used as secondary induction measure in the proof of admissibility of cut. Here, we simply use the termination measure instead.

4 Naive Backward Proof Search Terminates

Sequent calculi enjoying cut-elimination can often be used to decide whether a given formula \(\varphi \) is deducible from a given set of assumptions \(\varGamma \) by strategically applying the rules “backwards” from the end-sequent \(\varGamma \Rightarrow \varphi \). To obtain a decision procedure, we require a backward proof search strategy which terminates and is complete, i.e. which provides a proof for any sequent provable in the calculus.

But often, terminating complete strategies necessitate a “loop check” mechanism, which stops the search if the same sequent appears twice on a branch. For example, for the sequent calculus \(\textsf{LJ}\) for propositional intuitionistic logic, the only terminating complete strategies are ones with a loop check. The termination of such strategies is messy to reason about, as in most cases their unguarded versions do not terminate and produce proof trees with infinite branches.

While some calculi have terminating complete strategies without loop checks, like \(\textsf{GLS}\) for \(\textsf{GL}\) [24] and \(\textsf{GL4ip}\) for \(\textsf{iGL}\) [20], we consider a stronger kind of calculus: calculi with strongly terminating backward proof search, such as \(\textsf{G4ip}\) for intuitionistic propositional logic [12]. Backward proof search for a sequent calculus is strongly terminating if and only if all backward proof search strategies for this calculus, complete or not, terminate. This property has two further equivalent characterisations: (1) the naive backward proof search strategy terminates, and (2) there is a well-founded ordering on sequents that decreases upwards in all the rules of the calculus. In contrast, backward proof search is weakly terminating if and only if there is a terminating complete strategy for this calculus.

In this section we show that backward proof search for \(\textsf{G4iSLt}\) is strongly terminating. More precisely, we show that the naive strategy terminates. To do this, we need two ingredients: (1) a locally defined measure on sequents, and (2) a well-founded order making this measure decrease upwards in the rules of \(\textsf{G4iSLt}\).

4.1 Shortlex: A Well-Founded Order on list \(\mathbb N\)

We define the shortlex order, which is a well-founded order on list \(\mathbb N\), i.e. the set of all lists of natural numbers.

In the following, we use < to mean the usual ordering on natural numbers. Let us recall the definition of the lexicographic order on lists of natural numbers.

Definition 7 (Lexicographic order)

Let \(n\in \mathbb N\). We define the lexicographic order \(<_{lex}^n\) on lists of natural numbers of length n. For two lists of natural numbers \([m_1;\cdots ;m_n]\) and \([k_1;\cdots ;k_n]\), we write \([m_1;\cdots ;m_n]<_{lex}^{n}[k_1;\cdots ;k_n]\) if there is a \(1\!\le \! j \!\le \! n\) such that: (1) \(m_p=k_p\) for all \(1\le p<j\), and (2) \(m_j<k_j\).

Note that as < is a well-founded order, \(<_{lex}^{n}\) is also well-founded [36]. Finally, we define the shortlex order, also called breadth-first [31] or length-lexicographic order, over lists of natural numbers (viewed as n-tuples).

Definition 8 (Shortlex order)

The shortlex order over lists of natural numbers, noted \({{\,\mathrm{<\!\!<}\,}}\), is defined as follows. For two lists \(l_0\) and \(l_1\) of natural numbers, we say that \(l_0{{\,\mathrm{<\!\!<}\,}}l_1\) whenever one of the following conditions is satisfied:

  1. \(length(l_0)<length(l_1)\); or

  2. \(length(l_0)=length(l_1)=n\) and \(l_0<_{lex}^{n}l_1\).

Intuitively, the shortlex order compares lists by their length first and falls back on the lexicographic order whenever length does not discriminate. Note that on top of being well-founded, \({{\,\mathrm{<\!\!<}\,}}\) is obviously transitive.
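A boolean-valued Coq sketch of this comparison is given below; the formalisation may instead define \({{\,\mathrm{<\!\!<}\,}}\) as an inductive relation, so this is only meant to make the definition concrete.

Require Import List Arith.

Fixpoint lex_ltb (l0 l1 : list nat) : bool :=
  match l0, l1 with
  | m :: ms, k :: ks =>
      if m <? k then true
      else if k <? m then false
      else lex_ltb ms ks
  | _, _ => false          (* only meaningful on lists of equal length *)
  end.

Definition shortlex_ltb (l0 l1 : list nat) : bool :=
  if length l0 <? length l1 then true
  else if length l1 <? length l0 then false
  else lex_ltb l0 l1.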

4.2 A (list \(\mathbb N\))-Measure on Sequents

We proceed to attach to each sequent \(\varGamma \Rightarrow \chi \) a “measure” \(\varTheta (\varGamma \Rightarrow \chi )\) which is a (finite) list of natural numbers, i.e. of type list \(\mathbb N\). For simplicity, in the following we consider a fixed sequent \(\varGamma \Rightarrow \chi \) for which we define the measure.

To introduce our measure, we first wish to explain why the measure used for \(\textsf{GL4ip}\) [26], acting as a substitute for the Dershowitz-Manna order [10] considered in Dyckhoff’s article on \(\textsf{G4ip}\) [11], does not work for our purpose. The explanation of this failure justifies the modification we made to obtain the measure for \(\textsf{G4iSLt}\).

The intuition behind the measure for \(\textsf{GL4ip}\) and \(\textsf{G4ip}\) is the following: for a multiset we create an ordered list of counters, one for each weight, counting the topmost occurrences of formulae of that weight. For more details, take a finite multiset of formulae \(\varDelta \). As it is finite, it contains a topmost formula of maximal weight n. We can create a list of length n such that at each position m in the list (counting from right to left) for \(1\le m\le n\), we find the number of occurrences in \(\varDelta \) of topmost formulae of weight m. Such a list gives the count of occurrences in \(\varDelta \) of formulae of weight n in its leftmost (i.e. n-th) component, then of occurrences of formulae of weight \(n-1\) in the next (i.e. \((n-1)\)-th) component, and so on until we reach 1.

The measure for \(\textsf{GL4ip}\) and \(\textsf{G4ip}\) consisted in attaching to \(\varGamma \Rightarrow \chi \) the list obtained by applying the above procedure on the multiset \(\varGamma \uplus \{\chi \}\). Call this function \(\varTheta _{fail}\). This measure fails to show termination of the naive strategy for \(\textsf{G4iSLt}\), as it does not decrease upwards in the following application of (SLtR).

figure f

We have that \(\varTheta _{fail}(\Rightarrow \square p)=[1,0]\) because \(\square p\) is the formula of maximum weight 2, and it is the only formula with this weight occurring in the list, while no formula of weight 1 appears in \(\Rightarrow \square p\). In addition to that, we have that \(\varTheta _{fail}(\square p\Rightarrow p)=[1,1]\). Consequently, we obtain \(\varTheta _{fail}(\Rightarrow \square p){{\,\mathrm{<\!\!<}\,}}\varTheta _{fail}(\square p\Rightarrow p)\): the measure increased upwards. So, the measure used for \(\textsf{GL4ip}\) and \(\textsf{G4ip}\) cannot be used here. We need to define another one.

With enough scrutiny, one can notice that in \(\textsf{G4iSLt}\) the principal box of a boxed formula in the antecedent of a sequent is a “deadweight”. More precisely, once a formula \(\square \varphi \) is in the antecedent of a sequent, only two things can happen to its outermost box: it is either deleted (via the modal rule (SLtR) or (\(\square \!\rightarrow \)L)), or else it is preserved (through all other rules). Intuitively, this observation suggests that boxed formulae in the antecedent are destined to be unboxed eventually in the upward application of rules, without having any other effect.

Consequently, as the top-level boxes in the antecedent of a sequent are deadweights, we can think about unboxing the antecedent of \(\varGamma \Rightarrow \chi \) before applying the procedure described above. This is precisely what we do: if \(\varGamma \) is of the shape \(\varGamma _0,\square \varGamma _1\) with no boxed formula in \(\varGamma _0\), we define \(\varTheta (\varGamma \Rightarrow \chi )\) to be the list of natural numbers obtained via the above machinery applied on the multiset \(\varGamma _0\uplus \varGamma _1\uplus \{\chi \}\).

For example, to compute \(\varTheta (\square (p\wedge q), p\vee q\Rightarrow q\rightarrow p)\), we first unbox the antecedent of this sequent by transforming \(\square (p\wedge q)\) into \(p\wedge q\) to obtain the multiset \(\{p\wedge q, p\vee q, q\rightarrow p\}\). Because \(p\wedge q\) is the only formula of maximum weight four, our list of length four begins with 1. Since both \(p\vee q\) and \(q\rightarrow p\) are of weight three, the second element is 2. Finally, since there are no formulae of weights two and one, we obtain \(\varTheta (\square (p\wedge q), p\vee q\Rightarrow q\rightarrow p) = [1,2,0,0]\). Following this explanation, observe that the issue we faced with \(\Rightarrow \square p\) and \(\square p\Rightarrow p\) is now fixed: we first unbox \(\square p\) in \(\square p\Rightarrow p\), hence \(\varTheta (\square p\Rightarrow p)=[2]{{\,\mathrm{<\!\!<}\,}}[1,0]=\varTheta (\Rightarrow \square p)\).

Two things need to be noted about such lists. First, if no topmost occurrence of a formula is of weight \(1\le k\le n\), then a 0 appears in position k in the list. This is the case for the weight 2 in the last example above. Second, as no formula is of weight 0 we do not dedicate a position for this particular weight in our list.
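Putting the pieces together, here is a Coq sketch of \(\varTheta \), building on the MPropF, weight and Seq sketches above (again with indicative names; the formalisation's definition may be organised differently).

Require Import List Arith.
Import ListNotations.

(* Remove one outermost box, if any: the "unboxing" of the antecedent. *)
Definition unbox1 (phi : MPropF) : MPropF :=
  match phi with Box psi => psi | _ => phi end.

Definition theta (s : Seq) : list nat :=
  let '(Gamma, chi) := s in
  let ws := map weight (chi :: map unbox1 Gamma) in   (* weights after unboxing *)
  let n := fold_right Nat.max 0 ws in                 (* maximal weight n *)
  (* one counter per weight, listed from weight n down to weight 1 *)
  map (count_occ Nat.eq_dec ws) (rev (seq 1 n)).

(* On the example above, theta ([Box (And p q) ; Or p q], Imp q p)
   computes to [1; 2; 0; 0]. *)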

4.3 Every Rule of \(\textsf{G4iSLt}\) Reduces \(\varTheta \) Upwards

We now obtain the sought-after result about our measure \(\varTheta \): it decreases upwards through the rules of \(\textsf{G4iSLt}\) with respect to the \({{\,\mathrm{<\!\!<}\,}}\) ordering.

Lemma 3

For all sequents \(S_0, S_1,...,S_n\) and for all \(1\le i\le n\), if there is an instance of a rule r of \(\textsf{G4iSLt}\) of the form below, then \(\varTheta (S_i){{\,\mathrm{<\!\!<}\,}}\varTheta (S_0)\):

figure g

Clearly, this result implies that the naive strategy for \(\textsf{G4iSLt}\) terminates: any rule application makes the measure decrease on \({{\,\mathrm{<\!\!<}\,}}\), ensuring termination via well-foundedness of \({{\,\mathrm{<\!\!<}\,}}\). Thus, backward proof search is strongly terminating.

Moreover, this lemma is crucial in the proof of admissibility of cut: as we use \(\varTheta (\varGamma \Rightarrow \chi )\) as secondary induction measure there (through well-foundedness of \({{\,\mathrm{<\!\!<}\,}}\)), we know that we can apply the secondary induction hypothesis to any sequent S which is a premise of a rule instance with conclusion \(\varGamma \Rightarrow \chi \), as \(\varTheta (S){{\,\mathrm{<\!\!<}\,}}\varTheta (\varGamma \Rightarrow \chi )\).

5 Cut-Elimination for \(\textsf{G4iSLt}\)

To reach cut-elimination, our main theorem, we first state and prove the admissibility of the cut rule in a direct and purely syntactic way. More precisely, we prove that the additive-cut rule, with cut formula \(\varphi \), is admissible. This statement and its formalisation are given below, where \(\varGamma \) is encoded as \(\varGamma \)0++\(\varGamma \)1.

Theorem 2 (Admissibility of additive-cut)

The additive cut rule below is admissible in \(\textsf{G4iSLt}\).

figure h
figure i

Proof

Let \(d_{1}\) (with last rule \(r_{1}\)) and \(d_{2}\) (with last rule \(r_{2}\)) be proofs in \(\textsf{G4iSLt}\) of \(\varGamma \Rightarrow \varphi \) and \(\varphi ,\varGamma \Rightarrow \chi \) respectively, as shown below.

figure j

We show that there is a proof in \(\textsf{G4iSLt}\) of \(\varGamma \Rightarrow \chi \). We reason by strong primary induction (PI) on the weight of the cut-formula \(\varphi \), giving the primary inductive hypothesis (PIH). We also use a strong secondary induction (SI) on the measure \(\varTheta (\varGamma \Rightarrow \chi )\) of the conclusion of the cut, giving the secondary inductive hypothesis (SIH). Crucially, by using SIH we avoid the issues caused by the diagonal formula [23, 44].

We consider \(r_1\). In total, there are thirteen cases for \(r_1\): one for each rule in \(\textsf{G4iSLt}\). However, we can reduce the number of cases to eight. We separate them by using Roman numerals and showcase the most interesting ones.

(V) \(\textbf{r}_{\textbf{1}}={{({\boldsymbol{\rightarrow }\textbf{R}})}}:\) Then \(r_1\) has the following form where \(\varphi =\varphi _0\rightarrow \varphi _1\):

figure k

For the cases where \(\varphi _0\rightarrow \varphi _1\) is principal in \(r_2\) and \(r_2\ne {({\square \!\rightarrow \text {L}})}\), or where \(r_2\in \{{({\text {IdP}})},{({\bot \text {L}})}\}\), we refer to Dyckhoff and Negri’s proof [13] as the cuts produced in these cases involve the traditional induction hypothesis PIH. We are left with seven sub-cases, but here again we focus on the most interesting ones.

(V-d) If \(r_2\) is (\(\rightarrow \rightarrow \text {L}\)) where the cut formula is not principal in \(r_2\), then it must have the following form where \((\gamma _0\rightarrow \gamma _1)\rightarrow \gamma _2,\varGamma _0=\varGamma \).

figure l

Thus, \(\varGamma \Rightarrow \chi \) is of the form \((\gamma _0\rightarrow \gamma _1)\rightarrow \gamma _2,\varGamma _0\Rightarrow \chi \) and \(\varGamma \Rightarrow \varphi _0\rightarrow \varphi _1\) is of the form \((\gamma _0\rightarrow \gamma _1)\rightarrow \gamma _2,\varGamma _0\Rightarrow \varphi _0\rightarrow \varphi _1\). Using the admissible rule (\(\rightarrow \rightarrow \)LIR) on the latter we obtain a proof of the sequent \(\gamma _2,\varGamma _0\Rightarrow \varphi _0\rightarrow \varphi _1\). Then consider the following proof of the sequent \(\gamma _1\rightarrow \gamma _2,\varGamma _0\Rightarrow \gamma _0\rightarrow \gamma _1\), where the rule (\(\rightarrow \rightarrow \)LIL) deconstructs the implication \((\gamma _0\rightarrow \gamma _1)\rightarrow \gamma _2\), rule (Ctr) contracts \(\gamma _1\rightarrow \gamma _2\) and Lemma 2 is the invertibility of the rule (\(\rightarrow \)R).

figure m

The crucial point here is to see that the use of SIH is justified, in other words, that \(\varTheta (\gamma _0,\gamma _1\rightarrow \gamma _2,\varGamma _0\Rightarrow \gamma _1){{\,\mathrm{<\!\!<}\,}}\varTheta ((\gamma _0\rightarrow \gamma _1)\rightarrow \gamma _2,\varGamma _0\Rightarrow \chi )\). This is the case as the rule applications (\(\rightarrow \rightarrow \)L) and (\(\rightarrow \)R) entail \(\varTheta (\gamma _0,\gamma _1\rightarrow \gamma _2,\varGamma _0\Rightarrow \gamma _1)\) \({{\,\mathrm{<\!\!<}\,}}\varTheta (\gamma _1\rightarrow \gamma _2,\varGamma _0\Rightarrow \gamma _0\rightarrow \gamma _1){{\,\mathrm{<\!\!<}\,}}\varTheta ((\gamma _0\rightarrow \gamma _1)\rightarrow \gamma _2,\varGamma _0\Rightarrow \chi )\) by Lemma 3, hence \(\varTheta (\gamma _0,\gamma _1\rightarrow \gamma _2,\varGamma _0\Rightarrow \gamma _1){{\,\mathrm{<\!\!<}\,}}\varTheta ((\gamma _0\rightarrow \gamma _1)\rightarrow \gamma _2,\varGamma _0\Rightarrow \chi )\) by transitivity of \({{\,\mathrm{<\!\!<}\,}}\). So, we are done. Note that the created cut could not be justified by usual induction on height, as the admissibility of (\(\rightarrow \rightarrow \)LIL) is not height-preserving.

(V-f) If \(r_2\) is (\(\square \!\rightarrow \)L) with a principal formula different from the cut formula, then it must have the following form where \(\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0=\varGamma \).

figure n

Thus, we have that \(\varGamma \Rightarrow \chi \) and \(\varGamma \Rightarrow \varphi _0\rightarrow \varphi _1\) are respectively of the form \(\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi \) and \(\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \varphi _0\rightarrow \varphi _1\). Using the admissible rule (\(\square \!\rightarrow \)LIR) on the latter we obtain a proof of \(\gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \varphi _0\rightarrow \varphi _1\). Then, we proceed as follows by combining the proof \(\pi \), given second below, with the first one.

figure o
figure p

Note that both uses of SIH are justified here, as the last rule in the first proof is an instance of (\(\square \!\rightarrow \)L) hence \(\varTheta (\gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi ){{\,\mathrm{<\!\!<}\,}}\varTheta (\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi )\) and \(\varTheta (\gamma _1,\varPhi ,\varGamma _0,\square \gamma _0\Rightarrow \gamma _0){{\,\mathrm{<\!\!<}\,}}\varTheta (\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi )\) by Lemma 3.

(VII) \(\mathbf {r_1=}{{({\boldsymbol{\square \!\rightarrow } \textbf{L}})}}\): Then \(r_1\) is as follows, where \(\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0=\varGamma \).

figure q

Thus, the sequents \(\varGamma \Rightarrow \chi \) and \(\varphi ,\varGamma \Rightarrow \chi \) are of the form \(\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi \) and \(\varphi ,\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi \), respectively. Then, we proceed as follows.

figure r

Note that the use of SIH is justified, as the last rule in this proof gives us \(\varTheta (\gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi ){{\,\mathrm{<\!\!<}\,}}\varTheta (\square \gamma _0\rightarrow \gamma _1,\varPhi ,\square \varGamma _0\Rightarrow \chi )\) by Lemma 3.

(VIII) \(\mathbf {r_1=}{({\textbf{SLtR}})}\): Then \(\varphi \) is the diagonal formula in \(r_1\):

figure s

where \(\varphi =\square \varphi _0\) and \(\varPhi ,\square \varGamma _0=\varGamma \). Thus, we have that \(\varGamma \Rightarrow \chi \) and \(\varphi ,\varGamma \Rightarrow \chi \) are respectively of the form \(\varPhi ,\square \varGamma _0\Rightarrow \chi \) and \(\square \varphi _0,\varPhi ,\square \varGamma _0\Rightarrow \chi \). We now consider \(r_2\).

(VIII-b) If \(r_2\) is (\(\square \!\rightarrow \)L) it is of the following form, where \(\varPhi =\square \gamma _0\rightarrow \gamma _1,\varPhi _0\).

figure t

We proceed as follows.

figure u

where \(\pi _0\) is the first proof given below, which depends on \(\pi _1\), the second one:

figure v
figure w

Note that both uses of SIH are justified here as the rule application (\(\square \!\rightarrow \)L) entails \(\varTheta (\gamma _1,\varPhi _0,\varGamma _0,\square \gamma _0\Rightarrow \gamma _0){{\,\mathrm{<\!\!<}\,}}\varTheta (\square \gamma _0\rightarrow \gamma _1,\varPhi _0,\square \varGamma _0\Rightarrow \chi )\) and we have \(\varTheta (\gamma _1,\varPhi _0,\square \varGamma _0\Rightarrow \chi ){{\,\mathrm{<\!\!<}\,}}\varTheta (\square \gamma _0\rightarrow \gamma _1,\varPhi _0,\square \varGamma _0\Rightarrow \chi )\) by Lemma 3.

(VIII-c) If \(r_2\) is (SLtR), then it is of the following form where \(\chi =\square \chi _0\).

figure x

We proceed as follows.

figure y

The use of SIH is justified because the last rule in this proof ensures that \(\varTheta (\varPhi ,\varGamma _0,\square \chi _0\Rightarrow \chi _0){{\,\mathrm{<\!\!<}\,}}\varTheta (\varPhi ,\square \varGamma _0\Rightarrow \square \chi _0)\) by Lemma 3. \(\blacksquare \)

The attentive reader may have noticed that our proof technique requires the use of additive, and not multiplicative, cuts. Indeed, the use of SIH relies on the decrease of the measure \(\varTheta \), which is notably ensured by the upward application of any rule of the calculus. More generally, in the proof of admissibility, if the cut we initially consider has \(\varGamma \Rightarrow \chi \) as conclusion, then we can justify a cut with conclusion \(\varGamma '\Rightarrow \chi '\) using SIH as long as we have a chain \(r_0,\dots ,r_n\) of applications of rules of \(\textsf{G4iSLt}\) of the following form.

figure z

However, the contraction rule does not ensure the decrease of the measure \(\varTheta \) from conclusion to premise: it is not the case that \(\varTheta (\varGamma ,\varphi ,\varphi \Rightarrow \chi ){{\,\mathrm{<\!\!<}\,}}\varTheta (\varGamma ,\varphi \Rightarrow \chi )\). So, this prevents us from allowing one of \(r_0,\dots ,r_n\) above to be \({({\text {Ctr}})}\). This is where multiplicative cuts are problematic: they most often use the contraction rule as follows, where \(\varGamma \Rightarrow \chi \) is the conclusion of the initial cut and \(\varGamma ',\varGamma ''\Rightarrow \chi '\) is the conclusion of the cut we want to justify through SIH.

figure aa

Unfortunately, the presence of the contraction rule above \(\varGamma \Rightarrow \chi \) prevents us from using SIH on \(\varGamma ',\varGamma ''\Rightarrow \chi '\), as we are not ensured that the measure decreases between the two sequents. So, our proof technique prohibits us from using multiplicative cuts, forcing us to use additive ones. This observation was already made by Goré and Shillito [26].

Using our purely syntactic proof of cut-admissibility above, we easily obtain a cut-elimination procedure for the calculus \(\textsf{G4iSLt}\) extended with (cut), by simply repeatedly eliminating topmost cuts. To effectively prove this statement in Coq we explicitly encode the additive cut rule as follows:

figure ab

We encode the calculus \(\textsf{G4iSLt}+{({cut})}\) as G4iSLt_cut_rules, i.e. G4iSLt_rules enhanced with (cut). Finally, we turn to the elimination of additive cuts:

Theorem 3

The additive cut rule is eliminable from \(\textsf{G4iSLt}+{({cut})}\).

figure ac

The above theorem shows that any proof in \(\textsf{G4iSLt}+{({cut})}\) of a sequent, i.e. G4iSLt_cut_prv s, can be transformed into a proof in \(\textsf{G4iSLt}\) of the same sequent. As this theorem is in fact a constructive function based on Type, we can use the extraction feature of Coq and obtain a cut-eliminating Haskell program.
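For illustration, extraction can be invoked along the following lines, where the identifier cut_elimination stands for the Coq term proving Theorem 3 (the actual name in the development may differ).

Require Extraction.
Extraction Language Haskell.
(* Extract the computational content of Theorem 3 to a Haskell module. *)
Extraction "CutElim.hs" cut_elimination.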

6 Conclusion

This paper introduces a sequent calculus for \(\textsf{iSL}\), denoted \(\textsf{G4iSLt}\). It is an improvement over the sequent calculus \(\textsf{G4iSL}\) from [21], because backward proof search for \(\textsf{G4iSLt}\) is strongly terminating (instead of weakly terminating), as shown via a new well-founded measure, and cut-elimination is proved directly (instead of indirectly via an equivalent calculus based on \(\textsf{G3i}\) [21]). All our results are formalised in Coq in a constructive way. In turn, Coq’s extraction mechanism can generate a Haskell program for the cut-elimination procedure for \(\textsf{G4iSLt}\).

One of the reasons to develop \(\textsf{G4iSLt}\) is to use its strongly terminating proof search to investigate uniform interpolation, a strengthening of Craig interpolation, in the setting of intuitionistic provability logics. Typically, calculi with good (weakly or strongly) terminating proof search form good grounds for constructive proofs of uniform interpolation (see e.g. [2, 5, 22, 28, 37, 41, 42, 43]).

We also suggest developing a countermodel construction for \(\textsf{G4iSLt}\) similar to the one for \(\textsf{G4iSL}\) in [21]. Furthermore, as \(\textsf{iSL}\) is an intuitionistic modal logic defined with \(\square \) only, there is the question of how it can be extended with \(\diamond \) operators. It is clear from the literature on intuitionistic modal logics that several choices can be made (e.g. [4, 16, 33, 40, 47]), so we leave this for future work.