## 1 Introduction

Definite descriptions (DD) are complex terms commonly applied not only in natural languages but also in mathematics and computer science. In formal languages they are usually expressed by means of the iota operator, which forms terms from formulas. Thus $$\imath x\varphi$$ means ‘the (only) x satisfying $$\varphi$$’. A DD aims to denote a unique object by virtue of a property that only it has. Sometimes a DD fails to denote, because nothing, or more than one thing, has the property. A DD that succeeds in denoting exactly one object is proper; otherwise it is improper.

Definite descriptions, proper and improper, are ubiquitous not only in natural languages but also in mathematics and science (like the proper ‘the sum of 7 and 5’ or the improper ‘the square root of n’). In formal languages the application of functional terms is the prevailing way of representing complex names. However, DD outrun functional terms in several ways. They are more expressive, in the sense that an arbitrary functional term $$f^n(t_1, \ldots , t_n)$$ can be represented as a description $$\imath xF^{n+1}(x, t_1, \ldots , t_n)$$, where F is a predicate corresponding to the function f. On the other hand, not every definite description, even if proper, can be expressed using functional terms; this is possible only in the case of predicates expressing functional relations, whereas every sentence can be used to form a DD. For example, both ‘the father of Ben’ and ‘the daughter of Mary’ may be represented as terms using the iota operator, but only the first may be represented as a functional term. Moreover, even where we can use functional terms instead of DD, doing so enriches the language with another sort of functor in addition to predicates. This affects the formalisation of valid arguments in which, very often, the conclusion follows from content that functional terms only encode but predicates express directly. For example: ‘Adam has children’ follows from ‘Adam is the father of Ben’. However, to prove its validity, its formal representation $$a = f(b) \vdash \exists x(Cxa)$$ requires two enthymematic premisses: $$\forall xy((Mxy\vee Fxy)\leftrightarrow Cyx)$$ and $$\forall xy(x = f(y) \leftrightarrow Fxy)$$. Let us call the latter premiss a bridge principle, allowing us to transfer information conveyed by predicates to related functions and vice versa.
In general, bridge principles have the form $$\forall x_1 \ldots x_n y\,(y=f^n(x_1, \ldots , x_n)\leftrightarrow F^{n+1}(y, x_1, \ldots , x_n))$$ and show how the information encoded by functional terms is represented by predicates. If we use DD instead of functional terms, we need no such extra bridge principles, whereas in languages with functional terms they are necessary for the analysis of obviously valid arguments.
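For concreteness, the argument about Adam can be written out as a chain of entailments (this spelling-out is ours, using only the premisses given above):

```latex
\begin{align*}
a = f(b) &\vdash Fab
  && \text{bridge principle } \forall xy(x = f(y) \leftrightarrow Fxy)\text{, instantiated}\\
Fab &\vdash \exists x\, Cxa
  && \text{by } \forall xy((Mxy \vee Fxy) \leftrightarrow Cyx)\\
a = f(b) &\vdash \exists x\, Cxa
  && \text{chaining the two steps}
\end{align*}
```

Without the bridge principle, the first step is unavailable, since the information carried by the functional term $$f$$ is not accessible to the predicate $$F$$.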

The usefulness of formal devices like the iota operator and other term-forming operators has recently been better recognised (cf. Tennant’s [32] or Scott and Benzmüller’s implementation of free logic using the proof assistant Isabelle/HOL [3]), also in fields connected with computer science, such as differential dynamic logic used for the verification of hybrid systems [5] or description logics (see [1] or [25]). Logics with DD are often implemented to enable the formalisation of deep philosophical problems, e.g. Anselm’s ontological argument (see the work by Oppenheimer and Zalta using the automated reasoning tool PROVER9 [26], or its encoding by Blumson [4]).

Because several rival theories of DD have been formulated, their applicability and potential usefulness have so far been underestimated. This raises the question which approach is the best one, at least for some specific kinds of applications. In this paper we focus on the Russellian approach to definite descriptions ([28] and [35]), which plays a central role in this area. Although Russell’s theory of DD has some controversial points, it became a standard point of reference for almost all works devoted to the analysis of definite descriptions. Moreover, it is still widely accepted by formal logicians as a proper way of handling descriptions; the scores of textbooks that use it as their official theory of definite descriptions count as witnesses for this claim. Russell’s theory also has strong affinities to logics closely connected with applications in constructive mathematics and computer science, like the logic of the existence predicate by Scott [30] or the definedness logic (or the logic of partial terms) of Beeson [2] and Feferman [8]. These connections were elaborated in [14].

Russell treated DD as incomplete signs and defined their use by contextual definitions of the form:

$$\psi [x/\imath y\varphi ] \ := \ \exists x(\forall y(\varphi \leftrightarrow y =x )\wedge \psi )$$

but this solution leads to scoping difficulties if $$\psi$$ is not elementary. $$\lnot \psi [x/\imath y\varphi ]$$, e.g., is ambiguous: is the whole formula negated or only the predicate $$\psi$$? The method which Russell introduced in [35] to draw scope distinctions is rather clumsy. Fortunately, it is possible to develop a logic which treats DD as genuine terms and yet retains desirable features of the Russellian approach. Such a logic was formalised as a natural deduction system by Kalish, Montague, and Mar [18] and by Francez and Więckowski [11]. These systems involve complex rules and axioms, but recently Indrzejczak [16] provided an analytic and cut-free sequent calculus equivalent to the Russellian logic as formalised in [18]. However, in all these systems the formal counterpart of the Russellian policy of eliminating DD from sentences must be restricted to predicate letters, which is connected with the scoping difficulties of the Russellian approach just mentioned.
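Concretely, the ambiguous $$\lnot \psi [x/\imath y\varphi ]$$ has two non-equivalent Russellian expansions (this spelling-out is ours, read off from the contextual definition above):

```latex
\begin{align*}
&\exists x(\forall y(\varphi \leftrightarrow y = x) \wedge \lnot\psi)
  && \text{narrow scope: only the predicate is negated}\\
&\lnot\exists x(\forall y(\varphi \leftrightarrow y = x) \wedge \psi)
  && \text{wide scope: the whole formula is negated}
\end{align*}
```

When $$\imath y\varphi$$ is improper, the first expansion is false and the second true, so the two readings must be kept apart.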

Can we offer any improvement on the state of the art? A possible strategy of avoiding these problems is to treat DD by means of a binary quantifier; this approach was formally developed by Kürbis (cf. [19,20,21,22,23]). However, if we want to treat DD as terms, then the introduction of the lambda operator to construct complex predicate abstracts from formulas offers a good solution. $$\lambda x\varphi$$ means ‘the property of being $$\varphi$$’ and applied to some term, in particular to a DD, forms a formula called a lambda atom. This device was introduced into studies of modal predicate logic by Thomason and Stalnaker [31], and the idea was further developed by Bressan [6] and Fitting [9], in particular, to distinguish between de dicto and de re reading of modal operators. Independently, this technique was used by Scales [29] in his formulation of attributional logic, where Aristotle’s distinction between the negation of a sentence and of a predicate is formally expressible. In fact, Scales seems to be the first one to apply predicate abstraction to formalise a theory of DD which relates closely to Russell’s. Predicate abstracts were also successfully applied by Fitting and Mendelsohn [10] to obtain a theory of DD in a modal setting. This approach, with slight modifications, was further developed independently by Orlandelli [27] and Indrzejczak [12] to obtain cut-free sequent calculi for modal logics with DD and predicate abstracts.

In this article we focus on a different logic, RL, first introduced in [17], which also combines the iota and lambda operators. It avoids the shortcomings of the Russellian approach while preserving all its plausible features. Predicate abstracts permit us to draw scope distinctions rather more elegantly than with the Russellian scope markers, and their application is more general. RL is essentially Russellian, but with DD treated as genuine terms. Nonetheless, the reductionist aspect of Russell’s approach is retained in several ways. On the level of syntax, occurrences of DD are restricted to arguments of predicate abstracts to form lambda atoms. On the level of semantics, DD are not defined by an interpretation function but by satisfaction clauses for lambda atoms. Finally, on the level of the calculus, DD cannot be instantiated for variables in quantifier rules but are subject to special rules for lambda atoms. This strict connection of DD with predicate abstracts avoids the scoping difficulties of the Russellian approach and, at the same time, simplifies proofs of metalogical properties.

RL was originally characterised semantically and formalised as an analytic tableau calculus in [17], where it was also applied to prove the Craig interpolation theorem. Here we complete the research on RL by providing an adequate sequent calculus for which the cut elimination theorem is proved constructively. We characterise the language, semantics and axiomatisation of RL in Sect. 2. In Sect. 3 we present the sequent calculus GRL for RL and show its equivalence with an axiomatic Hilbert-style system HRL. Section 4 contains a proof of the cut elimination theorem, and Sect. 5 a Henkin-style proof of completeness. The paper finishes with some comparative remarks.

## 2 Preliminaries

The language $$\mathcal {L}$$ of RL is standard, except that it contains the operators $$\imath$$ and $$\lambda$$. Following the remarks on the functional terms from the Introduction, as well as the original Russellian attitude towards terms, the ‘official’ language has neither constant nor function symbols; in the completeness proof we add constants solely for the purpose of constructing models from consistent sets. As is customary in proof theoretic investigations since Gentzen, we distinguish free and bound variables graphically in deductions. It is not customary to make this distinction in semantics, and so there we won’t make it either. This blend of two customs should not lead to confusion, and we are following Fitting and Mendelsohn [10] in this respect. There are two disjoint sets VAR of variables and PAR of parameters. The former plays the role of the bound, the latter of the free variables in the presentation of the proof theory of RL; in the presentation of the semantics, this restriction is relaxed and members of VAR are permitted as free variables. The terms of the language in the strict sense are the variables and parameters. Expressions formed by $$\imath$$ are admitted as terms in a more general sense: their application is restricted to predicate abstracts and they are called quasi-terms. We mention only the following formation rules for the more general notion of a formula used in the semantics:

• If $$P^n$$ is a predicate symbol (including $$=$$) and $$t_1, \ldots , t_n\in VAR\cup PAR$$, then $$P^n(t_1, \ldots , t_n)$$ is a formula (atomic formula).

• If $$\varphi$$ is a formula, then $$(\lambda x\varphi )$$ is a predicate abstract.

• If $$\varphi$$ is a formula, then $$\imath x\varphi$$ is a quasi-term.

• If $$\varphi$$ is a predicate abstract and t a term or quasi-term, then $$\varphi t$$ is a formula (lambda atom).

$$\varphi [x/t]$$ denotes the result of replacing x by t in $$\varphi$$. To save space, we’ll often write $$\varphi _t^x$$ instead of $$\varphi [x/t]$$. If t is a variable y, it is assumed that y is free for x in $$\varphi$$, that is, no occurrence of y becomes bound in $$\varphi$$ by the replacement. To save space and simplify things in the statement of the semantics and in the completeness proof in Sect. 5, we treat $$\vee , \rightarrow , \exists$$ as defined notions.
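A standard example (ours) shows what the ‘free for’ proviso rules out, namely variable capture:

```latex
% Let \varphi be \exists y\,\lnot(x = y). Then y is not free for x in \varphi:
% the replacement \varphi[x/y] would yield \exists y\,\lnot(y = y),
% where the substituted occurrence of y is captured by \exists y,
% turning a satisfiable formula into an unsatisfiable one.
```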

A model is a structure $$M=\langle D, I \rangle$$, where for each n-argument predicate $$P^n$$, $$I(P^n)\subseteq D^n$$. An assignment v is a function $$v:VAR\cup PAR\longrightarrow D$$. An x-variant $$v'$$ of v agrees with v on all arguments, save possibly x. We write $$v^x_o$$ to denote the x-variant of v with $$v^x_o(x) = o$$. The notion of satisfaction of a formula $$\varphi$$ with v, in symbols $$M, v \models \varphi$$, is defined as follows, where $$t\in VAR\cup PAR$$:

• $$M, v \models P^n(t_1, \ldots , t_n)$$ iff $$\langle v(t_1), \ldots , v(t_n) \rangle \in I(P^n)$$

• $$M, v \models t_1 = t_2$$ iff $$v(t_1) = v(t_2)$$

• $$M, v \models (\lambda x\psi )t$$ iff $$M, v^x_o \models \psi$$, where $$o = v(t)$$

• $$M, v \models (\lambda x\psi )\imath y\varphi$$ iff there is an $$o\in D$$ such that $$M, v^x_o \models \psi$$, and $$M, v^x_o \models \varphi [y/x]$$, and for any y-variant $$v'$$ of $$v^x_o$$, if $$M, v' \models \varphi$$, then $$v'(y)=o$$

• $$M, v \models \lnot \varphi$$ iff $$M, v \not \models \varphi$$

• $$M, v \models \varphi \wedge \psi$$ iff $$M, v\models \varphi$$ and $$M, v \models \psi$$

• $$M, v \models \forall x\varphi$$ iff $$M, v^x_o \models \varphi$$, for all $$o\in D$$

A formula $$\varphi$$ is satisfiable if there are a model M and an assignment v such that $$M, v \models \varphi$$. A formula is valid if, for all models M and assignments v, $$M, v \models \varphi$$. Semantically, HRL is identified with the set of valid formulas, RL with the set of valid sequents. A set of formulas $$\varGamma$$ is satisfiable iff there is some structure M and an assignment v such that M satisfies every member of $$\varGamma$$ with v. A sequent $$\varGamma \Rightarrow \varDelta$$ is satisfied by a structure M with an assignment v if and only if, if for all $$\varphi \in \varGamma$$, $$M, v\models \varphi$$, then for some $$\psi \in \varDelta$$, $$M, v \models \psi$$. We symbolise this by $$M, v \models \varGamma \Rightarrow \varDelta$$. A sequent $$\varGamma \Rightarrow \varDelta$$ is valid iff it is satisfied by every structure with every assignment v. In this case we write $$\models \varGamma \Rightarrow \varDelta$$.

Note that we do not characterise DD semantically by means of an interpretation function I, as is usually done (for example in [10, 27]). The syntactic restriction making DD occur only as arguments in lambda atoms allows us instead to define them by a separate satisfaction clause. This is closer to the original Russellian treatment of descriptions and simplifies the completeness proof.
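To make the satisfaction clause for lambda atoms with DD concrete, here is a minimal sketch of a model checker over a finite model. It is our own illustration, not part of the paper’s formal apparatus: the tuple encoding of formulas and all names (`sat`, `D`, `I`) are assumptions, and the DD clause uses the substitution-lemma simplification that $$v^x_o$$ satisfies $$\varphi [y/x]$$ iff $$(v^x_o)^y_o$$ satisfies $$\varphi$$.

```python
# Illustrative finite-model satisfaction checker for RL (our own sketch).
# Formulas are nested tuples; v is a dict from variables/parameters to D.

def sat(D, I, v, f):
    """Satisfaction M, v |= f for a finite model M = (D, I)."""
    op = f[0]
    if op == 'P':          # ('P', name, terms): atomic predication
        return tuple(v[t] for t in f[2]) in I[f[1]]
    if op == '=':          # ('=', t1, t2): identity
        return v[f[1]] == v[f[2]]
    if op == 'not':
        return not sat(D, I, v, f[1])
    if op == 'and':
        return sat(D, I, v, f[1]) and sat(D, I, v, f[2])
    if op == 'all':        # ('all', x, phi): universal quantification
        return all(sat(D, I, {**v, f[1]: o}, f[2]) for o in D)
    if op == 'lam':        # ('lam', x, psi, t): (lambda x psi)t, t a plain term
        x, psi, t = f[1:]
        return sat(D, I, {**v, x: v[t]}, psi)
    if op == 'lamiota':    # ('lamiota', x, psi, y, phi): (lambda x psi)(iota y phi)
        x, psi, y, phi = f[1:]
        # The clause amounts to: some o satisfies phi uniquely (as value of y)
        # and satisfies psi (as value of x).
        for o in D:
            vx = {**v, x: o}
            if (sat(D, I, {**vx, y: o}, phi)
                    and all(not sat(D, I, {**vx, y: o2}, phi)
                            for o2 in D if o2 != o)
                    and sat(D, I, vx, psi)):
                return True
        return False
    raise ValueError(f'unknown formula: {f!r}')

# A three-element model: exactly one object is F, but every object is H.
D = {1, 2, 3}
I = {'F': {(1,)}, 'G': {(1,), (2,)}, 'H': {(1,), (2,), (3,)}}

the_F_is_G = ('lamiota', 'x', ('P', 'G', ('x',)), 'y', ('P', 'F', ('y',)))
the_H_is_G = ('lamiota', 'x', ('P', 'G', ('x',)), 'y', ('P', 'H', ('y',)))
the_H_is_not_G = ('lamiota', 'x', ('not', ('P', 'G', ('x',))),
                  'y', ('P', 'H', ('y',)))

print(sat(D, I, {}, the_F_is_G))           # proper DD: True
print(sat(D, I, {}, the_H_is_G))           # improper DD: False
print(sat(D, I, {}, the_H_is_not_G))       # narrow-scope negation: False
print(sat(D, I, {}, ('not', the_H_is_G)))  # wide-scope negation: True
```

The output exhibits the Russellian pattern: on an improper DD both the predication and its inner (narrow-scope) negation fail, while the wide-scope negation holds.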

Before presenting the sequent calculus, we briefly give the Hilbert system HRL. As we noted, Russell treated DD as incomplete symbols and eliminated them by means of contextual definitions. Adopting the following axiom, corresponding to his definitions, would be too simplistic:

R       $$\psi (\imath y\varphi ) \leftrightarrow \exists x(\forall y(\varphi \leftrightarrow y =x )\wedge \psi )$$

R must be restricted to atomic $$\psi$$, or it is necessary to add means for marking scope distinctions. Whitehead and Russell chose the latter option, but their method is far from ideal. It is possible to avoid the problem in a more elegant fashion with the help of the $$\lambda$$ operator. In particular, we can use it to distinguish the application of the negated predicate $$\lnot \psi$$ to $$\imath y\varphi$$ from negating the application of $$\psi$$ to it. In the present context scoping difficulties arise only in relation to DD, and the problem is solved by restricting predication on DD to predicate abstracts. Accordingly, atomic formulas are built from predicate symbols and variables/parameters only. This is in full accordance with Russell, since the language of Principia contains no primitive constant and function symbols: they are introduced by contextual definitions by means of DD. We modify R to reflect the restriction that $$\imath$$ terms require $$\lambda$$ abstracts:

$$R_\lambda$$       $$(\lambda x\psi )\imath y\varphi \leftrightarrow \exists x(\forall y(\varphi \leftrightarrow y =x )\wedge \psi )$$

This way we avoid problems with scope while permitting complex as well as primitive predicates to be applied to DD. The axiomatic system HRL for our logic RL results from a standard axiomatisation of pure first-order logic with identity and quantifier rules restricted to parameters by adding the axiom $$R_\lambda$$ and $$\beta$$-conversion for $$\lambda$$, again restricted to parameters: $$(\lambda x\psi )t \leftrightarrow \psi [x/t]$$, where t is a parameter. The adequacy of HRL will be demonstrated below.
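In particular, $$R_\lambda$$ yields distinct equivalences for the two scopes of negation (the instances are written out by us):

```latex
\begin{align*}
(\lambda x\,\lnot\psi)\imath y\varphi
  &\leftrightarrow \exists x(\forall y(\varphi \leftrightarrow y = x)\wedge \lnot\psi)
  && \text{negated predicate}\\
\lnot(\lambda x\,\psi)\imath y\varphi
  &\leftrightarrow \lnot\exists x(\forall y(\varphi \leftrightarrow y = x)\wedge \psi)
  && \text{negated predication}
\end{align*}
```

The first is $$R_\lambda$$ with $$\lnot\psi$$ in place of $$\psi$$; the second is the contraposed form of $$R_\lambda$$ itself. The position of $$\lnot$$ relative to $$\lambda$$ thus does the work of Russell’s scope markers.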

## 3 Sequent Calculus

We now formalise the Russellian logic RL as a sequent calculus GRL. Sequents $$\varGamma \Rightarrow \varDelta$$ are ordered pairs of finite multisets of formulas, called the antecedent and the succedent, respectively. GRL is essentially the calculus G1c of Troelstra and Schwichtenberg [34] with rules for identity and lambda atoms: see Fig. 1.

Let us recall that formulas displayed in the schemata are active, whereas the remaining ones are parametric, or form a context. In particular, all active formulas in the premisses are called side formulas, and the one in the conclusion is the principal formula of the respective rule application. Proofs are defined in the standard way as finite trees with nodes labelled by sequents. The height of a proof $$\mathcal{D}$$ of $$\varGamma \Rightarrow \varDelta$$ is defined as the number of nodes of the longest branch in $$\mathcal{D}$$. $$\vdash _k \varGamma \Rightarrow \varDelta$$ means that $$\varGamma \Rightarrow \varDelta$$ has a proof with height at most k. $$\vdash$$ means that there is a proof of the expression standing to its right, be it a formula (in the case of HRL) or a sequent (in the case of GRL).

We need some auxiliary results. In particular, since $$(=-)$$ is Leibniz’ Principle restricted to atomic formulas, we must prove its unrestricted form.

### Lemma 1

1. $$\vdash b_1=b_2, \varphi [x/b_1] \Rightarrow \varphi [x/b_2]$$, for any formula $$\varphi$$.

2. If $$\vdash _k \varGamma \Rightarrow \varDelta$$, then $$\vdash _k \varGamma [b_1/b_2] \Rightarrow \varDelta [b_1/b_2]$$, where k is the height of a proof.

### Proof

1. follows by induction on the complexity of formulas, which is standard in all cases except those concerning lambda atoms with DD. We note that $$\varphi {^z_b}{^y_c}$$ is the same as $$\varphi {^y_c}{^z_b}$$, etc. We write $$[(\lambda x\psi )\imath y\varphi ]_{b_1}^z$$ to denote substitutions in lambda atoms in a more readable fashion. To simplify proofs, applications of weakening and contraction rules to derive shared contexts are omitted from now on. Let $$\mathcal {D}$$ be the following deduction, where the leaves are axioms and c is a fresh parameter:

Then we derive $$\vdash b_1=b_2, [(\lambda x\psi )\imath y\varphi ]^z_{b_1} \Rightarrow [(\lambda x\psi )\imath y\varphi ]^z_{b_2}$$:

The two left leaves are provable by the induction hypothesis (if $$b_1, b_2$$ are not present in $$\psi$$ or $$\varphi$$, we have an axiomatic sequent).

The proof of 2 is by a standard induction on the height of proofs; the rules for lambda atoms with DD are treated similarly to the rules for quantifiers.    $$\square$$

Let us now show that the Russellian axiom $$R_\lambda$$ is provable in GRL. We will provide proofs for two sequents corresponding to two implications. Let $$\mathcal {D}$$ be:

The following establishes one half of $$R_\lambda$$:

where the only nonaxiomatic sequent is provable by lemma 1.1. Next, where $$\mathcal {D}$$ is:

the following establishes the other half of $$R_\lambda$$:

Conversely, the three rules for lambda atoms with DD are derivable in G1 with $$R_\lambda$$ added in the form of two axiomatic sequents. To derive $$(\imath _1\Rightarrow )$$, let $$R_\lambda ^\Rightarrow$$ be $$(\lambda x\psi )\imath y\varphi \Rightarrow \exists x(\forall y(\varphi \leftrightarrow y = x)\wedge \psi )$$:

To derive $$(\imath _2\Rightarrow )$$, use (Cut) with $$(\lambda x\psi )\imath y\varphi \Rightarrow \exists x(\forall y(\varphi \leftrightarrow y = x)\wedge \psi )$$ and:

The following derives $$(\Rightarrow \imath )$$:

where the right premiss of (Cut) is provable by lemma 1.1, and the conclusion of the rule follows by (Cut) with $$\exists x(\forall y(\varphi \leftrightarrow y=x)\wedge \psi )\Rightarrow (\lambda x\psi )\imath y\varphi$$.

Since the proofs of the interderivability of the axiom of $$\lambda$$-conversion and $$(\lambda \Rightarrow ), (\Rightarrow \lambda )$$ are trivial, we are done and conclude with:

### Theorem 1

$$\vdash _{HRL}\varphi$$ iff $$\vdash _{GRL} \ \Rightarrow \varphi$$

## 4 Cut Elimination

We will show that (Cut) is eliminable from every proof in GRL using the general strategy of cut elimination proofs applied originally for hypersequent calculi in Metcalfe, Olivetti and Gabbay [24], which works well also in the context of standard sequent calculi (see [15]). Such a proof has a particularly simple structure and allows us to avoid many complexities inherent in other methods of proving cut elimination. In particular, we avoid well known problems with contraction, since two auxiliary lemmata deal with this problem in advance. We assume that all proofs are regular in the sense that every parameter a which is fresh by the side condition of the respective rule must be fresh in the entire proof, not only on the branch where the application of this rule takes place. There is no loss of generality since every proof may be systematically transformed into a regular one by lemma 1.2. The following notions are crucial for the proof:

1. The cut-degree is the complexity of the cut-formula $$\varphi$$, i.e. the number of logical constants (connectives, quantifiers and operators) occurring in $$\varphi$$; it is denoted by $$d\varphi$$.

2. The proof-degree ($$d\mathcal{D}$$) is the maximal cut-degree in $$\mathcal{D}$$.
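For illustration (our own example), $$d(Pab)=0$$ for an atomic formula, while for a lambda atom with a DD both operators are counted:

```latex
d\bigl(\lnot(\lambda x\,Px)\,\imath y\,Qy\bigr) = 3
\qquad \text{one occurrence each of } \lnot,\ \lambda,\ \imath .
```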

The proof of the cut elimination theorem is based on two lemmata which successively make a reduction: first of the height of the right, and then of the height of the left premiss of cut. $$\varphi ^k, \varGamma ^k$$ denote $$k > 0$$ occurrences of $$\varphi , \varGamma$$, respectively.

### Lemma 2 (Right reduction)

Let $$\mathcal{D}_1 \vdash \varGamma \Rightarrow \varDelta , \varphi$$ and $$\mathcal{D}_2 \vdash \varphi ^k, \varPi \Rightarrow \varSigma$$, with $$d\mathcal{D}_1, d\mathcal{D}_2 < d\varphi$$ and $$\varphi$$ principal in $$\varGamma \Rightarrow \varDelta , \varphi$$. Then we can construct a proof $$\mathcal{D}$$ such that $$\mathcal{D} \vdash \varGamma ^k, \varPi \Rightarrow \varDelta ^k, \varSigma$$ and $$d\mathcal{D} < d\varphi$$.

### Proof

By induction on the height of $$\mathcal{D}_2$$. The basis is trivial, since $$\varGamma \Rightarrow \varDelta , \varphi$$ is identical with $$\varGamma ^k, \varPi \Rightarrow \varDelta ^k, \varSigma$$. The induction step requires examination of all cases of possible derivations of $$\varphi ^k, \varPi \Rightarrow \varSigma$$, and the role of the cut-formula in the transition. In cases where all occurrences of $$\varphi$$ are parametric we simply apply the induction hypothesis to the premisses of $$\varphi ^k, \varPi \Rightarrow \varSigma$$ and then apply the respective rule – it is essentially due to the context independence of almost all rules and the regularity of proofs, which together prevent violation of side conditions on eigenvariables. If one of the occurrences of $$\varphi$$ in the premiss(es) is a side formula of the last rule we must additionally apply weakening to restore the missing formula before the application of the relevant rule.

In cases where one occurrence of $$\varphi$$ in $$\varphi ^k, \varPi \Rightarrow \varSigma$$ is principal we make use of the fact that $$\varphi$$ in the left premiss is also principal; for the cases of contraction and weakening this is trivial. We consider the cases of lambda atoms with DD. Hence $$\mathcal{D}_1$$ finishes with:

$$\dfrac{\varGamma \Rightarrow \varDelta , \varphi [y/b] \qquad \varGamma \Rightarrow \varDelta , \psi [x/b] \qquad \varphi [y/a], \varGamma \Rightarrow \varDelta , a=b}{\varGamma \Rightarrow \varDelta , (\lambda x\psi )\imath y\varphi }$$

and $$\mathcal{D}_2$$ finishes with:

$$\dfrac{\varphi [y/a'], \psi [x/a'], ((\lambda x\psi )\imath y\varphi )^{k-1}, \varPi \Rightarrow \varSigma }{((\lambda x\psi )\imath y\varphi )^k, \varPi \Rightarrow \varSigma }$$

or

$$\dfrac{((\lambda x\psi )\imath y\varphi )^{k-1}, \varPi \Rightarrow \varSigma , \varphi [y/b_1] \qquad ((\lambda x\psi )\imath y\varphi )^{k-1}, \varPi \Rightarrow \varSigma , \varphi [y/b_2] \qquad b_1 = b_2, ((\lambda x\psi )\imath y\varphi )^{k-1}, \varPi \Rightarrow \varSigma }{((\lambda x\psi )\imath y\varphi )^k, \varPi \Rightarrow \varSigma }$$

In the first case, by the induction hypothesis and lemma 1.2 we obtain $$\varphi [y/b], \psi [x/b], \varGamma ^{k-1}, \varPi \!\!\Rightarrow \varDelta ^{k-1}, \varSigma$$ and by two cuts with the leftmost and central premiss of $$(\Rightarrow \imath )$$ in $$\mathcal{D}_1$$ we obtain $$\varGamma ^{k+1}, \varPi \!\!\Rightarrow \varDelta ^{k+1}, \varSigma$$, which by contraction yields the result.

In the second case note first that by lemma 1.2 from the rightmost premiss of $$(\Rightarrow \imath )$$ in $$\mathcal{D}_1$$ we obtain

a. $$\varphi [y/b_1], \varGamma \Rightarrow \varDelta , b_1=b$$ and

b. $$\varphi [y/b_2], \varGamma \Rightarrow \varDelta , b_2=b$$.

Again by the induction hypothesis from the three premisses we get:

1. $$\varGamma ^{k-1}, \varPi \Rightarrow \varDelta ^{k-1}, \varSigma , \varphi [y/b_1]$$

2. $$\varGamma ^{k-1}, \varPi \Rightarrow \varDelta ^{k-1}, \varSigma , \varphi [y/b_2]$$

3. $$b_1=b_2, \varGamma ^{k-1}, \varPi \Rightarrow \varDelta ^{k-1}, \varSigma$$

We proceed as follows with a series of applications of cut, followed by contractions, using the provable sequent $$b_1=b, b_2=b \Rightarrow b_1=b_2$$:

$$\square$$

### Lemma 3 (Left reduction)

Let $$\mathcal{D}_1 \vdash \varGamma \Rightarrow \varDelta , \varphi ^k$$ and $$\mathcal{D}_2 \vdash \varphi , \varPi \Rightarrow \varSigma$$, with $$d\mathcal{D}_1, d\mathcal{D}_2 < d\varphi$$. Then we can construct a proof $$\mathcal{D}$$ such that $$\mathcal{D} \vdash \varGamma , \varPi ^k \Rightarrow \varDelta , \varSigma ^k$$ and $$d\mathcal{D} < d\varphi$$.

### Proof

By induction on the height of $$\mathcal{D}_1$$, but with some important differences from the proof of the right reduction lemma. First note that we do not require $$\varphi$$ to be principal in $$\varphi , \varPi \Rightarrow \varSigma$$; this includes the case where $$\varphi$$ is atomic. In all these cases we just apply the induction hypothesis. This guarantees that even if an atomic cut formula was introduced in the right premiss by $$(=-)$$, the reduction of height is carried out only on the left premiss, and we always obtain the expected result. Now, in cases where one occurrence of $$\varphi$$ in $$\varGamma \Rightarrow \varDelta , \varphi ^k$$ is principal, we first apply the induction hypothesis to eliminate all other $$k-1$$ occurrences of $$\varphi$$ in the premisses and then apply the respective rule. Since the only new occurrence of $$\varphi$$ is principal, we can make use of the right reduction lemma and obtain the result, possibly after some applications of structural rules.    $$\square$$

Now we are ready to prove the cut elimination theorem:

### Theorem 2

Every proof in GRL can be transformed into a cut-free proof.

### Proof

By double induction: primary on $$d\mathcal{D}$$ and subsidiary on the number of maximal cuts (in the basis and in the inductive step of the primary induction). We always take the topmost maximal cut and apply lemma 3 to it. By successive repetition of this procedure we reduce either the degree of a proof or the number of cuts in it until we obtain a cut-free proof.    $$\square$$

## 5 Soundness and Completeness

In this section, we’ll make use of the fact that for every set there is a corresponding multiset, so if $$\varGamma$$, $$\varDelta$$ are sets of formulas, we may write $$\varGamma \Rightarrow \varDelta$$. We recall that we treat $$\vee , \rightarrow , \exists$$ as defined notions. For the completeness proof we assume that a denumerable set of individual constants may be added to the language. I assigns objects in the domain D of the model $$\langle D, I\rangle$$ to these constants. For brevity we introduce the notation $$I_v$$: if t is a variable or parameter, $$I_v(t)=v(t)$$, and if t is a constant, $$I_v(t)=I(t)$$.

Recall the distinction between terms and quasi-terms: the former are variables and parameters, and now also constants; the latter are iota terms. In the following lemma, t denotes a variable, parameter or constant, not a DD; hence the proof is standard, with the case of lambda atoms similar to the case of quantifiers. In the rest of this section, too, t will refer to terms only. In particular, there is no need to consider quasi-terms in the Lindenbaum-Henkin construction (theorem 4), because only terms can be used in substitutions in the formulas concerned. Quasi-terms are treated, just as they are in the semantics, as occurring only in lambda atoms, and thus like the logical constants in the consideration of the consistent addition of formulas to a set in the construction of its maximally consistent extension.

### Lemma 4 (The Substitution Lemma.)

$$M, v \models \varphi ^x_t$$ iff $$M, v^x_{I_v(t)} \models \varphi$$, if t is free for x in $$\varphi$$.

### Proof

See e.g. [7, 133f] and adjust.    $$\square$$

Next, the soundness of GRL.

### Theorem 3 (Soundness of GRL)

If $$\vdash \varGamma \Rightarrow \varDelta$$, then $$\models \varGamma \Rightarrow \varDelta$$

### Proof

By induction on the height of the proof. Since it is well-known that the rules of G1 are validity preserving, and it is obvious for both lambda rules, we show this property only for $$(\imath _2\Rightarrow )$$ and $$(\Rightarrow \imath )$$, leaving $$(\imath _1\Rightarrow )$$ as an exercise.

$$(\imath _2\Rightarrow )$$. Suppose (1) $$\models \varGamma \Rightarrow \varDelta , \varphi _{b_1}^y$$, (2) $$\models \varGamma \Rightarrow \varDelta , \varphi _{b_2}^y$$, (3) $$\models b_1=b_2, \varGamma \Rightarrow \varDelta$$, and $$\not \models (\lambda x\psi )\imath y\varphi , \varGamma \Rightarrow \varDelta$$. By the last, there are a structure $$M=\langle D, I\rangle$$ and assignment v, such that $$M, v\models (\lambda x\psi )\imath y\varphi$$, for all $$\gamma \in \varGamma$$, $$M, v\models \gamma$$ and for all $$\delta \in \varDelta$$, $$M, v\not \models \delta$$. Thus by (1), (2) and (3): (4) $$M, v\models \varphi _{b_1}^y$$, (5) $$M, v\models \varphi _{b_2}^y$$ and (6) $$M, v\not \models b_1=b_2$$. And there is an $$o\in D$$ such that $$M, v^x_o \models \psi$$, and $$M, v^x_o \models \varphi [y/x]$$, and (7) for any y-variant $$v'$$ of $$v^x_o$$, if $$M, v' \models \varphi$$, then $$v'(y)=o$$. By the conventions on the use of free and bound variables in sequents, x is not free in $$\varphi _{b_1}^y$$ or $$\varphi _{b_2}^y$$, so v and $$v_o^x$$ agree on them, and so by (4) and (5) $$M, v_o^x\models \varphi _{b_1}^y$$ and $$M, v_o^x\models \varphi _{b_2}^y$$. By the substitution lemma, $$M, v{_o^x}{^y_{I_v({b_1})}}\models \varphi$$ and $$M, v{_o^x}{^y_{I_v({b_2})}}\models \varphi$$. So the y-variants $$v'$$ and $$v''$$ of $$v_o^x$$ that assign $$I_{v_o^x}(b_1)$$ and $$I_{v_o^x}(b_2)$$ to y satisfy $$\varphi$$ with M, so by (7) $$I_{v'}(b_1)=I_{v''}(b_2)=o$$. But $$v'$$ and $$v''$$ differ from v only in what they assign to x and y, and by (6) $$I_v(b_1)\not =I_v(b_2)$$. Contradiction.

$$(\Rightarrow \imath )$$. Suppose (1) $$\models \varGamma \Rightarrow \varDelta , \varphi _b^y$$, (2) $$\models \varGamma \Rightarrow \varDelta , \psi _b^x$$, (3) $$\models \varphi _a^y, \varGamma \Rightarrow \varDelta , a=b$$, but $$\not \models \varGamma \Rightarrow \varDelta , (\lambda x\psi )\imath y\varphi$$, where a is not free in any formula in $$\varGamma$$ or $$\varDelta$$, nor in $$\varphi$$. Then there are a structure $$M=\langle D, I\rangle$$ and assignment v such that for all $$\gamma \in \varGamma$$, $$M, v\models \gamma$$, for all $$\delta \in \varDelta$$, $$M, v\not \models \delta$$ and (4) $$M, v\not \models (\lambda x\psi )\imath y\varphi$$. So by (1), $$M, v\models \varphi _b^y$$, by (2), $$M, v\models \psi _b^x$$, and by (4), it is not the case that there is an $$o\in D$$ such that $$M, v^x_o \models \psi$$, and $$M, v^x_o \models \varphi _x^y$$, and for any y-variant $$v'$$ of $$v^x_o$$, if $$M, v' \models \varphi$$, then $$v'(y)=o$$, i.e. for every $$o\in D$$, either $$M, v^x_o \not \models \psi$$, or $$M, v_o^x\not \models \varphi _x^y$$, or for some y-variant $$v'$$ of $$v^x_o$$, $$M, v' \models \varphi$$ and $$v'(y)\not =o$$. Consider $$I_v(b)$$. We have either (5) $$M, v^x_{I_v(b)} \not \models \psi$$, or (6) $$M, v_{I_v(b)}^x\not \models \varphi _x^y$$, or (7) for some y-variant $$v'$$ of $$v^x_{I_v(b)}$$, $$M, v' \models \varphi$$ and $$v'(y)\not ={I_v(b)}$$. By the substitution lemma from (5) and (6) we have $$M, v\not \models \psi _b^x$$ and $$M, v\not \models \varphi {_x^y}{^x_b}$$, and as $$\varphi {_x^y}{^x_b}$$ is the same as $$\varphi ^y_b$$, this contradicts consequences of (1) and (2). By conventions on the use of free and bound variables in sequents, x and y are not free in any of their formulas, so $$v^x_{I_v(b)}$$ agrees with v on all formulas in $$\varGamma$$, $$\varDelta$$, so for all $$\gamma \in \varGamma$$, $$M, v^x_{I_v(b)}\models \gamma$$, and for all $$\delta \in \varDelta$$, $$M, v^x_{I_v(b)}\not \models \delta$$.
So by (3), if $$M, v^x_{I_v(b)}\models \varphi _a^y$$, then $$M, v^x_{I_v(b)}\models a=b$$. By the substitution lemma and the semantic clause for identity, if $$M, v{^x_{I_v(b)}}{^y_{I_v(a)}}\models \varphi$$, then $$I_v(a)=I_v(b)$$. Now evidently $$v{^x_{I_v(b)}}{^y_{I_v(a)}} (y)=I_v(a)$$, so $$v{^x_{I_v(b)}}{^y_{I_v(a)}} (y)=I_v(b)$$. But $$v{^x_{I_v(b)}}{^y_{I_v(a)}}$$ is a y-variant of $$v{^x_{I_v(b)}}$$, and the reasoning holds for any such y-variant, contradicting (7).    $$\square$$

Let $$\bot$$ represent an arbitrary contradiction. A set of formulas $$\varGamma$$ is inconsistent iff $$\varGamma \vdash \bot$$. $$\varGamma$$ is consistent iff it is not inconsistent. A set of formulas $$\varGamma$$ is maximal iff for any formula A, either $$A\in \varGamma$$ or $$\lnot A\in \varGamma$$. A set of formulas $$\varGamma$$ is deductively closed iff, if $$\varGamma \vdash A$$, then $$A\in \varGamma$$. We state without proof this standard result:

### Lemma 5

Any maximally consistent set is deductively closed.

Extend $$\mathcal {L}$$ to a language $$\mathcal {L}^+$$ by adding countably many new constants, ordered in a list $$\mathcal {C}=c_1, c_2\ldots$$. We will say that such a constant occurs parametrically if its occurrence satisfies the restrictions imposed on parameters in $$(\Rightarrow \forall )$$ and $$(\imath _1\Rightarrow )$$.

### Theorem 4

Any consistent set of formulas $$\varDelta$$ can be extended to a maximally consistent set $$\varDelta ^+$$ such that:

(a) for any formula $$\varphi$$ and variable x, if $$\lnot \forall x\varphi \in \varDelta ^+$$, then for some constant c, $$\varphi _c^x\not \in \varDelta ^+$$;

(b) for any formulas $$\varphi$$, $$\psi$$ and variables x, y, if $$(\lambda x\psi )\imath y\varphi \in \varDelta ^+$$, then for some constant c, $$\varphi _c^y, \psi _c^x\in \varDelta ^+$$ and for all terms t, if $$\varphi _t^y\in \varDelta ^+$$, then $$t=c\in \varDelta ^+$$;

(c) for any formulas $$\varphi$$, $$\psi$$ and variables x, y, if $$\lnot (\lambda x\psi )\imath y\varphi \in \varDelta ^+$$, then for all terms t, either $$\varphi _t^y\not \in \varDelta ^+$$, or for some constant c, $$\varphi _c^y\in \varDelta ^+$$ and $$c=t\not \in \varDelta ^+$$, or $$\psi _t^x\not \in \varDelta ^+$$.

### Proof

Extend $$\varDelta$$ by following an enumeration $$\phi _1, \phi _2\ldots$$ of the formulas of $$\mathcal {L}^+$$ in which every formula occurs infinitely many times, as follows:

$$\varDelta _0=\varDelta$$

If $$\varDelta _n, \phi _n$$ is inconsistent, then

$$\varDelta _{n+1} = \varDelta _n$$.

If $$\varDelta _n, \phi _n$$ is consistent, then:

(i)

If $$\phi _n$$ has neither the form $$\lnot \forall x\varphi$$ nor $$(\lambda x\psi )\imath y\varphi$$ nor $$\lnot (\lambda x\psi )\imath y\varphi$$, then

$$\varDelta _{n+1}=\varDelta _n, \phi _n$$.

(ii)

If $$\phi _n$$ has the form $$\lnot \forall x\varphi$$, then

$$\varDelta _{n+1}=\varDelta _n, \lnot \forall x\varphi , \lnot \varphi _c^x$$

where c is the first constant of $$\mathcal {C}$$ that does not occur in $$\varDelta _n$$ or $$\phi _n$$.

(iii)

If $$\phi _n$$ has the form $$(\lambda x\psi )\imath y\varphi$$, then

$$\varDelta _{n+1}=\varDelta _n, (\lambda x\psi )\imath y\varphi , \varphi _c^y, \psi _c^x$$

where c is the first constant of $$\mathcal {C}$$ that does not occur in $$\varDelta _n$$ or $$\phi _n$$.

(iv)

If $$\phi _n$$ has the form $$\lnot (\lambda x\psi )\imath y\varphi$$, then

$$\varDelta _{n+1}=\varDelta _n, \lnot (\lambda x\psi )\imath y \varphi , \varSigma _n$$

where $$\varSigma _n$$ is constructed in the following way. Take a sequence of formulas $$\sigma _1, \sigma _2\ldots$$ of the form $$\varphi _t^y\rightarrow (\psi _t^x\rightarrow \lnot (\varphi _c^y\rightarrow c=t))$$, where t is a term occurring in $$\varDelta _n, \phi _n$$, and c is a constant of $$\mathcal {C}$$ not occurring in $$\varDelta _n, \phi _n$$ or in any previous formula of the sequence. Let $$\mathcal {T}=t_1, t_2, \ldots$$ be an enumeration of all terms occurring in $$\varDelta _n, \phi _n$$. In case $$\varDelta _0$$ contains infinitely many formulas, it must be ensured that $$\mathcal {C}$$ is not depleted of constants needed later. So pick constants from $$\mathcal {C}$$ by a method that ensures some constants are always left over for later use. The following will do. Let $$\sigma _1$$ be $$\varphi _{t_1}^y\rightarrow (\psi _{t_1}^x\rightarrow \lnot (\varphi _{c_1}^y\rightarrow c_1=t_1))$$, where $$t_1$$ is the first term of $$\mathcal {T}$$ and $$c_1$$ is the first constant of $$\mathcal {C}$$ not in $$\varDelta _n, \phi _n$$; let $$\sigma _2$$ be $$\varphi _{t_2}^y\rightarrow (\psi _{t_2}^x\rightarrow \lnot (\varphi _{c_2}^y\rightarrow c_2=t_2))$$, where $$t_2$$ is the second term of $$\mathcal {T}$$ and $$c_2$$ is the $$2^2=4$$th constant of $$\mathcal {C}$$ not in $$\varDelta _n, \phi _n, \sigma _1$$. In general, let $$\sigma _i$$ be $$\varphi _{t_i}^y\rightarrow (\psi _{t_i}^x\rightarrow \lnot (\varphi _{c_i}^y\rightarrow c_i=t_i))$$, where $$t_i$$ is the ith term of $$\mathcal {T}$$ and $$c_i$$ is the $$2^i$$th constant of $$\mathcal {C}$$ not in $$\varDelta _n, \phi _n$$ nor any $$\sigma _j$$, $$j<i$$. The entire collection of the $$\sigma _i$$ is $$\varSigma _n$$.
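The constant-selection scheme just described can be sketched computationally. In the following Python sketch (purely illustrative; constants are represented by their indices in the list $$\mathcal {C}$$), stage i picks the $$2^i$$th constant not yet consumed. Since the $$2^i$$th unused constant always has index at least 2, the first constant, for instance, is never consumed, and unused constants remain available at every stage:

```python
from itertools import count

def pick_constant(i, used):
    """Return the index of the 2**i-th constant (by position in the
    list c_1, c_2, ...) that is not in `used`, mimicking case (iv)
    of the Lindenbaum construction."""
    fresh = (k for k in count(1) if k not in used)
    target = 2 ** i
    for pos, k in enumerate(fresh, start=1):
        if pos == target:
            return k

used = set()      # indices of constants occurring in Delta_n, phi_n, sigma_j
picks = []
for i in range(1, 6):             # build sigma_1 ... sigma_5
    c = pick_constant(i, used)
    used.add(c)
    picks.append(c)

print(picks)                       # [2, 5, 10, 19, 36]
# many small-index constants are still untouched after 5 stages:
leftover = [k for k in range(1, max(picks)) if k not in used]
print(len(leftover))               # 31
```

The exponential gaps are what guarantee that, even when $$\varSigma _n$$ is infinite, the stock of fresh constants is never exhausted.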

$$\varDelta _{n+1}$$ is consistent if $$\varDelta _n, \phi _n$$ is:

Case (i). Trivial.

Case (ii). Suppose $$\varDelta _{n+1}=\varDelta _n, \lnot \forall x\varphi , \lnot \varphi _c^x$$ is inconsistent. Then for some finite $$\varDelta _n'\subseteq \varDelta _n$$: $$\vdash \varDelta _n', \lnot \forall x\varphi , \lnot \varphi _c^x\Rightarrow \bot$$. Hence $$\vdash \varDelta _n', \lnot \forall x\varphi \Rightarrow \varphi _c^x$$ by deductive properties of negation. c does not occur in any formula in $$\varDelta _n'$$ nor in $$\lnot \forall x\varphi$$, so it occurs parametrically, and so by $$(\Rightarrow \forall )$$, $$\vdash \varDelta _n', \lnot \forall x\varphi \Rightarrow \forall x\varphi$$. Hence $$\vdash \varDelta _n' \Rightarrow \forall x\varphi$$, again by deductive properties of negation. But then $$\varDelta _n', \lnot \forall x\varphi$$ is inconsistent, and hence so is $$\varDelta _n, \lnot \forall x\varphi$$.

Case (iii). Suppose $$\varDelta _{n+1}=\varDelta _n, (\lambda x\psi )\imath y\varphi , \varphi _c^y, \psi _c^x$$ is inconsistent. Then for some finite $$\varDelta _n'\subseteq \varDelta _n$$, $$\vdash \varDelta _n', (\lambda x\psi )\imath y\varphi , \varphi _c^y, \psi _c^x\Rightarrow \bot$$. c does not occur in $$\varDelta _n', (\lambda x\psi )\imath y\varphi$$, so it occurs parametrically, and hence by $$(\imath _1\Rightarrow )$$, $$\vdash \varDelta _n', (\lambda x\psi )\imath y\varphi \Rightarrow \bot$$, that is to say $$\varDelta _n', (\lambda x\psi )\imath y\varphi$$ is inconsistent, and so is $$\varDelta _n, (\lambda x\psi )\imath y\varphi$$.

Case (iv). Suppose $$\varDelta _{n+1}=\varDelta _n, \lnot (\lambda x\psi )\imath y\varphi , \varSigma _n$$ is inconsistent. Then for some finite $$\varDelta _n'\subseteq \varDelta _n$$ and a finite $$\{\sigma _j\ldots \sigma _k\}\subseteq \varSigma _n$$, $$\vdash \varDelta _n', \lnot (\lambda x\psi )\imath y\varphi , \sigma _j\ldots \sigma _k\Rightarrow \bot$$. Let $$\sigma _k$$ be $$\varphi _{t_k}^y\rightarrow (\psi _{t_k}^x\rightarrow \lnot (\varphi _{c_k}^y\rightarrow c_k=t_k))$$. Then by the deductive properties of implication and negation:

$$\vdash \varDelta _n', \lnot (\lambda x\psi )\imath y\varphi , \sigma _j\ldots \sigma _{k-1}\Rightarrow \varphi _{t_k}^y$$

$$\vdash \varDelta _n', \lnot (\lambda x\psi )\imath y\varphi , \sigma _j\ldots \sigma _{k-1}\Rightarrow \psi _{t_k}^x$$

$$\vdash \varDelta _n', \lnot (\lambda x\psi )\imath y\varphi , \sigma _j\ldots \sigma _{k-1}, \varphi _{c_k}^y\Rightarrow c_k=t_k$$

$$c_k$$ was chosen so as not to occur in any previous $$\sigma _i$$, $$i<k$$, nor in $$\varDelta _n, \phi _n$$. Hence it occurs parametrically and the conditions for $$(\Rightarrow \imath )$$ are fulfilled. Thus $$\vdash \varDelta _n', \lnot (\lambda x\psi )\imath y\varphi , \sigma _j\ldots \sigma _{k-1}\Rightarrow (\lambda x\psi )\imath y\varphi$$. But $$\vdash \varDelta _n', \lnot (\lambda x\psi )\imath y\varphi , \sigma _j\ldots \sigma _{k-1}\Rightarrow \lnot (\lambda x\psi )\imath y\varphi$$. So $$\varDelta _n', \lnot (\lambda x\psi )\imath y\varphi , \sigma _j\ldots \sigma _{k-1}$$ is inconsistent. Repeat this process from $$\sigma _{k-1}$$ all the way down to $$\sigma _j$$, showing that $$\varDelta _n', \lnot (\lambda x\psi )\imath y\varphi$$ is inconsistent. Hence so is $$\varDelta _n, \lnot (\lambda x\psi )\imath y\varphi$$.

Let $$\varDelta ^+$$ be the union of all $$\varDelta _i$$. $$\varDelta ^+$$ is maximal, for if neither $$\varphi$$ nor $$\lnot \varphi$$ is in $$\varDelta ^+$$, then there is a $$\varDelta _k\subseteq \varDelta ^+$$ such that $$\varDelta _k, \varphi \vdash \bot$$ and $$\varDelta _k, \lnot \varphi \vdash \bot$$, but then $$\varDelta _k$$ is inconsistent, contradicting the method of construction of the $$\varDelta _k$$. $$\varDelta ^+$$ is consistent, because otherwise some $$\varDelta _i$$ would have to be inconsistent, but none is.

$$\varDelta ^+$$ satisfies (a) by construction.

To see that it satisfies (b), suppose $$(\lambda x\psi )\imath y\varphi \in \varDelta ^+$$. Then there is a $$\varDelta _{n+1}=\varDelta _n, (\lambda x\psi )\imath y\varphi , \varphi _c^y, \psi _c^x$$, and so $$\varphi _c^y, \psi _c^x\in \varDelta ^+$$. Suppose $$\varphi _t^y\in \varDelta ^+$$. Then there is a $$\varDelta '\subseteq \varDelta ^+$$ such that $$\vdash \varDelta '\Rightarrow \varphi _c^y$$, $$\vdash \varDelta '\Rightarrow \varphi _t^y$$ and by properties of identity $$\vdash t=c\Rightarrow t=c$$. But then by $$(\imath _2\Rightarrow )$$, $$\vdash \varDelta ', (\lambda x\psi )\imath y \varphi \Rightarrow t=c$$, hence $$t=c\in \varDelta ^+$$ by the deductive closure of $$\varDelta ^+$$.

To see that it satisfies (c), suppose $$\lnot (\lambda x\psi )\imath y\varphi \in \varDelta ^+$$, but for some term t, $$\varphi _t^y\in \varDelta ^+$$, (1) for all constants c, if $$\varphi _c^y\in \varDelta ^+$$, then $$c=t\in \varDelta ^+$$, and $$\psi _t^x\in \varDelta ^+$$. As every formula occurs infinitely many times on the enumeration of formulas of $$\mathcal {L}^+$$, there is a $$\varDelta _n$$ that contains $$\varphi _t^y$$ and $$\psi _t^x$$ and $$\varDelta _{n+1}=\varDelta _n, \lnot (\lambda x\psi )\imath y\varphi , \varSigma _n$$. Thus $$\varphi _t^y\rightarrow (\psi _t^x \rightarrow \lnot (\varphi _b^y\rightarrow b=t))\in \varSigma _n$$, for some constant b of $$\mathcal {C}$$. Consequently, this formula is in $$\varDelta ^+$$, too. By the deductive properties of implication and negation and the deductive closure and consistency of $$\varDelta ^+$$, (2) $$\varphi _b^y\in \varDelta ^+$$ and $$b=t\not \in \varDelta ^+$$. But by (1) and (2), $$b=t\in \varDelta ^+$$. Contradiction.

This completes the proof of Theorem 4.    $$\square$$
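The stage-by-stage extension in the proof above can be illustrated in a drastically simplified setting. In the Python sketch below, formulas are just propositional literals over a fixed stock of atoms, consistency means the absence of a complementary pair, and all names are illustrative toys rather than parts of the language $$\mathcal {L}^+$$:

```python
# Toy Lindenbaum construction over propositional literals only:
# '+a' stands for the atom a, '-a' for its negation.
ATOMS = ['p', 'q', 'r']

def consistent(lits):
    """A set of literals is consistent iff no atom occurs both
    positively and negatively."""
    return not any(('+' + a) in lits and ('-' + a) in lits for a in ATOMS)

def lindenbaum(delta, enumeration):
    """Add each formula of the enumeration exactly when the result
    stays consistent, as in the step from Delta_n to Delta_{n+1}."""
    current = set(delta)
    for phi in enumeration:
        if consistent(current | {phi}):
            current.add(phi)
    return current

enumeration = [s + a for a in ATOMS for s in ('+', '-')]
maximal = lindenbaum({'+p'}, enumeration)
print(sorted(maximal))     # ['+p', '+q', '+r'] — maximal and consistent
```

Cases (ii)–(iv) of the real construction, which add witnessing constants for negated universals and for (negated) lambda atoms, have no analogue in this literal-only toy; only the bare skeleton of the extension survives.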

### Theorem 5

If $$\varDelta$$ is a consistent set of formulas, then $$\varDelta$$ is satisfiable.

### Proof

Extend $$\varDelta$$ to a maximally consistent set $$\varDelta ^+$$ as per Theorem 4. We construct a structure $$M=\langle D, I\rangle$$ and a function $$v:VAR\cup PAR\rightarrow D$$ from $$\varDelta ^+$$ which will satisfy $$\varDelta$$. D is the set of equivalence classes of terms under the identities $$t_1=t_2\in \varDelta ^+$$. Denote the equivalence class to which t belongs by [t]. For all predicate letters P, $$\langle [t_1], ..., [t_n]\rangle \in I(P^n)$$ iff $$P^n(t_1, ..., t_n)\in \varDelta ^+$$. For all variables x, $$v(x)=[x]$$, and for all parameters a, $$v(a)=[a]$$. In these latter cases $$I_v=v$$, and for all new constants c of $$\mathcal {C}$$, $$I_v(c)=[c]$$. We’ll show by induction over the number of logical constants (connectives, quantifiers, $$\imath$$ and $$\lambda$$ symbols) in a formula $$\varphi$$ that $$M, v\models \varphi$$ if and only if $$\varphi \in \varDelta ^+$$.

Suppose $$\varphi$$ is an atomic formula. (a) $$\varphi$$ is $$P^n(t_1, ..., t_n)$$. Then $$M, v\models P^n(t_1, ..., t_n)$$ iff $$\langle I_v(t_1), ..., I_v(t_n)\rangle \in I(P^n)$$, iff $$\langle [t_1]\ldots [t_n]\rangle \in I(P^n)$$, iff $$P^n(t_1, ..., t_n)\in \varDelta ^+$$. (b) $$\varphi$$ is $$t_1=t_2$$. Then $$M, v\models t_1=t_2$$ iff $$I_v(t_1)=I_v(t_2)$$, iff $$[t_1]=[t_2]$$, and as these are equivalence classes under identities in $$\varDelta ^+$$, iff $$t_1=t_2\in \varDelta ^+$$.
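The quotient domain D used here can be sketched with a union–find structure: each identity $$t_1=t_2\in \varDelta ^+$$ merges two classes, and the extension of a predicate is read off from the atomic formulas in $$\varDelta ^+$$. The terms, identities, and the predicate P in this Python sketch are hypothetical illustrations, not drawn from the paper:

```python
class TermModel:
    """Toy canonical model: D = equivalence classes of terms under
    the identities of a maximally consistent set, here supplied as
    explicit merge operations."""
    def __init__(self, terms):
        self.parent = {t: t for t in terms}

    def find(self, t):
        """Representative of the class [t], with path halving."""
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def identify(self, t1, t2):
        """Process an identity t1 = t2 from Delta+."""
        self.parent[self.find(t1)] = self.find(t2)

terms = ['a', 'b', 'c1', 'c2']
m = TermModel(terms)
m.identify('a', 'c1')          # a = c1 in Delta+
m.identify('c1', 'c2')         # c1 = c2 in Delta+

# extension of a unary predicate P, read off atoms P(t) in Delta+:
atoms = [('P', 'a')]
I_P = {m.find(t) for (p, t) in atoms if p == 'P'}
print(m.find('a') == m.find('c2'))   # True: a and c2 denote one element
print(m.find('c2') in I_P)           # True: P holds of [c2] as well
```

Storing I(P) on class representatives makes the extension automatically respect the identities of $$\varDelta ^+$$, mirroring the way the atomic cases (a) and (b) of the induction go through.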

For the rest of the proof suppose that $$M, v\models \varphi$$ if and only if $$\varphi \in \varDelta ^+$$, for all formulas $$\varphi$$ with fewer than n logical constants. We skip the standard cases of $$\lnot , \wedge , \forall$$ (see e.g. [7]).

Case 4. $$\varphi$$ is $$(\lambda x\psi ) t$$.

$$(\lambda x\psi )t\in \varDelta ^+$$ iff $$\psi _t^x\in \varDelta ^+$$ by deductive closure of $$\varDelta ^+$$, iff $$M, v\models \psi _t^x$$ by induction hypothesis. t must be free for x in $$\psi$$, hence by the substitution lemma, $$M, v\models \psi _t^x$$ iff $$M, v_{I_v(t)}^x \models \psi$$, iff $$M, v_{[t]}^x \models \psi$$, since $$I_v(t)=[t]$$ by construction of M; and this in turn is the case iff $$M, v\models (\lambda x\psi ) t$$ by the first semantic clause for lambda atoms.

Case 5. $$\varphi$$ is $$(\lambda x\psi )\imath y\chi$$.

(a) If $$(\lambda x\psi )\imath y\chi \not \in \varDelta ^+$$, then by deductive closure $$\lnot (\lambda x\psi )\imath y\chi \in \varDelta ^+$$, and so for all terms t, either $$\chi _t^y\not \in \varDelta ^+$$, or for some constant c, $$\chi _c^y\in \varDelta ^+$$ and $$c=t\not \in \varDelta ^+$$, or $$\psi _t^x\not \in \varDelta ^+$$. $$[t]\in D$$ iff t is a term, so by induction hypothesis, for all $$[t]\in D$$, either $$M, v\nvDash \chi _t^y$$, or there is a $$[c]\in D$$ such that $$M, v\models \chi _c^y$$ and $$M, v\nvDash c=t$$, or $$M, v\nvDash \psi _t^x$$. $$\chi _t^y$$ is the same formula as $$\chi {_x^y}{_t^x}$$, so $$M, v\nvDash \chi {_x^y}{_t^x}$$. Furthermore, x and y are not free in $$\chi _c^y$$, so for any $$o\in D$$, $$M, v\models \chi _c^y$$ iff $$M, v_o^x\models \chi _c^y$$. By the substitution lemma, either $$M, v_{I_v(t)}^x\nvDash \chi _x^y$$, or $$M, v_{I_v(t)}^x\nvDash \psi$$, or there is a $$[c]\in D$$ such that $$M, v_{I_v(t)}^x{_{I_v(c)}^y}\models \chi$$ and $$M, v_{I_v(t)}^x{_{I_v(c)}^y}\nvDash y=x$$. $$I_v(t)=[t]$$ and $$I_v(c)=[c]$$, so either $$M, v_{[t]}^x\nvDash \chi _x^y$$, or $$M, v_{[t]}^x\nvDash \psi$$, or there is a $$[c]\in D$$ such that $$M, v_{[t]}^x{_{[c]}^y}\models \chi$$ and $$M, v_{[t]}^x{_{[c]}^y}\nvDash y=x$$, i.e. $$v_{[t]}^x{_{[c]}^y}(y)\not =[t]$$. $$v_{[t]}^x{_{[c]}^y}$$ is a y-variant of $$v_{[t]}^x$$, hence $$M, v\nvDash (\lambda x\psi )\imath y\chi$$.

(b) If $$(\lambda x\psi )\imath y\chi \in \varDelta ^+$$, then for some constant c, $$\psi _c^x, \chi _c^y \in \varDelta ^+$$ and for all terms t, if $$\chi _t^y\in \varDelta ^+$$, then $$c=t\in \varDelta ^+$$. By induction hypothesis, $$M, v\models \psi _c^x$$ and $$M, v\models \chi _c^y$$. As y is either identical to x or x is not free in $$\chi$$, $$\chi _c^y$$ is the same formula as $$\chi {_x^y}{_c^x}$$ and $$I_v(c)=[c]$$, so by the substitution lemma $$M, v_{[c]}^x\models \psi$$ and $$M, v_{[c]}^x\models \chi _x^y$$. Furthermore, for all $$[t]\in D$$, if $$M, v\models \chi _t^y$$, then $$M, v\models c=t$$, i.e. $$I_v(t)=I_v(c)$$, i.e. $$I_v(t)=[c]$$. Let $$v'$$ be a y-variant of $$v_{[c]}^x$$, i.e. $$v'=v{_{[c]}^x}{_{[s]}^y}$$, for some $$[s]\in D$$. Either y is identical to x or x is not free in $$\chi$$, so $$v{_{[c]}^x}{_{[s]}^y}$$ and v agree on the assignments of elements of D to all variables in $$\chi$$ except possibly y, and so $$M, v{_{[c]}^x}{_{[s]}^y}\models \chi$$ iff $$M, v_{[s]}^y\models \chi$$. So suppose now $$M, v'\models \chi$$ and $$v'(y)\not =[c]$$. $$v'(y)=[s]$$, so $$[c]\not =[s]$$. Then $$M, v_{[s]}^y\models \chi$$, and also if $$M, v\models \chi _s^y$$, then $$M, v\models c=s$$, i.e. $$I_v(s)=I_v(c)$$, i.e. $$I_v(s)=[c]$$. But $$I_v(s)=[s]$$, so $$I_v(s)\not =[c]$$. Hence $$M, v\nvDash \chi _s^y$$, and so by the substitution lemma, $$M, v_{[s]}^y\nvDash \chi$$. Contradiction.

Finally, restrict the language again to the language of $$\varDelta$$: the structure M constructed from $$\varDelta ^+$$ satisfies $$\varDelta$$. This completes the proof of Theorem 5.    $$\square$$

### Theorem 6 (Completeness for Sequents)

If $$\models \varGamma \Rightarrow \varDelta$$, then $$\vdash \varGamma \Rightarrow \varDelta$$.

### Proof

Let $$\lnot \varDelta$$ be the set of negations of all formulas in $$\varDelta$$. If $$\models \varGamma \Rightarrow \varDelta$$, then $$\varGamma , \lnot \varDelta$$ is not satisfiable. Hence by Theorem 5 it is inconsistent, and as $$\varGamma$$ and $$\varDelta$$ are both finite, $$\vdash \varGamma , \lnot \varDelta \Rightarrow \bot$$. Hence by the properties of negation $$\vdash \varGamma \Rightarrow \varDelta$$.    $$\square$$

### Theorem 7 (Completeness for Sets)

If $$\varGamma \models A$$, then $$\varGamma \vdash A$$.

### Proof

Suppose $$\varGamma \models A$$. Then $$\varGamma , \lnot A$$ is not satisfiable, hence by Theorem 5 it is inconsistent and $$\varGamma , \lnot A\vdash \bot$$. So for some finite $$\varSigma \subseteq \varGamma , \lnot A$$, $$\vdash \varSigma \Rightarrow \bot$$. If $$\lnot A\in \varSigma$$, then by the deductive properties of negation, $$\vdash \varSigma -\{\lnot A\}\Rightarrow A$$, and as $$\varSigma -\{\lnot A \}$$ is certain to be a subset of $$\varGamma$$, $$\varGamma \vdash A$$. If $$\lnot A\not \in \varSigma$$, then $$\vdash \varSigma \Rightarrow A$$ by the properties of negation, and again $$\varGamma \vdash A$$.    $$\square$$

By Theorems 1 and 7 we also obtain the (strong) completeness of HRL.

## 6 Conclusion

Summing up, RL saves the essential features of the Russellian approach to definite descriptions. It avoids problems like the arbitrary restriction of axiom R to predicate symbols and scoping difficulties. In the semantics it retains the reductionist Russellian flavour in the sense that DD are not interpreted by the interpretation function; instead they are treated as a special case in the clauses of the forcing definition for lambda atoms. In this respect RL differs from the approach of Fitting and Mendelsohn [10], which is closer to the Fregean tradition.

The rules of GRL are in principle direct counterparts of the tableau rules from [17], but with two important exceptions. The tableau rule corresponding to $$(=-)$$ is not restricted to atomic formulas, and the tableau rule corresponding to $$(\imath _2\Rightarrow )$$ is not branching. Its counterpart in the sequent calculus would be:

$$(\imath _2\Rightarrow ')$$ $$\dfrac{b_1=b_2, \varGamma \Rightarrow \varDelta }{(\lambda x\psi )\imath y\varphi , \varphi [y/b_1], \varphi [y/b_2], \varGamma \Rightarrow \varDelta }$$

Such a non-branching rule is certainly much better for proof search, but it is not possible to prove the cut elimination theorem in its presence. The same applies to $$(=-)$$ without the restriction to atomic formulas. In both cases the occurrences of arbitrary formulas $$\varphi$$ in the antecedent of the conclusion can be cut formulas, and in case the cut formula in the left premiss of the cut application is principal, it is not possible to reduce the complexity of the cut formula.

There is an interesting advantage of the sequent characterisation of RL introduced here over the tableau formalisation from [17]. Since no rule specific to GRL has more than one active formula in the succedent, the rules are also correct in the setting of intuitionistic logic as characterised by G1i [34]. It is sufficient to change the background calculus to the intuitionistic version (with $$(\leftrightarrow \Rightarrow )$$ and $$(\Rightarrow \vee )$$ split into two rules, and $$(\Rightarrow C), (\Rightarrow W)$$ deleted) and check that all proofs from Sects. 3 and 4 also hold for a (syntactically characterised) intuitionistic version of RL. By comparison, the changes in the tableau setting would be rather more involved, connected with the introduction of labels naming the states of knowledge in the constructed model.

The approach provided here may also be modified to cover more expressive logics (such as modal ones) and other theories of DD, like those proposed in the context of free logics. Some preliminary work in this direction is found in [12] and [13]. On the other hand, the problems briefly mentioned in Sect. 1 need serious examination, and this may be carried out only after the implementation of the presented formal systems. This is one of the most important future tasks.