A Classical Realizability Model for a Semantical Value Restriction
Abstract
We present a new type system with support for proofs of programs in a call-by-value language with control operators. The proof mechanism relies on observational equivalence of (untyped) programs. It appears in two type constructors, which are used for specifying program properties and for encoding dependent products. The main challenge arises from the lack of expressiveness of dependent products due to the value restriction. To circumvent this limitation we relax the syntactic restriction and only require equivalence to a value. The consistency of the system is obtained semantically by constructing a classical realizability model in three layers (values, stacks and terms).
1 Introduction
In this work we consider a new type system for a call-by-value language, with control operators, polymorphism and dependent products. It is intended to serve as a theoretical basis for a proof assistant focusing on program proving, in a language similar to OCaml or SML. The proof mechanism relies on dependent products and equality types \(t \equiv u\), where t and u are (possibly untyped) terms of the language. Equality types are interpreted as \(\top \) if the denoted equivalence holds and as \(\bot \) otherwise.
In our system, proofs are written using the same language as programs. For instance, a pattern-matching corresponds to a case analysis in a proof, and a recursive call to the use of an induction hypothesis. A proof is first and foremost a program, hence we may say that we follow the “program as proof” principle, rather than the usual “proof as program” principle. In particular, proofs can be composed as programs, and with programs to form proof tactics.
which only requires u to be equivalent to some value v. The same idea can be applied to every rule requiring the value restriction. The obtained system is conservative over the one with the syntactic restriction: indeed, a value equivalent to a term that is already a value can always be found using the reflexivity of the equivalence relation.
Although the idea seems simple, proving the soundness of the new typing rules semantically is surprisingly subtle. A model is built using classical realizability techniques, in which the interpretation of a type A is spread across two sets: a set of values \(\llbracket A \rrbracket \) and a set of terms \(\llbracket A \rrbracket ^{\bot \bot }\). The former contains all values that should have type A. For example, \(\llbracket \)nat\(\rrbracket \) should contain the values of the form S[S[...Z[]...]]. The set \(\llbracket A \rrbracket ^{\bot \bot }\) is the completion of \(\llbracket A \rrbracket \) with all the terms behaving like values of \(\llbracket A \rrbracket \) (in the observational sense). To show that the relaxation of the value restriction is sound, we need the values of \(\llbracket A \rrbracket ^{\bot \bot }\) to also be in \(\llbracket A \rrbracket \). In other words, the completion operation should not introduce new values. To obtain this property, we need to extend the language with a new, non-computable instruction internalizing equivalence. This new instruction is only used to build the model, and will not be available to the user (nor will it appear in an implementation).
2 About Effects and Value Restriction
A soundness issue related to side-effects and call-by-value evaluation arose in the seventies with the advent of ML. The problem stems from a bad interaction between side-effects and Hindley-Milner polymorphism. It was first formulated in terms of references, and several type systems were proposed to address it [30, 4, 14, 15, 29]. However, they all introduced a complexity that contrasted with the elegance and simplicity of ML’s type system (for a detailed account, see [31, Sect. 2] and [5, Sect. 2]).
cannot be proved safe (in a call-by-value system with side-effects) if t is not a syntactic value. Similarly, the elimination rule for dependent products (shown previously) requires the value restriction. It is possible to exhibit a counterexample breaking the type safety of our system if this restriction is omitted [13].
In this paper, we consider control structures, which have been shown to give a computational interpretation to classical logic by Timothy Griffin [6]. In 1991, Robert Harper and Mark Lillibridge found a complex program breaking the type safety of ML extended with Lisp’s call/cc [7]. As with references, the value restriction solves the inconsistency and yields a sound type system. Instead of using control operators like call/cc, we adopt the syntax of Michel Parigot’s \(\lambda \mu \)-calculus [24]. Our language hence contains a new binder \(\mu \alpha \,t\) capturing the continuation in the \(\mu \)-variable \(\alpha \). The continuation can then be restored in t using the syntax \(u*\alpha \). In the context of the \(\lambda \mu \)-calculus, the soundness issue arises when evaluating \(t\,(\mu \alpha \,u)\) when \(\mu \alpha \,u\) has a polymorphic type. Such a situation cannot happen with the value restriction since \(\mu \alpha \,u\) is not a value.
3 Main Results
The main contribution of this paper is a new approach to value restriction. The syntactic restriction on terms is replaced by a semantical restriction expressed in terms of an observational equivalence relation denoted \((\equiv )\). Although this approach seems simple, building a model to prove soundness semantically (Theorem 6) is surprisingly subtle. Subject reduction is not required here, as our model construction implies type safety (Theorem 7). Furthermore our type system is consistent as a logic (Theorem 8).
4 Related Work
To our knowledge, combining call-by-value evaluation, side-effects and dependent products has never been achieved before, at least not for a dependent product fully compatible with effects and call-by-value. For example, the Aura language [10] forbids dependency on terms that are not values in dependent applications. Similarly, the \(F^\star \) language [28] relies on (partial) let-normal forms to enforce values in argument position. Daniel Licata and Robert Harper have defined a notion of positively dependent types [16], which only allow dependency over strictly positive types. Finally, in languages like ATS [32] and DML [33], dependent types are limited to a specific index language.
The system that seems the most similar to ours is NuPrl [2], although it is inconsistent with classical reasoning. NuPrl accommodates an observational equivalence \((\sim )\) (Howe’s “squiggle” relation [8]) similar to our \((\equiv )\) relation. It is partially reflected in the syntax of the system. Being based on a Kleene style realizability model, NuPrl can also be used to reason about untyped terms.
The central part of this paper consists in a classical realizability model construction in the style of Jean-Louis Krivine [12]. We rely on a call-by-value presentation, which yields a model in three layers (values, terms and stacks). Such a technique has already been used to account for classical ML-like polymorphism in call-by-value in the work of Guillaume Munch-Maccagnoni [21]. It is here extended to include dependent products.
The most actively developed proof assistants following the Curry-Howard correspondence are Coq and Agda [18, 22]. The former is based on Coquand and Huet’s calculus of constructions and the latter on Martin-Löf’s dependent type theory [3, 17]. These two constructive theories provide dependent types, which allow the definition of very expressive specifications. Coq and Agda do not directly give a computational interpretation to classical logic. Classical reasoning can only be done through the definition of axioms such as the law of the excluded middle. Moreover, these two languages are logically consistent, and hence their type checkers only allow terminating programs. As termination checking is a difficult (and undecidable) problem, many terminating programs are rejected. Although this is not a problem for formalizing mathematics, it makes programming tedious.
The TRELLYS project [1] aims at providing a language in which a consistent core can interact with type-safe dependently-typed programming with general recursion. Although the language defined in [1] is call-by-value and allows effects, it suffers from the value restriction like Aura [10]. The value restriction does not appear explicitly but is encoded into a well-formedness judgement appearing as the premise of the typing rule for application. Apart from the value restriction, the main difference between the language of the TRELLYS project and ours resides in the calculus itself. Their calculus is Church-style (or explicitly typed) while ours is Curry-style (or implicitly typed). In particular, their terms and types are defined simultaneously, while our type system is constructed on top of an untyped calculus.
Another similar system can be found in the work of Alexandre Miquel [20], where propositions can be classical and Curry-style. However the rest of the language remains Church-style and does not embed a full ML-like language. The PVS system [23] is similar to ours as it is based on classical higher-order logic. However this tool does not seem to be a programming language, but rather a specification language coupled with proof checking and model checking utilities. It is nonetheless worth mentioning that the undecidability of PVS’s type system is handled by generating proof obligations. Our system will take a different approach and use a non-backtracking type-checking and type-inference algorithm.
5 Syntax, Reduction and Equivalence
We require three countable and pairwise disjoint sets of variables:
\(\mathcal {V}_\lambda = \{x, y, z...\}\) for \(\lambda \)variables,

\(\mathcal {V}_\mu = \{\alpha , \beta , \gamma ...\}\) for \(\mu \)variables (also called stack variables) and

\(\mathcal {V}_\iota = \{a, b, c...\}\) for term variables. Term variables will be bound in formulas, but never in terms.
We also require a countable set \(\mathcal {L} = \{l, l_1, l_2...\}\) of labels to name record fields and a countable set \(\mathcal {C} = \{C, C_1, C_2...\}\) of constructors.
Definition 1
Terms and values form a variation of the \(\lambda \mu \)-calculus [24] enriched with ML-like constructs (i.e. records and variants). For technical purposes that will become clear later on, we extend the language with a special kind of term \(\delta _{v,w}\). It will only be used to build the model and is not intended to be accessed directly by the user. One may note that values and processes are terms. In particular, a process of the form \(t *\alpha \) corresponds exactly to a named term \([\alpha ]t\) in the most usual presentation of the \(\lambda \mu \)-calculus. A stack can be either a stack variable, a value pushed on top of a stack, or a stack frame containing a term on top of a stack. The last two constructors are specific to the call-by-value presentation; only one of them would be required in call-by-name.
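As a concrete reference, the grammar can be transcribed as an algebraic datatype. The following Python sketch is our own encoding (the constructor names and the exact shape of case branches are assumptions; the grammar of Definition 1 is the authority):

```python
# A sketch of the syntactic categories of Definition 1 as Python dataclasses.
from dataclasses import dataclass
from typing import Tuple

class Value: pass          # values v, w
class Term:  pass          # terms t, u (every value and process is a term)
class Stack: pass          # stacks π

@dataclass(frozen=True)
class Var(Value):   name: str                              # λ-variable x

@dataclass(frozen=True)
class Lam(Value):   var: str; body: Term                   # λx.t

@dataclass(frozen=True)
class Rcd(Value):   fields: Tuple[Tuple[str, Value], ...]  # record {l1 = v1; ...}

@dataclass(frozen=True)
class Con(Value):   cons: str; arg: Value                  # variant C[v]

@dataclass(frozen=True)
class Val(Term):    value: Value                           # a value, seen as a term

@dataclass(frozen=True)
class TVar(Term):   name: str                              # term variable a

@dataclass(frozen=True)
class App(Term):    func: Term; arg: Term                  # application t u

@dataclass(frozen=True)
class Mu(Term):     svar: str; body: Term                  # μα.t (captures the stack)

@dataclass(frozen=True)
class Proc(Term):   term: Term; stack: Stack               # process t ∗ π

@dataclass(frozen=True)
class Prj(Term):    term: Term; label: str                 # projection t.l

@dataclass(frozen=True)
class Case(Term):   term: Term; branches: Tuple[Tuple[str, str, Term], ...]
                                                           # case analysis on variants

@dataclass(frozen=True)
class Delta(Term):  left: Value; right: Value              # δ_{v,w} (model only)

@dataclass(frozen=True)
class SVar(Stack):  name: str                              # stack variable α

@dataclass(frozen=True)
class Push(Stack):  value: Value; rest: Stack              # v·π

@dataclass(frozen=True)
class Frame(Stack): term: Term; rest: Stack                # stack frame [t]π
```

For instance, assuming the natural number 2 of the introduction is encoded with Z carrying an empty record, it would read `Con("S", Con("S", Con("Z", Rcd(()))))`.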
Remark 1
Definition 2
Given a value, term, stack or process \(\psi \) we denote \(FV_\lambda (\psi )\) (resp. \(FV_\mu (\psi )\), \(TV(\psi )\)) the set of free \(\lambda \)variables (resp. free \(\mu \)variables, term variables) contained in \(\psi \). We say that \(\psi \) is closed if it does not contain any free variable of any kind. The set of closed values and the set of closed terms are denoted \(\varLambda _v^*\) and \(\varLambda ^*\) respectively.
Remark 2
A stack, and hence a process, can never be closed as they always at least contain a stack variable.
5.1 CallbyValue Reduction Relation
Processes form the internal state of our abstract machine. They are to be thought of as a term put in some evaluation context represented using a stack. Intuitively, the stack \(\pi \) in the process \(t *\pi \) contains the arguments to be fed to t. Since we are in call-by-value, the stack also handles the storing of functions while their arguments are being evaluated. This is why we need stack frames (i.e. stacks of the form \([t] \pi \)). The operational semantics of our language is given by a relation \((\succ )\) over processes.
Definition 3
The first three rules handle \(\beta \)-reduction. When the abstract machine encounters an application, the function is stored in a stack frame in order to evaluate its argument first. Once the argument has been completely computed, a value faces the stack frame containing the function. At this point the function can be evaluated, and the value is stored in the stack, ready to be consumed by the function as soon as it evaluates to a \(\lambda \)-abstraction. A capture-avoiding substitution can then be performed to effectively apply the argument to the function. The fourth and fifth rules handle the classical part of computation. When a \(\mu \)-abstraction is reached, the current stack (i.e. the current evaluation context) is captured and substituted for the corresponding \(\mu \)-variable. Conversely, when a process is reached, the current stack is thrown away and evaluation resumes with the process. The last two rules perform projection and case analysis in the expected way. Note that for now, states of the form \(\delta _{v,w} *\pi \) are unaffected by the reduction relation.
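To make this description concrete, here is a small Python sketch of the machine, restricted to the \(\lambda \mu \) core (no records, variants or \(\delta \)). The tuple encoding, the names, and the naive substitution (safe only on closed programs) are our own simplifications, not the paper's definitions:

```python
# terms:  ("val", v) | ("app", t, u) | ("mu", a, t) | ("proc", t, pi)
# values: ("var", x) | ("lam", x, t)
# stacks: ("svar", a) | ("push", v, pi) | ("frame", t, pi)

def sub_t(t, x, v):
    """Substitute value v for λ-variable x in term t (naive)."""
    k = t[0]
    if k == "val":  return ("val", sub_v(t[1], x, v))
    if k == "app":  return ("app", sub_t(t[1], x, v), sub_t(t[2], x, v))
    if k == "mu":   return ("mu", t[1], sub_t(t[2], x, v))
    return ("proc", sub_t(t[1], x, v), sub_s(t[2], x, v))

def sub_v(w, x, v):
    if w[0] == "var":
        return v if w[1] == x else w
    # ("lam", y, body): stop under a binder for x itself
    return w if w[1] == x else ("lam", w[1], sub_t(w[2], x, v))

def sub_s(s, x, v):
    if s[0] == "svar":  return s
    if s[0] == "push":  return ("push", sub_v(s[1], x, v), sub_s(s[2], x, v))
    return ("frame", sub_t(s[1], x, v), sub_s(s[2], x, v))

def msub_t(t, a, pi):
    """Substitute stack pi for μ-variable a in term t."""
    k = t[0]
    if k == "val":  return ("val", msub_v(t[1], a, pi))
    if k == "app":  return ("app", msub_t(t[1], a, pi), msub_t(t[2], a, pi))
    if k == "mu":   return t if t[1] == a else ("mu", t[1], msub_t(t[2], a, pi))
    return ("proc", msub_t(t[1], a, pi), msub_s(t[2], a, pi))

def msub_v(w, a, pi):
    return w if w[0] == "var" else ("lam", w[1], msub_t(w[2], a, pi))

def msub_s(s, a, pi):
    if s[0] == "svar":  return pi if s[1] == a else s
    if s[0] == "push":  return ("push", msub_v(s[1], a, pi), msub_s(s[2], a, pi))
    return ("frame", msub_t(s[1], a, pi), msub_s(s[2], a, pi))

def step(p):
    """One machine step; returns None when the process is blocked."""
    t, pi = p
    if t[0] == "app":                                  # t u ∗ π  ≻  u ∗ [t]π
        return (t[2], ("frame", t[1], pi))
    if t[0] == "val" and pi[0] == "frame":             # v ∗ [t]π  ≻  t ∗ v·π
        return (pi[1], ("push", t[1], pi[2]))
    if t[0] == "val" and t[1][0] == "lam" and pi[0] == "push":
        lam = t[1]                                     # λx.t ∗ v·π  ≻  t[x:=v] ∗ π
        return (sub_t(lam[2], lam[1], pi[1]), pi[2])
    if t[0] == "mu":                                   # μα.t ∗ π  ≻  t[α:=π] ∗ π
        return (msub_t(t[2], t[1], pi), pi)
    if t[0] == "proc":                                 # (t ∗ ρ) ∗ π  ≻  t ∗ ρ
        return (t[1], t[2])
    return None

def run(p, fuel=1000):
    """Iterate `step` until the process is blocked (or fuel runs out)."""
    while fuel > 0:
        q = step(p)
        if q is None:
            return p
        p, fuel = q, fuel - 1
    return p
```

For example, on \((\lambda x\,x)\,(\lambda y\,y) *\alpha \) the machine performs the three \(\beta \)-handling steps described above and stops in the final state \(\lambda y\,y *\alpha \).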
Remark 3
To keep the abstract machine simple, we use right-to-left call-by-value evaluation, rather than the more usual left-to-right call-by-value evaluation.
Lemma 1
Let p and q be processes such that \(p \succ q\). Then:
for all \(x \in \mathcal {V}_\lambda \) and \(v \in \varLambda _v\), \(p[x := v] \succ q[x := v]\),

for all \(\alpha \in \mathcal {V}_\mu \) and \(\pi \in \varPi \), \(p[\alpha := \pi ] \succ q[\alpha := \pi ]\),

for all \(a \in \mathcal {V}_\iota \) and \(t \in \varLambda \), \(p[a := t] \succ q[a := t]\).
Consequently, if \(\sigma \) is a substitution for variables of any kind and if \(p \succ q\) (resp. \(p \succ ^{*} q\), \(p \succ ^{+} q\), \(p \succ ^k q\)) then \(p\sigma \succ q\sigma \) (resp. \(p\sigma \succ ^{*} q\sigma \), \(p\sigma \succ ^{+} q\sigma \), \(p\sigma \succ ^k q\sigma \)).
Proof
Immediate case analysis on the reduction rules.
We are now going to give the vocabulary that will be used to describe some specific classes of processes. In particular we need to identify processes that are to be considered as the evidence of a successful computation, and those that are to be recognised as expressing failure.
Definition 4
A process \(p \in \varLambda \times \varPi \) is said to be:
final if there is a value \(v \in \varLambda _v\) and a stack variable \(\alpha \in \mathcal {V}_\mu \) such that \(p = v *\alpha \),

\(\delta \)-like if there are values \(v, w \in \varLambda _v\) and a stack \(\pi \in \varPi \) such that \(p = \delta _{v,w} *\pi \),

blocked if there is no \(q \in \varLambda \times \varPi \) such that \(p \succ q\),

stuck if it is neither final nor \(\delta \)-like, and if for every substitution \(\sigma \), \(p\sigma \) is blocked,

non-terminating if there is no blocked process \(q \in \varLambda \times \varPi \) such that \(p \succ ^{*} q\).
Lemma 2
Let p be a process and \(\sigma \) be a substitution for variables of any kind. If p is \(\delta \)-like (resp. stuck, non-terminating) then \(p\sigma \) is also \(\delta \)-like (resp. stuck, non-terminating).
Proof
Immediate by definition.
Lemma 3
Proof
Simple case analysis.
Lemma 4
Proof
Straightforward case analysis using Lemma 3.
5.2 Reduction of \(\delta _{v,w}\) and Equivalence
The idea now is to define a notion of observational equivalence over terms using a relation \((\equiv )\). We then extend the reduction relation with a rule reducing a state of the form \(\delta _{v,w} *\pi \) to \(v *\pi \) if \(v \not \equiv w\). If \(v \equiv w\) then \(\delta _{v,w}\) is stuck. With this rule, reduction and equivalence become interdependent, as equivalence is itself defined using reduction.
Definition 5
Given a reduction relation R, we say that a process \(p \in \varLambda \times \varPi \) converges, and write \(p \Downarrow _R\), if there is a final state \(q \in \varLambda \times \varPi \) such that \(p R^{*} q\) (where \(R^{*}\) is the reflexive-transitive closure of R). If p does not converge we say that it diverges and write \(p \Uparrow _R\). We will use the notations \(p \Downarrow _i\) and \(p \Uparrow _i\) when working with indexed reduction relations like \((\twoheadrightarrow _i)\).
Definition 6
Definition 7
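Schematically, the mutual construction of the indexed relations can be summarized as follows (our transcription from the surrounding discussion and Remark 4, given as a reading aid; the paper's Definitions 6 and 7 are the authority):

```latex
(\twoheadrightarrow_0) \;=\; (\succ)
\qquad
(\twoheadrightarrow_{i+1}) \;=\; (\succ) \,\cup\,
  \{\, (\delta_{v,w} * \pi ,\; v * \pi) \mid v \not\equiv_i w \,\}
\\[4pt]
t \equiv_i u \;\Longleftrightarrow\;
  \forall \pi \in \varPi ,\ \forall \sigma ,\;
  (t\sigma * \pi \Downarrow_i \;\Leftrightarrow\; u\sigma * \pi \Downarrow_i)
\\[4pt]
(\twoheadrightarrow) \;=\; \bigcup_{i \in \mathbb{N}} (\twoheadrightarrow_i)
\qquad
(\equiv) \;=\; \bigcap_{i \in \mathbb{N}} (\equiv_i)
```

where \(\Downarrow_i\) denotes convergence with respect to \((\twoheadrightarrow_i)\) in the sense of Definition 5.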
Remark 4
Obviously \((\twoheadrightarrow _i) \subseteq (\twoheadrightarrow _{i+1})\) and \((\equiv _{i+1}) \subseteq (\equiv _i)\). As a consequence the construction of \((\twoheadrightarrow _i)_{i\in \mathbb {N}}\) and \((\equiv _i)_{i\in \mathbb {N}}\) converges. In fact \((\twoheadrightarrow )\) and \((\equiv )\) form a fixpoint at ordinal \(\omega \). Surprisingly, this property is not explicitly required.
Theorem 1
Let t and u be terms. If \(t \equiv u\) then for every stack \(\pi \in \varPi \) and substitution \(\sigma \) we have \(t\sigma *\pi \Downarrow _{\twoheadrightarrow } \Leftrightarrow u\sigma *\pi \Downarrow _{\twoheadrightarrow }\).
Proof
We suppose that \(t \equiv u\) and we take \(\pi _0 \in \varPi \) and a substitution \(\sigma _0\). By symmetry we can assume that \({t\sigma _0 *\pi _0} \Downarrow _\twoheadrightarrow \) and show that \({u\sigma _0 *\pi _0} \Downarrow _\twoheadrightarrow \). By definition there is \(i_0 \in \mathbb {N}\) such that \({t\sigma _0 *\pi _0} \Downarrow _{i_0}\). Since \(t \equiv u\) we know that for every \(i \in \mathbb {N}\), \(\pi \in \varPi \) and substitution \(\sigma \) we have \({t\sigma *\pi } \Downarrow _i \Leftrightarrow {u\sigma *\pi } \Downarrow _i\). This is true in particular for \(i = i_0\), \(\pi = \pi _0\) and \(\sigma = \sigma _0\). We hence obtain \({u\sigma _0 *\pi _0} \Downarrow _{i_0}\), which gives us \({u\sigma _0 *\pi _0} \Downarrow _\twoheadrightarrow \).
Remark 5
The converse implication is not true in general: taking \(t = \delta _{\lambda x\,x,\{\}}\) and \(u = \lambda x\,x\) gives a counterexample. More generally \({p\Downarrow _\twoheadrightarrow } \Leftrightarrow {q\Downarrow _\twoheadrightarrow }\) does not necessarily imply \({p\Downarrow _i} \Leftrightarrow {q\Downarrow _i}\) for all \(i\in \mathbb {N}\).
Corollary 1
Let t and u be terms and \(\pi \) be a stack. If \(t \equiv u\) and \({t *\pi } \Downarrow _\twoheadrightarrow \) then \({u *\pi } \Downarrow _\twoheadrightarrow \).
Proof
Direct consequence of Theorem 1 using \(\pi \) and an empty substitution.
5.3 Extensionality of the Language
In order to be able to work with the equivalence relation \((\equiv )\), we need to check that it is extensional. In other words, we need to be able to replace equals by equals at any place in terms without changing their observed behaviour. This property is summarized in the following two theorems.
Theorem 2
Let v and w be values, E be a term and x be a \(\lambda \)variable. If \(v \equiv w\) then \(E[x := v] \equiv E[x := w]\).
Proof
We are going to prove the contrapositive so we suppose \(E[x := v] \not \equiv E[x := w]\) and show \(v \not \equiv w\). By definition there is \(i \in \mathbb {N}\), \(\pi \in \varPi \) and a substitution \(\sigma \) such that \((E[x := v])\sigma *\pi \Downarrow _i\) and \((E[x := w])\sigma *\pi \Uparrow _i\) (up to symmetry). Since we can rename x in such a way that it does not appear in \(dom(\sigma )\), we can suppose \(E\sigma [x := v\sigma ] *\pi \Downarrow _i\) and \(E\sigma [x := w\sigma ] *\pi \Uparrow _i\). In order to show \(v \not \equiv w\) we need to find \(i_0 \in \mathbb {N}\), \(\pi _0 \in \varPi \) and a substitution \(\sigma _0\) such that \(v\sigma _0 *\pi _0 \Downarrow _{i_0}\) and \(w\sigma _0 *\pi _0 \Uparrow _{i_0}\) (up to symmetry). We take \(i_0 = i\), \(\pi _0 = [\lambda x\;E\sigma ]\pi \) and \(\sigma _0 = \sigma \). These values are suitable since by definition \({v\sigma _0 *\pi _0 } \twoheadrightarrow _{i_0} {E\sigma [x := v\sigma ] *\pi } \Downarrow _{i_0}\) and \({w\sigma _0 *\pi _0} \twoheadrightarrow _{i_0} {E\sigma [x := w\sigma ] *\pi } \Uparrow _{i_0}\).
Lemma 5
Let s be a process, a be a term variable, t be a term and \(k \in \mathbb {N}\) be such that \(s[a := t] \Downarrow _k\). Then there is a blocked process p such that \(s \succ ^{*} p\), and p is of one of the following forms:
\(p = v *\alpha \) for some value v and a stack variable \(\alpha \),

\(p = a *\pi \) for some stack \(\pi \),

\(k > 0\) and \(p = \delta _{v,w} *\pi \) for some values v and w and stack \(\pi \), and in this case \(v[a := t] \not \equiv _j w[a := t]\) for some \(j < k\).
Proof
Let \(\sigma \) be the substitution \([a := t]\). If s is non-terminating, Lemma 2 tells us that \(s\sigma \) is also non-terminating, which contradicts \(s\sigma \Downarrow _k\) since \((\succ ) \subseteq (\twoheadrightarrow _k)\). Consequently, there is a blocked process p such that \(s \succ ^{*} p\). Using Lemma 1 we get \(s\sigma \succ ^{*} p\sigma \), from which we obtain \(p\sigma \Downarrow _{k}\). The process p cannot be stuck, otherwise \(p\sigma \) would also be stuck by Lemma 2, which would contradict \(p\sigma \Downarrow _{k}\). Let us now suppose that \(p = \delta _{v,w} *\pi \) for some values v and w and some stack \(\pi \). Since \(\delta _{v\sigma ,w\sigma } *\pi \Downarrow _k\), there must be \(j < k\) such that \(v\sigma \not \equiv _j w\sigma \), as otherwise this process would be stuck. In this case we necessarily have \(k > 0\), otherwise there would be no possible candidate for j. According to Lemma 4 we need to rule out four more forms of terms: \(x.l *\pi \), \(x *v.\pi \), \(case_x\;B *\pi \) and \(b *\pi \) in the case where \(b \not = a\). If p were of one of these forms, the substitution \(\sigma \) would not be able to unblock the reduction of p, which would again contradict \(p\sigma \Downarrow _{k}\).
Lemma 6
Let \(t_1\), \(t_2\) and E be terms and a be a term variable. For every \(k \in \mathbb {N}\), if \(t_1 \equiv _k t_2\) then \(E[a \!:=\! t_1] \equiv _k E[a \!:=\! t_2]\).
Proof
Let us take \(k \in \mathbb {N}\), suppose that \(t_1 \equiv _k t_2\) and show that \(E[a \!:=\! t_1] \equiv _k E[a \!:=\! t_2]\). By symmetry we can assume that we have \(i \le k\), \(\pi \in \varPi \) and a substitution \(\sigma \) such that \((E[a \!:=\! t_1])\sigma *\pi \Downarrow _i\) and show that \((E[a \!:=\! t_2])\sigma *\pi \Downarrow _i\). As we are free to rename a, we can suppose that it does not appear in \(dom(\sigma )\), \(TV(\pi )\), \(TV(t_1)\) or \(TV(t_2)\). In order to lighten the notations we define \(E' = E\sigma \), \(\sigma _1 = [a \!:=\! t_1\sigma ]\) and \(\sigma _2 = [a \!:=\! t_2\sigma ]\). We are hence assuming \(E'\sigma _1 *\pi \Downarrow _i\) and trying to show \(E'\sigma _2 *\pi \Downarrow _i\).
We will now build a sequence \((E_i,\pi _i,l_i)_{i \in I}\) in such a way that \(E'\sigma _1 *\pi \twoheadrightarrow ^{*}_k E_i\sigma _1 *\pi _i\sigma _1\) in \(l_i\) steps for every \(i \in I\). Furthermore, we require that \((l_i)_{i \in I}\) is increasing and that it has a strictly increasing subsequence. Under this condition our sequence will necessarily be finite. If it were infinite, the number of reduction steps that could be taken from the state \(E'\sigma _1 *\pi \) would not be bounded, which would contradict \(E'\sigma _1 *\pi \Downarrow _i\). We now denote our finite sequence \((E_i,\pi _i,l_i)_{i \le n}\) with \(n \in \mathbb {N}\). In order to show that \((l_i)_{i \le n}\) has a strictly increasing subsequence, we will ensure that it does not have three equal consecutive values. More formally, we will require that if \(0 < i < n\) and \(l_{i-1} = l_i\) then \(l_{i+1} > l_i\).
To define \((E_0,\pi _0,l_0)\) we consider the reduction of \(E' *\pi \). Since we know that \((E' *\pi )\sigma _1 = E'\sigma _1 *\pi \Downarrow _i\) we use Lemma 5 to obtain a blocked state p such that \({E' *\pi } \succ ^j p\). We can now take \(E_0 *\pi _0 = p\) and \(l_0 = j\). By Lemma 1 we have \((E' *\pi )\sigma _1 \succ ^j {E_0\sigma _1 *\pi _0\sigma _1}\) from which we can deduce that \((E' *\pi )\sigma _1 \twoheadrightarrow ^{*}_k {E_0\sigma _1 *\pi _0\sigma _1}\) in \(l_0 = j\) steps.
To define \((E_{i+1},\pi _{i+1},l_{i+1})\) we consider the reduction of the process \(E_i\sigma _1 *\pi _i\). By construction we know that \({E'\sigma _1 *\pi } \twoheadrightarrow ^{*}_k {E_i\sigma _1 *\pi _i\sigma _1 = (E_i\sigma _1 *\pi _i)\sigma _1}\) in \(l_i\) steps. Using Lemma 5 we know that \(E_i *\pi _i\) can only be of one of three shapes.

If \({E_i *\pi _i} = {v *\alpha }\) for some value v and stack variable \(\alpha \) then the end of the sequence was reached with \(n = i\).

If \(E_i = a\) then we consider the reduction of \(E_i\sigma _1 *\pi _i\). Since \((E_i\sigma _1 *\pi _i)\sigma _1 \Downarrow _k\) we know from Lemma 5 that there is a blocked process p such that \({E_i\sigma _1 *\pi _i} \succ ^j p\). Using Lemma 1 we obtain that \({E_i\sigma _1 *\pi _i\sigma _1} \succ ^j p\sigma _1\), from which we can deduce that \({E_i\sigma _1 *\pi _i\sigma _1} \twoheadrightarrow _k p\sigma _1\) in j steps. We then take \(E_{i+1} *\pi _{i+1} = p\) and \(l_{i+1} = l_i + j\). Note that \(j = 0\) is possible only when \(E_i\sigma _1 *\pi _i\) is itself blocked, and hence of one of the three forms of Lemma 5. It cannot be of the form \(a *\pi \) as we assumed that a does not appear in \(t_1\) or \(\sigma \). If it is of the form \(v *\alpha \), then we reached the end of the sequence with \(i + 1 = n\), so there is no problem. Finally, it may be of the form \(\delta _{v,w} *\pi \), but in that case we will have \(l_{i+2} > l_{i+1}\).

If \(E_i = \delta _{v,w}\) for some values v and w we have \(m < k\) such that \(v\sigma _1 \not \equiv _m w\sigma _1\). Hence \({E_i\sigma _1 *\pi _i = \delta _{v\sigma _1,w\sigma _1} *\pi _i} \twoheadrightarrow _k {v\sigma _1 *\pi _i}\) by definition. Moreover \({E_i\sigma _1 *\pi _i\sigma _1} \twoheadrightarrow _k {v\sigma _1 *\pi _i\sigma _1}\) by Lemma 1. Since \({E'\sigma _1 *\pi } \twoheadrightarrow ^{*}_k {E_i\sigma _1 *\pi _i\sigma _1}\) in \(l_i\) steps we obtain that \({E'\sigma _1 *\pi } \twoheadrightarrow ^{*}_k {v\sigma _1 *\pi _i\sigma _1}\) in \(l_i + 1\) steps. This also gives us \({(v\sigma _1 *\pi _i)\sigma _1 = v\sigma _1 *\pi _i\sigma _1} \Downarrow _k\). We now consider the reduction of the process \(v\sigma _1 *\pi _i\). By Lemma 5 there is a blocked process p such that \({v\sigma _1 *\pi _i} \succ ^j p\). Using Lemma 1 we obtain \({v\sigma _1 *\pi _i\sigma _1} \succ ^j p\sigma _1\) from which we deduce that \({v\sigma _1 *\pi _i\sigma _1} \twoheadrightarrow ^{*}_k p\sigma _1\) in j steps. We then take \(E_{i+1} *\pi _{i+1} = p\) and \(l_{i+1} = l_i + j + 1\). Note that in this case we have \(l_{i+1} > l_i\).
Intuitively, \((E_i,\pi _i,l_i)_{i \le n}\) mimics the reduction of \(E'\sigma _1 *\pi \) while making explicit every substitution of a and every reduction of a \(\delta \)-like state. We now show, by descending induction on i, that \((E_i *\pi _i)\sigma _2 \Downarrow _k\) for every \(i \le n\); the base case \(i = n\) is immediate since \(E_n *\pi _n\) is final, and the case \(i = 0\) gives the expected result.

If \(E_i=a\) then \({t_1\sigma *\pi _i} \twoheadrightarrow ^{*}_k {E_{i+1} *\pi _{i+1}}\). Using Lemma 1 we obtain \(t_1\sigma *\pi _i\sigma _2 \twoheadrightarrow ^{*}_k E_{i+1}\sigma _2 *\pi _{i+1}\sigma _2\), from which we deduce \(t_1\sigma *\pi _i\sigma _2 \Downarrow _k\) by induction hypothesis. Since \(t_1 \equiv _k t_2\) we obtain \({t_2\sigma *\pi _i\sigma _2 = (E_i *\pi _i)\sigma _2} \Downarrow _k\).

If \(E_i = \delta _{v,w}\) then \({v *\pi _i} \twoheadrightarrow ^{*}_k {E_{i+1} *\pi _{i+1}}\) and hence \(v\sigma _2 *\pi _i\sigma _2 \twoheadrightarrow ^{*}_k E_{i+1}\sigma _2 *\pi _{i+1}\sigma _2\) by Lemma 1. Using the induction hypothesis we obtain \({v\sigma _2 *\pi _i\sigma _2} \Downarrow _k\). It remains to show that \({\delta _{v\sigma _2,w\sigma _2} *\pi _i\sigma _2} \twoheadrightarrow ^{*}_k {v\sigma _2 *\pi _i\sigma _2}\), for which we need to find \(j < k\) such that \(v\sigma _2 \not \equiv _j w\sigma _2\). By construction there is \(m < k\) such that \(v\sigma _1 \not \equiv _m w\sigma _1\). We are going to show that \(v\sigma _2 \not \equiv _m w\sigma _2\). By using the global induction hypothesis twice we obtain \(v\sigma _1 \equiv _m v\sigma _2\) and \(w\sigma _1 \equiv _m w\sigma _2\). Now if \(v\sigma _2 \equiv _m w\sigma _2\) then \(v\sigma _1 \equiv _m v\sigma _2 \equiv _m w\sigma _2 \equiv _m w\sigma _1\) contradicts \(v\sigma _1 \not \equiv _m w\sigma _1\). Hence we must have \(v\sigma _2 \not \equiv _m w\sigma _2\).
Theorem 3
Let \(t_1\), \(t_2\) and E be three terms and a be a term variable. If \(t_1 \equiv t_2\) then \(E[a \!:=\! t_1] \equiv E[a \!:=\! t_2]\).
Proof
We suppose that \(t_1 \equiv t_2\) which means that \(t_1 \equiv _i t_2\) for every \(i \in \mathbb {N}\). We need to show that \(E[a \!:=\! t_1] \equiv E[a \!:=\! t_2]\) so we take \(i_0 \in \mathbb {N}\) and show \(E[a \!:=\! t_1] \equiv _{i_0} E[a \!:=\! t_2]\). By hypothesis we have \(t_1 \equiv _{i_0} t_2\) and hence we can conclude using Lemma 6.
6 Formulas and Semantics
The syntax presented in the previous section is part of a realizability machinery that will be built upon here. We aim at obtaining a semantical interpretation of the second-order type system that will be defined shortly. Our abstract machine slightly differs from the mainstream presentation of Krivine’s classical realizability, which is usually call-by-name. Although call-by-value presentations have rarely been published, such developments are well-known among classical realizability experts. The addition of the \(\delta \) instruction and the related modifications are however due to the author.
6.1 Pole and Orthogonality
As always in classical realizability, the model is parametrized by a pole, which serves as an exchange point between the world of programs and the world of execution contexts (i.e. stacks).
Definition 8
A pole is a set of processes \(\bot \!\!\!\bot \subseteq \varLambda \times \varPi \) which is saturated (i.e. closed under backward reduction). More formally, if we have \(q \in \bot \!\!\!\bot \) and \(p \twoheadrightarrow q\) then \(p \in \bot \!\!\!\bot \).
The notion of orthogonality is central in Krivine’s classical realizability. In this framework a type is interpreted (or realized) by programs computing corresponding values. This interpretation is spread over a three-layered construction, even though it is fully determined by the first layer (and the choice of the pole). The first layer consists of a set of values that we will call the raw semantics. It gathers all the syntactic values that should be considered as having the corresponding type. As an example, if we were to consider the type of natural numbers, its raw semantics would be the set \(\{\bar{n} \mid n \in \mathbb {N}\}\) where \(\bar{n}\) is some encoding of n. The second layer, called the falsity value, is a set containing every stack that is a candidate for building a valid process using any value from the raw semantics. The notion of validity depends on the choice of the pole. Here for instance, a valid process is a normalizing one (i.e. one that reduces to a final state). The third layer, called the truth value, is a set of terms that is built by iterating the process once more. The formalism for the two levels of orthogonality is given in the following definition.
Definition 9
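Concretely, for a set of values \(\phi \subseteq \varLambda _v\) and relative to the chosen pole \(\bot \!\!\!\bot \), the two levels of orthogonality take the usual Krivine-style form (our transcription, matching the three-layer description above; the paper's Definition 9 is the authority):

```latex
\phi^{\bot} \;=\; \{\, \pi \in \varPi \mid \forall v \in \phi ,\; v * \pi \in \bot\!\!\!\bot \,\}
\qquad
\phi^{\bot\bot} \;=\; \{\, t \in \varLambda \mid \forall \pi \in \phi^{\bot} ,\; t * \pi \in \bot\!\!\!\bot \,\}
```

The falsity value \(\phi^{\bot}\) is the second layer and the truth value \(\phi^{\bot\bot}\) the third.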
We now give two general properties of orthogonality that are true in every classical realizability model. They will be useful when proving the soundness of our type system.
Lemma 7
If \(\phi \subseteq \varLambda _v\) is a set of values, then \(\phi \subseteq \phi ^{\bot \bot }\).
Proof
Immediate following the definition of \(\phi ^{\bot \bot }\).
Lemma 8
Let \(\phi \subseteq \varLambda _v\) and \(\psi \subseteq \varLambda _v\) be two sets of values. If \(\phi \subseteq \psi \) then \(\phi ^{\bot \bot } \subseteq \psi ^{\bot \bot }\).
Proof
Immediate by definition of orthogonality.
The construction involving the terms of the form \(\delta _{v,w}\) and the relation \((\equiv )\) in the previous section now gains meaning. The following theorem, which is our central result, does not hold in every classical realizability model. Obtaining a proof required us to internalize observational equivalence, which introduces a non-computable reduction rule.
Theorem 4
If \(\varPhi \subseteq \varLambda _v\) is a set of values closed under \((\equiv )\), then \(\varPhi ^{\bot \bot } \cap \varLambda _v = \varPhi \).
Proof
The direction \(\varPhi \subseteq \varPhi ^{\bot \bot } \cap \varLambda _v\) is straightforward using Lemma 7. We are going to show that \(\varPhi ^{\bot \bot } \cap \varLambda _v \subseteq \varPhi \), which amounts to showing that every value \(v \in \varPhi ^{\bot \bot }\) is in \(\varPhi \). We are going to show the contrapositive, so let us assume \(v \not \in \varPhi \) and show \(v \not \in \varPhi ^{\bot \bot }\). We need to find a stack \(\pi _0\) such that \(v *\pi _0 \not \in \bot \!\!\!\bot \) and for every value \(w \in \varPhi \), \(w *\pi _0 \in \bot \!\!\!\bot \). We take \(\pi _0 = [\lambda x\;\delta _{x,v}]\;\alpha \) and show that it is suitable. By definition of the reduction relation, \(v *\pi _0\) reduces to \(\delta _{v,v} *\alpha \), which is not in \(\bot \!\!\!\bot \) (it is stuck as \(v \equiv v\) by reflexivity). Let us now take \(w \in \varPhi \). Again by definition, \(w *\pi _0\) reduces to \(\delta _{w,v} *\alpha \), but this time we have \(w \not \equiv v\) since \(\varPhi \) was supposed to be closed under \((\equiv )\) and \(v \not \in \varPhi \). Hence \(w *\pi _0\) reduces to \({w *\alpha } \in \bot \!\!\!\bot \).
It is important to check that the pole we chose does not yield a degenerate model. In particular, we check that no term is able to face every stack. If that were the case, such a term could be used as a proof of \(\bot \).
Theorem 5
The pole \(\bot \!\!\!\bot \) is consistent, which means that for every closed term t there is a stack \(\pi \) such that \(t *\pi \not \in \bot \!\!\!\bot \).
Proof
Let t be a closed term and \(\alpha \) be a stack constant. If we do not have \(t *\alpha \Downarrow _\twoheadrightarrow \) then we can directly take \(\pi = \alpha \). Otherwise we know that \(t *\alpha \twoheadrightarrow ^{*}v *\alpha \) for some value v. Since t is closed, \(\alpha \) is the only available stack constant. We now show that \(\pi = [\lambda x\;\{\}]\{\}.\beta \) is suitable. We write \(\sigma \) for the substitution \([\alpha := \pi ]\). Using a trivial extension of Lemma 1 to the \((\twoheadrightarrow )\) relation we obtain \(t *\pi = (t *\alpha )\sigma \twoheadrightarrow ^{*}(v *\alpha )\sigma = v\sigma *\pi \). We hence have \(t *\pi \twoheadrightarrow ^{*}v\sigma *[\lambda x\;\{\}]\{\}.\beta \twoheadrightarrow ^2 \{\} *\{\}.\beta \not \in \bot \!\!\!\bot \).
6.2 Formulas and Their Semantics
In this paper we limit ourselves to second-order logic, even though the system can easily be extended to higher-order. For every natural number n we require a countable set \({\mathcal {V}}_n = \{{X}_n, {Y}_n, {Z}_n ...\}\) of n-ary predicate variables.
Definition 10
Definition 11

if \(x \in dom(\sigma )\) then \(\sigma (x) \in \varLambda _v\),

if \(\alpha \in dom(\sigma )\) then \(\sigma (\alpha ) \in \varPi \),

if \(a \in dom(\sigma )\) then \(\sigma (a) \in \varLambda \),

if \(X_n \in dom(\sigma )\) then \(\sigma (X_n) \in {\varLambda ^n \rightarrow \mathcal {P}({{\varLambda }_v}/\!\!\equiv )}\).
Remark 6
A predicate variable of arity n will be substituted by an n-ary predicate. Semantically, such a predicate will correspond to some total (set-theoretic) function building a subset of \(\varLambda _v/\!\!\equiv \) from n terms. In the syntax, the binding of the arguments of a predicate variable will happen implicitly during its substitution.
Definition 12
Given a formula A we denote FV(A) the set of its free variables. Given a substitution \(\sigma \) such that \(FV(A) \subseteq dom(\sigma )\) we write \(A[\sigma ]\) the closed formula built by applying \(\sigma \) to A.
In the semantics we will interpret closed formulas by sets of values closed under the equivalence relation \((\equiv )\).
Definition 13
In the model, programs will realize closed formulas in two different ways according to their syntactic class. The interpretation of values will be given in terms of raw semantics, and the interpretation of terms in general will be given in terms of truth values.
Definition 14

\(v \in \varLambda _v\) realizes \(A[\sigma ]\) if \(v \in \llbracket A \rrbracket _\sigma \),

\(t \in \varLambda \) realizes \(A[\sigma ]\) if \(t \in \llbracket A \rrbracket _\sigma ^{\bot \bot }\).
6.3 Contexts and Typing Rules
Before giving the typing rules of our system we need to define contexts and judgements. As explained in the introduction, several typing rules require a value restriction in our setting. This is reflected in the typing rules by the presence of two forms of judgements.
Definition 15
Definition 16

\(\varGamma \vdash _{\!\!\!\text {val}}v : A\) meaning that the value v has type A in context \(\varGamma \),

\(\varGamma \vdash t : A\) meaning that the term t has type A in context \(\varGamma \).
The typing rules of the system are given in Fig. 2. Although most of them are fairly usual, our type system differs in several ways. For instance, the last four rules are related to the extensionality of the calculus. One can note the value restriction in several places: both universal quantification introduction rules and the introduction of the membership predicate. In fact, some value restriction is also hidden in the rules for the elimination of the existential quantifiers and the elimination rule for the restriction connective. These rules are presented in their left-hand-side variation, and only values can appear on the left of the sequent. It is not surprising that the elimination of an existential quantifier requires value restriction, as it is the dual of the introduction rule of a universal quantifier.
6.4 Adequacy
We are now going to prove the soundness of our type system by showing that it is compatible with our realizability model. This property is specified by the following theorem which is traditionally called the adequacy lemma.
Definition 17

for every x : A in \(\varGamma \) we have \(\sigma (x) \in \llbracket A \rrbracket _\sigma \),

for every \(\alpha : \lnot A\) in \(\varGamma \) we have \(\sigma (\alpha ) \in \llbracket A \rrbracket _\sigma ^\bot \),

for every a : Term in \(\varGamma \) we have \(\sigma (a) \in \varLambda \),

for every \(X_n : Pred_n\) in \(\varGamma \) we have \(\sigma (X_n) \in \varLambda ^n \rightarrow \mathcal {P}(\varLambda _v/\!\!\equiv )\),

for every \(t \equiv u\) in \(\varGamma \) we have \(t\sigma \equiv u\sigma \) and

for every \(t \not \equiv u\) in \(\varGamma \) we have \(t\sigma \not \equiv u\sigma \).
Theorem 6
(Adequacy). Let \(\varGamma \) be a (valid) context, A be a formula such that \(FV(A) \subseteq dom(\varGamma )\) and \(\sigma \) be a substitution realizing \(\varGamma \).

If \(\varGamma \vdash _{\!\!\!\text {val}}v : A\) then \(v\sigma \in \llbracket A \rrbracket _\sigma \),

if \(\varGamma \vdash t : A\) then \(t\sigma \in \llbracket A \rrbracket _\sigma ^{\bot \bot }\).
Proof
We proceed by induction on the derivation of the judgement \(\varGamma \vdash _{\!\!\!\text {val}}v:A\) (resp. \(\varGamma \vdash t:A\)) and we reason by case on the last rule used.
(\(\text {ax}\)) By hypothesis \(\sigma \) realizes \(\varGamma , x : A\) from which we directly obtain \(x\sigma \in \llbracket A \rrbracket _\sigma \).
(\(\uparrow \)) and (\(\downarrow \)) are direct consequences of Lemma 7 and Theorem 4 respectively.
(\(\Rightarrow _e\)) We need to prove that \(t\sigma \;u\sigma \in \llbracket B \rrbracket ^{\bot \bot }_\sigma \), hence we take \(\pi \in \llbracket B \rrbracket ^{\bot }_\sigma \) and show \(t\sigma \;u\sigma *\pi \in \bot \!\!\!\bot \). Since \(\bot \!\!\!\bot \) is saturated, we can take a reduction step and show \(u\sigma *[t\sigma ]\pi \in \bot \!\!\!\bot \). By induction hypothesis \(u\sigma \in \llbracket A \rrbracket ^{\bot \bot }_\sigma \) so we only have to show \([t\sigma ]\pi \in \llbracket A \rrbracket ^{\bot }_\sigma \). To do so we take \(v \in \llbracket A \rrbracket _\sigma \) and show \(v *[t\sigma ]\pi \in \bot \!\!\!\bot \). Here we can again take a reduction step and show \(t\sigma *v.\pi \in \bot \!\!\!\bot \). By induction hypothesis we have \(t\sigma \in \llbracket A \Rightarrow B \rrbracket ^{\bot \bot }_\sigma \), hence it is enough to show \(v.\pi \in \llbracket A \Rightarrow B \rrbracket ^{\bot }_\sigma \). We now take a value \(\lambda x\;t_x \in \llbracket A \Rightarrow B \rrbracket _\sigma \) and show that \(\lambda x\;t_x *v.\pi \in \bot \!\!\!\bot \). We then apply again a reduction step and show \(t_x[x := v] *\pi \in \bot \!\!\!\bot \). Since \(\pi \in \llbracket B \rrbracket ^{\bot }_\sigma \) we only need to show \(t_x[x := v] \in \llbracket B \rrbracket ^{\bot \bot }_\sigma \) which is true by definition of \(\llbracket A \Rightarrow B \rrbracket _\sigma \).
(\(\Rightarrow _i\)) We need to show \(\lambda x\;t\sigma \in \llbracket A \Rightarrow B \rrbracket _\sigma \) so we take \(v \in \llbracket A \rrbracket _\sigma \) and show \(t\sigma [x \!:=\! v] \in \llbracket B \rrbracket ^{\bot \bot }_\sigma \). Since \(\sigma [x := v]\) realizes \(\varGamma , x:A\) we can conclude using the induction hypothesis.
(\(\mu \)) We need to show that \(\mu \alpha \;t\sigma \in \llbracket A \rrbracket ^{\bot \bot }_\sigma \) hence we take \(\pi \in \llbracket A \rrbracket ^{\bot }_\sigma \) and show \(\mu \alpha \;t\sigma *\pi \in \bot \!\!\!\bot \). Since \(\bot \!\!\!\bot \) is saturated, it is enough to show \(t\sigma [\alpha := \pi ] *\pi \in \bot \!\!\!\bot \). As \(\sigma [\alpha := \pi ]\) realizes \(\varGamma , \alpha :\lnot A\) we conclude by induction hypothesis.
(\(*\)) We need to show \(t\sigma *\alpha \sigma \in \llbracket B \rrbracket ^{\bot \bot }_\sigma \), hence we take \(\pi \in \llbracket B \rrbracket ^{\bot }_\sigma \) and show that \((t\sigma *\alpha \sigma ) *\pi \in \bot \!\!\!\bot \). Since \(\bot \!\!\!\bot \) is saturated, we can take a reduction step and show \(t\sigma *\alpha \sigma \in \bot \!\!\!\bot \). By induction hypothesis \(t\sigma \in \llbracket A \rrbracket ^{\bot \bot }_\sigma \) hence it is enough to show \(\alpha \sigma \in \llbracket A \rrbracket ^{\bot }_\sigma \) which is true by hypothesis.
(\(\in _i\)) We need to show \(v\sigma \in \llbracket v \in A \rrbracket _\sigma \). We have \(v\sigma \in \llbracket A \rrbracket _\sigma \) by induction hypothesis, and \(v\sigma \equiv v\sigma \) by reflexivity of \((\equiv )\).
(\(\in _e\)) By hypothesis we know that \(\sigma \) realizes \(\varGamma , x : u \in A\). To be able to conclude using the induction hypothesis, we need to show that \(\sigma \) realizes \(\varGamma , x : A, x \equiv u\). Since we have \(\sigma (x) \in \llbracket u \in A \rrbracket _\sigma \), we obtain that \(x\sigma \in \llbracket A \rrbracket _\sigma \) and \(x\sigma \equiv u\sigma \) by definition of \(\llbracket u \in A \rrbracket _\sigma \).
(\(\upharpoonright _i\)) We need to show \(t\sigma \in \llbracket A \upharpoonright u_1 \equiv u_2 \rrbracket ^{\bot \bot }_\sigma \). By hypothesis \(u_1\sigma \equiv u_2\sigma \), hence \(\llbracket A \upharpoonright u_1 \equiv u_2 \rrbracket _\sigma = \llbracket A \rrbracket _\sigma \). Consequently, it is enough to show that \(t\sigma \in \llbracket A \rrbracket ^{\bot \bot }_\sigma \), which is exactly the induction hypothesis.
(\(\upharpoonright _e\)) By hypothesis we know that \(\sigma \) realizes \(\varGamma , x : A \upharpoonright u_1 \equiv u_2\). To be able to use the induction hypothesis, we need to show that \(\sigma \) realizes \(\varGamma , x : A, u_1 \equiv u_2\). Since we have \(\sigma (x) \in \llbracket A \upharpoonright u_1 \equiv u_2 \rrbracket _\sigma \), we obtain that \(x\sigma \in \llbracket A \rrbracket _\sigma \) and that \(u_1\sigma \equiv u_2\sigma \) by definition of \(\llbracket A \upharpoonright u_1 \equiv u_2 \rrbracket _\sigma \).
(\(\forall _i\)) We need to show that \(v\sigma \in \llbracket \forall a\; A \rrbracket _\sigma = \bigcap _{t \in \varLambda } \llbracket A \rrbracket _{\sigma [a := t]}\) so we take \(t \in \varLambda \) and show \(v\sigma \in \llbracket A \rrbracket _{\sigma [a := t]}\). This is true by induction hypothesis since \(a \not \in FV(\varGamma )\) and hence \(\sigma [a:=t]\) realizes \(\varGamma \).
(\(\forall _e\)) We need to show \(t\sigma \in \llbracket A[a := u] \rrbracket ^{\bot \bot }_\sigma = \llbracket A \rrbracket ^{\bot \bot }_{\sigma [a := u\sigma ]}\) for some \(u \in \varLambda \). By induction hypothesis we know \(t\sigma \in \llbracket \forall a\;A \rrbracket ^{\bot \bot }_\sigma \), hence we only need to show that \(\llbracket \forall a\; A \rrbracket ^{\bot \bot }_\sigma \subseteq \llbracket A \rrbracket ^{\bot \bot }_{\sigma [a := u\sigma ]}\). By definition we have \(\llbracket \forall a\; A \rrbracket _\sigma \subseteq \llbracket A \rrbracket _{\sigma [a := u\sigma ]}\) so we can conclude using Lemma 8.
(\(\exists _e\)) By hypothesis we know that \(\sigma \) realizes \(\varGamma , x : \exists a\;A\). In particular, we know that \(\sigma (x) \in \llbracket \exists a\;A \rrbracket _\sigma \), which means that there is a term \(u \in \varLambda ^*\) such that \(\sigma (x) \in \llbracket A \rrbracket _{\sigma [a := u]}\). Since \(a \notin FV(\varGamma )\), we obtain that the substitution \(\sigma [a := u]\) realizes the context \(\varGamma , x : A\). Using the induction hypothesis, we finally get \(t\sigma = t\sigma [a := u] \in \llbracket B \rrbracket ^{\bot \bot }_{\sigma [a := u]} = \llbracket B \rrbracket ^{\bot \bot }_\sigma \) since \(a \notin TV(t)\) and \(a \notin FV(B)\).
(\(\exists _i\)) The proof for this rule is similar to the one for (\(\forall _e\)). We need to show that \(\llbracket A[a := u] \rrbracket ^{\bot \bot }_\sigma = \llbracket A \rrbracket ^{\bot \bot }_{\sigma [a := u\sigma ]} \subseteq \llbracket \exists a\;A \rrbracket ^{\bot \bot }_\sigma \). This follows from Lemma 8 since \(\llbracket A \rrbracket _{\sigma [a := u\sigma ]} \subseteq \llbracket \exists a\;A \rrbracket _\sigma \) by definition.
(\(\forall _I\)), \((\forall _E)\), \((\exists _E)\) and \((\exists _I)\) are similar to (\(\forall _i\)), (\(\forall _e\)), (\(\exists _e\)) and (\(\exists _i\)).
(\(\times _i\)) We need to show that \(\{l_i = v_i\sigma \}_{i \in I} \in \llbracket \{l_i : A_i\}_{i \in I} \rrbracket _\sigma \). By definition we need to show that for all \(i \in I\) we have \(v_i\sigma \in \llbracket A_i \rrbracket _\sigma \). This is immediate by induction hypothesis.
(\(\times _e\)) We need to show that \(v\sigma .l_i \in \llbracket A_i \rrbracket ^{\bot \bot }_\sigma \) for some \(i \in I\). By induction hypothesis we have \(v\sigma \in \llbracket \{l_i : A_i\}_{i \in I} \rrbracket _\sigma \) and hence v has the form \(\{l_i = v_i\}_{i \in I}\) with \(v_i\sigma \in \llbracket A_i \rrbracket _\sigma \). Let us now take \(\pi \in \llbracket A_i \rrbracket ^{\bot }_\sigma \) and show that \(\{l_i = v_i\sigma \}_{i \in I}.l_i *\pi \in \bot \!\!\!\bot \). Since \(\bot \!\!\!\bot \) is saturated, it is enough to show \(v_i\sigma *\pi \in \bot \!\!\!\bot \). This is true since \(v_i\sigma \in \llbracket A_i \rrbracket _\sigma \) and \(\pi \in \llbracket A_i \rrbracket ^{\bot }_\sigma \).
(\(+_i\)) We need to show \(C_i[v\sigma ] \in \llbracket [C_i : A_i]_{i \in I} \rrbracket _\sigma \) for some \(i \in I\). By induction hypothesis \(v\sigma \in \llbracket A_i \rrbracket _\sigma \) and hence we can conclude by definition of \(\llbracket [C_i : A_i]_{i \in I} \rrbracket _\sigma \).
(\(+_e\)) We need to show \(\text {case}_{v\sigma }\;[C_i[x] \rightarrow t_i\sigma ]_{i \in I} \in \llbracket B \rrbracket ^{\bot \bot }_\sigma \). By induction hypothesis \(v\sigma \in \llbracket [C_i : A_i]_{i \in I} \rrbracket _\sigma \) which means that there is \(i \in I\) and \(w \in \llbracket A_i \rrbracket _\sigma \) such that \(v\sigma = C_i[w]\). We take \(\pi \in \llbracket B \rrbracket ^{\bot }_\sigma \) and show \(\text {case}_{C_i[w]}\;[C_i[x] \rightarrow t_i\sigma ]_{i \in I} *\pi \in \bot \!\!\!\bot \). Since \(\bot \!\!\!\bot \) is saturated, it is enough to show \(t_i\sigma [x := w] *\pi \in \bot \!\!\!\bot \). It remains to show that \(t_i\sigma [x := w] \in \llbracket B \rrbracket ^{\bot \bot }_\sigma \). To be able to conclude using the induction hypothesis we need to show that \(\sigma [x := w]\) realizes \(\varGamma , x : A_i, C_i[x] \equiv v\). This is true since \(\sigma \) realizes \(\varGamma \), \(w \in \llbracket A_i \rrbracket _\sigma \) and \(C_i[w] \equiv v\sigma \) by reflexivity.
(\(\equiv _{v,l}\)) We need to show \(t[x := w_1]\sigma = t\sigma [x := w_1\sigma ] \in \llbracket A \rrbracket _\sigma \). By hypothesis we know that \(w_1\sigma \equiv w_2\sigma \) from which we can deduce \(t\sigma [x := w_1\sigma ] \equiv t\sigma [x := w_2\sigma ]\) by extensionality (Theorem 2). Since \(\llbracket A \rrbracket _\sigma \) is closed under \((\equiv )\) we can conclude using the induction hypothesis.
(\(\equiv _{t,l}\)), (\(\equiv _{v,r}\)) and (\(\equiv _{t,r}\)) are similar to (\(\equiv _{v,l}\)), using extensionality (Theorems 2 and 3).
Remark 7
For the sake of simplicity we fixed a pole \(\bot \!\!\!\bot \) at the beginning of the current section. However, many of the properties presented here (including the adequacy lemma) remain valid with similar poles. We will make use of this fact in the proof of the following theorem.
Theorem 7
(Safety). Let \(\varGamma \) be a context, A be a formula such that \(FV(A) \subseteq dom(\varGamma )\) and \(\sigma \) be a substitution realizing \(\varGamma \). If t is a term such that \(\varGamma \vdash t : A\) and if \(A[\sigma ]\) is pure (i.e. it does not contain any \(\_ \Rightarrow \_\)), then for every stack \(\pi \in \llbracket A \rrbracket _\sigma ^\bot \) there is a value \(v \in \llbracket A \rrbracket _\sigma \) and \(\alpha \in \mathcal {V}_\mu \) such that \({t\sigma *\pi } \twoheadrightarrow ^{*}{v *\alpha }\).
Proof
Remark 8
It is easy to see that if \(A[\sigma ]\) is closed and pure then \(v \in \llbracket A \rrbracket _\sigma \) implies that \(\bullet \vdash v : A\).
Theorem 8
(Consistency). There is no t such that \(\bullet \vdash t : \bot \).
Proof
Let us suppose that \(\bullet \vdash t : \bot \). Using adequacy (Theorem 6) we obtain that \(t \in \llbracket \bot \rrbracket _\sigma ^{\bot \bot }\). Since \(\llbracket \bot \rrbracket _\sigma = \emptyset \) we know that \(\llbracket \bot \rrbracket _\sigma ^\bot = \varPi \) by definition. Now using Theorem 5 we obtain \(\llbracket \bot \rrbracket _\sigma ^{\bot \bot } = \emptyset \). This is a contradiction.
7 Deciding Program Equivalence
The type system given in Fig. 2 does not provide any way of discharging an equivalence from the context. As a consequence the truth of an equivalence cannot be used. Furthermore, an equational contradiction in the context cannot be used to derive falsehood. To address these two problems, we will rely on a partial decision procedure for the equivalence of terms. Such a procedure can be easily implemented using an algorithm similar to Knuth-Bendix, provided that we are able to extract a set of equational axioms from the definition of \((\equiv )\). In particular, we will use the following lemma to show that several reduction rules are contained in \((\equiv )\).
Lemma 9
Let t and u be terms. If for every stack \(\pi \in \varPi \) there is \(p \in \varLambda \times \varPi \) such that \(t *\pi \succ ^* p\) and \(u *\pi \succ ^* p\) then \(t \equiv u\).
Proof
Since \((\succ ) \subseteq (\twoheadrightarrow _i)\) for every \(i \in \mathbb {N}\), we can deduce that \(t *\pi \twoheadrightarrow _i^* p\) and \(u *\pi \twoheadrightarrow _i^* p\) for every \(i \in \mathbb {N}\). Using Lemma 1 we can deduce that for every substitution \(\sigma \) we have \(t\sigma *\pi \twoheadrightarrow _i^* p\sigma \) and \(u\sigma *\pi \twoheadrightarrow _i^* p\sigma \) for all \(i \in \mathbb {N}\). Consequently we obtain \(t \equiv u\).
The equivalence relation contains callbyvalue \(\beta \)reduction, projection on records and case analysis on variants.
Theorem 9
For every \(x \in \mathcal {V}_\lambda \), \(t \in \varLambda \) and \(v \in \varLambda _v\) we have \((\lambda x\;t) v \equiv t[x := v]\).
Proof
Immediate using Lemma 9.
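As an illustration, the machine steps behind this theorem (the same three rules used in the application case of the adequacy proof) can be replayed concretely. The following sketch is our own modelling in Python and not part of the paper: terms and stack frames are tagged tuples, with ("fun", t) playing the role of the frame \([t]\pi \) and ("arg", v) that of \(v.\pi \).

```python
# Sketch of the call-by-value abstract machine (our own modelling).
# A process is a pair (term, stack); the stack is a list of frames.

def subst(t, x, v):
    """Naive substitution t[x := v] (sufficient for the closed examples below)."""
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

def is_value(t):
    return t[0] in ("var", "lam")

def step(t, stack):
    """One machine step: t u * pi -> u * [t]pi ; v * [t]pi -> t * v.pi ;
    (lam x t) * v.pi -> t[x := v] * pi."""
    if t[0] == "app":
        return t[2], [("fun", t[1])] + stack
    if is_value(t) and stack and stack[0][0] == "fun":
        return stack[0][1], [("arg", t)] + stack[1:]
    if t[0] == "lam" and stack and stack[0][0] == "arg":
        return subst(t[2], t[1], stack[0][1]), stack[1:]
    return None  # final or stuck process

def run(t, stack):
    """Iterate `step` until the process is final or stuck."""
    while True:
        nxt = step(t, stack)
        if nxt is None:
            return t, stack
        t, stack = nxt
```

Running both sides of the \(\beta _v\) equation against the same stack then reaches the same process, as Lemma 9 requires for the example \((\lambda x\;x)\,y\) versus \(x[x := y]\).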
Theorem 10
Proof
Immediate using Lemma 9.
To observe contradictions, we also need to derive some inequivalences on values. For instance, we would like to deduce a contradiction if two values with a different head constructor are assumed to be equivalent.
Theorem 11
Let C, \(D \in \mathcal {C}\) be constructors, and v, \(w \in \varLambda _v\) be values. If \(C \ne D\) then \(C[v] \not \equiv D[w]\).
Proof
We take \(\pi = [\lambda x\; \text {case}_{x}\;[C[y] \rightarrow y\;\;D[y] \rightarrow \varOmega ]] \alpha \) where \(\varOmega \) is an arbitrary diverging term. We then obtain \(C[v] *\pi \Downarrow _0\) and \(D[w] *\pi \Uparrow _0\).
Theorem 12
Let \(\{l_i = v_i\}_{i \in I}\) and \(\{l_j = v_j\}_{j \in J}\) be two records. If k is an index such that \(k \in I\) and \(k \notin J\) then we have \(\{l_i = v_i\}_{i \in I} \not \equiv \{l_j = v_j\}_{j \in J}\).
Proof
Immediate using the stack \(\pi = [\lambda x\; x.l_k] \alpha \).
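The two discrimination arguments above can also be replayed concretely. The sketch below is our own illustration in Python, not part of the paper: values are tagged tuples, and the diverging term \(\varOmega \) is modelled by raising an exception instead of looping, so that "stuck or diverging" becomes observable in a test.

```python
# Distinguishing contexts for Theorems 11 and 12 (our own modelling).
# Values: ("ctor", name, v) for C[v], ("record", {label: v, ...}) for records.

class Stuck(Exception):
    """Stands in for a stuck (or diverging) process."""

def case_C_or_D(v, c, d):
    """The context [lam x. case x of C[y] -> y | D[y] -> Omega]:
    converges on c-headed values, never reaches a value on d-headed ones."""
    if v[0] == "ctor" and v[1] == c:
        return v[2]
    raise Stuck  # D branch (Omega) or ill-formed value

def project(v, label):
    """The context [lam x. x.l_k]: stuck when the field l_k is absent."""
    if v[0] == "record" and label in v[1]:
        return v[1][label]
    raise Stuck
```

Applying `case_C_or_D` separates \(C[v]\) from \(D[w]\), and `project` separates two records that disagree on the presence of a field, mirroring the stacks used in both proofs.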
Theorem 13
Proof
The previous five theorems together with the extensionality of \((\equiv )\) and its properties as an equivalence relation can be used to implement a partial decision procedure for equivalence. We will incorporate this procedure into the typing rules by introducing a new form of judgement.
Definition 18
Definition 19
Let \(\mathcal {E}\) be an equational context. The judgement \(\mathcal {E} \vdash \bot \) is valid if and only if the partial decision procedure is able to derive a contradiction in \(\mathcal {E}\). We will write \(\mathcal {E} \vdash t \equiv u\) for \(\mathcal {E}, t \not \equiv u \vdash \bot \) and \(\mathcal {E} \vdash t \not \equiv u\) for \(\mathcal {E}, t \equiv u \vdash \bot \).
The soundness of these new rules follows easily since the decision procedure agrees with the semantical notion of equivalence. The axioms that were given at the beginning of this section are only used to partially reflect the semantical equivalence relation in the syntax. This is required if we are to implement the decision procedure.
The soundness of this rule is again immediate.
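To fix intuitions, a naive contradiction check over an equational context can be sketched as follows. This is our own toy version, not the paper's procedure: it merely closes the asserted equations under symmetry and transitivity with a union-find, and reports an inconsistency when some class contains two distinct head constructors (as licensed by Theorem 11) or relates the two sides of an asserted inequation.

```python
# Toy contradiction check for an equational context (our own sketch).
# Terms are hashable tagged tuples; the context is a pair of lists of
# term pairs: asserted equations and asserted inequations.

def find(parent, t):
    """Union-find: representative of t's equivalence class."""
    while parent[t] != t:
        parent[t] = parent[parent[t]]  # path compression
        t = parent[t]
    return t

def contradictory(equations, inequations):
    """True if the context is inconsistent: some class merges two distinct
    head constructors, or relates the sides of an inequation."""
    terms = {t for pair in equations + inequations for t in pair}
    parent = {t: t for t in terms}
    for (t, u) in equations:
        parent[find(parent, t)] = find(parent, u)
    # Group terms by class and look for constructor clashes.
    classes = {}
    for t in terms:
        classes.setdefault(find(parent, t), []).append(t)
    for members in classes.values():
        heads = {t[1] for t in members if t[0] == "ctor"}
        if len(heads) > 1:  # C[v] equated with D[w] for C != D
            return True
    return any(find(parent, t) == find(parent, u) for (t, u) in inequations)
```

A real procedure would additionally close the context under the congruence and reduction axioms of the preceding section; this sketch only shows where Theorem 11 enters.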
8 Further Work
The model presented in the previous sections is intended to be used as the basis for the design of a proof assistant based on a call-by-value ML language with control operators. A first prototype (with a different theoretical foundation) was implemented by Christophe Raffalli [27]. Based on this experience, the design of a new version of the language with a clean theoretical basis can now be undertaken. The core of the system will consist of three independent components: a type-checker, a termination checker and a decision procedure for equivalence.
Working with a Curry style language has the disadvantage of making type-checking undecidable. While most proof systems avoid this problem by switching to Church style, it is possible to use heuristics making most Curry style programs that arise in practice directly typable. Christophe Raffalli implemented such a system [26] and from his experience it would seem that very little help from the user is required in general. In particular, if a term is typable then it is possible for the user to provide hints (e.g. the type of a variable) so that type-checking may succeed. This can be seen as a kind of completeness.
Proof assistants like Coq [18] or Agda [22] both have decidable type-checking algorithms. However, these systems provide mechanisms for handling implicit arguments or metavariables which introduce some incompleteness. This does not make these systems any less usable in practice. We conjecture that going even further (i.e. full Curry style) provides a similar user experience.
To obtain a practical programming language we will need support for recursive programs. For this purpose we plan on adapting Pierre Hyvernat’s termination checker [9]. It is based on size-change termination and has already been used in the first prototype implementation. We will also need to extend our type system with inductive (and coinductive) types [19, 25]. They can be introduced in the system using fixpoints \(\mu X\,A\) (and \(\nu X\,A\)).
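For instance, the natural numbers would arise as the fixpoint \(\mu X\,[Z : \top \mid S : X]\). The following is our own minimal illustration of the intended unfolding, in Python over a tagged-tuple encoding of variants; neither the encoding nor the recursor is part of the system described here.

```python
# Natural numbers as the inductive fixpoint mu X. [Z : Top | S : X]
# (our own illustration): an inhabitant is a finite iteration of S over Z.

Z = ("ctor", "Z", None)

def S(n):
    return ("ctor", "S", n)

def fold_nat(n, on_z, on_s):
    """Recursor for the fixpoint: replaces Z by on_z and each S by on_s."""
    if n[1] == "Z":
        return on_z
    return on_s(fold_nat(n[2], on_z, on_s))

def to_int(n):
    """Interpret the inductive natural as a Python integer."""
    return fold_nat(n, 0, lambda k: k + 1)
```

The coinductive dual \(\nu X\,A\) would instead allow infinitely many unfoldings, which this eager encoding cannot represent.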
Footnotes
 1.
In ML the polymorphism mechanism is strongly linked with let-bindings. In OCaml syntax, they are expressions of the form let x = u in t.
 2.
This was originally denoted \([\alpha ]u\).
 3.
 4.
Only \(E_n *\pi _n\) can be of the form \(v *\alpha \).
 5.
We use the standard secondorder encoding: \(\bot = \forall X_0\; X_0\) and \(\top = \exists X_0\; X_0\).
Acknowledgments
I would like to particularly thank my research advisor, Christophe Raffalli, for his guidance and input. I would also like to thank Alexandre Miquel for suggesting the encoding of dependent products. Thank you also to Pierre Hyvernat, Tom Hirschowitz, Robert Harper and the anonymous reviewers for their very helpful comments.
References
 1. Casinghino, C., Sjöberg, V., Weirich, S.: Combining proofs and programs in a dependently typed language. In: Jagannathan, S., Sewell, P. (eds.) 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2014, pp. 33–46. ACM, San Diego (2014)
 2. Constable, R.L., Allen, S.F., Bromley, M., Cleaveland, R., Cremer, J.F., Harper, R.W., Howe, D.J., Knoblock, T.B., Mendler, N.P., Panangaden, P., Sasaki, J.T., Smith, S.F.: Implementing Mathematics with the Nuprl Proof Development System. Prentice Hall, Upper Saddle River (1986)
 3. Coquand, T., Huet, G.: The calculus of constructions. Inf. Comput. 76(2–3), 95–120 (1988)
 4. Damas, L., Milner, R.: Principal type-schemes for functional programs. In: Proceedings of the 9th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 1982, pp. 207–212. ACM, New York (1982)
 5. Garrigue, J.: Relaxing the value restriction. In: Kameyama, Y., Stuckey, P.J. (eds.) FLOPS 2004. LNCS, vol. 2998, pp. 196–213. Springer, Heidelberg (2004)
 6. Griffin, T.G.: A formulæ-as-types notion of control. In: Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, pp. 47–58. ACM Press (1990)
 7. Harper, R., Lillibridge, M.: ML with callcc is unsound (1991). http://www.seas.upenn.edu/~sweirich/types/archive/1991/msg00034.html
 8. Howe, D.J.: Equality in lazy computation systems. In: Proceedings of the Fourth Annual Symposium on Logic in Computer Science (LICS 1989), 5–8 June 1989, Pacific Grove, California, USA, pp. 198–203 (1989)
 9. Hyvernat, P.: The size-change termination principle for constructor based languages. Logical Methods Comput. Sci. 10(1) (2014). http://www.lmcs-online.org/ojs/viewarticle.php?id=1409&layout=abstract
 10. Jia, L., Vaughan, J.A., Mazurak, K., Zhao, J., Zarko, L., Schorr, J., Zdancewic, S.: AURA: a programming language for authorization and audit. In: Hook, J., Thiemann, P. (eds.) Proceedings of the 13th ACM SIGPLAN International Conference on Functional Programming, ICFP 2008, 20–28 September 2008, Victoria, BC, Canada, pp. 27–38. ACM (2008)
 11. Krivine, J.: A call-by-name lambda-calculus machine. Higher-Order Symb. Comput. 20(3), 199–207 (2007)
 12. Krivine, J.: Realizability in classical logic. In: Interactive Models of Computation and Program Behaviour, Panoramas et synthèses, vol. 27, pp. 197–229. Société Mathématique de France (2009)
 13. Lepigre, R.: A realizability model for a semantical value restriction (2015). https://lama.univ-savoie.fr/~lepigre/files/docs/semvalrest2015.pdf
 14. Leroy, X.: Polymorphism by name for references and continuations. In: 20th Symposium on Principles of Programming Languages, pp. 220–231. ACM Press (1993)
 15. Leroy, X., Weis, P.: Polymorphic type inference and assignment. In: Proceedings of the 18th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 1991, pp. 291–302. ACM, New York (1991)
 16. Licata, D.R., Harper, R.: Positively dependent types. In: Altenkirch, T., Millstein, T.D. (eds.) Proceedings of the 3rd ACM Workshop on Programming Languages meets Program Verification, PLPV 2009, 20 January 2009, Savannah, GA, USA, pp. 3–14. ACM (2009)
 17. Martin-Löf, P.: Constructive mathematics and computer programming. In: Cohen, L.J., Pfeiffer, H., Podewski, K.P. (eds.) Logic, Methodology and Philosophy of Science VI, Studies in Logic and the Foundations of Mathematics, vol. 104, pp. 153–175. North-Holland (1982)
 18. The Coq development team: The Coq proof assistant reference manual. LogiCal Project (2004). http://coq.inria.fr
 19. Mendler, N.P.: Recursive types and type constraints in second-order lambda calculus. In: Proceedings of the Symposium on Logic in Computer Science (LICS 1987), pp. 30–36 (1987)
 20. Miquel, A.: Le Calcul des Constructions implicites: syntaxe et sémantique. Ph.D. thesis, Université Paris VII (2001)
 21. Munch-Maccagnoni, G.: Focalisation and classical realisability. In: Grädel, E., Kahle, R. (eds.) CSL 2009. LNCS, vol. 5771, pp. 409–423. Springer, Heidelberg (2009)
 22. Norell, U.: Dependently typed programming in Agda. In: Koopman, P., Plasmeijer, R., Swierstra, D. (eds.) AFP 2008. LNCS, vol. 5832, pp. 230–266. Springer, Heidelberg (2009)
 23. Owre, S., Rajan, S., Rushby, J., Shankar, N., Srivas, M.: PVS: combining specification, proof checking, and model checking. In: Alur, R., Henzinger, T.A. (eds.) CAV 1996. LNCS, vol. 1102, pp. 411–414. Springer, Heidelberg (1996)
 24. Parigot, M.: \(\lambda \mu \)-calculus: an algorithmic interpretation of classical natural deduction. In: Voronkov, A. (ed.) LPAR 1992. LNCS, vol. 624, pp. 190–201. Springer, Heidelberg (1992)
 25. Raffalli, C.: L’Arithmétique Fonctionnelle du Second Ordre avec Points Fixes. Ph.D. thesis, Université Paris VII (1994)
 26. Raffalli, C.: A normaliser for pure and typed \(\lambda \)-calculus (1996). http://lama.univ-savoie.fr/~raffalli/normaliser.html
 27. Raffalli, C.: The PML programming language. LAMA, Université Savoie Mont-Blanc (2012). http://lama.univ-savoie.fr/tracpml/
 28. Swamy, N., Chen, J., Fournet, C., Strub, P., Bhargavan, K., Yang, J.: Secure distributed programming with value-dependent types. In: Chakravarty, M.M.T., Hu, Z., Danvy, O. (eds.) Proceedings of the 16th ACM SIGPLAN International Conference on Functional Programming, ICFP 2011, 19–21 September 2011, Tokyo, Japan, pp. 266–278. ACM (2011)
 29. Tofte, M.: Type inference for polymorphic references. Inf. Comput. 89(1), 1–34 (1990)
 30. Wright, A.K.: Simple imperative polymorphism. LISP Symb. Comput. 8, 343–356 (1995)
 31. Wright, A.K., Felleisen, M.: A syntactic approach to type soundness. Inf. Comput. 115(1), 38–94 (1994)
 32. Xi, H.: Applied type system. In: Berardi, S., Coppo, M., Damiani, F. (eds.) TYPES 2003. LNCS, vol. 3085, pp. 394–408. Springer, Heidelberg (2004)
 33. Xi, H., Pfenning, F.: Dependent types in practical programming. In: Proceedings of the 26th ACM SIGPLAN Symposium on Principles of Programming Languages, pp. 214–227, San Antonio, January 1999