Realizability Interpretation and Normalization of Typed Call-by-Need \(\lambda \)-calculus with Control
Abstract
We define a variant of Krivine realizability where realizers are pairs of a term and a substitution. This variant allows us to prove the normalization of a simply-typed call-by-need \(\lambda \)-calculus with control due to Ariola et al. Indeed, in such a call-by-need calculus, substitutions have to be delayed until it is known whether an argument is really needed. We then extend the proof to a call-by-need \(\lambda \)-calculus equipped with a type system equivalent to classical second-order predicate logic, representing one step towards proving the normalization of the call-by-need classical second-order arithmetic introduced by the second author to provide a proof-as-program interpretation of the axiom of dependent choice.
1 Introduction
1.1 Realizability-Based Normalization
Normalization by realizability is a standard technique to prove the normalization of typed \(\lambda \)-calculi. Originally introduced by Tait [36] to prove the normalization of System T, it was extended by Girard to prove the normalization of System F [11]. This kind of technique, also called normalization by reducibility or normalization by logical relations, works by interpreting each type by a set of typed or untyped terms seen as realizers of the type, then showing that the way these sets of realizers are built preserves properties such as normalization. Over the years, this method has been used and generalized in many ways; for a more detailed account we refer the reader to the work of Gallier [9].
Realizability techniques were adapted to the normalization of various calculi for classical logic (see e.g. [3, 32]). A specific framework tailored to the study of realizability for classical logic was designed by Krivine [19] on top of a \(\lambda \)-calculus with control whose reduction is defined in terms of an abstract machine. In this machine, terms are evaluated in front of stacks, and control (thus classical logic) is made available through the possibility of saving and restoring stacks. Over the last twenty years, Krivine's classical realizability has turned out to be fruitful on the logical side, leading to the construction of new models of set theory and generalizing in particular the technique of Cohen's forcing [20, 21, 22]; and on its computational facet, providing alternative tools for the analysis of the computational content of classical programs^{1}.
Notably, Krivine realizability is one of the approaches advocating the motto that, through the Curry-Howard correspondence, with new programming instructions come new reasoning principles^{2}. Our original motivation for the present work is in line with this idea, in the sense that our long-term purpose is to give a realizability interpretation to \(\text {dPA}^\omega \), a call-by-need calculus defined by the second author [15]. In this calculus, lazy evaluation is indeed a fundamental ingredient for obtaining an executable proof term for the axiom of dependent choice.
1.2 Contributions of the Paper
In order to address the normalization of typed call-by-need \(\lambda \)-calculus, we design a variant of Krivine's classical realizability where the realizers are closures (a term with a substitution for its free variables). The call-by-need \(\lambda \)-calculus with control that we consider is the \(\overline{\lambda }_{lv}\)-calculus. This calculus, defined by Ariola et al. [2], is syntactically described within an extension, with explicit substitutions, of the \(\lambda \mu {\tilde{\mu }}\)-calculus [6, 14, 29]. The syntax of the \(\lambda \mu {\tilde{\mu }}\)-calculus itself refines the syntax of the \(\lambda \)-calculus by syntactically distinguishing between terms and evaluation contexts. It also contains commands, which combine terms and evaluation contexts so that they can interact. Thinking of evaluation contexts as stacks and commands as states, the \(\lambda \mu {\tilde{\mu }}\)-calculus can also be seen as a syntax for abstract machines. From a proof-as-program point of view, the \(\lambda \mu {\tilde{\mu }}\)-calculus and its variants can be seen as a term syntax for proofs of Gentzen's sequent calculus. In particular, the \(\lambda \mu {\tilde{\mu }}\)-calculus contains control operators which give a computational interpretation to classical logic.
We give a proof of normalization first for the simply-typed \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus^{3}, then for a type system with first-order and second-order quantification. While we only apply our technique to the normalization of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus, our interpretation incidentally suggests a way to adapt Krivine realizability to other call-by-need settings. This paves the way to the computational interpretation of classical proofs using lazy evaluation or shared memory cells, including the case of the call-by-need second-order arithmetic \(\text {dPA}^\omega \) [15].
2 The \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus
2.1 The Call-by-Need Evaluation Strategy
The call-by-need evaluation strategy of the \(\lambda \)-calculus evaluates the arguments of functions only when needed and, when needed, shares their evaluations across all places where the argument is required. Call-by-need evaluation is at the heart of functional programming languages such as Haskell. It has in common with the call-by-value evaluation strategy that all places where the same argument is used share the same value. Nevertheless, it observationally behaves like the call-by-name evaluation strategy (for the pure \(\lambda \)-calculus), in the sense that a given computation eventually evaluates to a value if and only if it evaluates to the same value (up to inner reduction) along the call-by-name evaluation. In particular, in a setting with non-terminating computations, it is not observationally equivalent to call-by-value evaluation. Indeed, if the evaluation of a useless argument loops under call-by-value evaluation, the whole computation loops, which is not the case under call-by-name and call-by-need evaluations.
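To make this contrast concrete, here is a minimal sketch (in Python rather than Haskell, with illustrative names such as `Thunk` that are ours and not part of any calculus) simulating the strategies with explicit thunks: call-by-name re-evaluates an argument at each use, call-by-need memoizes it, and both leave unused arguments unevaluated.

```python
class Thunk:
    """A delayed computation; memo=True gives call-by-need sharing."""
    def __init__(self, fn, memo):
        self.fn, self.memo = fn, memo
        self.evaluated, self.value = False, None

    def force(self):
        if self.memo and self.evaluated:
            return self.value          # shared result: evaluated at most once
        v = self.fn()
        if self.memo:
            self.evaluated, self.value = True, v
        return v

counter = {"n": 0}
def expensive():
    counter["n"] += 1                  # count how often the argument is run
    return 21

def double_use(x):                     # a function using its argument twice
    return x.force() + x.force()

# call-by-name: re-evaluates the argument at every use
counter["n"] = 0
assert double_use(Thunk(expensive, memo=False)) == 42 and counter["n"] == 2

# call-by-need: evaluates on first use, then shares the value
counter["n"] = 0
assert double_use(Thunk(expensive, memo=True)) == 42 and counter["n"] == 1

# Unused arguments are never run under either lazy strategy (call-by-value
# would evaluate `expensive` here even though `const` ignores it).
counter["n"] = 0
const = lambda x: 0
assert const(Thunk(expensive, memo=True)) == 0 and counter["n"] == 0
```

Replacing `expensive` by a looping computation in the last call would make call-by-value diverge while both lazy strategies still return 0.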
These three evaluation strategies can be turned into equational theories. For call-by-name and call-by-value, this was done by Plotkin through continuation-passing-style (CPS) semantics characterizing these theories [34]. For the call-by-need evaluation strategy, a specific equational theory reflecting the intensional behavior of the strategy into a semantics was proposed independently by Ariola and Felleisen [1], and by Maraist et al. [26]. A continuation-passing-style semantics was proposed in the 1990s by Okasaki et al. [30]. However, this semantics does not ensure normalization of simply-typed call-by-need evaluation, as shown in [2], thus failing to ensure a property which holds in the simply-typed call-by-name and call-by-value cases.
Continuation-passing-style semantics de facto gives a semantics to the extension of the \(\lambda \)-calculus with control operators^{4}. In particular, even though call-by-name and call-by-need are observationally equivalent on the pure \(\lambda \)-calculus, their different intensional behaviors induce different CPS semantics, leading to different observational behaviors when control operators are considered. On the other hand, the semantics of calculi with control can also be reconstructed from an analysis of the duality between programs and their evaluation contexts, and the duality between the let construct (which binds programs) and a control operator such as Parigot's \(\mu \) (which binds evaluation contexts). Such an analysis can be done in the context of the \(\lambda \mu {\tilde{\mu }}\)-calculus [6, 14].
In the call-by-name and call-by-value cases, the approach based on the \(\lambda \mu {\tilde{\mu }}\)-calculus leads to continuation-passing-style semantics similar to the ones given by Plotkin or, in the call-by-name case, also to the one by Lafont et al. [23]. As for call-by-need, the \(\overline{\lambda }_{lv}\)-calculus, a call-by-need version of the \(\lambda \mu {\tilde{\mu }}\)-calculus, is defined in [2]. A continuation-passing-style semantics is then defined via a calculus called \(\overline{\lambda }_{[lv\tau \star ]}\) [2]. This semantics, which differs from that of Okasaki, Lee and Tarditi [30], is the object of study in this paper.
2.2 Explicit Environments
While the results presented in this paper could be directly expressed using the \(\overline{\lambda }_{lv}\)-calculus, the realizability interpretation naturally arises from the decomposition of this calculus into a different calculus with an explicit environment, the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus [2]. Indeed, as we shall see in the sequel, this decomposition highlights different syntactic categories that are deeply involved in the type system and in the definition of the realizability interpretation.
The \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus is a reformulation of the \(\overline{\lambda }_{lv}\)-calculus with explicit environments, called stores and denoted by \(\tau \). Stores consist of a list of bindings of the form \([x:=t]\), where x is a term variable and t a term, and of bindings of the form \([\alpha :=e]\), where \(\alpha \) is a context variable and e a context. For instance, in the closure \(c\tau [x:=t]\tau '\), the variable x is bound to t in c and \(\tau '\). Besides, the term t might be an unevaluated term (i.e. lazily stored), so that if x is eagerly demanded at some point during the execution of this closure, t will be reduced in order to obtain a value. In the case where t indeed produces a value V, the store will be updated with the binding \([x:=V]\). However, a binding of this form (with a value) is fixed for the rest of the execution. As such, our so-called stores somewhat behave like lazy explicit substitutions or mutable environments.
To draw the comparison between our structures and the usual notions of stores and environments, two things should be observed. First, the usual notion of store refers to a fully mutable list structure, in the sense that its cells can be updated at any time and thus values might be replaced. Second, the usual notion of environment designates a structure in which variables are bound to closures made of a term and an environment. In particular, terms and environments are duplicated, i.e. sharing is not allowed. Such a structure resembles a tree whose nodes are decorated by terms, as opposed to a machinery allowing sharing (like ours) whose underlying structure is broadly a directed acyclic graph. See for instance [24] for a Krivine abstract machine with sharing.
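The update-once discipline of these stores can be sketched operationally as follows (a hypothetical illustration with names of our choosing, not the formal reduction rules of the calculus): a binding \([x:=t]\) is overwritten by \([x:=V]\) the first time x is demanded, and is then fixed.

```python
class Store:
    """A store mapping variables to either an unevaluated term (a thunk)
    or the value it reduced to; values, once stored, are never replaced."""
    def __init__(self):
        self.bindings = {}             # dicts preserve insertion order

    def bind(self, x, thunk):
        self.bindings[x] = ("unevaluated", thunk)   # the binding [x:=t]

    def demand(self, x):
        tag, payload = self.bindings[x]
        if tag == "value":
            return payload             # already forced: the binding is fixed
        v = payload()                  # reduce the stored term to a value
        self.bindings[x] = ("value", v)   # update the store with [x:=V]
        return v

evals = []
tau = Store()
tau.bind("x", lambda: evals.append("x") or 7)

assert tau.demand("x") == 7
assert tau.demand("x") == 7            # second demand reuses the stored value
assert evals == ["x"]                  # the term was reduced exactly once
```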
2.3 Syntax and Reduction Rules
The different syntactic categories can be understood as the different levels of alternation in a context-free abstract machine (see [2]): priority is first given to contexts at level e (lazy storage of terms), then to terms at level t (evaluation of \(\mu \alpha \) into values), then back to contexts at level E, and so on until level v. These different categories are directly reflected in the definition of the abstract machine defined in [2], and will thus be involved in the definition of our realizability interpretation. We chose to highlight this by distinguishing different kinds of sequents already in the typing rules that we shall now present.
2.4 A Type System for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus
Theorem 1
(Subject reduction). If \(\varGamma \vdash _l c\tau \) and \(c\tau \rightarrow c'\tau '\) then \(\varGamma \vdash _l c'\tau '\).
Proof
By induction on typing derivations. \(\square \)
3 Normalization of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus
3.1 Normalization by Realizability
The proof of normalization for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus that we present in this section is inspired by techniques from Krivine's classical realizability [19], whose notations we borrow. Actually, it is also very close to a proof by reducibility^{5}. In a nutshell, to each type A is associated a set \(A_t\) of terms whose execution is guided by the structure of A. These terms are the ones usually called realizers in Krivine's classical realizability. Their definition is in fact indirect, and is done by orthogonality to a set of "correct" computations, called a pole. The choice of this set is central when studying models induced by classical realizability for second-order logic, but in the present case we only pay attention to the particular pole of terminating computations. Herein lies one of the differences with usual proofs by reducibility, where everything is done with respect to SN, while our definitions are parametric in the pole (which is chosen to be SN in the end). The adequacy lemma, which is the central piece, consists in proving that typed terms belong to the corresponding sets of realizers, and are thus normalizing.
In more detail, our proof can be sketched as follows. First, we generalize the usual notion of closed term to the notion of closed term-in-store. Intuitively, this is due to the fact that we are no longer interested in closed terms and substitutions that close open terms, but rather in terms that are closed when considered in the current store. This is based on the simple observation that a store is nothing more than a shared substitution whose content might evolve along the execution. Second, we define the notion of pole \({\bot \!\!\!\bot }\), namely sets of closures closed by anti-reduction and store extension. In particular, the set of normalizing closures is a valid pole. This allows us to relate terms and contexts through a notion of orthogonality with respect to the pole. We then define for each formula A and typing level o (among e, t, E, V, F, v) a set \(A_o\) (resp. \(\Vert A\Vert _o\)) of terms (resp. contexts) in the corresponding syntactic category. These sets correspond to reducibility candidates, or to what is usually called truth values and falsity values in Krivine realizability. Finally, the core of the proof consists in the adequacy lemma, which shows that any closed term of type A at level o is in the corresponding set \(A_o\). This guarantees that any typed closure is in any pole, and in particular in the pole of normalizing closures. Technically, the proof of adequacy evaluates in each case a state of an abstract machine (in our case a closure), so that the proof also proceeds by evaluation. A more detailed explanation of this observation, as well as a more introductory presentation of normalization proofs by classical realizability, is given in an article by Dagand and Scherer [7].
3.2 Realizability Interpretation for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus
We begin by defining some key notions for stores that we shall need further in the proof.
Definition 2
so that we can define a closed store to be a store \(\tau \) such that \(FV(\tau ) = \emptyset \).
Definition 3
(Compatible stores). We say that two stores \(\tau \) and \(\tau '\) are independent, and write \({\tau \,\#\,\tau '}\), when \({\texttt {dom}(\tau )\cap \texttt {dom}(\tau ')=\emptyset }\). We say that they are compatible, and write \({\tau \diamond \tau '}\), whenever for all variables x (resp. co-variables \(\alpha \)) present in both stores, i.e. \({x\in \texttt {dom}(\tau )\cap \texttt {dom}(\tau ')}\), the corresponding terms (resp. contexts) in \(\tau \) and \(\tau '\) coincide. Finally, we say that \(\tau '\) is an extension of \(\tau \), and write \(\tau \vartriangleleft \tau '\), whenever \(\texttt {dom}(\tau )\subseteq \texttt {dom}(\tau ')\) and \({\tau \diamond \tau '}\).
We denote by \(\overline{\tau \tau '}\) the compatible union \(\texttt {join}(\tau ,\tau ')\) of closed stores \(\tau \) and \(\tau '\), defined by:
The following lemma (which follows easily from the previous definition) states the main property we will use about union of compatible stores.
Lemma 4
If \(\tau \) and \(\tau '\) are two compatible stores, then \(\tau \vartriangleleft \overline{\tau \tau '}\) and \(\tau '\vartriangleleft \overline{\tau \tau '}\). Besides, if \(\tau \) is of the form \(\tau _0[x:=t]\tau _1\), then \(\overline{\tau \tau '}\) is of the form \({\tau _2}[x:=t]{\tau _3}\) with \(\tau _0 \vartriangleleft {\tau _2}\) and \(\tau _1\vartriangleleft {\tau _3}\).
Proof
This follows easily from the previous definition. \(\square \)
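These definitions are simple enough to be rendered executable. The following sketch models stores as ordered lists of bindings (a representation of our choosing, not the paper's notation) and checks the first part of Lemma 4 on an example:

```python
def dom(tau):
    return {x for x, _ in tau}

def independent(tau1, tau2):               # tau # tau'
    return dom(tau1).isdisjoint(dom(tau2))

def compatible(tau1, tau2):                # tau ◇ tau': shared names coincide
    d1, d2 = dict(tau1), dict(tau2)
    return all(d1[x] == d2[x] for x in dom(tau1) & dom(tau2))

def extends(tau1, tau2):                   # tau ◁ tau'
    return dom(tau1) <= dom(tau2) and compatible(tau1, tau2)

def join(tau1, tau2):                      # compatible union of tau and tau'
    assert compatible(tau1, tau2)
    seen = dom(tau1)
    return tau1 + [(x, t) for x, t in tau2 if x not in seen]

tau  = [("x", "t"), ("y", "u")]
tau2 = [("y", "u"), ("z", "w")]

assert compatible(tau, tau2) and not independent(tau, tau2)
u = join(tau, tau2)
assert extends(tau, u) and extends(tau2, u)   # first part of Lemma 4
```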
Definition 5
(Term-in-store). We call closed term-in-store (resp. closed context-in-store, closed closure) the combination of a term t (resp. context e, command c) with a closed store \(\tau \) such that \(FV(t)\subseteq \texttt {dom}(\tau )\). We use the notation \((t\tau )\) (resp. \((e\tau ), (c\tau )\)) to denote such a pair.
We should note that in particular, if t is a closed term, then \((t\tau )\) is a term-in-store for any closed store \(\tau \). The notion of closed term-in-store is thus a generalization of the notion of closed term, and we will (ab)use this terminology in the sequel. We denote the set of closed closures by \(\mathcal {C}_0\), and will identify \((c\tau )\) with the closure \(c\tau \) when c is closed in \(\tau \). Observe that if \(c\tau \) is a closure in \(\mathcal {C}_0\) and \(\tau '\) is a store extending \(\tau \), then \(c\tau '\) is also in \(\mathcal {C}_0\). We are now equipped to define the notion of pole, and to verify that the set of normalizing closures is indeed a valid pole.
Definition 6
(Pole). A subset \({\bot \!\!\!\bot }\subseteq \mathcal {C}_0\) is said to be saturated or closed by anti-reduction whenever for all \((c\tau ),(c'\tau ')\in \mathcal {C}_0\), if \(c'\tau ' \in {\bot \!\!\!\bot }\) and \(c\tau \rightarrow c'\tau '\), then \(c\tau \in {\bot \!\!\!\bot }\). It is said to be closed by store extension if whenever \(c\tau \in {\bot \!\!\!\bot }\), then for any store \(\tau '\) extending \(\tau \) (i.e. \(\tau \vartriangleleft \tau '\)), \(c\tau '\in {\bot \!\!\!\bot }\). A pole is defined as any subset of \(\mathcal {C}_0\) that is closed by anti-reduction and store extension.
The following proposition is the one supporting the claim that our realizability proof is almost a reducibility proof whose definitions have been generalized with respect to a pole instead of the fixed set SN.
Proposition 7
The set \({\bot \!\!\!\bot }_{\Downarrow }=\{c\tau \in \mathcal {C}_0:~c\tau \text { normalizes }\}\) is a pole.
Proof

if \(c\tau \rightarrow c'\tau '\) and \(c'\tau '\) normalizes, then \(c\tau \) normalizes too;

if c is closed in \(\tau \) and \(c\tau \) normalizes, and if \(\tau \vartriangleleft \tau '\), then \(c\tau '\) reduces as \(c\tau \) does (since c is closed in \(\tau \), it can only use terms of \(\tau '\) that already were in \(\tau \)) and thus normalizes. \(\square \)
Definition 8
(Orthogonality). Given a pole \({\bot \!\!\!\bot }\), we say that a term-in-store \((t\tau )\) is orthogonal to a context-in-store \((e\tau ')\), and write \((t\tau ){{\bot \!\!\!\bot }}(e\tau ')\), if \(\tau \) and \(\tau '\) are compatible and \(\langle t||e\rangle \overline{\tau \tau '}\in {\bot \!\!\!\bot }\).
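Schematically, orthogonality composes the two sides into a command over the union of their stores and tests membership in the pole. In the toy rendering below (purely illustrative: the pole is a finite set standing in for the normalizing closures, stores are dicts, and all names are ours):

```python
def compatible(tau, tau2):
    """Shared variables must be bound to the same thing in both stores."""
    return all(tau[x] == tau2[x] for x in tau.keys() & tau2.keys())

def orthogonal(t_tau, e_tau2, pole):
    """(t|tau) ⊥⊥ (e|tau') iff the stores are compatible and the closure
    <t||e> over the joined store belongs to the pole."""
    (t, tau), (e, tau2) = t_tau, e_tau2
    if not compatible(tau, tau2):
        return False
    union = {**tau, **tau2}                         # the compatible union
    closure = (f"<{t}||{e}>", frozenset(union.items()))
    return closure in pole

tau  = {"x": "t0"}
tau2 = {"alpha": "E0"}
pole = {("<t||e>", frozenset({("x", "t0"), ("alpha", "E0")}))}

assert orthogonal(("t", tau), ("e", tau2), pole)
assert not orthogonal(("t2", tau), ("e", tau2), pole)   # <t2||e> not in pole
```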
Remark 9
We can now relate closed terms and contexts by orthogonality with respect to a given pole. This allows us to define for any formula A the sets \(A_{v},A_{V},A_{t}\) (resp. \(\Vert A\Vert _{F}\), \(\Vert A\Vert _{E}\), \(\Vert A\Vert _{e}\)) of realizers (or reducibility candidates) at level v, V, t (resp. F, E, e) for the formula A. It is to be observed that realizers are here closed terms-in-store.
Definition 10
Remark 11
We draw the reader's attention to the fact that we should actually write \(A_{v}^{\bot \!\!\!\bot },\Vert A\Vert _{F}^{\bot \!\!\!\bot }\), etc. and \(\tau \Vdash _{\!\!{\bot \!\!\!\bot }}\!\varGamma \), because the corresponding definitions are parameterized by a pole \({\bot \!\!\!\bot }\). As is common in Krivine's classical realizability, we ease the notations by removing the annotation \({\bot \!\!\!\bot }\) whenever there is no ambiguity on the pole. Besides, it is worth noting that while co-constants do not occur directly in the definitions, they may still appear in the realizers by means of the pole.
While the definition of the different sets might seem complex at first sight, we claim that it is quite natural in light of the methodology of Danvy's semantic artifacts presented in [2]. Indeed, having an abstract machine in context-free form (the last step in this methodology before deriving the CPS) allows us to have both the term and the context (in a command) behave independently of each other. Intuitively, a realizer at a given level is precisely a term which is going to behave well (be in the pole) in front of any opponent chosen in the previous level (in the hierarchy v, F, V, etc.). For instance, in a call-by-value setting, there are only three levels of definition (values, contexts and terms) in the interpretation, because the abstract machine in context-free form also has three. Here the ground level corresponds to strong values, and the other levels are somewhat defined as terms (or contexts) which are well-behaved in front of any opponent in the previous one. The definition of the different sets \(A_{v},\Vert A\Vert _{F},A_{V}\), etc. directly stems from this intuition.
In comparison with the usual definition of Krivine's classical realizability, we only consider orthogonal sets restricted to certain syntactic subcategories. However, the definition still satisfies the usual monotonicity properties of bi-orthogonal sets:
Proposition 12
Proof
All the inclusions are proved in a similar way. We only give the proof for \(A_{v}\subseteq A_{V}\). Let \({\bot \!\!\!\bot }\) be a pole and \((v\tau )\) be in \(A_{v}\). We want to show that \((v\tau )\) is in \(A_{V}\), that is to say that v is in the syntactic category V (which is true), and that for any \((F\tau ')\in \Vert A\Vert _{F}\) such that \({\tau \diamond \tau '}\), \((v\tau ){{\bot \!\!\!\bot }}(F\tau ')\). The latter holds by definition of \((F\tau ')\in \Vert A\Vert _{F}\), since \((v\tau )\in A_{v}\). \(\square \)
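These inclusions are instances of the usual monotonicity of bi-orthogonal constructions, which can be checked on a finite toy model (the sets and pole below are arbitrary illustrative data, not the calculus itself): starting from a set \(V_0\) of "strong values", the forcing contexts play the role of \(V_0^{\bot }\), the weak values that of \(V_0^{\bot \bot }\), and \(V_0\subseteq V_0^{\bot \bot }\) always holds.

```python
terms    = {"v1", "v2", "t1"}
contexts = {"F1", "F2"}
# The pole, as a finite orthogonality relation between terms and contexts.
pole     = {("v1", "F1"), ("v1", "F2"), ("v2", "F1"), ("v2", "F2"),
            ("t1", "F1")}

def orth_ctx(vs):      # contexts orthogonal to every term of vs (like ‖A‖_F)
    return {F for F in contexts if all((v, F) in pole for v in vs)}

def orth_term(fs):     # terms orthogonal to every context of fs (like A_V)
    return {v for v in terms if all((v, F) in pole for F in fs)}

V0 = {"v1", "v2"}      # the ground-level set, playing the role of A_v
F_A = orth_ctx(V0)     # one orthogonal: the role of ‖A‖_F
V_A = orth_term(F_A)   # the bi-orthogonal: the role of A_V

assert F_A == {"F1", "F2"}
assert V0 <= V_A       # the inclusion A_v ⊆ A_V, by the same one-liner
```

The proof of the assertion `V0 <= V_A` is literally the argument of Proposition 12: every context in `F_A` is orthogonal to every element of `V0` by construction.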
We now extend the notion of realizers to stores, by stating that a store \(\tau \) realizes a context \(\varGamma \) if it binds all the variables x and \(\alpha \) in \(\varGamma \) to a realizer of the corresponding formula.
Definition 13
 1.
for any \((x:A) \in \varGamma \), \(\tau \equiv \tau _0[x:=t]\tau _1\) and \((t\tau _0) \in A_{t}\)
 2.
for any \((\alpha :A^{\bot \!\!\!\bot }) \in \varGamma \), \(\tau \equiv \tau _0[\alpha :=E]\tau _1\) and \((E\tau _0) \in \Vert A\Vert _{E}\)
Lemma 14
Let \(\tau \) and \(\tau '\) be two closed stores such that \(\tau \vartriangleleft \tau '\). Then:
 1.
\(\overline{\tau \tau '} = \tau '\)
 2.
If \((t\tau ) \in A_{t}\) for some closed term-in-store \((t\tau )\) and some type A, then \((t\tau ')\in A_{t}\). The same holds for each level e, E, V, F, v of the typing rules.
 3.
If \(\tau \Vdash \varGamma \) then \(\tau ' \Vdash \varGamma \).
Proof
 1.
Straightforward from the definition of \(\overline{\tau \tau '}\).
 2.
This essentially amounts to the following observations. First, one remarks that if \((t\tau )\) is a closed term, then so is \((t\overline{\tau \tau '})\) for any closed store \(\tau '\) compatible with \(\tau \). Second, we observe that if we consider for instance a closed context \((E\tau '')\in \Vert A\Vert _{E}\), then \({\overline{\tau \tau '}\diamond \tau ''}\) implies \({\tau \diamond \tau ''}\), thus \((t\tau ){{\bot \!\!\!\bot }}(E\tau '')\) and finally \((t\overline{\tau \tau '}){{\bot \!\!\!\bot }}(E\tau '')\) by closure of the pole under store extension. We conclude that \((t\tau '){{\bot \!\!\!\bot }}(E\tau '')\) using the first statement.
 3.
By definition, for all \((x:A)\in \varGamma \), \(\tau \) is of the form \(\tau _0[x:=t]\tau _1\) such that \((t\tau _0)\in A_{t}\). As \(\tau \) and \(\tau '\) are compatible, we know by Lemma 4 that \(\overline{\tau \tau '}\) is of the form \(\tau '_0[x:=t]\tau '_1\) with \(\tau '_0\) an extension of \(\tau _0\), and using the first point we get that \((t\tau '_0)\in A_{t}\). \(\square \)
Definition 15

A typing judgment \(\varGamma \vdash _t t:A\) is adequate (w.r.t. the pole \({\bot \!\!\!\bot }\)) if for all stores \(\tau \Vdash \varGamma \), we have \((t\tau ) \in A_{t}\).
 More generally, we say that an inference rule with premises \(J_1,\ldots ,J_n\) and conclusion \(J_0\) is adequate (w.r.t. the pole \({\bot \!\!\!\bot }\)) if the adequacy of all the typing judgments \(J_1,\ldots ,J_n\) implies the adequacy of the typing judgment \(J_0\).
Remark 16
From the latter definition, it is clear that a typing judgment that is derivable from a set of adequate inference rules is adequate too.
We will now show the main result of this section, namely that the typing rules of Fig. 2 for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus without co-constants are adequate with any pole. Observe that this result requires considering the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus without co-constants. Indeed, we consider co-constants as coming with their own typing rules, potentially giving them any type (whereas constants can only be given an atomic type). Thus, there is a priori no reason^{8} why their types should be adequate with any pole.
However, as observed in the previous remark, given a fixed pole, it suffices to check whether the typing rules for a given co-constant are adequate with this pole. If they are, any judgment that is derivable using these rules will be adequate.
Theorem 17
(Adequacy). Let \(\varGamma \) be a typing context, let \({\bot \!\!\!\bot }\) be a pole, and let \(\tau \) be a store such that \(\tau \Vdash \varGamma \). Then the following holds:
 1.
If v is a strong value such that \(\varGamma \vdash _v v:A\), then \((v\tau ) \in A_{v}\).
 2.
If F is a forcing context such that \(\varGamma \vdash _F F:A^{\bot \!\!\!\bot }\), then \((F\tau ) \in \Vert A\Vert _{F}\).
 3.
If V is a weak value such that \(\varGamma \vdash _V V:A\), then \((V\tau ) \in A_{V}\).
 4.
If E is a catchable context such that \(\varGamma \vdash _E E:A^{\bot \!\!\!\bot }\), then \((E\tau ) \in \Vert A\Vert _{E}\).
 5.
If t is a term such that \(\varGamma \vdash _t t:A\), then \((t\tau ) \in A_{t}\).
 6.
If e is a context such that \(\varGamma \vdash _e e:A^{\bot \!\!\!\bot }\), then \((e\tau ) \in \Vert A\Vert _{e}\).
 7.
If c is a command such that \(\varGamma \vdash _c c\), then \(c\tau \in {\bot \!\!\!\bot }\).
 8.
If \(\tau '\) is a store such that \(\varGamma \vdash _\tau \tau ':\varGamma '\), then \(\tau \tau ' \Vdash \varGamma ,\varGamma '\).
Proof
The different statements are proved by mutual induction over typing derivations. We only give the most important cases here.
Corollary 18
If \(c\tau \) is a closure such that \(\vdash _l c\tau \) is derivable, then for any pole \({\bot \!\!\!\bot }\) such that the typing rules for the co-constants used in the derivation are adequate with \({\bot \!\!\!\bot }\), we have \(c\tau \in {\bot \!\!\!\bot }\).
We can now put our focus back on the normalization of typed closures. As we already saw in Proposition 7, the set \({\bot \!\!\!\bot }_{\Downarrow }\) of normalizing closures is a valid pole, so that it only remains to prove that any typing rule for co-constants is adequate with \({\bot \!\!\!\bot }_{\Downarrow }\).
Lemma 19
Any typing rule for co-constants is adequate with the pole \({\bot \!\!\!\bot }_{\Downarrow }\), i.e. if \(\varGamma \) is a typing context, \(\tau \) is a store such that \(\tau \Vdash \varGamma \), and \(\varvec{\kappa }\) is a co-constant such that \(\varGamma \vdash _F \varvec{\kappa }:A^{\bot \!\!\!\bot }\), then \((\varvec{\kappa }\tau )\in \Vert A\Vert _{F}\).
Proof
This lemma directly stems from the observation that for any store \(\tau \) and any closed strong value \((v\tau ')\in A_{v}\), the command \(\langle v||\varvec{\kappa }\rangle \overline{\tau \tau '}\) does not reduce and thus belongs to the pole \({\bot \!\!\!\bot }_{\Downarrow }\). \(\square \)
As a consequence, we obtain the normalization of typed closures of the full calculus.
Theorem 20
If \(c\tau \) is a closure of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus such that \(\vdash _l c\tau \) is derivable, then \(c\tau \) normalizes.
This is to be contrasted with Okasaki, Lee and Tarditi's semantics for the call-by-need \(\lambda \)-calculus, which is not normalizing in the simply-typed case, as shown by Ariola et al. [2].
3.3 Extension to 2\(^{\text {nd}}\)-Order Type Systems
4 Conclusion and Further Work
In this paper, we presented a system of simple types for a call-by-need calculus with control, which we proved to be safe in the sense that it satisfies subject reduction (Theorem 1) and that typed terms are normalizing (Theorem 20). We proved normalization by means of a realizability-inspired interpretation of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus. Incidentally, this opens the door to the computational analysis (in the spirit of Krivine realizability) of classical proofs using control, laziness and shared memory.
In further work, we intend to present two extensions of the present paper. First, following the definition of the realizability interpretation, we managed to type the continuation-and-store-passing style translation for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus (see [2]). Interestingly, typing the translation emphasizes its computational content, and in particular, the store-passing part is reflected in a Kripke forcing-like manner of typing the extensibility of the store [28, Chap. 6].
Second, on a different aspect, the realizability interpretation we introduced could be a first step towards new ways of realizing axioms. In particular, the first author used in his Ph.D. thesis [28, Chap. 8] the techniques presented in this paper to give a normalization proof for \(\text {dPA}^\omega \), a proof system developed by the second author [15]. Indeed, this proof system makes it possible to define a proof of the axiom of dependent choice thanks to the use of lazily evaluated streams, and it was lacking a proper normalization proof.
Finally, to determine the range of our technique, it would be natural to investigate the relation between our framework and the many different presentations of call-by-need calculi (with or without control). Amongst other calculi, we could cite Chang and Felleisen's presentation of call-by-need [4], Garcia et al.'s lazy calculus with delimited control [10], or Kesner's recent paper on normalizing by-need terms characterized by an intersection type system [16]. To this end, we might rely on Pédrot and Saurin's classical by-need [33]. They indeed relate (classical) call-by-need with linear head-reduction from a computational point of view, and draw connections with the presentations of Ariola et al. [2] and Chang-Felleisen [4]. The \(\overline{\lambda }_{lv}\)-calculus of Ariola et al. being close to the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus (see [2] for further details), our technique is likely to be adaptable to their framework, and thus to Pédrot and Saurin's system.
Footnotes
 1.
 2.
For instance, one way to realize the axiom of dependent choice in classical realizability is by means of an extra instruction quote [18].
 3.
Even though it has not been done formally, the normalization of the \(\overline{\lambda }_{lv}\)-calculus presented in [2] should also be derivable from Polonowski's proof of strong normalization of the non-deterministic \(\lambda \mu {\tilde{\mu }}\)-calculus [35]. The \(\overline{\lambda }_{lv}\)-calculus (a big-step variant of the calculus introduced in Ariola et al.) is indeed a particular evaluation strategy for the \(\lambda \mu {\tilde{\mu }}\)-calculus, so that the strong normalization of the non-deterministic variant of the latter should imply the normalization of the former as a particular case.
 4.
 5.
See for instance the proof of normalization for system D presented in [17, Sect. 3.2].
 6.
The meet of forcing conditions is indeed a refinement containing somewhat the “union” of information contained in each, just like the union of two compatible stores.
 7.
Once again, we should formally write \(\tau \Vdash _{\!\!{\bot \!\!\!\bot }}\!\varGamma \) but we will omit the annotation by \({\bot \!\!\!\bot }\) as often as possible.
 8.
Think for instance of a co-constant of type \((A\rightarrow B)^{\bot \!\!\!\bot }\): there is no reason why it should be orthogonal to any function in \((A\rightarrow B)_{v}\).
 9.
 10.
References
 1. Ariola, Z.M., Felleisen, M.: The call-by-need lambda calculus. J. Funct. Program. 7(3), 265–301 (1997)
 2. Ariola, Z.M., Downen, P., Herbelin, H., Nakata, K., Saurin, A.: Classical call-by-need sequent calculi: the unity of semantic artifacts. In: Schrijvers, T., Thiemann, P. (eds.) FLOPS 2012. LNCS, vol. 7294, pp. 32–46. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29822-6_6
 3. Barbanera, F., Berardi, S.: A symmetric \(\lambda \)-calculus for classical program extraction. Inf. Comput. 125(2), 103–117 (1996)
 4. Chang, S., Felleisen, M.: The call-by-need lambda calculus, revisited. In: Seidl, H. (ed.) ESOP 2012. LNCS, vol. 7211, pp. 128–147. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28869-2_7
 5. Crolard, T.: A confluent lambda-calculus with a catch/throw mechanism. J. Funct. Program. 9(6), 625–647 (1999)
 6. Curien, P.-L., Herbelin, H.: The duality of computation. In: Proceedings of ICFP 2000, SIGPLAN Notices, vol. 35, no. 9, pp. 233–243. ACM (2000)
 7. Dagand, P.-É., Scherer, G.: Normalization by realizability also evaluates. In: Baelde, D., Alglave, J. (eds.) Proceedings of JFLA 2015, Le Val d'Ajol, France, January 2015
 8. Felleisen, M., Friedman, D.P., Kohlbecker, E.E., Duba, B.F.: Reasoning with continuations. In: Proceedings of LICS 1986, Cambridge, Massachusetts, USA, 16–18 June 1986, pp. 131–141. IEEE Computer Society (1986)
 9. Gallier, J.: On Girard's "candidats de réductibilité". In: Odifreddi, P. (ed.) Logic and Computer Science, pp. 123–203. Academic Press (1990)
 10. Garcia, R., Lumsdaine, A., Sabry, A.: Lazy evaluation and delimited control. Log. Methods Comput. Sci. 6(3) (2010)
 11. Girard, J.-Y.: Une extension de l'interprétation de Gödel à l'analyse, et son application à l'élimination des coupures dans l'analyse et la théorie des types. In: Fenstad, J.E. (ed.) Proceedings of the Second Scandinavian Logic Symposium. Studies in Logic and the Foundations of Mathematics, vol. 63, pp. 63–92. Elsevier (1971)
 12. Guillermo, M., Miquel, A.: Specifying Peirce's law in classical realizability. Math. Struct. Comput. Sci. 26(7), 1269–1303 (2016)
 13. Guillermo, M., Miquey, É.: Classical realizability and arithmetical formulæ. Math. Struct. Comput. Sci. 1–40 (2016)
 14. Herbelin, H.: C'est maintenant qu'on calcule : au cœur de la dualité. Habilitation thesis, University Paris 11, December 2005
 15. Herbelin, H.: A constructive proof of dependent choice, compatible with classical logic. In: Proceedings of the 27th Annual IEEE Symposium on Logic in Computer Science, LICS 2012, Dubrovnik, Croatia, 25–28 June 2012, pp. 365–374. IEEE Computer Society (2012)
 16. Kesner, D.: Reasoning about call-by-need by means of types. In: Jacobs, B., Löding, C. (eds.) FoSSaCS 2016. LNCS, vol. 9634, pp. 424–441. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49630-5_25
 17. Krivine, J.-L.: Lambda-Calculus, Types and Models. Ellis Horwood Series in Computers and Their Applications. Ellis Horwood, Masson (1993)
 18. Krivine, J.-L.: Dependent choice, 'quote' and the clock. Theor. Comput. Sci. 308, 259–276 (2003)
 19. Krivine, J.-L.: Realizability in classical logic. Panoramas et synthèses 27, 197–229 (2009). Interactive models of computation and program behaviour
 20. Krivine, J.-L.: Realizability algebras: a program to well order R. Log. Methods Comput. Sci. 7(3) (2011)
 21. Krivine, J.-L.: Realizability algebras II: new models of ZF + DC. Log. Methods Comput. Sci. 8(1:10), 1–28 (2012)
 22. Krivine, J.-L.: On the structure of classical realizability models of ZF (2014)
 23. Lafont, Y., Reus, B., Streicher, T.: Continuation semantics or expressing implication by negation. Technical report 93-21, Ludwig-Maximilians-Universität, München (1993)
 24. Lang, F.: Explaining the lazy Krivine machine using explicit substitution and addresses. High.-Order Symbolic Comput. 20(3), 257–270 (2007)
 25. Lepigre, R.: A classical realizability model for a semantical value restriction. In: Thiemann, P. (ed.) ESOP 2016. LNCS, vol. 9632, pp. 476–502. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49498-1_19
 26. Maraist, J., Odersky, M., Wadler, P.: The call-by-need lambda calculus. J. Funct. Program. 8(3), 275–317 (1998)
 27. Miquel, A.: Existential witness extraction in classical realizability and via a negative translation. Log. Methods Comput. Sci. 7(2), 188–202 (2011)
 28. Miquey, É.: Classical realizability and side-effects. Ph.D. thesis, Université Paris-Diderot, Universidad de la República (Uruguay) (2017)
 29. Munch-Maccagnoni, G.: Focalisation and classical realisability. In: Grädel, E., Kahle, R. (eds.) CSL 2009. LNCS, vol. 5771, pp. 409–423. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04027-6_30
 30. Okasaki, C., Lee, P., Tarditi, D.: Call-by-need and continuation-passing style. Lisp Symbolic Comput. 7(1), 57–82 (1994)
 31. Parigot, M.: Free deduction: an analysis of "computations" in classical logic. In: Voronkov, A. (ed.) RCLP 1990. LNCS, vol. 592, pp. 361–380. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-55460-2_27
 32. Parigot, M.: Strong normalization of second order symmetric \(\lambda \)-calculus. In: Kapoor, S., Prasad, S. (eds.) FSTTCS 2000. LNCS, vol. 1974, pp. 442–453. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44450-5_36
 33. Pédrot, P.-M., Saurin, A.: Classical by-need. In: Thiemann, P. (ed.) ESOP 2016. LNCS, vol. 9632, pp. 616–643. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49498-1_24
 34. Plotkin, G.D.: Call-by-name, call-by-value and the lambda-calculus. Theor. Comput. Sci. 1(2), 125–159 (1975)
 35. Polonovski, E.: Strong normalization of \(\overline{\lambda }\mu \widetilde{\mu }\)-calculus with explicit substitutions. In: Walukiewicz, I. (ed.) FoSSaCS 2004. LNCS, vol. 2987, pp. 423–437. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24727-2_30
 36. Tait, W.W.: Intensional interpretations of functionals of finite type I. J. Symbolic Log. 32(2), 198–212 (1967)
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.