1 Introduction

1.1 Realizability-Based Normalization

Normalization by realizability is a standard technique to prove the normalization of typed \(\lambda \)-calculi. Originally introduced by Tait [36] to prove the normalization of System T, it was extended by Girard to prove the normalization of System F [11]. This kind of technique, also called normalization by reducibility or normalization by logical relations, works by interpreting each type by a set of typed or untyped terms seen as realizers of the type, then showing that the way these sets of realizers are built preserves properties such as normalization. Over the years, this method has been used and generalized many times; for a more detailed account, we refer the reader to the work of Gallier [9].

Realizability techniques were adapted to the normalization of various calculi for classical logic (see e.g. [3, 32]). A specific framework tailored to the study of realizability for classical logic was designed by Krivine [19] on top of a \(\lambda \)-calculus with control whose reduction is defined in terms of an abstract machine. In this machinery, terms are evaluated in front of stacks, and control (thus classical logic) is made available through the possibility of saving and restoring stacks. During the last twenty years, Krivine’s classical realizability turned out to be fruitful both on the logical side, leading to the construction of new models of set theory which generalize in particular the technique of Cohen’s forcing [20, 21, 22], and on the computational side, providing alternative tools for the analysis of the computational content of classical programs.

Noteworthily, Krivine realizability is one of the approaches advocating the motto that, through the Curry-Howard correspondence, new programming instructions come with new reasoning principles. Our original motivation for the present work is in line with this idea, in the sense that our long-term purpose is to give a realizability interpretation to \(\text {dPA}^\omega \), a call-by-need calculus defined by the second author [15]. In this calculus, lazy evaluation is a fundamental ingredient for obtaining an executable proof term for the axiom of dependent choice.

1.2 Contributions of the Paper

In order to address the normalization of the typed call-by-need \(\lambda \)-calculus, we design a variant of Krivine’s classical realizability where the realizers are closures (a term with a substitution for its free variables). The call-by-need \(\lambda \)-calculus with control that we consider is the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus. This calculus, which was defined by Ariola et al. [2], is syntactically described in an extension with explicit substitutions of the \(\lambda \mu {\tilde{\mu }}\)-calculus [6, 14, 29]. The syntax of the \(\lambda \mu {\tilde{\mu }}\)-calculus itself refines the syntax of the \(\lambda \)-calculus by syntactically distinguishing between terms and evaluation contexts. It also contains commands, which combine terms and evaluation contexts so that they can interact together. Thinking of evaluation contexts as stacks and commands as states, the \(\lambda \mu {\tilde{\mu }}\)-calculus can also be seen as a syntax for abstract machines. From a proofs-as-programs point of view, the \(\lambda \mu {\tilde{\mu }}\)-calculus and its variants can be seen as a term syntax for proofs of Gentzen’s sequent calculus. In particular, the \(\lambda \mu {\tilde{\mu }}\)-calculus contains control operators which give a computational interpretation to classical logic.

We give a proof of normalization first for the simply-typed \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus, then for a type system with first-order and second-order quantification. While we only apply our technique to the normalization of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus, our interpretation incidentally suggests a way to adapt Krivine realizability to other call-by-need settings. This paves the way to the computational interpretation of classical proofs using lazy evaluation or shared memory cells, including the case of the call-by-need second-order arithmetic \(\text {dPA}^\omega \) [15].

2 The \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus

2.1 The Call-by-Need Evaluation Strategy

The call-by-need evaluation strategy of the \(\lambda \)-calculus evaluates the arguments of functions only when needed and, when needed, shares their evaluations across all the places where the argument is required. Call-by-need evaluation lies at the heart of functional programming languages such as Haskell. It has in common with the call-by-value evaluation strategy that all the places where a same argument is used share the same value. Nevertheless, it observationally behaves like the call-by-name evaluation strategy (for the pure \(\lambda \)-calculus), in the sense that a given computation eventually evaluates to a value if and only if it evaluates to the same value (up to inner reduction) along the call-by-name evaluation. In particular, in a setting with non-terminating computations, it is not observationally equivalent to the call-by-value evaluation: if the evaluation of a useless argument loops under call-by-value, the whole computation loops, which is not the case under call-by-name and call-by-need.
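The contrast can be observed directly in Haskell, whose default strategy is call-by-need. The following minimal sketch (names and numbers are ours, chosen for illustration) terminates where a call-by-value language would loop, and evaluates the shared argument only once where call-by-name would evaluate it twice:

```haskell
loop :: Int
loop = loop                      -- a diverging computation

constant :: Int -> Int
constant _ = 42                  -- never demands its argument

double :: Int -> Int
double x = x + x                 -- demands its argument twice

main :: IO ()
main = do
  -- Call-by-need: `loop` is never demanded, so this prints 42.
  -- Under call-by-value, evaluating the argument first would diverge.
  print (constant loop)
  -- The let-bound sum is computed once and its value shared between
  -- both occurrences of x; call-by-name would recompute it.
  let expensive = sum [1 .. 1000000] :: Int
  print (double expensive)
```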

These three evaluation strategies can be turned into equational theories. For call-by-name and call-by-value, this was done by Plotkin through continuation-passing-style (CPS) semantics characterizing these theories [34]. For the call-by-need evaluation strategy, a specific equational theory reflecting the intensional behavior of the strategy into a semantics was proposed independently by Ariola and Felleisen [1], and by Maraist et al. [26]. A continuation-passing-style semantics was proposed in the 90s by Okasaki et al. [30]. However, this semantics does not ensure normalization of simply-typed call-by-need evaluation, as shown in [2], thus failing to ensure a property which holds in the simply-typed call-by-name and call-by-value cases.

A continuation-passing-style semantics de facto gives a semantics to the extension of the \(\lambda \)-calculus with control operators. In particular, even though call-by-name and call-by-need are observationally equivalent on the pure \(\lambda \)-calculus, their different intensional behaviors induce different CPS semantics, leading to different observational behaviors when control operators are considered. On the other hand, the semantics of calculi with control can also be reconstructed from an analysis of the duality between programs and their evaluation contexts, and of the duality between the let construct (which binds programs) and a control operator such as Parigot’s \(\mu \) (which binds evaluation contexts). Such an analysis can be done in the context of the \(\lambda \mu {\tilde{\mu }}\)-calculus [6, 14].

In the call-by-name and call-by-value cases, the approach based on the \(\lambda \mu {\tilde{\mu }}\)-calculus leads to continuation-passing-style semantics similar to the ones given by Plotkin or, in the call-by-name case, also to the one of Lafont et al. [23]. As for call-by-need, Ariola et al. [2] define the \(\overline{\lambda }_{lv}\)-calculus, a call-by-need version of the \(\lambda \mu {\tilde{\mu }}\)-calculus. A continuation-passing-style semantics is then defined via a calculus called the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus [2]. This semantics, which differs from Okasaki, Lee and Tarditi’s [30], is the object of study in this paper.

2.2 Explicit Environments

While the results presented in this paper could be directly expressed using the \(\overline{\lambda }_{lv}\)-calculus, the realizability interpretation naturally arises from the decomposition of this calculus into a different calculus with an explicit environment, the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus [2]. Indeed, as we shall see in the sequel, this decomposition highlights the different syntactic categories that are deeply involved in the type system and in the definition of the realizability interpretation.

The \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus is a reformulation of the \(\overline{\lambda }_{lv}\)-calculus with explicit environments, called stores and denoted by \(\tau \). Stores consist of a list of bindings of the form \([x:=t]\), where x is a term variable and t a term, and of bindings of the form \([\alpha :=e]\), where \(\alpha \) is a context variable and e a context. For instance, in the closure \(c\tau [x:=t]\tau '\), the variable x is bound to t in c and \(\tau '\). Besides, the term t might be an unevaluated term (i.e. lazily stored), so that if x is eagerly demanded at some point during the execution of this closure, t will be reduced in order to obtain a value. If t indeed produces a value V, the store will be updated with the binding \([x:=V]\). However, a binding of this form (with a value) is fixed for the rest of the execution. As such, our so-called stores somewhat behave like lazy explicit substitutions or mutable environments.
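This “evaluate once, then fixed” discipline is exactly the memoization familiar from lazy languages. As a minimal sketch (our illustration, not the paper’s machinery), a binding can be modeled as a mutable cell holding either a frozen computation or its value:

```haskell
import Data.IORef

-- A cell is either a frozen (unevaluated) computation or a fixed value.
data Cell = Frozen (IO Int) | Value Int

-- Demanding the cell evaluates it at most once, then updates the binding.
force :: IORef Cell -> IO Int
force r = do
  c <- readIORef r
  case c of
    Value n  -> return n                   -- already a value: fixed forever
    Frozen m -> do n <- m                  -- demanded for the first time
                   writeIORef r (Value n)  -- update the store with [x := V]
                   return n

main :: IO ()
main = do
  r <- newIORef (Frozen (putStrLn "evaluating..." >> return (6 * 7)))
  force r >>= print   -- prints "evaluating..." then 42
  force r >>= print   -- prints 42 only: the value is shared
```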

To draw the comparison between our structures and the usual notions of stores and environments, two things should be observed. First, the usual notion of store refers to a fully mutable list structure, in the sense that the cells can be updated at any time and thus values might be replaced. Second, the usual notion of environment designates a structure in which variables are bound to closures made of a term and an environment. In particular, terms and environments are duplicated, i.e. sharing is not allowed. Such a structure resembles a tree whose nodes are decorated by terms, as opposed to a machinery allowing sharing (like ours), whose underlying structure is broadly a directed acyclic graph. See for instance [24] for a Krivine abstract machine with sharing.

2.3 Syntax and Reduction Rules

The lazy evaluation of terms allows for the following reduction rule, which reduces a command \(\langle \mu \alpha .c\,||\,{\tilde{\mu }}x.c'\rangle \) to the command \(c'\) together with the binding \([x:=\mu \alpha .c]\):

$$\langle \mu \alpha .c\,||\,{\tilde{\mu }}x.c'\rangle \tau \;\rightarrow \;c'\tau [x:=\mu \alpha .c]$$

In this case, the term \(\mu \alpha .c\) is left unevaluated (“frozen”) in the store, until possibly reaching a command in which the variable x is needed. When the evaluation reaches a command of the form \(\langle x\,||\,F\rangle \tau [x:=\mu \alpha .c]\tau '\), the binding is opened and the term is evaluated in front of the context \({\tilde{\mu }}[x].\langle x\,||\,F\rangle \tau '\):

$$\langle x\,||\,F\rangle \tau [x:=\mu \alpha .c]\tau '\;\rightarrow \;\langle \mu \alpha .c\,||\,{\tilde{\mu }}[x].\langle x\,||\,F\rangle \tau '\rangle \tau $$

The reader can think of the previous rule as the “defrosting” operation on the frozen term \(\mu \alpha .c\): this term is evaluated in the prefix of the store \(\tau \) which predates it, in front of a context where the \({\tilde{\mu }}[x]\) binder is waiting for a value. This context keeps track of the part of the store \(\tau '\) that was originally located after the binding \([x:=...]\). This way, if a value V is indeed furnished for the binder \({\tilde{\mu }}[x]\), the original command is evaluated in the updated full store:

$$\langle V\,||\,{\tilde{\mu }}[x].\langle x\,||\,F\rangle \tau '\rangle \tau \;\rightarrow \;\langle V\,||\,F\rangle \tau [x:=V]\tau '$$

The brackets in \({\tilde{\mu }}[x].c\) are used to express the fact that the variable x is forced at top-level (unlike contexts of the shape \({\tilde{\mu }}x.c\) in the \(\overline{\lambda }_{lv}\)-calculus). The reduction system resembles that of an abstract machine. In particular, it allows us to keep the standard redex at the top of a command and avoids searching through the meta-context for work to be done.

Note that our approach differs slightly from [2], since we split values into two categories: strong values (v) and weak values (V). The strong values correspond to values strictly speaking. The weak values additionally include variables, which force the evaluation of the terms they refer to into shared strong values. Their evaluation may require capturing a continuation. The syntax of the language, which includes constants \({\mathbf {k}}\) and co-constants \(\varvec{\kappa }\), is given in Fig. 1. As for the reduction \(\rightarrow \), we define it as the compatible reflexive transitive closure of the rules given in Fig. 1.

Fig. 1. Syntax and reduction rules of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus
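Since the body of Fig. 1 is not reproduced in this version, the following Haskell sketch renders the syntactic categories as data types, together with the three reduction steps discussed above (lazy storage, defrosting, update). It is our reconstruction from the surrounding text and from [2]; constructor names are ours, and we assume the convention that each variable is bound at most once in a store.

```haskell
type Var = String

data SValue  = Lam Var Term | Cst String       -- strong values v
data WValue  = Strong SValue | V Var           -- weak values V
data Term    = Val WValue | Mu Var Command     -- terms t, with mu alpha.c
data FCtx    = CoCst String | Arg Term ECtx    -- forcing contexts F
data ECtx    = F FCtx | CoVar Var
             | MuBox Var FCtx Store            -- mu~[x].<x||F> tau'
data Ctx     = E ECtx | MuTilde Var Command    -- contexts e
data Command = Cmd Term Ctx                    -- commands <t||e>
data Binding = BT Var Term | BE Var ECtx       -- [x:=t] and [alpha:=E]
type Store   = [Binding]
type Closure = (Command, Store)

-- One-step reduction, restricted to the three rules discussed above.
step :: Closure -> Maybe Closure
-- lazy storage:  <t || mu~x.c'> tau  ->  c' tau[x:=t]
step (Cmd t (MuTilde x c'), tau) = Just (c', tau ++ [BT x t])
-- defrosting:    <x || F> tau[x:=t]tau'  ->  <t || mu~[x].<x||F> tau'> tau
step (Cmd (Val (V x)) (E (F f)), tau) =
  case break (boundTo x) tau of
    (tau0, BT _ t : tau1) -> Just (Cmd t (E (MuBox x f tau1)), tau0)
    _                     -> Nothing
  where boundTo y (BT z _) = y == z
        boundTo _ _        = False
-- update:        <V || mu~[x].<x||F> tau'> tau  ->  <V || F> tau[x:=V]tau'
step (Cmd (Val v) (E (MuBox x f tau')), tau) =
  Just (Cmd (Val v) (E (F f)), tau ++ [BT x (Val v)] ++ tau')
step _ = Nothing

main :: IO ()
main = print (length trace)   -- the demo trace below has length 4
  where
    kappa = E (F (CoCst "kappa"))
    idT   = Val (Strong (Lam "y" (Cmd (Val (V "y")) kappa)))
    c0    = (Cmd idT (MuTilde "x" (Cmd (Val (V "x")) kappa)), [])
    trace = unfold c0
    unfold cl = cl : maybe [] unfold (step cl)
```

Note how, under the at-most-once binding convention, `step` always finds its redex at the top of the command, matching the abstract-machine reading of the reduction system.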

The different syntactic categories can be understood as the different levels of alternation in a context-free abstract machine (see [2]): the priority is first given to contexts at level e (lazy storage of terms), then to terms at level t (evaluation of \(\mu \alpha \) into values), then back to contexts at level E and so on until level v. These different categories are directly reflected in the abstract machine defined in [2], and will thus be involved in the definition of our realizability interpretation. We chose to highlight this by distinguishing different types of sequents already in the typing rules that we shall now present.

2.4 A Type System for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus

We have nine kinds of (one-sided) sequents, one for typing each of the nine syntactic categories. We write them with an annotation on the \(\vdash \) sign, using one of the letters v, V, t, F, E, e, l, c, \(\tau \). Sequents typing values and terms assert a type, with the type written on the right; sequents typing contexts expect a type A, with the type written \(A^{\bot \!\!\!\bot }\); sequents typing commands and closures are black boxes neither asserting nor expecting a type; sequents typing substitutions instantiate a typing context. In other words, we have the following nine kinds of sequents:

$$\begin{aligned} \begin{array}{lll} \varGamma \vdash _v v:A \qquad &{}\varGamma \vdash _V V:A \qquad &{}\varGamma \vdash _t t:A\\ \varGamma \vdash _F F:A^{\bot \!\!\!\bot } \qquad &{}\varGamma \vdash _E E:A^{\bot \!\!\!\bot } \qquad &{}\varGamma \vdash _e e:A^{\bot \!\!\!\bot }\\ \varGamma \vdash _c c \qquad &{}\varGamma \vdash _l c\tau \qquad &{}\varGamma \vdash _\tau \tau :\varGamma ' \end{array} \end{aligned}$$
Fig. 2. Typing rules of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus

where types and typing contexts are defined by:

$$A,B \;{:}{:}=\; X \mid A\rightarrow B \qquad \qquad \varGamma \;{:}{:}=\; \varepsilon \mid \varGamma ,x:A \mid \varGamma ,\alpha :A^{\bot \!\!\!\bot }$$

The typing rules are given in Fig. 2, where we assume that a variable x (resp. co-variable \(\alpha \)) occurs at most once in a context \(\varGamma \) (we implicitly assume the possibility of renaming variables by \(\alpha \)-conversion). We also adopt the convention that constants \({\mathbf {k}}\) and co-constants \(\varvec{\kappa }\) come with a signature \(\mathcal {S}\) which assigns them a type. This type system enjoys the property of subject reduction.

Theorem 1

(Subject reduction). If \(\varGamma \vdash _l c\tau \) and \(c\tau \rightarrow c'\tau '\) then \(\varGamma \vdash _l c'\tau '\).

Proof

By induction on typing derivations.    \(\square \)

3 Normalization of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus

3.1 Normalization by Realizability

The proof of normalization for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus that we present in this section is inspired by techniques of Krivine’s classical realizability [19], whose notations we borrow; it is in fact also very close to a proof by reducibility. In a nutshell, to each type A is associated a set \(|A|_t\) of terms whose execution is guided by the structure of A. These terms are the ones usually called realizers in Krivine’s classical realizability. Their definition is in fact indirect, and is done by orthogonality to a set of “correct” computations, called a pole. The choice of this set is central when studying the models induced by classical realizability for second-order logic, but in the present case we only pay attention to the particular pole of terminating computations. This is where one of the differences with usual proofs by reducibility lies: there, everything is done with respect to SN, whereas our definitions are parametric in the pole (which is instantiated to SN in the end). The adequacy lemma, which is the central piece, consists in proving that typed terms belong to the corresponding sets of realizers, and are thus normalizing.

In more detail, our proof can be sketched as follows. First, we generalize the usual notion of closed term to the notion of closed term-in-store. Intuitively, this is due to the fact that we are no longer interested in closed terms and in substitutions to close open terms, but rather in terms that are closed when considered in the current store. This is based on the simple observation that a store is nothing more than a shared substitution whose content might evolve along the execution. Second, we define the notion of pole \({\bot \!\!\!\bot }\): a set of closures closed under anti-reduction and store extension. In particular, the set of normalizing closures is a valid pole. This allows us to relate terms and contexts thanks to a notion of orthogonality with respect to the pole. We then define for each formula A and typing level o (among e, t, E, V, F, v) a set \(|A|_o\) (resp. \(\Vert A\Vert _o\)) of terms (resp. contexts) in the corresponding syntactic category. These sets correspond to reducibility candidates, or to what is usually called truth values and falsity values in Krivine realizability. Finally, the core of the proof consists in the adequacy lemma, which shows that any closed term of type A at level o is in the corresponding set \(|A|_o\). This guarantees that any typed closure belongs to any pole, and in particular to the pole of normalizing closures. Technically, the proof of adequacy evaluates in each case a state of an abstract machine (in our case a closure), so that the proof also proceeds by evaluation. A more detailed explanation of this observation, as well as a more introductory presentation of normalization proofs by classical realizability, is given in an article by Dagand and Scherer [7].

3.2 Realizability Interpretation for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus

We begin by defining some key notions for stores that we shall need further in the proof.

Definition 2

(Closed store). We extend the notion of free variable to stores:

$$FV(\varepsilon )=\emptyset \qquad \quad FV(\tau [x:=t])=FV(\tau )\cup (FV(t)\setminus \texttt {dom}(\tau )) \qquad \quad FV(\tau [\alpha :=E])=FV(\tau )\cup (FV(E)\setminus \texttt {dom}(\tau ))$$

so that we can define a closed store to be a store \(\tau \) such that \(FV(\tau ) = \emptyset \).

Definition 3

(Compatible stores). We say that two stores \(\tau \) and \(\tau '\) are independent, and write \({\tau \,\#\,\tau '}\), when \({\texttt {dom}(\tau )\cap \texttt {dom}(\tau ')=\emptyset }\). We say that they are compatible, and write \({\tau \diamond \tau '}\), whenever for all variables x (resp. co-variables \(\alpha \)) present in both stores, i.e. \({x\in \texttt {dom}(\tau )\cap \texttt {dom}(\tau ')}\), the corresponding terms (resp. contexts) in \(\tau \) and \(\tau '\) coincide. Finally, we say that \(\tau '\) is an extension of \(\tau \), and write \(\tau \vartriangleleft \tau '\), whenever \(\texttt {dom}(\tau )\subseteq \texttt {dom}(\tau ')\) and \({\tau \diamond \tau '}\).

We denote by \(\overline{\tau \tau '}\) the compatible union \(\texttt {join}(\tau ,\tau ')\) of closed stores \(\tau \) and \(\tau '\), defined by:

$$\begin{aligned} \begin{array}{rcll} \texttt {join}(\tau _0[x:=t]\tau _1,\tau '_0[x:=t]\tau '_1) &{}\triangleq &{} \tau _0\tau '_0[x:=t]\,\texttt {join}(\tau _1,\tau '_1) &{}\quad (\text {if } \tau _0\,\#\,\tau '_0)\\ \texttt {join}(\tau ,\tau ') &{}\triangleq &{} \tau \tau ' &{}\quad (\text {if } \tau \,\#\,\tau ') \end{array} \end{aligned}$$
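The following Haskell sketch makes these notions concrete over an abstract type of bound items. It simplifies the interleaving that join performs on compatible stores (so it is faithful to the properties stated in Lemma 4 below, not a verbatim transcription), and all names are ours:

```haskell
type Store a = [(String, a)]   -- a list of bindings [x := item]

dom :: Store a -> [String]
dom = map fst

-- tau # tau' : independent (disjoint domains)
independent :: Store a -> Store a -> Bool
independent t t' = not (any (`elem` dom t') (dom t))

-- tau <> tau' : compatible (bindings coincide on shared names)
compatible :: Eq a => Store a -> Store a -> Bool
compatible t t' = and [ lookup x t == lookup x t'
                      | x <- dom t, x `elem` dom t' ]

-- tau <| tau' : tau' extends tau
extends :: Eq a => Store a -> Store a -> Bool
extends t t' = all (`elem` dom t') (dom t) && compatible t t'

-- compatible union: keep tau, append the bindings found only in tau'
join :: Eq a => Store a -> Store a -> Store a
join t t' = t ++ [ (x, a) | (x, a) <- t', x `notElem` dom t ]

main :: IO ()
main = do
  let t  = [("x", 1), ("y", 2)] :: Store Int
      t' = [("y", 2), ("z", 3)]
  -- prints (True,[("x",1),("y",2),("z",3)])
  print (compatible t t', join t t')
```

With this simplified join, both `extends t (join t t')` and `extends t' (join t t')` hold for compatible stores, which is the content of the lemma that follows.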

The following lemma states the main property we will use about unions of compatible stores.

Lemma 4

If \(\tau \) and \(\tau '\) are two compatible stores, then \(\tau \vartriangleleft \overline{\tau \tau '}\) and \(\tau '\vartriangleleft \overline{\tau \tau '}\). Besides, if \(\tau \) is of the form \(\tau _0[x:=t]\tau _1\), then \(\overline{\tau \tau '}\) is of the form \({\tau _2}[x:=t]{\tau _3}\) with \(\tau _0 \vartriangleleft {\tau _2}\) and \(\tau _1\vartriangleleft {\tau _3}\).

Proof

This follows easily from the previous definition.    \(\square \)

As we explained in the introduction of this section, we will not consider closed terms in the usual sense. Indeed, while it is frequent in proofs of normalization (e.g. by realizability or reducibility) of a calculus to consider only closed terms and to perform substitutions to maintain the closure of terms, this only makes sense if it corresponds to the computational behavior of the calculus. For instance, to prove the normalization of \(\lambda x.t\) in the typed call-by-name \(\lambda \mu {\tilde{\mu }}\)-calculus, one would consider a substitution \(\rho \) that is suitable with respect to the typing context \(\varGamma \), then a context \(u\cdot e\) of type \(A\rightarrow B\), and evaluate:

$$\langle (\lambda x.t)_\rho \,||\,u\cdot e\rangle \;\rightarrow \;\langle t_\rho [u/x]\,||\,e\rangle $$

Then we would observe that \(t_\rho [u/x] = t_{\rho [x:=u]}\) and deduce that \(\rho [x:=u]\) is suitable for \(\varGamma ,x:A\), which would allow us to conclude by induction.

However, in the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus we do not perform a global substitution when reducing a command, but rather add a new binding \([x:=u]\) in the store:

$$\langle \lambda x.t\,||\,u\cdot E\rangle \tau \;\rightarrow \;\langle t\,||\,E\rangle \tau [x:=u]$$

Therefore, the natural notion of closed term involves closure under a store, which might evolve during the rest of the execution (in contrast with a substitution).

Definition 5

(Term-in-store). We call closed term-in-store (resp. closed context-in-store, closed closures) the combination of a term t (resp. context e, command c) with a closed store \(\tau \) such that \(FV(t)\subseteq \texttt {dom}(\tau )\). We use the notation \((t|\tau )\) (resp. \((e|\tau ), (c|\tau )\)) to denote such a pair.

We should note that in particular, if t is a closed term, then \((t|\tau )\) is a term-in-store for any closed store \(\tau \). The notion of closed term-in-store is thus a generalization of the notion of closed term, and we will (ab)use this terminology in the sequel. We denote the set of closed closures by \(\mathcal {C}_0\), and will identify \((c|\tau )\) and the closure \(c\tau \) when c is closed in \(\tau \). Observe that if \(c\tau \) is a closure in \(\mathcal {C}_0\) and \(\tau '\) is a store extending \(\tau \), then \(c\tau '\) is also in \(\mathcal {C}_0\). We are now equipped to define the notion of pole, and to verify that the set of normalizing closures is indeed a valid pole.

Definition 6

(Pole). A subset \({\bot \!\!\!\bot }\subseteq \mathcal {C}_0\) is said to be saturated or closed by anti-reduction whenever for all \((c|\tau ),(c'|\tau ')\in \mathcal {C}_0\), if \(c'\tau ' \in {\bot \!\!\!\bot }\) and \(c\tau \rightarrow c'\tau '\), then \(c\tau \in {\bot \!\!\!\bot }\). It is said to be closed by store extension whenever if \(c\tau \in {\bot \!\!\!\bot }\), then for any store \(\tau '\) extending \(\tau \) (i.e. \(\tau \vartriangleleft \tau '\)), \(c\tau '\in {\bot \!\!\!\bot }\). A pole is defined as any subset of \(\mathcal {C}_0\) that is closed by anti-reduction and by store extension.

The following proposition is the one supporting the claim that our realizability proof is almost a reducibility proof whose definitions have been generalized with respect to a pole instead of the fixed set SN.

Proposition 7

The set \({\bot \!\!\!\bot }_{\Downarrow }=\{c\tau \in \mathcal {C}_0:~c\tau \text { normalizes }\}\) is a pole.

Proof

As we only consider closures in \(\mathcal {C}_0\), both conditions (closure by anti-reduction and by store extension) are clearly satisfied:

  • if \(c\tau \rightarrow c'\tau '\) and \(c'\tau '\) normalizes, then \(c\tau \) normalizes too;

  • if c is closed in \(\tau \) and \(c\tau \) normalizes, and if \(\tau \vartriangleleft \tau '\), then \(c\tau '\) reduces as \(c\tau \) does (since c is closed under \(\tau \), it can only use terms of \(\tau '\) that already were in \(\tau \)) and thus normalizes.    \(\square \)

Definition 8

(Orthogonality). Given a pole \({\bot \!\!\!\bot }\), we say that a term-in-store \((t|\tau )\) is orthogonal to a context-in-store \((e|\tau ')\), and write \((t|\tau ){{\bot \!\!\!\bot }}(e|\tau ')\), if \(\tau \) and \(\tau '\) are compatible and \(\langle t\,||\,e\rangle \overline{\tau \tau '}\in {\bot \!\!\!\bot }\).

Remark 9

The reader familiar with Krivine’s forcing machine [20] might recognize his definition of orthogonality between terms of the shape (t, p) and stacks of the shape \((\pi ,q)\), where p and q are forcing conditions:

$$ (t,p) {\bot \!\!\!\bot }(\pi ,q) \Leftrightarrow (t\star \pi ,p\wedge q) \in {\bot \!\!\!\bot }$$

We can now relate closed terms and contexts by orthogonality with respect to a given pole. This allows us to define for any formula A the sets \(|A|_{v},|A|_{V},|A|_{t}\) (resp. \(\Vert A\Vert _{F}\),\(\Vert A\Vert _{E}\), \(\Vert A\Vert _{e}\)) of realizers (or reducibility candidates) at level v, V, t (resp. F, E, e) for the formula A. It is to be observed that realizers are here closed terms-in-store.

Definition 10

(Realizers). Given a fixed pole \({\bot \!\!\!\bot }\), we set:

$$\begin{aligned} \begin{array}{ccl} |X|_{v} &{} = &{} \{({\mathbf {k}}|\tau ) : \quad \vdash {\mathbf {k}}:{X}\}\\ |A\rightarrow B|_{v} &{} = &{} \{(\lambda x .t|\tau ) : \forall u \tau ', {\tau \diamond \tau '}\wedge (u|\tau ')\in |A|_{t} \Rightarrow (t|\overline{\tau \tau '}[x:=u])\in |B|_{t}\}\\ \Vert A\Vert _{F} &{} = &{} \{(F|\tau ) : \forall v \tau ', {\tau \diamond \tau '}\wedge (v|\tau ')\in |A|_{v} \Rightarrow (v|\tau '){{\bot \!\!\!\bot }}(F|\tau )\}\\ |A|_{V} &{} = &{} \{(V|\tau ) : \forall F \tau ', {\tau \diamond \tau '}\wedge (F|\tau ')\in \Vert A\Vert _{F} \Rightarrow (V|\tau ) {{\bot \!\!\!\bot }}(F|\tau ')\}\\ \Vert A\Vert _{E} &{} = &{} \{(E|\tau ) : \forall V \tau ', {\tau \diamond \tau '}\wedge (V|\tau ')\in |A|_{V} \Rightarrow (V|\tau '){{\bot \!\!\!\bot }}(E|\tau )\}\\ |A|_{t} &{} = &{} \{(t|\tau ) : \forall E \tau ', {\tau \diamond \tau '}\wedge (E|\tau ')\in \Vert A\Vert _{E} \Rightarrow (t|\tau ) {{\bot \!\!\!\bot }}(E|\tau ')\}\\ \Vert A\Vert _{e} &{} = &{} \{(e|\tau ) : \forall t \tau ', {\tau \diamond \tau '}\wedge (t|\tau ')\in |A|_{t} \Rightarrow (t|\tau '){{\bot \!\!\!\bot }}(e|\tau )\}\\ \end{array} \end{aligned}$$

Remark 11

We draw the reader’s attention to the fact that we should actually write \(|A|_{v}^{\bot \!\!\!\bot },\Vert A\Vert _{F}^{\bot \!\!\!\bot }\), etc. and \(\tau \Vdash _{\!\!{\bot \!\!\!\bot }}\!\varGamma \), because the corresponding definitions are parameterized by a pole \({\bot \!\!\!\bot }\). As is common in Krivine’s classical realizability, we ease the notations by removing the annotation \({\bot \!\!\!\bot }\) whenever there is no ambiguity on the pole. Besides, it is worth noting that even though co-constants do not occur directly in the definitions, they may still appear in the realizers by means of the pole.

While the definition of the different sets might seem complex at first sight, we claim that it is quite natural in view of the methodology of Danvy’s semantic artifacts presented in [2]. Indeed, having an abstract machine in context-free form (the last step in this methodology before deriving the CPS) allows us to have both the term and the context (in a command) behave independently of each other. Intuitively, a realizer at a given level is precisely a term which is going to behave well (be in the pole) in front of any opponent chosen in the previous level (in the hierarchy v, F, V, E, t, e). For instance, in a call-by-value setting, there are only three levels of definition (values, contexts and terms) in the interpretation, because the abstract machine in context-free form also has three. Here the ground level corresponds to strong values, and the other levels are somewhat defined as terms (or contexts) which are well-behaved in front of any opponent in the previous one. The definition of the different sets \(|A|_{v},\Vert A\Vert _{F},|A|_{V}\), etc. directly stems from this intuition.

In comparison with the usual definition of Krivine’s classical realizability, we only considered orthogonal sets restricted to some syntactical subcategories. However, the definition still satisfies the usual monotonicity properties of bi-orthogonal sets:

Proposition 12

For any type A and any given pole \({\bot \!\!\!\bot }\), we have:

$$\begin{aligned} { 1.}\, |A|_{v}\subseteq |A|_{V} \subseteq |A|_{t};\qquad \qquad \qquad { 2.}\, \Vert A\Vert _{F}\subseteq \Vert A\Vert _{E} \subseteq \Vert A\Vert _{e}. \end{aligned}$$

Proof

All the inclusions are proved in a similar way. We only give the proof for \(|A|_{v}\subseteq |A|_{V}\). Let \({\bot \!\!\!\bot }\) be a pole and \((v|\tau )\) be in \(|A|_{v}\). We want to show that \((v|\tau )\) is in \(|A|_{V}\), that is to say that v is in the syntactic category V (which is true), and that for any \((F|\tau ')\in \Vert A\Vert _{F}\) such that \({\tau \diamond \tau '}\), \((v|\tau ){{\bot \!\!\!\bot }}(F|\tau ')\). The latter holds by definition of \((F|\tau ')\in \Vert A\Vert _{F}\), since \((v|\tau )\in |A|_{v}\).    \(\square \)

We now extend the notion of realizers to stores, by stating that a store \(\tau \) realizes a context \(\varGamma \) if it binds all the variables x and \(\alpha \) in \(\varGamma \) to a realizer of the corresponding formula.

Definition 13

Given a closed store \(\tau \) and a fixed pole \({\bot \!\!\!\bot }\), we say that \(\tau \) realizes \(\varGamma \), which we write \(\tau \Vdash \varGamma \), if:

  1. for any \((x:A) \in \varGamma \), \(\tau \equiv \tau _0[x:=t]\tau _1\) and \((t|\tau _0) \in |A|_{t}\);

  2. for any \((\alpha :A^{\bot \!\!\!\bot })\in \varGamma \), \(\tau \equiv \tau _0[\alpha :=E]\tau _1\) and \((E|\tau _0) \in \Vert A\Vert _{E}\).

Just as weakening rules (for the typing context) are admissible at each level of the type system, the definition of realizers is compatible with a weakening of the store.

Lemma 14

(Store weakening). Let \(\tau \) and \(\tau '\) be two stores such that \(\tau \vartriangleleft \tau '\), let \(\varGamma \) be a typing context and let \({\bot \!\!\!\bot }\) be a pole. The following statements hold:

  1. \(\overline{\tau \tau '} = \tau '\).

  2. If  \((t|\tau ) \in |A|_{t}\)  for some closed term \((t|\tau )\) and type A, then  \((t|\tau ')\in |A|_{t}\). The same holds at each level e, E, V, F, v of the typing rules.

  3. If  \(\tau \Vdash \varGamma \)  then  \(\tau ' \Vdash \varGamma \).

Proof

  1. Straightforward from the definition of \(\overline{\tau \tau '}\).

  2. This essentially amounts to the following observations. First, if \((t|\tau )\) is a closed term, then so is \((t|\overline{\tau \tau '})\) for any closed store \(\tau '\) compatible with \(\tau \). Second, if we consider for instance a closed context \((E|\tau '')\in \Vert A\Vert _{E}\), then \({\overline{\tau \tau '}\diamond \tau ''}\) implies \({\tau \diamond \tau ''}\), thus \((t|\tau ){{\bot \!\!\!\bot }}(E|\tau '')\), and finally \((t|\overline{\tau \tau '}){{\bot \!\!\!\bot }}(E|\tau '')\) by closure of the pole under store extension. We conclude that \((t|\tau '){{\bot \!\!\!\bot }}(E|\tau '')\) using the first statement.

  3. By definition, for all \((x:A)\in \varGamma \), \(\tau \) is of the form \(\tau _0[x:=t]\tau _1\) with \((t|\tau _0)\in |A|_{t}\). As \(\tau \) and \(\tau '\) are compatible, we know by Lemma 4 that \(\overline{\tau \tau '}\) is of the form \(\tau '_0[x:=t]\tau '_1\) with \(\tau '_0\) an extension of \(\tau _0\), and using the first point we get that \((t|\tau '_0)\in |A|_{t}\).    \(\square \)

Definition 15

(Adequacy). Given a fixed pole \({\bot \!\!\!\bot }\), we say that:

  • A typing judgment \(\varGamma \vdash _t t:A\) is adequate (w.r.t. the pole \({\bot \!\!\!\bot }\)) if for all stores \(\tau \Vdash \varGamma \), we have \((t|\tau ) \in |A|_{t}\).

  • More generally, we say that an inference rule

    $$\dfrac{J_1\quad \cdots \quad J_n}{J_0}$$

    is adequate (w.r.t. the pole \({\bot \!\!\!\bot }\)) if the adequacy of all the typing judgments \(J_1,\ldots ,J_n\) implies the adequacy of the typing judgment \(J_0\).

Remark 16

From the latter definition, it is clear that a typing judgment that is derivable from a set of adequate inference rules is adequate too.

We will now show the main result of this section, namely that the typing rules of Fig. 2 for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus without co-constants are adequate with any pole. Observe that this result requires considering the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus without co-constants. Indeed, we consider co-constants as coming with their typing rules, potentially giving them any type (whereas constants can only be given an atomic type). Thus, there is a priori no reason why their types should be adequate with any pole.

However, as observed in the previous remark, given a fixed pole it suffices to check whether the typing rules for a given co-constant are adequate with this pole. If they are, any judgment that is derivable using these rules will be adequate.

Theorem 17

(Adequacy). If \(\varGamma \) is a typing context, \({\bot \!\!\!\bot }\) is a pole and \(\tau \) is a store such that \({\tau \Vdash \varGamma }\), then the following holds in the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus without co-constants:

  1. If v is a strong value such that \(\varGamma \vdash _v v:A\), then \((v|\tau ) \in |A|_{v}\).

  2. If F is a forcing context such that \(\varGamma \vdash _F F:A^{\bot \!\!\!\bot }\), then \((F|\tau ) \in \Vert A\Vert _{F}\).

  3. If V is a weak value such that \(\varGamma \vdash _V V:A\), then \((V|\tau ) \in |A|_{V}\).

  4. If E is a catchable context such that \(\varGamma \vdash _E E:A^{\bot \!\!\!\bot }\), then \((E|\tau ) \in \Vert A\Vert _{E}\).

  5. If t is a term such that \(\varGamma \vdash _t t:A\), then \((t|\tau ) \in |A|_{t}\).

  6. If e is a context such that \(\varGamma \vdash _e e:A^{\bot \!\!\!\bot }\), then \((e|\tau ) \in \Vert A\Vert _{e}\).

  7. If c is a command such that \(\varGamma \vdash _c c\), then \(c\tau \in {\bot \!\!\!\bot }\).

  8. If \(\tau '\) is a store such that \(\varGamma \vdash _\tau \tau ':\varGamma '\), then \(\tau \tau ' \Vdash \varGamma ,\varGamma '\).

Proof

The different statements are proved by mutual induction over typing derivations. We only give the most important cases here.

Rule (\(\rightarrow _{l}\)). Assume that

$$\dfrac{\varGamma \vdash _t u:A\qquad \varGamma \vdash _E E:B^{\bot \!\!\!\bot }}{\varGamma \vdash _F u\cdot E:(A\rightarrow B)^{\bot \!\!\!\bot }}\;(\rightarrow _{l})$$

and let \({\bot \!\!\!\bot }\) be a pole and \(\tau \) a store such that \(\tau \Vdash \varGamma \). Let \((\lambda x.t|\tau ')\) be a closed term in the set \(|A\rightarrow B|_{v}\) such that \({\tau \diamond \tau '}\); then we have:

$$\langle \lambda x.t\,||\,u\cdot E\rangle \overline{\tau '\tau }\;\rightarrow \;\langle t\,||\,E\rangle \overline{\tau '\tau }[x:=u]$$

By definition of \(|A\rightarrow B|_{v}\), this closure is in the pole, and we can conclude by anti-reduction.

Rule (x). Assume that

$$\dfrac{(x:A)\in \varGamma }{\varGamma \vdash _V x:A}\;(x)$$

and let \({\bot \!\!\!\bot }\) be a pole and \(\tau \) a store such that \(\tau \Vdash \varGamma \). As \((x:A)\in \varGamma \), we know that \(\tau \) is of the form \(\tau _0[x:=t]\tau _1\) with \((t|\tau _0)\in |A|_{t}\). Let \((F|\tau ')\) be in \(\Vert A\Vert _{F}\), with \({\tau \diamond \tau '}\). By Lemma 4, we know that \(\overline{\tau \tau '}\) is of the form \(\overline{\tau _0}[x:=t]\overline{\tau _1}\). Hence we have:

$$\langle x\,||\,F\rangle \overline{\tau _0}[x:=t]\overline{\tau _1}\;\rightarrow \;\langle t\,||\,{\tilde{\mu }}[x].\langle x\,||\,F\rangle \overline{\tau _1}\rangle \overline{\tau _0}$$

and it suffices by anti-reduction to show that the last closure is in the pole \({\bot \!\!\!\bot }\). By induction hypothesis, we know that \((t|\tau _0)\in |A|_{t}\) thus we only need to show that it is in front of a catchable context in \(\Vert A\Vert _{E}\). This corresponds exactly to the next case that we shall prove now.

Rule \(({\tilde{\mu }^{[]}})\). Assume that

$$\dfrac{\varGamma ,x:A,\varGamma '\vdash _F F:A^{\bot \!\!\!\bot }\qquad \varGamma ,x:A\vdash _\tau \tau ':\varGamma '}{\varGamma \vdash _E {\tilde{\mu }}[x].\langle x\,||\,F\rangle \tau ':A^{\bot \!\!\!\bot }}\;({\tilde{\mu }}^{[]})$$

and let \({\bot \!\!\!\bot }\) be a pole and \(\tau \) a store such that \(\tau \Vdash \varGamma \). Let \((V|\tau _0)\) be a closed term in \(|A|_{V}\) such that \({\tau _0\diamond \tau }\). We have that:

$$\langle V\,||\,{\tilde{\mu }}[x].\langle x\,||\,F\rangle \tau '\rangle \overline{\tau _0\tau }\;\rightarrow \;\langle V\,||\,F\rangle \overline{\tau _0\tau }[x:=V]\tau '$$

By induction hypothesis, we obtain \(\tau [x:=V]\tau '\Vdash \varGamma ,x:A,\varGamma '\). Up to \(\alpha \)-conversion in F and \(\tau '\), so that the variables in \(\tau '\) are disjoint from those in \(\tau _0\), we have that \(\overline{\tau _0\tau }\Vdash \varGamma \) (by Lemma 14) and then \(\tau ''\triangleq \overline{\tau _0\tau }[x:=V]\tau '\Vdash \varGamma ,x:A,\varGamma '\). By induction hypothesis again, we obtain that \((F|\tau '')\in \Vert A\Vert _{F}\) (this was an assumption in the previous case) and as \((V|\tau _0)\in |A|_{V}\), we finally get that \((V|\tau _0){{\bot \!\!\!\bot }}(F|\tau '')\) and conclude again by anti-reduction.    \(\square \)

Corollary 18

If \(c\tau \) is a closure such that \(\vdash _l c\tau \) is derivable, then for any pole \({\bot \!\!\!\bot }\) such that the typing rules for co-constants used in the derivation are adequate with \({\bot \!\!\!\bot }\), \(c\tau \in {\bot \!\!\!\bot }\).

We can now put our focus back on the normalization of typed closures. As we already saw in Proposition 7, the set \({\bot \!\!\!\bot }_{\Downarrow }\) of normalizing closures is a valid pole, so that it only remains to prove that any typing rule for co-constants is adequate with \({\bot \!\!\!\bot }_{\Downarrow }\).

Lemma 19

Any typing rule for co-constants is adequate with the pole \({\bot \!\!\!\bot }_{\Downarrow }\), i.e. if \(\varGamma \) is a typing context, \(\tau \) is a store such that \(\tau \Vdash \varGamma \), and \(\varvec{\kappa }\) is a co-constant such that \(\varGamma \vdash _F \varvec{\kappa }:A^{\bot \!\!\!\bot }\), then \((\varvec{\kappa }|\tau )\in \Vert A\Vert _{F}\).

Proof

This lemma directly stems from the observation that for any store \(\tau \) and any closed strong value \((v|\tau ')\in |A|_{v}\), the command \(\langle v\,||\,\varvec{\kappa }\rangle \overline{\tau '\tau }\) does not reduce and thus belongs to the pole \({\bot \!\!\!\bot }_{\Downarrow }\).    \(\square \)

As a consequence, we obtain the normalization of typed closures of the full calculus.

Theorem 20

If \(c\tau \) is a closure of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus such that \(\vdash _l c\tau \) is derivable, then \(c\tau \) normalizes.

This is to be contrasted with Okasaki, Lee and Tarditi’s semantics for the call-by-need \(\lambda \)-calculus, which is not normalizing in the simply-typed case, as shown in Ariola et al. [2].

3.3 Extension to 2\(^{\text {nd}}\)-Order Type Systems

We focused in this article on simply-typed versions of the \(\overline{\lambda }_{lv}\) and \(\overline{\lambda }_{[lv\tau \star ]}\) calculi. But as is common in Krivine classical realizability, first-order and second-order quantifications (in Curry style) come for free through the interpretation. This means that we can for instance extend the language of types to first- and second-order predicate logic:

$$\begin{array}{rcl} e_1,e_2 &{}{:}{:=}&{} x\mid f(e_1,\ldots ,e_k)\\ A,B &{}{:}{:=}&{} X(e_1,\ldots ,e_k)\mid A\rightarrow B \mid \forall x. A\mid \forall X. A \end{array}$$

We can then define the following introduction rules for the universal quantifications:

$$\dfrac{\varGamma \vdash _v v:A\qquad x\notin FV(\varGamma )}{\varGamma \vdash _v v:\forall x.A}\;(\forall ^1_r)\qquad \qquad \dfrac{\varGamma \vdash _v v:A\qquad X\notin FV(\varGamma )}{\varGamma \vdash _v v:\forall X.A}\;(\forall ^2_r)$$

Observe that these rules need to be restricted at the level of strong values, just as they are restricted to values in the case of call-by-value. As for the left rules, they can be defined at any level, say the most general one, e:

$$\dfrac{\varGamma \vdash _e e:(A[n/x])^{\bot \!\!\!\bot }}{\varGamma \vdash _e e:(\forall x.A)^{\bot \!\!\!\bot }}\;(\forall ^1_l)\qquad \qquad \dfrac{\varGamma \vdash _e e:(A[B/X])^{\bot \!\!\!\bot }}{\varGamma \vdash _e e:(\forall X.A)^{\bot \!\!\!\bot }}\;(\forall ^2_l)$$

where n is any natural number and B any formula. The usual (call-by-value) interpretation of the quantification is defined as an intersection over all the possible instantiations of the variables within the model. We do not wish to enter into too many details on this topic here, but first-order variables are to be instantiated by integers, while second-order ones are to be instantiated by subsets of terms at the lower level, i.e. closed strong values-in-store (the set of which we write \(\mathcal {V}_0\)):

$$ |\forall x.A|_{v} = \bigcap _{n\in \mathbb {N}} |A[n/x]|_{v} \qquad \qquad |\forall X.A|_{v} = \bigcap _{S\in {\mathbb {N}^k\rightarrow \mathcal P}(\mathcal {V}_0)} |A[S/X]|_{v} $$

where the variable X is of arity k. It is then routine to check that the typing rules are adequate with the realizability interpretation.
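As an illustration, writing (by a slight abuse of notation) \(|S|_{t}\) for the t-level set generated through the orthogonality levels by taking \(S\in \mathcal {P}(\mathcal {V}_0)\) as the strong-value interpretation of a 0-ary variable X, unfolding the definitions for the type \(\forall X.X\rightarrow X\) gives:

$$(\lambda x.t\,|\,\tau )\in |\forall X.X\rightarrow X|_{v} \;\Longleftrightarrow \; \forall S\,\forall (u|\tau '),\ {\tau \diamond \tau '}\wedge (u|\tau ')\in |S|_{t}\Rightarrow (t|\overline{\tau \tau '}[x:=u])\in |S|_{t}$$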

4 Conclusion and Further Work

In this paper, we presented a system of simple types for a call-by-need calculus with control, which we proved to be safe in that it satisfies subject reduction (Theorem 1) and in that typed terms are normalizing (Theorem 20). We proved the normalization by means of a realizability-inspired interpretation of the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus. Incidentally, this opens the door to the computational analysis (in the spirit of Krivine realizability) of classical proofs using control, laziness and shared memory.

In further work, we intend to present two extensions of the present paper. First, following the definition of the realizability interpretation, we managed to type the continuation-and-store-passing style translation for the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus (see [2]). Interestingly, typing the translation emphasizes its computational content, and in particular the store-passing part is reflected in a Kripke-style, forcing-like manner of typing the extensibility of the store [28, Chap. 6].

Second, on a different aspect, the realizability interpretation we introduced could be a first step towards new ways of realizing axioms. In particular, the first author used in his Ph.D. thesis [28, Chap. 8] the techniques presented in this paper to give a normalization proof for \(\text {dPA}^\omega \), a proof system developed by the second author [15]. Indeed, this proof system makes it possible to define a proof for the axiom of dependent choice thanks to the use of lazily evaluated streams, and it was lacking a proper normalization proof.

Finally, to determine the range of our technique, it would be natural to investigate the relation between our framework and the many different presentations of call-by-need calculi (with or without control). Amongst other calculi, we could cite Chang and Felleisen's presentation of call-by-need [4], Garcia et al.'s lazy calculus with delimited control [10], or Kesner's recent paper on normalizing by-need terms characterized by an intersection type system [16]. To this end, we might rely on Pédrot and Saurin's classical call-by-need [33]. They indeed relate (classical) call-by-need with linear head-reduction from a computational point of view, and draw the connections with the presentations of Ariola et al. [2] and Chang and Felleisen [4]. The \(\overline{\lambda }_{lv}\)-calculus of Ariola et al. being close to the \(\overline{\lambda }_{[lv\tau \star ]}\)-calculus (see [2] for further details), our technique is likely to be adaptable to their framework, and thus to Pédrot and Saurin's system.