A FAITHFUL AND QUANTITATIVE NOTION OF DISTANT REDUCTION FOR GENERALIZED APPLICATIONS

Abstract. We introduce a call-by-name lambda-calculus λJ n with generalized applications which is equipped with distant reduction. This allows β-redexes to be unblocked without resorting to the standard permutative conversions of generalized applications used in the original ΛJ-calculus with generalized applications of Joachimski and Matthes. We show strong normalization of simply-typed terms, and we then fully characterize strong normalization by means of a quantitative (i.e. non-idempotent intersection) typing system. This characterization uses a non-trivial inductive definition of strong normalization (related to others in the literature), which is based on a weak-head normalizing strategy. We also show that our calculus λJ n relates to explicit substitution calculi by means of a faithful translation, in the sense that it preserves strong normalization. Moreover, our calculus λJ n and the original ΛJ-calculus determine equivalent notions of strong normalization. As a consequence, ΛJ inherits a faithful translation into explicit substitutions, and its strong normalization can also be characterized by the quantitative typing system designed for λJ n , despite the fact that quantitative subject reduction fails for permutative conversions.


Introduction
In the original calculus with generalized applications ΛJ, due to Joachimski and Matthes [JM03,JM00], the standard syntax of the λ-calculus is modified by generalizing the application constructor tu into a new shape t(u, y.r), capturing a notion of sharing for applications: a term t(u, y.r) can intuitively be understood as a let-binding of the form let y = tu in r.
This new constructor can better be understood in a typed framework. Indeed, the simply-typed ΛJ-calculus is an interpretation of the implicative fragment of von Plato's system of natural deduction with generalized elimination rules [vP01] under the Curry-Howard correspondence. Besides the logical reading, the syntax with generalized applications constitutes also a minimal framework for studying the call-by-name (CbN) and call-by-value (CbV) functional paradigms, as well as various kinds of permutative conversions beyond the λ-calculus.
The operational semantics of ΛJ is given by a call-by-name β-rule generalizing the one of the λ-calculus, as well as a permutative π-rule on terms. The two rules are as follows:

(λx.t)(u, z.r) → β {{u/x}t/z}r
t(u, y.r)(u′, z.r′) → π t(u, y.r(u′, z.r′))

This choice does not affect (strong) normalization, which is our focus, and it highlights the computational behavior of the calculus: it is at the β-step that resources are consumed, not during the permutations.
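To make the two rules concrete, here is a minimal executable sketch of their root steps. The tuple encoding of terms and the naive, capture-unsafe substitution are hypothetical conveniences for this illustration, not constructions from the paper.

```python
# Hypothetical term encoding: ('var', x) | ('lam', x, t) | ('gapp', t, u, y, r),
# where ('gapp', t, u, y, r) stands for the generalized application t(u, y.r).

def subst(t, x, v):
    """Naive substitution {v/x}t, ignoring variable capture (sketch only)."""
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        _, y, b = t
        return t if y == x else ('lam', y, subst(b, x, v))
    _, s, u, y, r = t
    return ('gapp', subst(s, x, v), subst(u, x, v), y,
            r if y == x else subst(r, x, v))

def beta_root(t):
    """(λx.s)(u, z.r) →β {{u/x}s/z}r, or None if t is not a β-redex."""
    if t[0] == 'gapp' and t[1][0] == 'lam':
        _, (_, x, s), u, z, r = t
        return subst(r, z, subst(s, x, u))
    return None

def pi_root(t):
    """t(u, y.r)(u', z.r') →π t(u, y.r(u', z.r')), or None."""
    if t[0] == 'gapp' and t[1][0] == 'gapp':
        _, (_, s, u, y, r), u2, z, r2 = t
        return ('gapp', s, u, y, ('gapp', r, u2, z, r2))
    return None
```

For instance, `beta_root` sends (λx.x)(u, z.z) to u: the argument is consumed at the β-step, while `pi_root` merely rearranges continuations.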
The syntax of the ΛJ-calculus will thus be equipped with an operational call-by-name semantics given by the distant rule dβ, but without π. The resulting calculus is called λJ n . As a major contribution, we prove a characterization of strong normalization in terms of typability in our quantitative system. In this proof, the soundness result (typability implies strong normalization) is obtained by combinatorial arguments, with the size of typing derivations decreasing at each dβ-step. For the completeness result (strong normalization implies typability) we need an inductive characterization of the terms that are strongly normalizing for dβ: this is a non-trivial technical contribution of the paper.
Our new calculus λJ n is then compatible with a quantitative typing system. However, this type system designed for λJ n only partially captures strong normalization for ΛJ on a quantitative level, because the bound for reduction lengths given by the size of type derivations only holds for β, and not for π. Nevertheless, using this partial bound, we can prove that the type system designed for λJ n is also sound for strong normalization in the original calculus ΛJ, in the sense that any typable term is strongly normalizing. It immediately follows that a term t is strongly normalizable in λJ n only if t is strongly normalizable in ΛJ.
Actually, we go further and prove that this implication is an equivalence. The central role in the proof is again played by intersection type systems, together with a new encoding of generalized applications into explicit substitutions (ES). More precisely, we consider a calculus with explicit substitutions, where a new constructor [u/x]t, akin to a let-binding let x = u in t, is added to the grammar of the λ-calculus. The reading given above of t(u, y.r) as a let-binding expressing the sharing of the application tu is similar to the intuitive and known translation [ES07] of t(u, y.r) into the explicit substitution [tu/y]r. This translation, however, does not suit our goals, because it does not preserve strong normalization: a nonterminating computation generated by the interaction of t with u in t(u, y.r) will always have to be substituted for y in r, and thus may vanish if y does not occur free in r (a detailed example will be given later). We instead propose a new, type-preserving encoding of terms with generalized applications into ES, and show the dynamic behavior of our calculus λJ n to be faithful to explicit substitutions: a term is strongly normalizing in λJ n if and only if its new encoding into ES is also strongly normalizing. The proof of faithfulness essentially relies on an analysis of typability in the type system designed for λJ n .
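The failure of the known encoding can be sketched mechanically. The helper functions and the tuple encoding below are hypothetical (not from the paper); the point is only that the naive translation keeps a possibly diverging application as a subterm that y-erasure would discard.

```python
# Hypothetical encodings: λJ terms are ('var', x) | ('lam', x, t) | ('gapp', t, u, y, r);
# ES terms additionally use ('app', t, u) and ('esub', s, y, r) for [s/y]r.

def fv(t):
    """Free variables of a λJ term."""
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return fv(t[2]) - {t[1]}
    _, s, u, y, r = t
    return fv(s) | fv(u) | (fv(r) - {y})

def naive_es(t):
    """The [ES07]-style encoding: t(u, y.r) becomes [t u / y] r."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], naive_es(t[2]))
    _, s, u, y, r = t
    return ('esub', ('app', naive_es(s), naive_es(u)), y, naive_es(r))

# With δ = λx.x(x, w.w), the term δ(δ, y.z) reduces in one root dβ-step to z,
# but its naive encoding [δ δ / y] z contains the diverging application δ δ,
# which admits an infinite reduction before being garbage-collected (y ∉ fv(z)).
delta = ('lam', 'x', ('gapp', ('var', 'x'), ('var', 'x'), 'w', ('var', 'w')))
term = ('gapp', delta, delta, 'y', ('var', 'z'))
```

Here `fv(term)` is just {z}: the binder y never captures anything, which is exactly why the encoded application δ δ is dead code that can nonetheless be reduced forever.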
Plan of the paper. Section 2 presents and motivates our calculus λJ n with distant β. Section 3 provides an inductive characterization of strongly normalizing terms in λJ n . Section 4 presents the non-idempotent intersection type system for λJ n , proves the characterization of strong normalization in λJ n as typability in that system, and discusses why π is not quantitative. Section 5 defines the new translation into ES and proves it to be faithful, in the sense of preserving and reflecting strong normalization. Section 6 contains comparisons with other calculi, obtained by equipping the terms of λJ n with β, β + p2, and β + π. The main focus there is to prove the respective notions of strong normalization equivalent, but we also collect the results ΛJ inherits from our study of λJ n . Section 7 summarizes our contributions and discusses future work.

A calculus with generalized applications
In this section we define our calculus with generalized applications, denoted λJ n . Starting from the issue of stuck redexes, we discuss different possibilities for the operational semantics. Next we prove some introductory properties of the calculus we propose.
2.1. Syntax. We start with some general notations. A reduction rule, denoted R or → R , is a binary relation on the syntax of some calculus, and generates a reduction relation → R , usually by closure of R under all contexts. Given a reduction relation → R , we write → * R for its reflexive-transitive closure.

The terms of λJ n are those of ΛJ, given by t, u, r ::= x | λx.t | t(u, x.r), and their set is written T J . Distant contexts are given by D ::= ⋄ | t(u, x.D), and the distant β-rule is

D⟨λx.t⟩(u, y.r) → dβ {{u/x}D⟨t⟩/y}r

A term has an abstraction shape if it is of the form D⟨λx.t⟩. The dβ-normal forms are described by the following grammars:

NE dβ ::= x | s(u, x.r)  (s, r ∈ NE dβ , u ∈ NF dβ )
NF dβ ::= x | λx.s  (s ∈ NF dβ ) | s(u, x.r)  (s ∈ NE dβ , u, r ∈ NF dβ )

Lemma 2.2. The grammar NF dβ characterizes dβ-normal forms.
Proof. We start with soundness: t ∈ NF dβ =⇒ t is in dβ-nf. We show the following two stronger properties: (1) For all t ∈ NE dβ , t does not have an abstraction shape and t is in dβ-nf.
(2) For all t ∈ NF dβ , t is in dβ-nf. The proof is by simultaneous induction on t ∈ NE dβ and t ∈ NF dβ .
First, the cases relative to the first item. Case t = x: A variable x does not have an abstraction shape and is in dβ-nf.
Case t = s(u, x.r), with s, r ∈ NE dβ and u ∈ NF dβ : The term t does not have an abstraction shape (because r does not have an abstraction shape, due to i.h. (1)). The term t is in dβ-nf because s, u, r are in dβ-nf (due to i.h. (1) and (2)) and because t itself is not a dβ-redex (since s does not have an abstraction shape, by i.h. (1)).

Next, the cases relative to the second item.
Case t = x: A variable x is in dβ-nf.

Case t = λx.s, with s ∈ NF dβ : By i.h. (2), s is in dβ-nf. Hence so is λx.s.

Case t = s(u, x.r), with s ∈ NE dβ and u, r ∈ NF dβ : The term t is in dβ-nf because s, u, r are in dβ-nf (due to i.h. (1) and (2)) and because t itself is not a dβ-redex (since s does not have an abstraction shape, by i.h. (1)).

Now, completeness: t is in dβ-nf =⇒ t ∈ NF dβ . We show a stronger property: for all t, (1) if t does not have an abstraction shape and t is in dβ-nf, then t ∈ NE dβ ; and (2) if t is in dβ-nf, then t ∈ NF dβ . The proof is by induction on t.
Case t = s(u, x.r): The subterm s does not have an abstraction shape (otherwise t would be a dβ-redex), thus s ∈ NE dβ by i.h. (1). Since u and r are in dβ-nf, i.h. (2) gives u, r ∈ NF dβ . Therefore t ∈ NF dβ , and (2) is proved. Moreover, suppose t does not have an abstraction shape. Then the same holds for r, and by i.h. (1), r ∈ NE dβ , so that t ∈ NE dβ , and (1) is proved.
We already saw that, once β is generalized to dβ, π is not needed anymore to unblock β-redexes; the next lemma says that π preserves dβ-nfs, so it does not bring anything new to dβ-nfs either.

Lemma 2.3. If t is a dβ-nf and t → π t′, then t′ is a dβ-nf.
Proof. Given Lemma 2.2, the proof proceeds by simultaneous induction on NF dβ and NE dβ (for NE dβ one also proves that terms in NE dβ do not have an abstraction shape).
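The normal-form grammars can be turned into executable predicates. The sketch below uses the hypothetical tuple encoding of earlier illustrations; it mirrors the mutual definition of NE dβ and NF dβ together with the abstraction-shape test underlying dβ-redexes.

```python
# Hypothetical term encoding: ('var', x) | ('lam', x, t) | ('gapp', t, u, y, r).

def abstraction_shape(t):
    """t has an abstraction shape if t = λx.s or t = s(u, x.r) with r of abstraction shape."""
    if t[0] == 'lam':
        return True
    if t[0] == 'gapp':
        return abstraction_shape(t[4])
    return False

def is_ne(t):
    """NE dβ ::= x | s(u, x.r) with s, r ∈ NE dβ and u ∈ NF dβ."""
    if t[0] == 'var':
        return True
    if t[0] == 'gapp':
        _, s, u, x, r = t
        return is_ne(s) and is_nf(u) and is_ne(r)
    return False

def is_nf(t):
    """NF dβ ::= x | λx.s (s ∈ NF dβ) | s(u, x.r) (s ∈ NE dβ, u, r ∈ NF dβ)."""
    if t[0] == 'var':
        return True
    if t[0] == 'lam':
        return is_nf(t[2])
    _, s, u, x, r = t
    return is_ne(s) and is_nf(u) and is_nf(r)
```

For example, x(y, z.λw.w) is a dβ-normal form with an abstraction shape, so applying it generalizedly to anything creates a distant redex, which `is_nf` correctly rejects.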
Let us now discuss two properties related to simple typability for generalized applications, using the original type system of [JM00], which is called here ST . Types are given by A, B, C ::= a | A → B, where a belongs to a set of base type variables, and the typing rules are the expected ones:

Γ, x : A ⊢ x : A
Γ, x : A ⊢ t : B implies Γ ⊢ λx.t : A → B
Γ ⊢ t : A → B, Γ ⊢ u : A and Γ, y : B ⊢ r : C imply Γ ⊢ t(u, y.r) : C

We write Γ ⊢ ST t : A to denote a type derivation in system ST ending in the sequent Γ ⊢ t : A.
Subformula property. The subformula property for normal forms is an important property of proof systems, notably because it is useful for proof search. It holds for von Plato's generalized natural deduction, and therefore also for the original calculus ΛJ. Even though we use only a minimal amount of permutations, which does not yield full normal forms, this property is still true in our system.

Lemma 2.4 (Subformula property). If Φ = Γ ⊢ ST t : A with t ∈ NF dβ , then every formula in the derivation Φ is a subformula of A or a subformula of some formula in Γ.
Proof. The lemma is proved together with another statement: if Ψ = Γ ⊢ ST t : A with t ∈ NE dβ , then every formula in Ψ is a subformula of some formula in Γ. The proof is by simultaneous induction on Φ and Ψ.
The subformula property confirms that executing only needed permutations still gives rise to a reasonable notion of normal form.
Strong Normalization. The second property we show for typed terms states that they are λJ n -strongly normalizable. The proof is achieved by mapping λJ n into the λ-calculus equipped with Regnier's σ-rules [Reg94].

Proof. The proof uses a simple map (·) # into the λ-calculus, based on [ES07]. This map produces the following simulation: if t 1 → dβ t 2 then t 1 # → + βσ 1 t 2 # . The proof of the simulation result is by induction on t 1 → dβ t 2 . The base case needs two lemmas: the first one states that the map (·) # commutes with substitution; the other, proved by induction on D, states that D⟨λx.t⟩ # u # → + βσ 1 D⟨{u/x}t⟩ # . Now, given a simply typable term t ∈ T J , the λ-term t # is also simply typable in the λ-calculus. Hence t # ∈ SN(β). It is well known [Reg94] that this is equivalent to t # ∈ SN(β, σ 1 ). By the simulation result, t ∈ SN(dβ) follows.
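The definition of (·) # is elided in the extracted text. Purely as an illustration, the sketch below implements the [ES07]-style reading recalled in the introduction, where t(u, y.r) becomes the λ-term (λy.r)(t u); this is an assumption about the map, not necessarily the paper's actual definition.

```python
# Hypothetical encodings: λJ terms are ('var', x) | ('lam', x, t) | ('gapp', t, u, y, r);
# plain λ-terms use ('app', t, u) for application.

def sharp(t):
    """Candidate (·)# map: t(u, y.r) is read as (λy.r#)(t# u#). Assumption, for illustration."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], sharp(t[2]))
    _, s, u, y, r = t
    return ('app', ('lam', y, sharp(r)), ('app', sharp(s), sharp(u)))
```

Under this reading, a generalized application always translates to a β-redex whose argument is the shared application t # u # , which is where the σ 1 -rule is needed to simulate distance.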

Confluence.
We now prove confluence of the calculus. For this, we adapt the proof of [Tak95]; the same proof method is used by [JM00] for ΛJ and by [ES20] for ΛJ v . We begin by defining the parallel reduction ⇒ dβ by the following rules:

(VAR) x ⇒ dβ x
(ABS) λx.t ⇒ dβ λx.t′ if t ⇒ dβ t′
(APP) t(u, y.r) ⇒ dβ t′(u′, y.r′) if t ⇒ dβ t′, u ⇒ dβ u′ and r ⇒ dβ r′
(DB) D⟨λx.t⟩(u, y.r) ⇒ dβ {{u′/x}s′/y}r′ if D⟨t⟩ ⇒ dβ s′, u ⇒ dβ u′ and r ⇒ dβ r′

The particularity of our proof is the following lemma, which deals with distance.
We can assume by α-equivalence that the free variables of u′ and s′ are not bound by D 2 .

Proof. Straightforward by induction on t.
Lemma 2.8. Let t 1 , t 2 , u 1 , u 2 ∈ T J . Then: (1) if t 1 → dβ t 2 , then t 1 ⇒ dβ t 2 ; (2) if t 1 ⇒ dβ t 2 , then t 1 → * dβ t 2 ; (3) if t 1 ⇒ dβ t 2 and u 1 ⇒ dβ u 2 , then {u 1 /z}t 1 ⇒ dβ {u 2 /z}t 2 .

Proof. The proof of the first statement is by induction on t 1 → dβ t 2 . In the base case t 1 = D⟨λx.t⟩(u, y.r) → dβ {{u/x}D⟨t⟩/y}r = t 2 , we use rule (DB) with premises D⟨t⟩ ⇒ dβ D⟨t⟩, u ⇒ dβ u and r ⇒ dβ r. The other cases are straightforward by the i.h. and rules (ABS) or (APP).
The proof of the second statement is by induction on t 1 ⇒ dβ t 2 . The base case (VAR) is an empty reduction t 1 = x = t 2 . The cases (ABS) and (APP) are direct by the i.h. In the case (DB), we obtain the required reduction by contracting the root dβ-redex and then performing the reductions given by the i.h.

The proof of the third statement is also by induction on t 1 ⇒ dβ t 2 .
Case (VAR): Then t 1 is a variable. If t 1 = z, we have {u 1 /z}t 1 = u 1 and {u 2 /z}t 2 = u 2 , and we conclude directly by the second hypothesis. If t 1 = y ≠ z, we have {u 1 /z}t 1 = y = {u 2 /z}t 2 , and we conclude by (VAR). Case (ABS): Direct by the i.h. and rule (ABS).

Statements (1) and (2) of the previous lemma imply that → * dβ is the transitive and reflexive closure of ⇒ dβ . We now only need to prove the diamond property for ⇒ dβ to conclude. The difference between Takahashi's method and the more usual Tait and Martin-Löf method [Bar84, §3.2] is that the proof of the diamond property for the parallel reduction is replaced by a proof of the triangle property.

Definition 2.9 (Triangle property). Let → R be a reduction relation on T J and f a function on T J . We say that → R satisfies the triangle property w.r.t. f if t → R t′ implies t′ → R f(t).
Definition 2.10 (Developments). The dβ-development (t) dβ of a T J -term t is defined as follows: (x) dβ = x; (λx.t) dβ = λx.(t) dβ ; (t(u, y.r)) dβ = (t) dβ ((u) dβ , y.(r) dβ ) if t does not have an abstraction shape; and (D⟨λx.t⟩(u, y.r)) dβ = {{(u) dβ /x}(D⟨t⟩) dβ /y}(r) dβ .

Lemma 2.11 (Triangle property for ⇒ dβ ). If t 1 ⇒ dβ t 2 , then t 2 ⇒ dβ (t 1 ) dβ .

Proof. By induction on t 1 . Case t 1 = x: Then t 1 = t 2 = (t 1 ) dβ and we conclude with rule (VAR).
By the i.h. and two applications of Lemma 2.8(3), we conclude.

Proposition 2.12. The reduction relation → dβ is confluent.

Inductive Characterization of Strong Normalization
In this section we give an inductive characterization of strong normalization (ISN) for λJ n , written ISN(dβ), and prove it correct. This characterization will be useful to show completeness of the type system that we are going to present in subsection 4.1, as well as to compare strong normalization of λJ n to the ones of T J [β, p2] and ΛJ.
3.1. ISN in the λ-calculus with Weak-Head Contexts. We write ISN(R) for the set of strongly normalizing terms under R given by an inductive definition. As an introduction, we first look at the case of ISN for the λ-calculus (written ISN(β)), on which our forthcoming definition of ISN(dβ) elaborates. A usual way to define ISN(β) is by the following rules [vR96], where the general notation M P abbreviates (. . . (M P 1 ) . . . )P n for some n ≥ 0:

P 1 , . . . , P n ∈ ISN(β) implies x P ∈ ISN(β)
M ∈ ISN(β) implies λx.M ∈ ISN(β)
({N/x}M) P ∈ ISN(β) and N ∈ ISN(β) imply ((λx.M)N) P ∈ ISN(β)
One then shows that M ∈ SN(β) if and only if M ∈ ISN(β).
Notice that this definition is deterministic. Indeed, a reduction strategy emerges from this definition: it is a strong strategy based on a preliminary weak-head strategy. The strategy is the following: first reduce a term to a weak-head normal form λx.M or x P , and then iterate reduction under abstractions and inside arguments (in any order), without any need to come back to the head of the term. Formally, weak-head normal forms, which are those produced by the first level of the strategy, are of two kinds:

(Neutral terms) n ::= x | n M
(Answers) a ::= λx.M

Neutral terms cannot produce any head β-redex; they are the terms of the shape x P . On the contrary, answers can create a β-redex when given at least one argument; in the case of the λ-calculus, these are only abstractions. If the term is not a weak-head normal form, a redex can be located inside a weak-head context:

(Weak-head contexts) W ::= ⋄ | W t
These concepts give rise to a different definition of ISN(β):

x ∈ ISN(β)
n ∈ ISN(β) and M ∈ ISN(β) imply n M ∈ ISN(β)
M ∈ ISN(β) implies λx.M ∈ ISN(β)
W⟨{N/x}M⟩ ∈ ISN(β) and N ∈ ISN(β) imply W⟨(λx.M)N⟩ ∈ ISN(β)

Weak-head contexts are an alternative to the meta-syntactic notation P of vectors of arguments used in the first definition of ISN(β). Notice that in the alternative definition there is one rule for each kind of neutral term, one rule for answers, and one rule for terms which are not weak-head normal forms.
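The weak-head machinery for the plain λ-calculus can be sketched executably. The encoding and helper names below are hypothetical; the code classifies weak-head normal forms and contracts the redex found under weak-head contexts W ::= ⋄ | W t.

```python
# Hypothetical λ-term encoding: ('var', x) | ('lam', x, t) | ('app', t, u).

def subst(t, x, v):
    """Naive capture-unsafe substitution {v/x}t (sketch only)."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def is_neutral(t):
    """Neutral terms n ::= x | n M."""
    return t[0] == 'var' or (t[0] == 'app' and is_neutral(t[1]))

def whnf_kind(t):
    """'answer', 'neutral', or None when a weak-head redex remains."""
    if t[0] == 'lam':
        return 'answer'
    return 'neutral' if is_neutral(t) else None

def weak_head_step(t):
    """Contract the β-redex located under weak-head contexts, if any."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
        f2 = weak_head_step(f)
        if f2 is not None:
            return ('app', f2, a)
    return None
```

Iterating `weak_head_step` until it returns None implements exactly the first level of the strategy: it stops on neutral terms and answers, never looking inside arguments or under abstractions.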
3.2. ISN for dβ. We now define ISN(dβ) with the same tools used in the last subsection. Hence, we first have to define neutral terms, answers and a notion of contexts. We call the contexts left-right contexts (R), and the underlying strategy the left-right strategy.
Definition 3.1. We consider the following grammars:

(Neutral terms) n ::= x | n(u, x.n)
(Answers) a ::= λx.t | n(u, x.a)
(Left-right contexts) R ::= ⋄ | R(u, x.r) | n(u, x.R)

Notice that n and a are disjoint and stable by dβ-reduction. Notice also that, this time, answers are not only abstractions, but also abstractions under a special distant context. Moreover, n(u, x.r) is never a dβ-redex, whereas a(u, x.r) is always a dβ-redex. The terminology "left-right" intends to suggest that the hole ⋄ may appear in the left (viz R(u, x.r)) or right (viz n(u, x.R)) component of generalized applications. If this last form of R were forbidden, then we would define the contexts by W ::= ⋄ | W(u, x.r), a generalized form of the weak-head contexts of the λ-calculus, actually implicitly used in [Mat00] for ΛJ (see also Fig. 1 in Section 6). However, these contexts W are not convenient for defining an inductive predicate of strong normalization based on the distant rule dβ, as shown below in Remark 3.6.
To achieve a characterization of ISN(dβ), we still need to obtain a deterministic notion of decomposition, that we explain by means of an example.
Example 3.2 (Decomposition). Let t = x 1 (x 2 , y 1 .I(I, z.I))(x 3 , y.II). Then there are two decompositions of t in terms of a dβ-redex r and a left-right context R, i.e. there are two ways to write t as R⟨r⟩: either R = ⋄ and r = t = D⟨I⟩(x 3 , y.II) for some distant context D, or R = x 1 (x 2 , y 1 .⋄)(x 3 , y.II) and r = I(I, z.I). Notice how in the second case all three rules in the grammar of left-right contexts are needed to generate R.
In the previous example, we will rule out the first decomposition by defining next a restriction of the dβ-rule, securing uniqueness of this kind of decomposition in all cases. For that, we introduce a restricted notion of distant context:

(Neutral distant contexts) D n ::= ⋄ | n(u, x.D n )

Notice that D n ⊆ R; moreover, D n ⟨λx.t⟩ is an answer a, and conversely every answer has that form.
The reduction relation underlying our definition of ISN(dβ) is the left-right reduction → lr , defined as the closure under left-right contexts R of the following restricted dβ-rule:

D n ⟨λx.t⟩(u, y.r) → {{u/x}D n ⟨t⟩/y}r

Example 3.3 (Decomposition). Going back to Example 3.2, do we obtain a decomposition R⟨r⟩ of t, with r a restricted dβ-redex? The first option now fails because D is not a neutral distant context, and the second option succeeds because I(I, z.I) is indeed a restricted redex.
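The shape predicates behind the restricted rule can be sketched executably, assuming neutral terms n ::= x | n(u, x.n) and answers D n ⟨λx.t⟩; the term encoding and the reading of II as I(I, v.v) are hypothetical conveniences for this illustration.

```python
# Hypothetical term encoding: ('var', x) | ('lam', x, t) | ('gapp', t, u, y, r).

def is_neutral(t):
    """Neutral terms n ::= x | n(u, x.n) (the middle argument u is unconstrained)."""
    if t[0] == 'var':
        return True
    if t[0] == 'gapp':
        return is_neutral(t[1]) and is_neutral(t[4])
    return False

def is_answer(t):
    """Answers are abstractions under a neutral distant context D n."""
    if t[0] == 'lam':
        return True
    if t[0] == 'gapp':
        return is_neutral(t[1]) and is_answer(t[4])
    return False

def root_restricted_redex(t):
    """A restricted dβ-redex is a(u, y.r) with a an answer."""
    return t[0] == 'gapp' and is_answer(t[1])

# Example 3.2: t = x1(x2, y1.I(I, z.I))(x3, y.II). Its root is a dβ-redex
# but not a restricted one, while the subterm I(I, z.I) is restricted.
I = ('lam', 'w', ('var', 'w'))
inner = ('gapp', I, I, 'z', I)                       # I(I, z.I)
left = ('gapp', ('var', 'x1'), ('var', 'x2'), 'y1', inner)
II = ('gapp', I, I, 'v', ('var', 'v'))               # reading II as I(I, v.v)
t = ('gapp', left, ('var', 'x3'), 'y', II)
```

Running the predicates on this example reproduces Example 3.3: the root decomposition is rejected and the internal one is accepted, which is the uniqueness the restriction is designed to secure.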
Coherently with the λ-calculus, left-right normal forms are either neutral terms or answers.

Lemma 3.4. A term t is lr-normal if and only if t ∈ n ∪ a.
Proof. First, we show that t lr-normal implies t ∈ n ∪ a, by induction on t. If t = x, then t ∈ n. If t = λx.s, then t ∈ a. Let t = s(u, x.r) where s and r are lr-normal. Then s / ∈ a, otherwise the term would lr-reduce at root. Thus by the i.h. s ∈ n. By the i.h. again r ∈ n ∪ a so that t ∈ n ∪ a.
Second, we show that t ∈ n ∪ a implies t is lr-normal, by simultaneous induction on n and a. The cases t = x (i.e. t ∈ n) and t = λx.s (i.e. t ∈ a) are straightforward. Let t = s(u, x.r) where s ∈ n and r ∈ n ∪ a. Since r, s ∈ n ∪ a, by the i.h. t does not lr-reduce in r or s. Since s ∈ n, t does not lr-reduce at root either. Then, t is lr-normal.
Lemma 3.5. The reduction → lr is deterministic.
Proof. Let t be an lr-reducible term. We reason by induction on t. If t is a variable or an abstraction, then t does not lr-reduce, so t is necessarily an application t′(u, y.r). By Lemma 3.4 we have three possible cases for t′.

Case t′ ∈ a: Then t = D n ⟨λx.s⟩(u, y.r), so t reduces at the root. Since t′ ∈ a, we know by Lemma 3.4 that (1) t′ ∈ NF lr and (2) t′ ∉ n, so that t does not lr-reduce in t′ or in r.

Case t′ ∈ n: Then t does not lr-reduce at the root. By Lemma 3.4, we know that t′ ∈ NF lr , and thus t necessarily reduces in r. By the i.h. this reduction is deterministic.

Case t′ ∉ NF lr : Then, in particular, by Lemma 3.4 we know that (1) t′ ∉ a, so that t does not lr-reduce at the root, and (2) t′ ∉ n, so that t does not reduce in r. Thus t lr-reduces only in t′, and by the i.h. this reduction is deterministic.
Remark 3.6. Consider again the term t = x 1 (x 2 , y 1 .I(I, z.I))(x 3 , y.II) of Example 3.2. As we explained before, if the form n(u, x.R) of the grammar of R were disallowed, then it would not be possible to decompose t as R⟨r⟩ with r a restricted dβ-redex. Moreover, the reduction strategy associated with the intended definition of ISN(dβ) would consider t as a left-right normal form, and start reducing the subterms of t, including I(I, z.I). Now, this latter (internal) subterm would eventually reach I, and suddenly the whole term t′ = x 1 (x 2 , y 1 .I)(x 3 , y.II) would become an external left-right redex: the typical separation between an initial external reduction phase followed by an internal reduction phase (as is the case in the λ-calculus) would be lost in our framework. This point, due to the distant character of rule dβ, explains the subtlety of Definition 3.7.
Our inductive definition of strong normalization follows.
Definition 3.7 (Inductive strong normalization). We consider the inductive predicate ISN(dβ) defined by the following rules:

(snvar) x ∈ ISN(dβ)
(snabs) if t ∈ ISN(dβ), then λx.t ∈ ISN(dβ)
(snapp) if n, u, r ∈ ISN(dβ) and r ∈ NF lr , then n(u, x.r) ∈ ISN(dβ)
(snbeta) if R⟨{{u/x}D n ⟨t⟩/y}r⟩ ∈ ISN(dβ), D n ⟨t⟩ ∈ ISN(dβ) and u ∈ ISN(dβ), then R⟨D n ⟨λx.t⟩(u, y.r)⟩ ∈ ISN(dβ)

Notice that every term can be written according to the conclusions of the previous rules, so that the following grammar also defines the syntax T J :

t ::= x | λx.t | n(u, x.r) with r ∈ NF lr | R⟨D n ⟨λx.t⟩(u, y.r)⟩    (3.1)
Moreover, at most one rule of the previous definition applies to any given term, i.e. the rules are deterministic. An equivalent, but non-deterministic, definition can be given by removing the side condition "r ∈ NF lr " in rule (snapp). Indeed, this (weaker) rule would overlap with rule (snbeta) on terms in which the left-right context lies in the last continuation, as for instance in x(u, y.y)(u′, y′.II). Notice the difference with the λ-calculus: due to the definition of left-right contexts R, the head of a term with generalized applications can be either on the left of the term (as in the λ-calculus), or recursively on the left inside a continuation.
To show that our definition corresponds to strong normalization, we need a few intermediate statements.
Proof. In the base cases, we have t 0 = D⟨λz.t⟩(s, y.r) → dβ {{s/z}D⟨t⟩/y}r = t 1 . By α-equivalence we can suppose that y, z ∉ fv(u), x ≠ y and x ≠ z. The inductive cases and the base case for item (2) are straightforward. We detail the base case of item (1).
Lemma 3.10. Let t 0 = {{u/x}D⟨t⟩/y}r ∈ SN(dβ), D⟨t⟩ ∈ SN(dβ) and u ∈ SN(dβ). Then t′ 0 = D⟨λx.t⟩(u, y.r) ∈ SN(dβ).

Proof. In this proof we use the expected notion of reduction on contexts: C → C′ iff the hole of C is outside the redex contracted in the reduction step. By hypothesis we also have r ∈ SN(dβ). We reason by induction on ⟨||t 0 || dβ , ||D⟨t⟩|| dβ , ||u|| dβ ⟩ w.r.t. the lexicographic order. To show t′ 0 ∈ SN(dβ) it is sufficient to show that all its reducts are in SN(dβ). We analyze all possible cases.

Case t′ 0 → dβ t 0 : We conclude by the hypothesis.
This is the only case left: indeed, there is no redex in D⟨λx.t⟩ other than inside D or λx.t. The reduction we need to consider contracts this redex, and we will show that t′ 1 ∈ SN(dβ). The second inequality holds since t 2 has an abstraction shape, and abstraction shapes are stable under substitution, so that t 2 (u′, y′.r′) is also a redex. We can then conclude.

Theorem 3.11. SN(dβ) = ISN(dβ).

Proof. First, we show ISN(dβ) ⊆ SN(dβ). We proceed by induction on t ∈ ISN(dβ).

Case t = x: Straightforward.

Case t = λx.s, where s ∈ ISN(dβ): By the i.h., s ∈ SN(dβ), so that t ∈ SN(dβ) trivially holds.

Case t = s(u, x.r) ∈ NF lr , where s, u, r ∈ ISN(dβ): By Lemma 3.4 we have s ∈ n, and thus in particular s cannot dβ-reduce to an answer. Therefore any reduction starting at t occurs only in the subterms s, u and r. We conclude since, by the i.h., we have s, u, r ∈ SN(dβ).

Case t = R⟨D n ⟨λx.s⟩(u, y.r)⟩, where R⟨{{u/x}D n ⟨s⟩/y}r⟩, D n ⟨s⟩, u ∈ ISN(dβ): The i.h. gives R⟨{{u/x}D n ⟨s⟩/y}r⟩ ∈ SN(dβ), D n ⟨s⟩ ∈ SN(dβ) and u ∈ SN(dβ), so that by Lemma 3.10, t = R⟨D n ⟨λx.s⟩(u, y.r)⟩ ∈ SN(dβ) holds, with D = D n .

Next, we show SN(dβ) ⊆ ISN(dβ). Let t ∈ SN(dβ). We reason by induction on ⟨||t|| dβ , |t|⟩ w.r.t. the lexicographic order. If ⟨||t|| dβ , |t|⟩ is minimal, i.e. ⟨0, 1⟩, then t is a variable, and thus in ISN(dβ) by rule (snvar). Otherwise we proceed by case analysis.

Case t = λx.s: Since ||s|| dβ ≤ ||t|| dβ and |s| < |t|, we conclude by the i.h. and rule (snabs).

Case t is an application: There are two cases.

Quantitative Types Capture Strong Normalization
We proved in subsection 2.3 that simply typable terms are strongly normalizing. In this section we use non-idempotent intersection types to fully characterize strong normalization, so that not only typable terms are strongly normalizing, but also strongly normalizing terms are typable. First we introduce the typing system, next we prove the characterization, and finally we study the quantitative behavior of the permutative rule π by giving in particular an example of failure of type preservation along π.
4.1. The Typing System. We define the quantitative type system ∩J for T J -terms and we show that strong normalization in λJ n exactly corresponds to ∩J-typability.
Given a countably infinite set BTV of base type variables a, b, c, . . . , we define the following sets of types and multiset types:

(Types) σ, τ, ρ ::= a | M → σ
(Multiset types) M, N ::= [σ i ] i∈I   (I a finite set)

We also use a choice operator #(·) on multiset types, defined by #(M) = M if M ≠ [ ], and #([ ]) = [σ], where σ is an arbitrary type. This operator will be used to guarantee that there is always a typing witness for all the subterms of typed terms.
Typing environments (or just environments), written Γ, ∆, Λ, are functions from variables to multiset types assigning the empty multiset to all but a finite set of variables. The domain of Γ is given by dom(Γ) = {x | Γ(x) ≠ [ ]}. The union of environments, written Γ ⊎ ∆, is defined pointwise by multiset union: (Γ ⊎ ∆)(x) = Γ(x) ⊔ ∆(x). This notion is extended to several environments as expected, so that ⊎ i∈I Γ i denotes a finite union of environments (⊎ i∈I Γ i is to be understood as the empty environment when I = ∅). We write Γ \ x for the environment such that (Γ \ x)(x) = [ ] and (Γ \ x)(y) = Γ(y) for y ≠ x. We write Γ; ∆ for Γ ⊎ ∆ when dom(Γ) ∩ dom(∆) = ∅. A sequent has the form Γ ⊢ t : σ or Γ ⊢ t : M, where Γ is an environment, t is a term, σ is a type and M a multiset type.
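The environment operations just described can be sketched as follows. The model is hypothetical: a multiset type is represented by a `collections.Counter` of (hashable) types, so ⊎ is pointwise Counter addition, and the choice operator's arbitrary witness is a parameter.

```python
from collections import Counter

def union(gamma, delta):
    """Pointwise multiset union Γ ⊎ ∆ of environments (dicts of Counters)."""
    out = {}
    for env in (gamma, delta):
        for x, m in env.items():
            out[x] = out.get(x, Counter()) + m
    return out

def dom(gamma):
    """dom(Γ): variables assigned a non-empty multiset."""
    return {x for x, m in gamma.items() if m}

def minus(gamma, x):
    """Γ \\ x: erase the multiset assigned to x."""
    return {y: m for y, m in gamma.items() if y != x}

def choice(m, witness='a'):
    """Choice operator #: identity on non-empty multisets, an arbitrary
    singleton [witness] on the empty one (the witness is an assumption)."""
    return m if m else Counter([witness])
```

This makes relevance concrete: an environment entry with an empty multiset contributes nothing to the domain, exactly like the absent entries.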
The type system ∩J is given by the following typing rules:

(var) x : [σ] ⊢ x : σ
(abs) if Γ; x : M ⊢ t : σ, then Γ ⊢ λx.t : M → σ
(many) if Γ i ⊢ t : σ i for all i ∈ I, with I ≠ ∅, then ⊎ i∈I Γ i ⊢ t : [σ i ] i∈I
(app) if Γ ⊢ t : #([M i → τ i ] i∈I ), ∆ ⊢ u : #(⊔ i∈I M i ) and Λ; x : [τ i ] i∈I ⊢ r : σ, then Γ ⊎ ∆ ⊎ Λ ⊢ t(u, x.r) : σ

The typing system handles sequents assigning a type σ or a multiset [σ i ] i∈I with I ≠ ∅. According to the rule (many), the latter kind of sequent should be understood as a shorthand for a set of sequents of the former kind. Still, the case I = ∅ is possible in rule (app), and this is precisely where the subtle use of the choice operator is required. Indeed, if I is empty in (app), meaning in particular that x : [ ] appears in the typing environment of the third premise, then the multisets [M i → τ i ] i∈I and ⊔ i∈I M i are both empty. Therefore, the choice operator must be used to type both terms t and u, which cannot be assigned the empty multiset type. In this case, the resulting types #([M i → τ i ] i∈I ) and #(⊔ i∈I M i ) are non-empty multiset types, but they are not necessarily related (cf. the forthcoming example). If I is not empty, then the multiset typing t is non-empty as well, while the multiset typing u may or may not be empty. Notice that the typing rules (and the choice operator) force all the subterms of a typed term to be typed as well. Moreover, if I = ∅ in rule (app), then, as mentioned before, the types of t and u are not necessarily related. Indeed, let t := δ(δ, x.z). Then t is dβ-strongly normalizing, so it must be typable in system ∩J. However, since the set I of x : [τ i ] i∈I in the typing of r = z is necessarily empty (see Lemma 4.1), the unrelated types #([M i → τ i ] i∈I ) and #(⊔ i∈I M i ) of the two occurrences of δ witness the fact that these subterms will never interact during the reduction of t.

System ∩J lacks weakening: it is relevant.
Proof. Straightforward by induction on the derivations.
From now on we use the following notation to indicate that we have used the second item.
Proof. By induction on the type derivation of t. We extend the statement to derivations ending with (many), for which the property is straightforward by the i.h.

Case t = x: Then n = 1, and by hypothesis Γ = ∅ and M = [σ] (so that |M| = 1). Moreover, ∆ ⊢ m u : M necessarily comes from ∆ ⊢ m u : σ by rule (many). Let k = m; then we conclude ∅ ⊎ ∆ ⊢ 1+m−1 {u/x}x = u : σ, i.e. Γ ⊎ ∆ ⊢ k {u/x}x : σ.
Case t = λy.s, where y ≠ x and y ∉ fv(u): By definition we have σ = N → τ and Γ; x : M; y : N ⊢ n−1 s : τ, and we conclude by the i.h. and rule (abs).

Case t = s(o, y.r): We only detail the case where x ∈ fv(s) ∩ fv(o) ∩ fv(r), the other cases being similar. By definition we have premises typing s, o and r, the first one of the form Γ 1 ; x : M 1 ⊢ n 1 s : #([N i → τ i ] i∈I ); we conclude by applying the i.h. to each premise and recombining with rule (app).

The next lemma relates typability of D⟨λx.t⟩ to that of λx.D⟨t⟩, with derivations of the same size.

Proof. Both implications are proved by induction on D. The base case D = ⋄ is trivial. Notice that we always have σ = N → ρ. Let us consider the inductive case D = s(u, y.D′). We first consider the left-to-right implication. So let Γ ⊢ n D⟨λx.t⟩ : σ. We have the following derivation, with n = k + l + m + 1.
The i.h. gives a derivation Λ; y : [τ i ] i∈I ⊢ m λx.D′⟨t⟩ : σ, and thus a derivation Λ; y : [τ i ] i∈I ; x : N ⊢ m−1 D′⟨t⟩ : ρ. By α-conversion, y ∉ fv(s) ∪ fv(u), so that y ∉ dom(Π ⊎ ∆) by Lemma 4.1. We can then build the following derivation of the same size. For the right-to-left implication, we build the first derivation from the second similarly to the previous case.
By nature, subject reduction (and expansion) in the quantitative type system for strong normalization does not hold in general. Indeed, all subterms are typed, even the ones that will be erased. In most cases, these subterms have free variables, which are typed in the environment. When such a subterm is erased, part of the environment is lost, which means that typing is not preserved by reduction steps.
Example 4.5. Let t = λx.I(y, z.x) → dβ λx.x = I. The term t can be typed with a derivation whose environment is y : [σ]. However, by relevance, the term I can only be typed with an empty environment, since it has no free variables.
We thus prove subject reduction only for non-erasing steps.
Definition 4.6 (Erasing step). A reduction step t 1 → dβ t 2 is said to be erasing iff the reduced dβ-redex in t 1 is of the form D λx.t (u, y.r) with x / ∈ fv(t) or y / ∈ fv(r).
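Under the hypothetical tuple encoding used in earlier sketches, the erasing criterion of Definition 4.6 can be checked mechanically on a root redex D⟨λx.s⟩(u, y.r):

```python
# Hypothetical term encoding: ('var', x) | ('lam', x, t) | ('gapp', t, u, y, r).

def fv(t):
    """Free variables of a λJ term."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return fv(t[2]) - {t[1]}
    _, s, u, y, r = t
    return fv(s) | fv(u) | (fv(r) - {y})

def strip_distant(t):
    """For t = D⟨λx.s⟩, return (x, s); assumes t has an abstraction shape."""
    if t[0] == 'lam':
        return t[1], t[2]
    return strip_distant(t[4])

def is_erasing_redex(t):
    """A root redex D⟨λx.s⟩(u, y.r) is erasing iff x ∉ fv(s) or y ∉ fv(r)."""
    _, d, u, y, r = t
    x, s = strip_distant(d)
    return x not in fv(s) or y not in fv(r)
```

For instance, the redex of Example 4.5 is erasing because the continuation binder does not occur in its body, which is exactly why the environment entry for y is lost.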
Quantitative subject reduction then holds for non-erasing steps: if Γ ⊢ n 1 t 1 : σ and t 1 → dβ t 2 is non-erasing, then Γ ⊢ n 2 t 2 : σ with n 2 < n 1 . In the base case, because the step is non-erasing, the types of y and x are not empty by Lemma 4.1, so that we have the corresponding derivation. Applying the substitution Lemma 4.3 again gives Γ ⊢ n 2 t 2 = {{u/x}D n ⟨t⟩/y}r : σ with n 2 = n r + Σ i∈I k i < n 1 .
Case t 1 = λx.t → dβ λx.t′ = t 2 , where t → t′: By hypothesis, we have σ = M → τ and Γ; x : M ⊢ n 1 −1 t : τ. By the i.h. we have Γ; x : M ⊢ k t′ : τ with k < n 1 − 1. We can build a derivation of size n 2 = k + 1, and we get n 1 > n 2 .

Case t 1 = t(u, x.r) and the reduction is internal: By hypothesis, we have the premises of rule (app), and in particular a derivation Σ ⊢ n t t : #([M i → τ i ] i∈I ). Subcase t 1 → t′(u, x.r) = t 2 , where t → t′: We apply the i.h. and derive Σ ⊢ k t′ : #([M i → τ i ] i∈I ) with k < n t . We can build a derivation of size n 2 = 1 + k + n u + n r , and we get n 1 > n 2 .
Subcase t 1 → t(u′, x.r) = t 2 , where u → u′: We have ∆ = ⊎ j∈J ∆ j , n u = Σ j∈J n j u and derivations ∆ j ⊢ n j u u : ρ j . We apply the i.h. and derive ∆ ⊢ k u′ : #(⊔ i∈I M i ) with k < n u . We can build a derivation of size n 2 = 1 + n t + k + n r , and we get n 1 > n 2 . Subcase t 1 → t(u, x.r′) = t 2 , where r → r′: By the i.h. we have Λ; x : [τ i ] i∈I ⊢ k r′ : σ with k < n r . We can build a derivation of size n 2 = 1 + n t + n u + k, and we get n 1 > n 2 .
Although subject reduction does not always hold, the characterization of strongly normalizing terms as the typable ones should. To prove it, we need a weaker form of subject reduction: the fact that the right-hand side of an erasing reduction step is still typable. This is the goal of the following lemma. Notice that we do not consider arbitrary reduction steps, but only those occurring inside a left-right context R. We will use the syntax of terms given in Equation 3.1 to conclude the proof (Lemma 4.10).
(2) If y ∈ fv(r) and x ∉ fv(s), then there are typing derivations for W⟨t′⟩ = W⟨{D n ⟨s⟩/y}r⟩ and u, having measures k W⟨t′⟩ and k u respectively, such that k > 1 + k W⟨t′⟩ + k u .
Proof. We prove a stronger statement: the derivation for W t ′ is of the shape Γ ′ k W t ′ ∩J W t ′ : σ with the same σ but Γ ′ ⊑ Γ. We proceed by induction on R: The derivation of R t has three premises of the form: Γ 1 we get from the first premise: (1) In cases (1) and (2) a derivation Γ 2 such that Γ 2 ⊑ Γ 1 and a typing derivation for u of measure k u .
(2) In case (1) a typing derivation for D n s of measure k Dn s and the fact that k Using the type derivations for R ′ t ′ , u ′ and r ′ we can build a derivation In case (2) in the same way, but without adding k Dn s in the sum.
Case R = n(u ′ , z.R ′ ): The derivation of R t has premises: Γ n kn n : We have Γ = Γ n ⊎ ∆ ⊎ Λ 1 and k = 1 + k n + k u ′ + k R ′ t . By the i.h. we get from the third premise: (1) In cases (1) and (2) a derivation Λ 2 ; z : and I ′ ⊆ I (I ′ possibly empty), and a typing derivation for u of measure k u .
(2) In case (1), a typing derivation for D_n⟨s⟩ of measure k_{D_n⟨s⟩}, together with the required inequality on k. To build a derivation for R⟨t′⟩, we need in particular derivations of type #(⊔_{i∈I′} M_i). There are three cases. Subsubcase: the (M_i)_{i∈I} are all empty, and therefore the (M_i)_{i∈I′} are all empty. Then we set #(⊔_{i∈I′} M_i) = #(⊔_{i∈I} M_i). We take the original derivation, so that ∆′ = ∆ and k′_{u′} = k_{u′}. Subsubcase: the (M_i)_{i∈I′} are all empty but the (M_i)_{i∈I} are not all empty. As a consequence, ⊔_{i∈I} M_i ≠ [ ], and we take an arbitrary type ρ ∈ ⊔_{i∈I} M_i as a witness for u′, so that ∆_ρ ⊢_{k_ρ} u′ : ρ holds by Lemma 4.2. We have the expected derivation with rule (many), taking ∆′ = ∆_ρ, #(⊔_{i∈I′} M_i) = [ρ] and k′_{u′} = k_ρ. Subsubcase: #(⊔_{i∈I′} M_i) = ⊔_{i∈I′} M_i. By Lemma 4.2 it is possible to construct the expected derivation from the original ones for u′. Finally, we conclude with the resulting derivation for R⟨t′⟩, using k_{D_n⟨s⟩} in case (1) and omitting it in case (2). We can conclude since Γ′ ⊑ Γ. Case I = I′ = ∅: We are done by taking the original derivations. Case I ≠ ∅ = I′: Let us take an arbitrary j ∈ I: the type [M_j → τ_j] is set as a witness for n, whose derivation Γ′ ⊢_{k_{n′}} n′ : [M_j → τ_j] is obtained from the derivation Γ_n ⊢_{k_n} n : #([M_i → τ_i]_{i∈I}) by the split Lemma 4.2. For u′, we take as a witness an arbitrary ρ ∈ #(⊔_{i∈I} M_i) and we set #(⊔_{i∈I′} M_i) = [ρ]. If ⊔_{i∈I} M_i = [ ], then ρ is the original witness; otherwise ρ is a type in one of the M_i's. In both cases we use the split Lemma 4.2 to get a derivation for u′. Using the type derivation given by the i.h. for R′⟨t′⟩, we conclude with the resulting derivation for R⟨t′⟩: in case (1) we can conclude because of the inequality involving k_{D_n⟨s⟩}; similarly, but without k_{D_n⟨s⟩}, in case (2).
We now finish the proof of soundness by proving that every typable term has a finite maximal reduction length, i.e. that the number of dβ-steps needed to reach a normal form is bounded. This maximal length is written ||t||_{dβ} for a term t.
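As an aside, the measure ||t||_{dβ} can be computed by brute force on strongly normalizing terms. The following Python sketch is our own illustration, not part of the formal development (the term representation and helper names are hypothetical): it implements the dβ-rule D⟨λx.s⟩(u, y.r) → {D⟨{u/x}s⟩/y}r at the root, closes it under all contexts, and takes the maximum over all reduction choices.

```python
from dataclasses import dataclass
from typing import Union

# Terms of λJn: variables, abstractions, generalized applications t(u, x.r).
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    x: str
    body: "Term"

@dataclass(frozen=True)
class GApp:   # t(u, x.r)
    t: "Term"
    u: "Term"
    x: str
    r: "Term"

Term = Union[Var, Lam, GApp]

def subst(s, x, t):
    """Meta-substitution {s/x}t (naive: assumes bound names avoid x and fv(s))."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return Lam(t.x, subst(s, x, t.body))
    return GApp(subst(s, x, t.t), subst(s, x, t.u), t.x, subst(s, x, t.r))

def dbeta_root(t):
    """dβ at the root: D⟨λx.s⟩(u, y.r) → {D⟨{u/x}s⟩/y}r, or None if no redex."""
    if not isinstance(t, GApp):
        return None
    frames, head = [], t.t
    while isinstance(head, GApp):          # peel the distant context D
        frames.append(head)
        head = head.r
    if not isinstance(head, Lam):
        return None
    inner = subst(t.u, head.x, head.body)  # {u/x}s
    for f in reversed(frames):             # rebuild D around the contractum
        inner = GApp(f.t, f.u, f.x, inner)
    return subst(inner, t.x, t.r)

def reducts(t):
    """All one-step dβ-reducts: root redex plus closure under all contexts."""
    out = []
    root = dbeta_root(t)
    if root is not None:
        out.append(root)
    if isinstance(t, Lam):
        out += [Lam(t.x, b) for b in reducts(t.body)]
    if isinstance(t, GApp):
        out += [GApp(s, t.u, t.x, t.r) for s in reducts(t.t)]
        out += [GApp(t.t, s, t.x, t.r) for s in reducts(t.u)]
        out += [GApp(t.t, t.u, t.x, s) for s in reducts(t.r)]
    return out

def max_dbeta_len(t):
    """||t||_dβ: maximal number of dβ-steps to normal form (loops if t ∉ SN)."""
    rs = reducts(t)
    return 0 if not rs else 1 + max(max_dbeta_len(s) for s in rs)
```

For instance, both a plain β-redex and a distant one, where the abstraction sits at the end of a chain of generalized applications, reach their normal form in one step under this sketch.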
Lemma 4.9. The following equalities hold: Proof. A consequence of Definition 3.7 and Theorem 3.11.
The completeness Lemma 4.15 is based on the typability of normal forms (Lemma 4.12) and on non-erasing subject expansion (Lemma 4.14). The latter itself relies on an anti-substitution lemma (Lemma 4.13). (1) For all t ∈ NF_{dβ}, there exist Γ, σ such that Γ ⊢_{∩J} t : σ.
Proof. By simultaneous induction on t ∈ NF dβ and t ∈ NE dβ .
First, the cases relative to statement (1).
Proof. By induction on t_1 →_{dβ} t_2. Case t_1 = D⟨λx.t⟩(u, y.r) →_β {D⟨{u/x}t⟩/y}r = t_2: Since the reduction is non-erasing, we have y ∈ fv(r) and x ∈ fv(t). By Lemma 4.13, there exist Γ_r, Γ′ and N such that Γ_r; y : N ⊢ r : σ, Γ′ ⊢ D⟨{u/x}t⟩ : N and Γ = Γ′ ⊎ Γ_r. Let N = [τ_i]_{i∈I} ≠ [ ], which holds since y ∈ fv(r). By rule (many), we have a decomposition of the derivation Γ′ ⊢ D⟨{u/x}t⟩ : N. Since neither I nor the M_i's are empty, the choice operator is in both cases the identity, and we can build the required derivation using rule (app). Cases t_1 = λx.t and t_1 = t(u, x.r) with an internal reduction: These cases are direct by the i.h.
We cannot conclude completeness straightaway, given that subject expansion was only shown for the non-erasing cases. Instead, we prove that from any term on the right of a reduction step, we can build a derivation for the term on the left. We rely on the previous lemma for the non-erasing steps, and construct derivations directly for the erasing ones, in which the typing environment may grow along anti-reduction. We use the inductive characterization of strong normalization ISN(dβ) to recognize the left-hand terms that are indeed strongly normalizing, which are the only ones for which we can build a typing derivation.

Lemma 4.15 (Completeness for λJ n ). If t ∈ SN(dβ), then t is ∩J-typable.
Proof. In the statement, we replace SN(dβ) by ISN(dβ), using Theorem 3.11. We use induction on ISN(dβ) to show the following stronger property P: if t ∈ ISN(dβ), then there are Γ, σ such that Γ ⊢ t : σ; moreover, if t ∈ n, then the property holds for any σ.
Case t ∉ NF_{lr}: That is, t = R⟨D_n⟨λx.s⟩(u, y.r)⟩, where t′ = R⟨{{u/x}D_n⟨s⟩/y}r⟩ ∈ ISN(dβ), D_n⟨s⟩ ∈ ISN(dβ), and u ∈ ISN(dβ). Notice that t ∉ n by Lemma 3.4. By the i.h., t′, D_n⟨s⟩ and u are typable. We show by a second induction on R that Σ ⊢ t′ : σ implies Γ ⊢ t : σ, for some Γ. For the base case R = ⋄, there are three subcases. Subcase x ∈ fv(s) and y ∈ fv(r): Since t′ = {{u/x}D_n⟨s⟩/y}r is typable and t →_β t′, t is also typable with Σ and σ by the non-erasing subject expansion Lemma 4.14. We conclude with Γ = Σ. Subcase x ∉ fv(s) and y ∈ fv(r): We build a derivation for t with Γ = Π ⊎ ∆ ⊎ Λ, and we then conclude. Subcase y ∉ fv(r): Since t′ = {{u/x}D_n⟨s⟩/y}r is typable and t′ = r, there is a derivation Λ ⊢ r : σ, where y ∉ dom(Λ) holds by relevance (so that Σ = Λ). We can then write Λ; y : [ ] ⊢ r : σ. We construct a derivation for t ending with rule (app). For this we need two witness derivations, for u and for D_n⟨λx.s⟩. Since u ∈ ISN(dβ), the i.h. gives a derivation ∆ ⊢ u : ρ, and we proceed similarly for D_n⟨λx.s⟩, so that Γ = Π ⊎ ∆ ⊎ Λ. We then conclude. There are then two inductive cases. We extend the second i.h. to multi-types in the obvious way.
Subcase R = R′(u′, z.r′): Consider the terms t_0 = R′⟨D_n⟨λx.s⟩(u, y.r)⟩ and t_1 = R′⟨{{u/x}D_n⟨s⟩/y}r⟩, so that t = t_0(u′, z.r′) and t′ = t_1(u′, z.r′). The type derivation of t′ ends with a rule (app) whose premises type t_1, u′ and r′. We build a derivation for t of type σ ending with rule (app), using the derivation for t_0 together with the ones for u′ and r′, so that the corresponding typing environment is Γ = Γ_0 ⊎ ∆ ⊎ Λ. We then conclude. Subcase R = n(u′, z.R′): Let t_0, t_1 be as before, so that t = n(u′, z.t_0) and t′ = n(u′, z.t_1). We detail the case where z ∈ fv(t_0) and z ∉ fv(t_1), the other ones being similar to case 1. The type derivation of t′ is as follows, with Σ = Γ_n ⊎ ∆ ⊎ Σ′.
We finally obtain:

Quantitative Behavior of π.
We have already mentioned that π is rejected by the quantitative type system ∩J. Concretely, this happens in the critical case where x ∉ fv(r) and y ∈ fv(r′) of Example 4.17. We take t_1 = x(y, a.z)(w, b.b(b, c.c)) →_π x(y, a.z(w, b.b(b, c.c))) = t_2. Let b(b, c.c) : τ, and let Φ_i for i ∈ {1, 2} be the derivation typing t_i, with environment Γ_i. Then the multiset types assigned to x and y in Γ_1 and Γ_2 resp. are not the same. Despite the fact that the step t_1 →_π t_2 does not erase any subterm, the typing environment loses quantitative information.
Notice that by replacing non-idempotent types with idempotent ones, subject reduction (and expansion) would hold for π-reduction: by assigning sets instead of multisets to variables, Γ_1 and Γ_2 would be equal.
Despite the fact that quantitative subject reduction fails for some π-steps, the following weaker property is sufficient to recover (qualitative) soundness of our typing system ∩J w.r.t. the reduction relation → β,π . Soundness will be used later in section 6 to show equivalence between SN(dβ) and SN(β, π).
There are two possibilities.
Case I ≠ ∅: For each i ∈ I there is one derivation of t(u, x.r), and these are combined using rule (many), where J = ⊎_{i∈I} J_i. We then build two derivations Γ_t ⊢_{n_t} t : #([N_j → ρ_j]_{j∈J}) with Γ_t ⊑ ⊎_{i∈I} Γ_t^i and n_t ≤ Σ_{i∈I} n_t^i, and ∆_u ⊢_{n_u} u : #(⊔_{j∈J} N_j) with ∆_u ⊑ ⊎_{i∈I} ∆_u^i and n_u ≤ Σ_{i∈I} n_u^i, as follows. • If x ∈ fv(r), then all the J_i's, and thus also J, are non-empty by relevance. We obtain the expected derivation for t by Lemma 4.2, with Γ_t = ⊎_{i∈I} Γ_t^i and n_t = Σ_{i∈I} n_t^i. Now for u, notice that for each i ∈ I the choice operator may or may not be the identity. Then there are two possibilities. (1) If ⊔_{j∈J} N_j = [ ], we take an arbitrary k ∈ I and let #(⊔_{j∈J} N_j) = [σ_k], so that we can give a derivation ∆_u ⊢_{n_u} u : [σ_k]. (2) Otherwise, we conclude by Lemma 4.2. • If x ∉ fv(r), then all the J_i's are empty by relevance. We take an arbitrary k ∈ I and obtain the expected derivations by taking Γ_t = Γ_t^k ⊑ ⊎_{i∈I} Γ_t^i, n_t = n_t^k ≤ Σ_{i∈I} n_t^i, ∆_u = ∆_u^k ⊑ ⊎_{i∈I} ∆_u^i and n_u = n_u^k ≤ Σ_{i∈I} n_u^i. Finally, we build the required derivation of size n_2. We have Σ = Γ_t ⊎ ∆_u ⊎ (⊎_{i∈I} Λ_r^i) ⊎ ∆_{u′} ⊎ Λ_{r′} ⊑ Γ and n_2 = n_t + n_u + Σ_{i∈I} n_r^i + n_{u′} + n_{r′} ≤ n_1. Case I = ∅: Then there is some τ such that #([M_i → τ_i]_{i∈I}) = [τ], and the derivation of t(u, x.r) ends with rule (app), with Γ′ = Γ_t ⊎ ∆_u ⊎ Λ_r and n′ = n_t + n_u + n_r. We construct the required derivation of size n_2, with Σ = Γ_t ⊎ ∆_u ⊎ Λ_r ⊎ ∆_{u′} ⊎ Λ_{r′} = Γ and n_2 = n_t + n_u + n_r + n_{u′} + n_{r′} = n_1.
We have proved that reducts of typed terms are also typed. To show that typed terms terminate, we show that the maximal length of a reduction to normal form is bounded by the size of the type derivation, hence finite. This is similar to what we did for →_{dβ}.
Lemma 4.19. If t 1 → β t 2 and t 1 → π t 3 , then there is t 4 such that t 3 → β t 4 and t 2 → * π t 4 . Proof. By case analysis of the possible overlaps of the two contracted redexes.
Lemma 4.20. If t 1 → β t 2 , then there is t 3 such that π(t 1 ) → β t 3 and t 2 → * π t 3 . Proof. By induction on the reduction sequence from t 1 to π(t 1 ) using Lemma 4.19 for the base case.
Lemma 4.21. If there is a β, π-reduction sequence ρ starting at t and containing k β-steps, then there is a β, π-reduction sequence ρ ′ starting at π(t) and also containing k β-steps.
Proof. By induction on the (necessarily finite) reduction sequence ρ. If the length of ρ is 0, then k = 0 and the property is trivial. If the length of ρ is 1 + n, we analyze the two possible cases. (1) If ρ is t →_β t′ followed by ρ_0 of length n containing k_0 = k − 1 β-steps, then by the i.h. the property holds for t′ w.r.t. π(t′). Moreover, Lemma 4.20 gives a term t′′ such that π(t) →_β t′′ and t′ →*_π t′′. We then construct the β,π-reduction sequence π(t) →_β t′′ →*_π π(t′′) = π(t′), followed by the one obtained by the i.h. This new sequence has 1 + k_0 = k β-steps.
Proof. By Lemma 4.25, the number of β-reduction steps in any β,π-reduction sequence starting at t is finite. So in any infinite β,π-reduction sequence starting at t, there is necessarily a term u from which only π-steps are performed, infinitely many of them. But this is impossible since π terminates, so we conclude by contradiction.

Faithfulness of the Translation
The original translation of generalized applications into ES (see [ES07]), based on t(u, x.r)* = [t*u*/x]r*, is not conservative with respect to strong normalization; this is also true of the original translation to λ-terms given by [JM03], which is based on t(u, x.r)* = {t*u*/x}r*: it preserves strong normalization but normalizes too much. Indeed, in a β-redex s := (λx.t_0)(u, y.r), the interaction of λx.t_0 with the argument u is materialized by the internal substitution in the contractum {{u/x}t_0/y}r. Such interaction may be elusive: if the external substitution is vacuous (that is, if y is not free in r), β-reduction will simply throw away the abstraction λx.t_0 and its argument u. In the translated term s*, the β-redex (λx.t_0)*u* = (λx.t_0*)u* is also thrown away in the case of the translation to λ-terms, whereas it may reduce inside the explicit substitution [(λx.t_0*)u*/y]r*. The different interactions between the abstraction and its argument in the two mentioned models of computation have important consequences. Here is an example.
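As an aside, the contrast just described can be made concrete. The following Python sketch is our own illustration (the term representation and names are hypothetical): it implements the β-rule of generalized applications, (λx.t_0)(u, y.r) → {{u/x}t_0/y}r, with a naive capture-unaware substitution, and exhibits how the erasing case discards the abstraction and its argument wholesale.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    x: str
    body: "Term"

@dataclass(frozen=True)
class GApp:   # t(u, y.r)
    t: "Term"
    u: "Term"
    y: str
    r: "Term"

Term = Union[Var, Lam, GApp]

def subst(s, x, t):
    """{s/x}t, assuming bound names are distinct from x and fv(s)."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return Lam(t.x, subst(s, x, t.body))
    return GApp(subst(s, x, t.t), subst(s, x, t.u), t.y, subst(s, x, t.r))

def beta(t):
    """β at the root: (λx.t0)(u, y.r) → {{u/x}t0/y}r, or None."""
    if isinstance(t, GApp) and isinstance(t.t, Lam):
        lam = t.t
        return subst(subst(t.u, lam.x, lam.body), t.y, t.r)
    return None

# Erasing case: y not free in r, so the abstraction and u vanish wholesale.
erasing = GApp(Lam("x", Var("x")), Var("u"), "y", Var("z"))
# Non-erasing case: the interaction {u/x}t0 is actually performed.
kept = GApp(Lam("x", Var("x")), Var("u"), "y", Var("y"))
```

In `erasing`, the redex is discarded without λx.x and u ever interacting, whereas the ES image of the same term would keep a copy of the redex alive inside an explicit substitution, as discussed above.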
In this section we define an alternative encoding to the original one and prove it faithful: a term in T J is dβ-strongly normalizing iff its alternative encoding is strongly normalizing in the ES framework. In a later section, we use this connection with ES to establish the equivalence between strong normalization of λJ n and ΛJ.

A New Translation.
We define the syntax and semantics of an ES calculus borrowed from [Acc12], to which we relate λJ_n. It is a simple calculus where β is implemented in two independent steps: one creating a let-binding, and another one substituting the bound term. It has a notion of distance which allows one to reduce redexes in which explicit substitutions lie between the abstraction and its argument. The calculus λES is defined by T_ES[dB, sub], meaning that T_ES is the set of terms and that this set is equipped with →_{dB} and →_{sub}, the reduction relations obtained by closing dB and sub under all contexts. Now, consider the (original) translation from T_J to T_ES [ES07]. According to it, the notion of distance in λES corresponds to our notion of distance for λJ_n. For instance, the application t(u, x._) in the term t(u, x.λy.r)(u′, z.r′) can be seen as a substitution [t*u*/x] inserted between the abstraction λy.r and the argument u′. But how can we now (informally) relate π to the existing notions of permutation for λES? Using the previous translation, we can see that t_0 = t(u, x.r)(u′, y.r′) →_π t(u, x.r(u′, y.r′)) = t_1 is simulated in two steps. The first step is an instance of a rule on ES known as σ_1: ([u/x]t)v → [u/x](tv), and the second one of a rule we call σ_4: [[u/x]t/y]v → [u/x][t/y]v. Quantitative types for ES tell us that rule σ_1, but not rule σ_4, is valid for a call-by-name calculus. This is why it is not surprising that π is rejected by our type system, as detailed in subsection 4.3.
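The two σ-rules just mentioned can be sketched directly on ES terms. In the Python fragment below (our own illustration with hypothetical constructor names, where `Sub(t, x, u)` stands for [u/x]t), both rules are implemented as root rewrites.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    x: str
    body: "Term"

@dataclass(frozen=True)
class App:      # t v, the usual binary application
    fun: "Term"
    arg: "Term"

@dataclass(frozen=True)
class Sub:      # Sub(body, x, u) stands for [u/x]body
    body: "Term"
    x: str
    u: "Term"

Term = Union[Var, Lam, App, Sub]

def sigma1(t):
    """σ1: ([u/x]t)v → [u/x](tv), at the root."""
    if isinstance(t, App) and isinstance(t.fun, Sub):
        es = t.fun
        return Sub(App(es.body, t.arg), es.x, es.u)
    return None

def sigma4(t):
    """σ4: [[u/x]t/y]v → [u/x][t/y]v, at the root."""
    if isinstance(t, Sub) and isinstance(t.u, Sub):
        inner = t.u
        return Sub(Sub(t.body, t.x, inner.body), inner.x, inner.u)
    return None
```

Note that σ_1 only reorganizes an application, while σ_4 commutes two substitutions; only the former is compatible with quantitative call-by-name typing, matching the discussion above.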
The alternative encoding we propose is as follows (noted (·) ⋆ instead of (·) * ): Definition 5.2 (Translation from T J to T ES ).
where x l and x r are fresh variables.
Notice that the above π-reduction t_0 → t_1 is still simulated: t_0⋆ →²_{σ_4} t_1⋆. Moreover, consider again the counterexample t = δ(δ, y.r) to faithfulness (Example 5.1). The alternative encoding of t is now given by [δ⋆/y_l][δ⋆/y_r]{y_l y_r/y}r⋆, which is just [δ⋆/y_l][δ⋆/y_r]r⋆, because y ∉ fv(r⋆). The only hope for an interaction between the two copies of δ⋆ in the previous term is to execute the ESs, but such executions would just throw away those two copies, because y_l, y_r ∉ fv(r⋆). This hopefully gives an intuitive idea of why our encoding is faithful.
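The alternative encoding can be prototyped from the clause visible in the example above, t(u, y.r)⋆ = [t⋆/y_l][u⋆/y_r]{y_l y_r/y}r⋆ with y_l, y_r fresh. The following Python sketch is our own reconstruction of that clause (the representation and the fresh-name scheme are hypothetical):

```python
from dataclasses import dataclass
from typing import Union

# Source terms (T_J): variables, abstractions, generalized applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    x: str
    body: "Term"

@dataclass(frozen=True)
class GApp:   # t(u, y.r)
    t: "Term"
    u: "Term"
    y: str
    r: "Term"

# Target terms (T_ES) add applications and explicit substitutions.
@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

@dataclass(frozen=True)
class Sub:    # Sub(body, x, u) stands for [u/x]body
    body: "Term"
    x: str
    u: "Term"

Term = Union[Var, Lam, GApp, App, Sub]

def subst(s, x, t):
    """Meta-substitution {s/x}t on target terms (naive about capture)."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return Lam(t.x, subst(s, x, t.body))
    if isinstance(t, App):
        return App(subst(s, x, t.fun), subst(s, x, t.arg))
    return Sub(subst(s, x, t.body), t.x, subst(s, x, t.u))

_counter = [0]

def fresh(base):
    _counter[0] += 1
    return f"{base}{_counter[0]}"

def star(t):
    """t(u, y.r)* = [t*/y_l][u*/y_r]{y_l y_r/y}r*, with y_l, y_r fresh."""
    if isinstance(t, Var):
        return t
    if isinstance(t, Lam):
        return Lam(t.x, star(t.body))
    yl, yr = fresh(t.y + "_l"), fresh(t.y + "_r")
    body = subst(App(Var(yl), Var(yr)), t.y, star(t.r))
    return Sub(Sub(body, yr, star(t.u)), yl, star(t.t))
```

On an erasing term s(u, y.r) with y ∉ fv(r), the meta-substitution {y_l y_r/y}r⋆ leaves r⋆ untouched, so both explicit substitutions become garbage, exactly as in the δ(δ, y.r) discussion above.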

Proof of Faithfulness.
We need to prove the equivalence between two notions of strong normalization: that of a term in λJ_n and that of its encoding in λES. While this proof could be quite involved with traditional methods, quantitative types make it very straightforward. Indeed, since quantitative typability corresponds exactly to strong normalization, we only have to show that a term t is typable exactly when its encoding is, for the two appropriate quantitative type systems. For λES, we use the following system from [KV20]. A simple induction on the type derivation shows that the encoding is sound.
Proof. By induction on the type derivation. Notice that the statement also applies by straightforward i.h. for rule (many). If I = ∅, it is easy to construct a derivation x l : [M i → τ i ] i∈I ; x r : ⊔ i∈I M i ∩ES x l x r : [τ i ] i∈I . By Lemma 4.3, we get Φ = Λ; x l : [M i → τ i ] i∈I ; x r : ⊔ i∈I M i ∩ES {x l x r /x}r ⋆ : σ. We conclude by building the following derivation.
This last result, together with the two characterization theorems (Theorem 4.16 and Theorem 5.4), gives: Corollary 5.6. Let t ∈ T_J. If t ∈ SN(dβ) then t⋆ ∈ SN(dB, sub).
We show the converse by a detour through the encoding of T ES to T J . Definition 5.7 (Translation from T ES to T J ).
The two following lemmas, shown by induction on the type derivations, give in particular that t⋆ typable implies t typable. If I ≠ ∅, we conclude by building the following derivation.
If I = ∅, we conclude by building the following derivation (where τ is arbitrary).
Proof. By induction on t. The cases t = x and t = λx.s are straightforward by the i.h. We reason by cases on the generalized application. Case t = s(u, x.r) where x ∈ fv(r): By construction, and also by the anti-substitution Lemma 4.13, it is not difficult to see that Γ = Γ_s ⊎ Γ_u ⊎ Γ_r and that there exist derivations having the following conclusions, where I ≠ ∅. The i.h. on points 1, 5 and 7 gives Γ_r; x : [τ_i]_{i∈I} ⊢_{∩J} r : σ, Γ_u ⊢_{∩J} u : [τ_i]_{i∈I} and Γ_s ⊢_{∩J} s : [[τ_i] → τ_i]_{i∈I} resp., so that we conclude with the following derivation. Case t = s(u, x.r) where x ∉ fv(r): We then have the following derivation, where Γ = Γ_s ⊎ Γ_u ⊎ Γ_r, and [τ_1] → τ_1, [τ_2] → τ_2, ρ and ρ′ are witness types.
Putting everything together, we get the following equivalence. This corollary, together with the two characterization theorems (Theorem 4.16 and Theorem 5.4), provides the main result of this section: Theorem 5.11 (Faithfulness). Let t ∈ T_J. Then t ∈ SN(dβ) ⇐⇒ t⋆ ∈ SN(dB, sub).

Equivalent Notions of Strong Normalization
In the previous section, we related strong dβ-normalization to strong normalization in ES. In this section we compare the various notions of strong normalization induced on T_J by β, dβ, (β, p2) and (β, π). This comparison makes use of several results obtained in the previous sections, and from it we obtain new results about the original calculus ΛJ. 6.1. β-Normalization is not Enough. Obviously, SN(dβ) ⊆ SN(β), since β ⊆ dβ. Similarly, SN(β, π) ⊆ SN(β) and SN(β, p2) ⊆ SN(β). We now see that these inclusions are strict. We discussed in subsection 2.2 the unblocking property of π and p2 and the unblocked character of distant redexes. From the point of view of normalization, this means that T_J[β] has premature normal forms and that SN(β) ⊈ SN(dβ); similarly for the other inclusions above. To illustrate this, we give an example of a T_J-term which normalizes when only rule β is used, but diverges when permutation rules or distance are added. Let us take t := w(u, w′.δ)(δ, x.x), where δ is the term of Example 5.1. Although this term is a normal form in T_J[β], the second δ is actually an argument for the first one, as a π-permutation reveals: t →_π w(u, w′.δ(δ, x.x)), and the subterm δ(δ, x.x) loops. We can also unblock the redex in t by a p2-permutation moving the inner λ up, and we get the same effect in a single dβ-step. In all three cases, β-strong normalization is not preserved, as there is a term t ∈ SN(β) such that t ∉ SN(β, π), t ∉ SN(β, p2) and t ∉ SN(dβ).
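The example above can be checked mechanically. In the Python sketch below (our own illustration; in particular we assume δ = λz.z(z, v.v), a self-application in generalized syntax, since Example 5.1 is not reproduced in this excerpt), the term t contains no β-redex, yet a single π-step at the root exposes the β-redex δ(δ, x.x).

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    x: str
    body: "Term"

@dataclass(frozen=True)
class GApp:   # t(u, x.r)
    t: "Term"
    u: "Term"
    x: str
    r: "Term"

Term = Union[Var, Lam, GApp]

def has_beta_redex(t):
    """Is there a subterm (λx.s)(u, y.r) anywhere in t?"""
    if isinstance(t, Var):
        return False
    if isinstance(t, Lam):
        return has_beta_redex(t.body)
    return (isinstance(t.t, Lam) or has_beta_redex(t.t)
            or has_beta_redex(t.u) or has_beta_redex(t.r))

def pi_root(t):
    """π at the root: t(u, x.r)(u', y.r') → t(u, x.r(u', y.r'))."""
    if isinstance(t, GApp) and isinstance(t.t, GApp):
        inner = t.t
        return GApp(inner.t, inner.u, inner.x,
                    GApp(inner.r, t.u, t.x, t.r))
    return None

# A self-applying term in generalized syntax (one plausible reading of δ):
delta = Lam("z", GApp(Var("z"), Var("z"), "v", Var("v")))
# t = w(u, w'.δ)(δ, x.x): β-normal, yet π exposes the loop δ(δ, x.x).
t = GApp(GApp(Var("w"), Var("u"), "w1", delta), delta, "x", Var("x"))
```

Running `pi_root(t)` thus takes a premature β-normal form to a term with an unblocked redex, which is exactly the phenomenon the inclusion SN(β) ⊈ SN(dβ) captures.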
6.2. Comparison with β + p2. We now formalize the fact that our calculus T_J[dβ] is a version with distance of T_J[β, p2], so that the two are equivalent from the normalization point of view. For this, we establish the equivalence between strong normalization w.r.t. dβ and w.r.t. (β, p2), through a long chain of equivalences. One of them is Theorem 5.11, which we proved in the previous section; the other is a result about σ-rules in the λ-calculus, which is why we have to go through the λ-calculus again.
Definition 6.1 (Translation (·)↓ from T_ES to T_Λ). Proof. For typability in the λ-calculus, we use the type system S′_λ with choice operators of [KV20]. It can be seen as a restriction of the system ∩ES to λ-terms. Suppose M ∈ SN(dB, sub). By Theorem 5.4, M is typable in ∩ES, and it is then straightforward to show that M↓ is typable in S′_λ. For t ∈ T_J, let t̄ := (t⋆)↓. So, we are just composing the alternative encoding of generalized applications into ES with the map into the λ-calculus just introduced. The translation (·)̄ may also be given directly by recursion. Proof. Because (·)̄ produces a strict simulation from T_J to T_Λ. More precisely: (i) if t_1 →_β t_2 then t̄_1 →⁺_β t̄_2; (ii) if t_1 →_{p2} t_2 then t̄_1 →²_{σ_2} t̄_2.
Incidentally, the previous proof also contains a new proof of Theorem 5.11. 6.3. Comparison with β+π. We now prove the equivalence between strong normalization for dβ and for (β, π). One of the implications already follows from the properties of the typing system.
Proof. Follows from the completeness of the typing system (Lemma 4.15) and soundness of ∩J for (β, π) (Lemma 4.26).
The proof of the other implication requires more work, organized in four parts: 1) a remark about ES; 2) a remark about translations of ES into the ΛJ-calculus; 3) two new properties of strong normalization for β, π in ΛJ; and 4) preservation of strong β,π-normalization by a certain map from the set T_J into itself.
The remark about explicit substitutions is this: the translation (·)• in Definition 5.7 induces a simulation of each reduction step →_{sub} on T_ES by a reduction step →_β on T_J, but it cannot simulate the creation of an ES effected by rule dB. A solution is to refine the translation (·)• for applications, yielding the following alternative translation. Since the clause for ES is not changed, the simulation of each reduction step →_{sub} by a reduction step →_β holds as before. The improvement lies in the simulation of each dB-reduction step. This strict simulation gives immediately: We now prove two properties of strong normalization for (β, π) in ΛJ. Following [Mat00], SN(β, π) admits an inductive characterization ISN(β, π), given in Figure 1, which uses the following inductive generation of T_J-terms. Hence S stands for a generalized argument, while S̄ denotes a possibly empty list of S's.
Proof of (II). We prove the following: for all t 1 ∈ ISN(β, π), for all n ≥ 0, if t 1 has n occurrences of the sub-term {{u/y}t/z}r, then, for any choice of n such occurrences, t 2 ∈ ISN(β, π), where t 2 is the term that results from t 1 by replacing each of those n occurrences by (λy.t)(u, z.r).
Notice that the statement we are going to prove entails the admissibility of (II). Indeed, given s, let n be the number of free occurrences of x in s. The term t_1 = {{{u/y}t/z}r/x}s has n well-determined occurrences of the sub-term {{u/y}t/z}r (it may have others), and {(λy.t)(u, z.r)/x}s is the term that results from t_1 by replacing each of those n occurrences by (λy.t)(u, z.r).
Suppose t_1 ∈ ISN(β, π) and consider n occurrences of the sub-term {{u/y}t/z}r in t_1. The proof is by induction on t_1 ∈ ISN(β, π) and sub-induction on n. A term s is determined, with n free occurrences of x, such that x does not occur free in t, u, r and t_1 = {{{u/y}t/z}r/x}s. We want to prove that {(λy.t)(u, z.r)/x}s ∈ ISN(β, π). We will use a device to shorten the writing: if E is t, or S, or S̄, then Ê denotes {{{u/y}t/z}r/x}E and Ě denotes {(λy.t)(u, z.r)/x}E. The proof proceeds by case analysis on s.
We now move to the fourth part of the ongoing reasoning. Consider the map from T_J to itself obtained by composing (·)⋆ : T_J → T_ES with (·)• : T_ES → T_J. Let us write (·)† for this composition. A recursive definition is also possible. Lemma 6.10. If t ∈ SN(β, π) then t† ∈ SN(β, π).
Proof. ⇒) We show that each rule defining ISNj is admissible for the predicate ISN(β, π) defined in Figure 1. Cases (snvar) and (snabs) are straightforward. Case (snredex1) is by the i.h. and Lemma 6.8. Case (snredex2) is by the i.h. and rule (II). Case (snapp) is proved by a straightforward induction on n. ⇐) We show that each rule in Figure 1 defining the predicate ISN(β, π) is admissible for the predicate ISNj. Cases (var) and (lambda) are straightforward. Case (beta) is by rule (snredex2) and the i.h., by just taking R = ⋄ S. Case (hvar) follows by Lemma 6.14 and the i.h. Case (pi) is by Lemma 6.15 and the i.h. 6.5. Alternative Proof of Equivalence. The last theorem can also be shown as a corollary of ISNj = SN(β, π) and the fact that SN(β, π) = ISN(β, π) proved by [JM03]. We will show the first equality ISNj = SN(β, π) in a similar way as for dβ (Theorem 3.11).
Lemma 6.18. The strategy introduced in subsection 6.4 is deterministic.
Proof. For every term there is a unique decomposition in terms of an R context and a redex. Moreover, β- and π-redexes do not overlap.
We conclude by the hypothesis.
Proof. We reason by induction on the pair ⟨||t_0||_{β,π}, n⟩, ordered lexicographically. To show t′_0 ∈ SN(β, π) it is sufficient to show that all its reducts are in SN(β, π). We analyze all possible cases.

Conclusion
Contributions. This paper presents and studies several properties of the call-by-name λJ_n-calculus, a formalism implementing an appropriate notion of distant reduction to unblock the β-redexes arising from the generalized application notation. Strong normalization of simply-typed terms was shown by translating the λJ_n-calculus into the λ-calculus. A full characterization of strong normalization was developed by means of a quantitative type system, in which the length of the reduction to normal form is bounded by the size of the type derivation of the starting term. To achieve this characterization, an inductive definition of strong normalization was introduced and proved correct. It was also shown how the traditional permutative π-rule is rejected by the quantitative system, thus justifying the choice of distant reduction for a quantitative generalized application framework.
We have also defined a faithful translation from the λJ n -calculus into ES. The translation preserves strong normalization, in contrast to the traditional translation from generalized applications to ES e.g. in [ES07]. Last but not least, we related strong normalization of λJ n with that of other calculi, including in particular the original ΛJ of Joachimski and Matthes [JM03,JM00]. New results for the latter were found by means of the techniques developed for λJ n . In particular, a quantitative characterization of strong normalization was developed for ΛJ, where the bound of reduction given by the size of type derivations only holds for β-steps (and not for π-steps).
This paper is an extended version of [ESKP22]. In this version we provide full proofs, and improve the presentation and discussion. The proof of confluence for λJ n given in subsection 2.3 comes from [Pey22]. Related work. Generalizing elimination rules of natural deduction is an old idea, occurring several times in the literature, most notably by [SH84b,SH84a] or [Ten92,Ten02], before being coined in the version at the origin of ΛJ by von Plato [vP01]. The generalization of implication elimination itself has come up independently along the years, as pointed out by [SH14].
Concerning ΛJ, some interesting results were given, motivated by a proof-theoretical approach. In parallel with his works with Joachimski [JM00,JM03] introducing the calculus, [Mat01] proves an interpolation theorem (with information on terms) for ΛJ extended with pairs and sum datatypes. In his PhD thesis, [Bar08] defines a set of conversions for ΛJ beyond β and π. Some of these conversions were already given by [Mat01]; another one is an undirected version of p2. Espírito Santo and his coauthors have used ΛJ and its multiary extension ΛJ^m [ESP03] to compare the computational content of natural deduction and the sequent calculus [ES09,ESFP16]. The call-by-value variant of ΛJ was introduced in [ES20].
The first non-idempotent type system for generalized applications was proposed in our conference paper [ESKP22]. Intersection type systems for ΛJ had been given before in [Mat00] and [ESIL12], but these systems handle idempotent types, so that they are not able to characterize quantitative properties. Since [ESKP22], further investigations on generalized applications based on distant reduction appeared in [KP22,Pey22]. Other calculi based on different logical systems have been adapted to enable quantitative analyses: this is for instance the case of λµ, based on classical logic [KV20], or the Curry-Howard interpretation of the intuitionistic sequent calculus λ̄ [KV15]. Future work. Quantitative type systems, introduced here for the call-by-name system λJ_n, have been successfully adapted to the call-by-value setting in [KP22]. Further unification of call-by-name and call-by-value with the help of generalized applications could be considered in the setting of call-by-push-value [Lev06] or the polarized lambda-calculus [ES16].
It would be interesting to see whether the techniques developed for tightness [AGLK20,KV22] can be adapted to this framework. The precise measures on reduction length thus obtained would enable us to measure exactly the quantitative relationship between the call-by-name λ-calculus and λJ_n. Such techniques could also be adopted for call-by-value, to sharpen the relation between generalized applications and call-by-value calculi.