
1 Introduction

Statically typed languages can detect potential errors in programs at compile time, but must necessarily be conservative and reject some well-behaved programs. Dynamically typed languages accept all programs, which offers great flexibility. However, the accepted programs include programs with type errors, making it harder to detect programs that are ill-behaved because of type errors. Considering the weaknesses and advantages of static and dynamic typing, many approaches have been proposed to integrate the two disciplines [1, 7, 8, 22, 35]. Gradual typing [31, 35] provides a smooth integration of the two styles and has been under active research in the programming languages community. In addition to type soundness, a gradual language should behave as a static language if it is fully annotated and, conversely, as a dynamic language for fully dynamic programs. Importantly, the gradual guarantee [32] has been proposed to ensure a smooth transition between static and dynamic typing.

The importance of System F as a foundation for programming languages with polymorphism naturally leads to the question of whether it can be gradualized. Various researchers have explored this question. In this line of research, a long-standing goal has been to preserve relational parametricity [28]. Parametricity ensures uniform behavior for all instantiations of a polymorphic function, and is an important property of System F. It is also desirable to preserve the gradual guarantee [32], which is recognized as an important property of gradual languages. Unlike in System F, where no dynamic mechanism is needed to ensure parametricity, in gradualized versions of System F this is no longer the case. Ahmed et al. [3] showed that parametricity can be enforced using a dynamic sealing mechanism at runtime. They prove parametricity, but do not discuss the gradual guarantee. Igarashi et al. [17] improved on the dynamic sealing approach and proposed a more efficient mechanism; they discuss the gradual guarantee, but leave it as a conjecture. Toro et al. [37] even proved that the gradual guarantee and parametricity are incompatible. By giving up the traditional System F syntax, New et al. [24] proved both the gradual guarantee and parametricity using user-provided sealing annotations, but this requires resorting to syntax that is not based on System F. Finally, Labrada et al. [20] proved the gradual guarantee and parametricity by inserting sealing with some restrictions: for instance, only base and variable types can be used to instantiate type applications.

While parametricity is highly valued and guaranteed in practice by some functional languages, many mainstream programming languages – such as Java, TypeScript or Flow – do not have parametricity. In mainstream languages the value of parametric polymorphism, and its ability to express a whole family of functions in a reusable and type-safe manner, is certainly recognized. However, such languages are imperative and come with a variety of features (such as unrestricted forms of mutable state, exceptions, parallelism and concurrency mechanisms, reflection, etc.) that make it hard to apply reasoning principles known from functional programming. In particular, most of those features are known to be highly challenging to deal with in the presence of parametricity [2, 18, 23]. This makes it non-obvious how to design a language with all those features while preserving parametricity in the first place. Moreover, preserving parametricity may require extra dynamic checks at runtime, which may discourage implementers for whom performance is a critical factor. Therefore all the aforementioned languages support System F-like mechanisms to deal with polymorphism and benefit from the reuse it affords, but the reasoning principles that arise from polymorphism, such as parametricity, are discarded and not enforced.

In particular, programming languages such as TypeScript or Flow, which support some form of gradual/optional typing and are widely used in practice, do not support parametricity. Figure 1 encodes, in TypeScript and Flow, an example from Ahmed et al.'s work [3] that was used to illustrate the parametricity challenge in gradual typing. In this program, the polymorphic function has type \( \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow Y \), where \( X \) and \( Y \) are type variables. In a calculus with parametricity, we know that a function with such a type should always return the second argument or, in the presence of runtime casts, return an error. The program casts a dynamically typed constant function, which returns the first argument, to this polymorphic type, violating parametricity. When the TypeScript and Flow programs are run, the first argument 2 is returned, illustrating that neither language enforces parametricity. In a gradual language with parametricity the expected result would be an error. Furthermore, even if we turn to Typed Racket [36], a well-established gradual language used both in gradual typing research and in practice, the result is similar and 2 is returned:

Fig. 1.
figure 1

Ahmed et al. [3] program for illustrating parametricity in TypeScript and Flow.

figure f

Therefore Typed Racket does not enforce parametricity either.
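The behavior can be checked concretely. The following is our own TypeScript sketch in the spirit of Ahmed et al.'s example (it is not the paper's exact Figure 1, and the names `kDyn` and `kPoly` are ours):

```typescript
// A dynamically typed constant function that returns its FIRST argument.
const kDyn: any = (x: any) => (y: any) => x;

// "Casting" to the polymorphic type ∀X.∀Y. X -> Y -> Y is just a type
// ascription in TypeScript; no runtime check is inserted.
const kPoly: <X, Y>(x: X) => (y: Y) => Y = kDyn;

// Parametricity would force kPoly to return its second argument (or fail),
// but TypeScript happily returns the first one.
console.log(kPoly<number, number>(2)(3)); // prints 2, not an error
```

Running this under Node.js prints `2`, matching the behavior the paper reports for TypeScript, Flow and Typed Racket.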

In this paper, we explore the more pragmatic design space of polymorphic gradual languages with the gradual guarantee, but without parametricity. We believe that such designs are relevant because many practical language designs do not support parametricity, but support various other programming features instead. Dropping the requirement for parametricity enables us to explore language designs with many relevant practical features, while staying in line with the designs of existing practical gradually typed languages. In particular, this paper studies the combination of parametric polymorphism, gradual typing and references. We show that, when parametricity is not a goal, the design of gradually polymorphic languages can be simplified, making it easier to add features such as references. Moreover, the gradual guarantee, which has proven to be quite problematic in all existing calculi with gradual polymorphism, is simple to achieve. We present a standard static calculus with polymorphism and mutable references called \(\lambda _{gpr}\). Then we introduce its gradual counterpart, called \(\lambda ^{G}_{gpr}\).

The approach that we follow to give the dynamic semantics of \(\lambda ^{G}_{gpr}\) is to use the recently proposed Type-Directed Operational Semantics (TDOS) [16, 42]. In contrast, traditionally the semantics of a gradually typed language is defined by elaboration to a target cast calculus, such as the blame calculus [39]: the dynamic semantics of the gradual source language is given indirectly by translation to the target language. As Ye et al. [42] show, TDOS avoids this indirection, using bidirectional typing and type annotations to enforce both implicit and explicit casts at runtime in gradually typed languages.

In summary, we make the following contributions in this paper:

  • The \(\lambda ^{G}_{gpr}\) calculus: A gradual calculus with polymorphism and mutable references, which is the gradual counterpart of the \(\lambda _{gpr}\) calculus. Both \(\lambda ^{G}_{gpr}\) and \(\lambda _{gpr}\) are shown to be type sound and deterministic.

  • Gradual guarantee for \(\lambda ^{G}_{gpr}\) . We prove the gradual guarantee for \(\lambda ^{G}_{gpr}\). The proof is quite simple, in contrast to previous work on gradual polymorphism, where the gradual guarantee was a major obstacle.

  • A TDOS extension. TDOS has been applied to gradual typing before [42]. However, the previous work on TDOS for gradual typing only works in a purely functional, simply typed calculus. Our work shows that the TDOS approach can incorporate other features, including polymorphism and references.

  • A mechanical formalization in the Coq theorem prover. All the calculi and proofs in this paper have been mechanically formalized in the Coq theorem prover. The Coq formalization can be found in the supplementary materials of this paper.

2 Overview

This section provides a background for gradual polymorphic calculi, calculi with gradual references and the key ideas of our static system (\(\lambda _{gpr}\)) with polymorphism and references and its gradual counterpart (\(\lambda ^{G}_{gpr}\)).

2.1 Background

Gradual References. Mutable references read and write the contents of memory cells. A common set of operations is: allocating a memory cell (\( \text {ref} ~ e \)), updating a reference (\( e_{{\textrm{1}}} := e_{{\textrm{2}}} \)) and reading the contents of a reference (\( ! e \)). Locations (\( o \)) point to memory cells. For a reference value \( \text {ref} ~ 1 \), a new location (\( o \)) is generated and the value 1 is stored in the cell at location \( o \). If 2 is assigned to this location (\( o := 2 \)), the cell's value is updated to 2. Later, when we read this cell (\( ! o \)), 2 is returned. Siek et al. [31] defined an invariant consistency relation for reference types, under which reference types are only consistent with themselves. For example:

$$\begin{aligned} ( \lambda x .\, ( x := 2 ) : \text {Ref} ~ \star \rightarrow \text {Ref} ~ \star ) \, ( \text {ref} ~ 1 ){} & {} -- \text {Rejected!} \; \; \text {Ref} ~ \textsf{Int} \not \sim \text {Ref} ~ \star \end{aligned}$$

Although the type \( \textsf{Int} \) is consistent with \( \star \), this does not mean that \( \text {Ref} ~ \textsf{Int} \) is consistent with \( \text {Ref} ~ \star \). Therefore, the argument type is not consistent with the function's input type, and the program is rejected. Herman et al. [14] proposed a gradually typed source language with references, whose dynamic semantics is defined by elaborating to a coercion calculus. The above program is allowed in their calculus: they define variant consistency, where if A is consistent with B then \( \text {Ref} ~ A \) is consistent with \( \text {Ref} ~ B \). In their calculus, casts are combined to achieve space efficiency. Furthermore, Siek et al. [33] explored monotonic references with variant consistency. Their main consideration is space efficiency: no runtime overhead is imposed on the statically typed parts of programs. None of the above works considered the gradual guarantee.
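The difference between the two flavors of consistency can be made concrete with a small checker. The following is a minimal TypeScript sketch over a tiny type grammar; the encoding and the names `Ty` and `consistent` are ours, not from any of the cited calculi:

```typescript
// A tiny type grammar: Int, the unknown type ⋆ ("Dyn"), Ref A, and A -> B.
type Ty =
  | { tag: "Int" }
  | { tag: "Dyn" }
  | { tag: "Ref"; of: Ty }
  | { tag: "Fun"; dom: Ty; cod: Ty };

// Variant consistency (Herman et al. style): Ref A ~ Ref B when A ~ B.
function consistent(a: Ty, b: Ty): boolean {
  if (a.tag === "Dyn" || b.tag === "Dyn") return true; // ⋆ ~ anything
  if (a.tag === "Int" && b.tag === "Int") return true;
  if (a.tag === "Ref" && b.tag === "Ref") return consistent(a.of, b.of);
  if (a.tag === "Fun" && b.tag === "Fun")
    return consistent(a.dom, b.dom) && consistent(a.cod, b.cod);
  return false;
}

const intTy: Ty = { tag: "Int" };
const refInt: Ty = { tag: "Ref", of: { tag: "Int" } };
const refDyn: Ty = { tag: "Ref", of: { tag: "Dyn" } };
console.log(consistent(refInt, refDyn)); // true under variant consistency
// Under Siek et al.'s invariant rule, Ref Int would NOT be consistent
// with Ref ⋆, and the example application above would be rejected.
```

Replacing the `Ref` case with strict equality of the components yields the invariant relation, which rejects the example program above.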

Toro and Tanter [38] showed how to employ the Abstracting Gradual Typing (AGT) [12] methodology to design a gradually typed calculus with mutable references (\(\lambda _{\widetilde{REF}}\)). The dynamic semantics of their source language is defined by translation to an evidence-based calculus. They prove a bisimulation with the coercion calculus of Herman et al. [14]. \(\lambda _{\widetilde{REF}}\) is proved to satisfy the gradual guarantee, and its consistency relation is also variant.

Gradual Polymorphism. Gradual polymorphism has been an active research topic for a long time. Prior work has focused on two key properties: relational parametricity [28] and the gradual guarantee [32]. Relational parametricity ensures that all instantiations of a polymorphic value behave uniformly. The gradual guarantee ensures that making a program less precise (more dynamic) does not change its behavior.

Satisfying these two properties at once has proven to be problematic. Ahmed et al. [3] showed that a naive combination of the unknown type \( \star \) and type substitution breaks relational parametricity. They show the problem using a simple expression with two casts (to simplify the presentation, we ignore blame labels). Suppose that \(K^{\star } = \lceil \lambda x. \lambda y. x \rceil \), the dynamically typed constant function, is cast to a polymorphic type:

$$\begin{aligned}&K^{\star }: \star \Rightarrow \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow X{} & {} K^{\star }: \star \Rightarrow \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow Y \end{aligned}$$

The notation \(e : A \Rightarrow B\), borrowed from the blame calculus [29], means that expression e is cast from type A to type B. The constant function \(K^{\star }\) returns its first argument. By relational parametricity, a value of type \( \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow X \) should be a constant function that always returns the first argument, while a value of type \( \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow Y \) should return the second argument. Therefore, the first cast should succeed and the second cast should fail. However, if these two casts are applied to the arguments in the usual way, employing type substitutions, then we obtain the following:

$$\begin{aligned}&(K^{\star }: \star \Rightarrow \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow X ) \, \textsf{Int} \, \textsf{Int} \, 2 \, 3 \\ \hookrightarrow ^{*} \,\,&(K^{\star }: \star \Rightarrow \textsf{Int} \rightarrow \textsf{Int} \rightarrow \textsf{Int} ) \, 2 \, 3 \\ \hookrightarrow ^{*} \,\,&2 \\&(K^{\star }: \star \Rightarrow \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow Y ) \, \textsf{Int} \, \textsf{Int} \, 2 \, 3 \\ \hookrightarrow ^{*} \,\,&(K^{\star }: \star \Rightarrow \textsf{Int} \rightarrow \textsf{Int} \rightarrow \textsf{Int} ) \, 2 \, 3 \\ \hookrightarrow ^{*} \,\,&2 \end{aligned}$$

The second cast succeeds and returns the first argument, which breaks parametricity. The reason for this behavior is that, after the type substitution, the polymorphic information is lost. Note that, as we have seen in Section 1, this is exactly how various practical languages (TypeScript, Flow and Typed Racket) behave.

Much of the work on gradual polymorphism aims at addressing the above problem: for the second cast we would like to obtain blame instead of 2, so that parametricity is not violated. While the preservation of parametricity is a worthy goal, it typically requires substantial changes to a calculus, since naive direct type substitutions do not work. This also affects proofs, which can become significantly more complicated due to the changes in the calculus. A well-known approach to this problem, originally proposed by Ahmed et al. [3], is to employ dynamic sealing: instead of substituting directly, a fresh variable binding is recorded. However, even calculi that satisfy parametricity have to compromise on the important gradual guarantee property, or on System F syntax, or be equipped with heavy forms of runtime evidence [20, 37]. A thorough discussion of the various approaches is given in Section 6.

2.2 Key Ideas

Our key design decision is to give up support for parametricity in exchange for a simpler calculus that is also easier to extend with other important practical features. In particular, in our work we illustrate how to obtain a polymorphic gradually typed calculus with gradual references and the gradual guarantee. In contrast, none of the existing gradually polymorphic calculi supports references, and the gradual guarantee is only supported with restrictions [20], with major modifications to the syntax and semantics of the language [24], or not supported or proved at all [3, 17, 37].

A direct semantics with a TDOS. Our gradually typed calculus \(\lambda ^{G}_{gpr}\) has a direct semantics, using a Type-Directed Operational Semantics (TDOS) [15] approach. In \(\lambda ^{G}_{gpr}\), type annotations are operationally relevant and essentially play a role similar to casts. Nevertheless, in a gradual calculus implicit casts must also be enforced at runtime. Most previous work makes implicit casts explicit via an elaboration process, which is why the dynamic semantics is not defined directly. We instead resort to bidirectional typing, with inference (\( \Rightarrow \)) and checking (\( \Leftarrow \)) modes. In the checking mode, consistency (\( \sim \)) between the type of a value and the checked type is checked and enforced via an implicit cast. At compile time, the flexible consistency relation allows more programs to be accepted, while the checking mode signals the casts that are needed at runtime. This can be seen, for example, in the typing rule for applications:

figure g

The checking mode signals an implicit cast for the argument. The argument \(e_{{\textrm{2}}}\) is checked to be consistent with the type \(A_{{\textrm{1}}}\) using the bidirectional subsumption rule:

figure h

For instance, \(( \lambda x .\, x : \textsf{Int} \rightarrow \textsf{Int} ) \, ( \textsf{True} : \star )\) type-checks, but at run-time the invalid cast to the value argument \(( \textsf{True} : \star )\) is detected and an error is reported.
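The runtime check that this implicit cast performs can be sketched in a few lines. The following TypeScript fragment is our own illustration of the idea, not the calculus itself; `DynVal`, `cast` and the `"blame"` message are hypothetical names:

```typescript
// A ⋆-typed value carries its dynamic type alongside the raw value.
type Ty = { tag: "Int" } | { tag: "Bool" } | { tag: "Dyn" };
type DynVal = { ty: Ty; val: unknown };

// Casting checks consistency between the value's dynamic type and the
// target type; an inconsistent cast raises blame.
function cast(v: DynVal, target: Ty): DynVal {
  if (target.tag === "Dyn" || v.ty.tag === target.tag) return v;
  throw new Error("blame: cannot cast " + v.ty.tag + " to " + target.tag);
}

const trueDyn: DynVal = { ty: { tag: "Bool" }, val: true }; // True : ⋆
console.log(cast(trueDyn, { tag: "Dyn" }).val); // true: casting to ⋆ succeeds
// cast(trueDyn, { tag: "Int" }) would throw, mirroring how
// (λx. x : Int -> Int) (True : ⋆) type-checks but fails at run-time.
```

The application example in the text corresponds to the commented-out failing cast: the argument's dynamic type Bool is not consistent with the expected Int.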

Conservativity, no parametricity and direct substitutions. The \(\lambda ^{G}_{gpr}\) calculus is a conservative extension of its static counterpart. Notably, \(\lambda ^{G}_{gpr}\) is a simple polymorphic calculus that does not use mechanisms such as dynamic sealing or evidence. Instead, since parametricity is not a goal, we can simply use direct type substitutions during reduction, as follows:

figure i

Our type application rule substitutes the type directly, unlike in previous work with dynamic sealing, where a fresh type name is generated and stored in a global or local context. Dynamic sealing takes extra time and space; with a large enough number of type applications, the space consumption may grow without bound.
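Direct type substitution can be sketched with a small implementation over a tiny type grammar. This is our own illustration (the names `subst` and `tyApp` are ours), not the paper's formal rule:

```typescript
// Types: Int, ⋆, type variables, functions, and ∀-types.
type Ty =
  | { tag: "Int" }
  | { tag: "Dyn" }
  | { tag: "Var"; name: string }
  | { tag: "Fun"; dom: Ty; cod: Ty }
  | { tag: "All"; bound: string; body: Ty };

// Capture is not an issue when the substituted type is closed; we only
// stop at a shadowing binder.
function subst(t: Ty, x: string, a: Ty): Ty {
  switch (t.tag) {
    case "Int":
    case "Dyn": return t;
    case "Var": return t.name === x ? a : t;
    case "Fun": return { tag: "Fun", dom: subst(t.dom, x, a), cod: subst(t.cod, x, a) };
    case "All": return t.bound === x
      ? t // X is rebound here; the outer substitution stops
      : { tag: "All", bound: t.bound, body: subst(t.body, x, a) };
  }
}

// Type application peels the ∀ and substitutes directly: no fresh type
// name, no sealing context.
function tyApp(t: Ty, a: Ty): Ty {
  if (t.tag !== "All") throw new Error("expected a polymorphic type");
  return subst(t.body, t.bound, a);
}

// (∀X. X -> X) Int  ⇒  Int -> Int
const idTy: Ty = { tag: "All", bound: "X",
  body: { tag: "Fun", dom: { tag: "Var", name: "X" }, cod: { tag: "Var", name: "X" } } };
console.log(JSON.stringify(tyApp(idTy, { tag: "Int" }))); // Int -> Int
```

A sealing-based semantics would instead allocate a fresh name for X and keep it in a runtime context, which is the extra time and space cost mentioned above.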

Gradual guarantee and references. Furthermore, \(\lambda ^{G}_{gpr}\) is mechanically formalized and shown to satisfy the gradual guarantee. Our use of an eager semantics and our choice of value forms for \(\lambda ^{G}_{gpr}\) simplify the gradual guarantee. To prove it we need a precision (\( \sqsubseteq \)) relation: the gradual guarantee theorem ensures that if the more static program does not go wrong, then the less static program does not go wrong either. The precision relation relates two programs that differ in their type information. Type precision compares the amount of static type information in programs and types: a type is more precise than another if it is more static, and the unknown type (\(\star \)) is the least precise type, since it carries no static information. Consider the two programs:

$$\begin{aligned}&\lambda x .\, 1 : \textsf{Int} \rightarrow \textsf{Int} \\&\lambda x .\, 1 : \star \rightarrow \star \end{aligned}$$

The first program is more precise than the second, because the second program is fully dynamic. The value forms of \(\lambda ^{G}_{gpr}\) are annotated and include terms such as \( i : \textsf{Int} \) and \(( \lambda x .\, e : A \rightarrow B ) : C\). The simplicity of the proof of the gradual guarantee is closely tied to this choice of value representation. In \(\lambda ^{G}_{gpr}\), the gradual guarantee theorem can be formalized in a simple way, with a lemma similar to one proposed by Garcia et al. [12]: if \(e_{{\textrm{1}}}\) is more precise than \(e_{{\textrm{2}}}\) and \(e_{{\textrm{1}}}\) takes a step to \(e'_{{\textrm{1}}}\), then \(e_{{\textrm{2}}}\) takes a step to \(e'_{{\textrm{2}}}\) and \(e'_{{\textrm{1}}}\) is more precise than \(e'_{{\textrm{2}}}\). With this lemma, we can infer that two expressions related by precision have the same behavior; thus, the lemma is enough to obtain the dynamic gradual guarantee. Notably, \(\lambda ^{G}_{gpr}\) is extended with mutable references using a form of variant consistency [14, 38]. This is in contrast to the previously discussed gradually polymorphic calculi, where references are not supported.
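Type precision on simple types can be sketched as follows. This is our own encoding (`precision` is a hypothetical helper), covering only base and function types:

```typescript
// Types: Int, ⋆, and function types.
type Ty =
  | { tag: "Int" }
  | { tag: "Dyn" }
  | { tag: "Fun"; dom: Ty; cod: Ty };

// precision(a, b) holds when a ⊑ b: a is more precise (more static)
// than b; the unknown type ⋆ is the least precise type.
function precision(a: Ty, b: Ty): boolean {
  if (b.tag === "Dyn") return true; // every type is more precise than ⋆
  if (a.tag === "Int" && b.tag === "Int") return true;
  if (a.tag === "Fun" && b.tag === "Fun")
    return precision(a.dom, b.dom) && precision(a.cod, b.cod);
  return false;
}

// Int -> Int ⊑ ⋆ -> ⋆, matching the two programs above.
const intToInt: Ty = { tag: "Fun", dom: { tag: "Int" }, cod: { tag: "Int" } };
const dynToDyn: Ty = { tag: "Fun", dom: { tag: "Dyn" }, cod: { tag: "Dyn" } };
console.log(precision(intToInt, dynToDyn)); // true
```

Note that the relation is directed: `precision(dynToDyn, intToInt)` is false, since ⋆ is not more precise than Int.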

3 The \(\lambda _{gpr}\)  Calculus: Syntax, Typing and Semantics

This section introduces the \(\lambda _{gpr}\) calculus, a calculus with references and polymorphism. The \(\lambda _{gpr}\) calculus is an extension of System F with references, and it is the static calculus that serves as a foundation for the gradual calculus in Section 4.

3.1 Syntax

The syntax of the \(\lambda _{gpr}\)  calculus is shown in Figure 2.

Fig. 2.
figure 2

\(\lambda _{gpr}\) syntax

Types. Meta-variables A, B range over types. Types include base types (\( \textsf{Int} \)), function types \((A \rightarrow B)\), type variables \(( X )\), polymorphic types \(( \forall X .\, A )\), the unit type \( \textsf{Unit} \) and reference types \( \text {Ref} ~ A \), which denote references to values of type A.

Expressions. The meta-variable e ranges over expressions. Most expressions are standard: variables (\( x \)), integers (\( i \)), annotations (e : A), applications (\(e_{{\textrm{1}}} \, e_{{\textrm{2}}}\)), type applications (e A), dereferences \(( ! e )\), assignments (\( e_{{\textrm{1}}} := e_{{\textrm{2}}} \)), references (\( \text {ref} ~ e \)), unit (\( \textsf{unit} \)), locations (\( o \)), lambda abstractions (\( \lambda x : A.e \)), which are annotated with the input type A, and type abstractions (\( \varLambda X .\, e \)).

Values. The meta-variable v ranges over values: an integer (\( i \)), a type abstraction (\( \varLambda X .\, e \)), a lambda abstraction (\( \lambda x : A .\, e \)), the unit value (\( \textsf{unit} \)) or a location (\( o \)).

Contexts, stores, locations and frames. The typing context \(\varGamma \) tracks bound variables \( x \) with their types and bound type variables \( X \). The location typing \(\varSigma \) tracks locations \( o \) with their types, while the store \(\mu \) maps locations to the values stored during reduction. Frames (F) include applications, type applications, dereferences, assignments and references.

3.2 Type System

Before introducing the type system, we show the well-formedness judgment for types at the top of Figure 3. Well-formedness ensures that types have no free type variables: every type variable must be bound in the context.

Typing relation. The typing relation of \(\lambda _{gpr}\) is shown at the bottom of Figure 3. The type system essentially consists of the usual System F rules, except that they also propagate the location typing context (\(\varSigma \)). Reference locations \( o \) are tracked in the location typing \(\varSigma \) (rule styp-loc); the type bound to a location indicates the type of the stored value. For instance, if \( o \) points to 1 stored in a memory cell, the integer type of 1 is tracked by the location \( o \) in \(\varSigma \). Other rules related to references, such as assignments (rule styp-assign), references (rule styp-ref) and dereferences (rule styp-deref), are standard. Annotation expressions (e : A) are not necessary in the static system, where the annotated type is syntactically equal to the type of the expression (rule styp-anno), but they will play an important role in the gradual system and are included here.

Fig. 3.
figure 3

The type system of \(\lambda _{gpr}\)  calculus.

Definition 1 defines well-formed stores (\(\mu \)) with respect to the location typing \(\varSigma \), using the typing relation:

Definition 1

(Well-formedness of the store with respect to \(\varSigma \) ).

$$ \varSigma \vdash \mu \equiv dom(\mu ) = dom(\varSigma ) ~ \text {and} ~ \varSigma ; \cdot \vdash \mu ( o ) : \varSigma ( o ), ~\text {for every} ~ o \in dom(\mu ) $$

A store is well-formed with respect to the location typing if they have the same domain and, for each location \( o \) in the store, the stored value \(\mu ( o )\) can be typed with the type bound in the location typing \((\varSigma ( o ))\).

3.3 Dynamic Semantics

The operational semantics of the \(\lambda _{gpr}\) calculus is shown in Figure 4 (we ignore the gray parts for now). \(\mu ; e \hookrightarrow \mu ' ; e'\) states that e with store \(\mu \) reduces to \(e'\) with the updated store \(\mu '\). The reduction rules of \(\lambda _{gpr}\) are straightforward. A reference value is bound in the store to a fresh location, as shown in rule step-refv. The dereference rule extracts the value bound to the location in the store (rule step-deref). Rule step-eval evaluates under frames. Let us see how the example \( o_{{\textrm{1}}} := ( \varLambda X .\, ( \lambda x : X .\, x ) \, ! o_{{\textrm{2}}} ) ~ \textsf{Int} \) reduces under the store \( o_{{\textrm{1}}} = 1 , o_{{\textrm{2}}} = 2 \). The dereference \( ! o_{{\textrm{2}}} \) reads 2 from the store; after the type substitution, 2 is passed to the identity function, and the result 2 is used to update the cell pointed to by \( o_{{\textrm{1}}} \). Finally, the store becomes \( o_{{\textrm{1}}} = 2 , o_{{\textrm{2}}} = 2 \). The detailed steps are as follows:

Fig. 4.
figure 4

Reduction rules for \(\lambda _{gpr}\).

figure j
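The store manipulations in this example can be mirrored in a few lines of TypeScript. This is an illustrative sketch with our own helper names (`ref`, `deref`, `assign`), not the calculus itself:

```typescript
// A store maps locations to the values they hold.
type Loc = number;
const store = new Map<Loc, number>();
let nextLoc = 0;

// ref e: allocate a fresh location holding the value (rule step-refv).
function ref(v: number): Loc { const o = nextLoc++; store.set(o, v); return o; }
// !o: read the value bound to a location (rule step-deref).
function deref(o: Loc): number {
  const v = store.get(o);
  if (v === undefined) throw new Error("dangling location");
  return v;
}
// o := v: update the cell at a location.
function assign(o: Loc, v: number): void { store.set(o, v); }

const o1 = ref(1), o2 = ref(2);       // store: { o1 ↦ 1, o2 ↦ 2 }
// o1 := (ΛX. (λx:X. x) !o2) Int — the type instantiation reduces away,
// leaving the identity function applied to the dereferenced value:
assign(o1, ((x: number) => x)(deref(o2)));
console.log(deref(o1), deref(o2));    // 2 2 — store: { o1 ↦ 2, o2 ↦ 2 }
```

The final store matches the one reached by the reduction steps above.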

Theorem 1 shows that the \(\lambda _{gpr}\) calculus is deterministic:

Theorem 1

(Determinism of \(\lambda _{gpr}\) ). If \(\varSigma ; \cdot \vdash _s e : A\), \( \varSigma \vdash \mu \), \(\mu ; e \hookrightarrow _{s} \mu _{{\textrm{1}}} ; e_{{\textrm{1}}}\) and \(\mu ; e \hookrightarrow _{s} \mu _{{\textrm{2}}} ; e_{{\textrm{2}}}\) then \(e_{{\textrm{1}}} = e_{{\textrm{2}}}\) and \(\mu _{{\textrm{1}}} = \mu _{{\textrm{2}}}\).

Furthermore, the preservation theorem (Theorem 2) and progress theorem (Theorem 3) for the \(\lambda _{gpr}\) calculus are shown below:

Theorem 2

(Type Preservation of \(\lambda _{gpr}\) ). If \(\varSigma ; \cdot \vdash _s e : A\), \( \varSigma \vdash \mu \) and \(\mu ; e \hookrightarrow _{s} \mu ' ; e'\) then \(\varSigma ' ; \cdot \vdash _s e' : A\), \( \varSigma ' \vdash \mu ' \) and \( \varSigma ' ~\supseteq ~ \varSigma \).

Theorem 3

(Progress of \(\lambda _{gpr}\) ). If \(\varSigma ; \cdot \vdash _s e : A\) and \( \varSigma \vdash \mu \), then e is a value or there exist \(e'\) and \(\mu '\) such that \(\mu ; e \hookrightarrow _{s} \mu ' ; e'\).

Fig. 5.
figure 5

Bidirectional typing for the \(\lambda _{gpr}\)  calculus.

3.4 Bidirectional Typing

We also present a set of bidirectional typing rules for \(\lambda _{gpr}\) (shown in Figure 5). Although bidirectional typing is not essential for \(\lambda _{gpr}\), it is used later in the proofs of the gradual typing criteria. The typing judgment is written \(\varSigma ; \varGamma \vdash e \, \Leftrightarrow \, A\): the expression e is inferred (\( \Rightarrow \)) or checked (\( \Leftarrow \)) with type A under the typing context \(\varGamma \) and the location typing \(\varSigma \). The typing modes (\(\Leftrightarrow \)) comprise the inference mode (\( \Rightarrow \)) and the checking mode (\( \Leftarrow \)), shown at the top of Figure 5. The one extra rule is rule sty-eq, which switches modes. We proved that the two type systems are equivalent:

Lemma 1

(Typing Equivalence for \(\lambda _{gpr}\) ). \(\varSigma ; \varGamma \vdash _s e : A\) iff \(\varSigma ; \varGamma \vDash _s e \, \Leftrightarrow \, A\).

4 The \(\lambda ^{G}_{gpr}\)  Calculus

This section introduces the \(\lambda ^{G}_{gpr}\) calculus, which gradualizes the \(\lambda _{gpr}\) calculus. Normally, a gradually typed lambda calculus (GTLC) does not define its operational semantics directly, but is elaborated to a cast calculus. \(\lambda ^{G}_{gpr}\) instead defines its dynamic semantics directly, using the TDOS approach [15]. \(\lambda ^{G}_{gpr}\) is proved to be type sound and to satisfy the gradual guarantee. The calculus does not have parametricity, which enables simplifications in the calculus and the addition of features such as gradual references, which none of the previous gradual calculi with polymorphism support.

4.1 Static Semantics

Fig. 6.
figure 6

\(\lambda ^{G}_{gpr}\) syntax and consistency.

Syntax, type well-formedness and consistency. Figure 6 shows the syntax and consistency relation of the \(\lambda ^{G}_{gpr}\) calculus. The gray parts are the same as in \(\lambda _{gpr}\). Compared with \(\lambda _{gpr}\), the \(\lambda ^{G}_{gpr}\) calculus extends types with the unknown type \( \star \). Because of the power of the unknown type \( \star \), dynamic type checking is required and run-time errors may be raised. Therefore, in addition to expressions, \(\lambda ^{G}_{gpr}\) has the run-time error \(\textsf{blame}\). Because of the run-time checking required by gradual typing, type abstractions and lambda abstractions carry annotations. Furthermore, due to the imprecision of the unknown type \( \star \), values are also annotated; otherwise, terms such as \(1 : \star \) would be troublesome. Because of these value forms, annotations are not included in frames, unlike in the \(\lambda _{gpr}\) calculus. We explain the details later.

Well-formed types are extended with the following rule for the unknown type \( \star \):

figure k

Notably, instead of syntactic equality, a more general relation called consistency (\( \varGamma \vdash A \sim B \)) is used in \(\lambda ^{G}_{gpr}\). Every well-formed type is consistent with itself, and the unknown type is consistent with any well-formed type. Structural types, such as function, reference and polymorphic types, are consistent if their sub-components are consistent. Note that for reference types consistency is variant: if A and B are consistent then \( \text {Ref} ~ A \) and \( \text {Ref} ~ B \) are consistent; unlike with invariant consistency [31], A and B do not have to be equal. As usual, consistency is reflexive and symmetric, but not transitive. We use the abbreviation \( A \sim B \) \(\equiv \) \( \cdot \vdash A \sim B \).

Typing relation. Bidirectional typing is used to design the type system. While bidirectional typing is not essential for \(\lambda _{gpr}\), it is necessary for \(\lambda ^{G}_{gpr}\): annotation expressions (e : A) and the checking mode (\( \Leftarrow \)) signal the use of casts (explicit or implicit) at run-time.

Fig. 7.
figure 7

The type system for the \(\lambda ^{G}_{gpr}\)  calculus.

The typing rules of the \(\lambda ^{G}_{gpr}\) calculus are shown in Figure 7. They are almost the same as in \(\lambda _{gpr}\)'s type system. In rule Typ-app, rule Typ-tapp, rule Typ-assign and rule Typ-deref, the unknown type \( \star \) can be matched to a dynamic function type (\( \star \rightarrow \star \)), a dynamic polymorphic type (\( \forall X .\, \star \)) or a dynamic reference type (\( \text {Ref} ~ \star \)). In a system with gradual typing and the unknown type \(\star \), we must always consider cases where the type may be unknown. For instance, in an application \(e_1~e_2\), \(e_1\) can infer a function type as usual, but it can also infer type \(\star \) and still be well-typed. So, a matching function (\(A \vartriangleright B\)) is needed to account for both possibilities. The table at the bottom of Figure 7 shows the definition of the matching functions \(A \vartriangleright B\). Note that the notation is overloaded: there are three different matching functions, one in each column of the table, employed by the corresponding rules. For example, rule Typ-deref employs the matching function in the third column of the table. The first row of the table depicts the form of each matching function, while the other two rows give its definition.

The checking mode rule Typ-sim is generalized to check that the inferred type A and the checked type B are consistent. Note that rule Typ-sim is the only rule in the checking mode and, as such, does not overlap with any other rule. Moreover, all the rules in the inference mode are syntax-directed. Therefore, the rules are essentially directly implementable, as usual for bidirectional type-checking rules. Note that in \(\lambda ^{G}_{gpr}\) annotation expressions combined with consistency play an important role, allowing more programs to be accepted. For instance, \(( \lambda x .\, ( ( x : \star ) \, 1 ) : \textsf{Bool} \rightarrow \star ) \, \textsf{True}\) is accepted, but raises a \(\textsf{blame}\) error at run-time. Note that dynamically typed lambdas \(\lambda x.e\) are syntactic sugar for \(\lambda x.e: \star \rightarrow \star \). This syntactic sugar enables us to easily encode the dynamically typed lambda calculus (DTLC) [4] in \(\lambda ^{G}_{gpr}\).

Definition 2 shows dynamic type checking for raw and annotated values, which is performed at run-time. Dynamic type checking for values exploits the annotations present at run-time and does not make use of the typing relation. It is essentially a constant-time operation with little cost (note that the function is not recursive).

Definition 2 (Dynamic type)

\( | u |_{ \mu } = A \) and \( | v |_{ \mu } = A \) denote the dynamic type of the raw and annotated values.

$$\begin{aligned} | i |_{ \mu }&= \textsf{Int} \\ | ( \lambda x .\, e : A \rightarrow B ) |_{ \mu }&= A \rightarrow B \\ | ( \varLambda X .\, e : A ) |_{ \mu }&= \forall X .\, A \\ | \textsf{unit} |_{ \mu }&= \textsf{Unit} \\ | o |_{ \mu }&= \text {Ref} ~ | v |_{ \mu } ~~~~ \text {when}~ o = v \, \in \, \mu \\ | ( u : A ) |_{ \mu }&= A \end{aligned}$$

\( | u |_{ \mu } = A \) states that the dynamic type of the raw value u is A under store \(\mu \). Notably, for a location \( o \), the dynamic type is given by the dynamic type of the value bound in the store. The other rules are straightforward. Lemma 2 shows that if a raw value infers type A, then its dynamic type is A as well.
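A sketch of \( | u |_{ \mu } \) in Python, using a hypothetical tagged-tuple encoding of raw values and representing an annotated value as a (raw value, type) pair (our assumption, not the paper's syntax). Since stored values are annotated, the location case only reads an annotation, keeping the function constant time:

```python
def dynamic_type(u, store):
    """Dynamic type of a raw value u under store `store`
    (in the spirit of Definition 2). No typing relation is used."""
    tag = u[0]
    if tag == 'int':
        return 'Int'
    if tag == 'unit':
        return 'Unit'
    if tag == 'lam':                   # ('lam', body, ('->', A, B))
        return u[2]
    if tag == 'tlam':                  # ('tlam', X, body, A)
        return ('forall', u[1], u[3])
    if tag == 'loc':                   # ('loc', o); store maps o to (u', A)
        _raw, A = store[u[1]]          # stored values are annotated,
        return ('Ref', A)              # so one lookup suffices
    raise ValueError("not a raw value: " + repr(u))
```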

Lemma 2 (Synthesis of Dynamic Types)

For any raw value u, if \( \varSigma \vdash \mu \) and \(\varSigma ; \cdot \vdash u \, \Rightarrow \, A\) then \( | u |_{ \mu } = A \).

As in \(\lambda _{gpr}\), a term typed using the inference mode is guaranteed to infer a unique type. In addition, Lemma 3 shows that each well-typed term can be checked.

Lemma 3 (Synthesis principality)

If \(\varSigma ; \varGamma \vdash e \, \Rightarrow \, A\) then there exists B such that \(\varSigma ; \varGamma \vdash e \, \Leftarrow \, B\) and \( \varGamma \vdash A \sim B \).

4.2 Dynamic Semantics

The dynamic semantics consists of two parts. The first is casting, which casts a value to another value at a target type; in a cast, the dynamic type of the value plays the role of the source type. The second part is the reduction rules.

Casting. Figure 8 shows the casting rules of the \(\lambda ^{G}_{gpr}\) calculus. \( \mu ; v \hookrightarrow _{ A } \mu ; r \) represents casting value v by type A under store \(\mu \).

Fig. 8.
figure 8

Casting for values

The dynamic type of the raw value u is checked for consistency with type A. If the two types are consistent, the intermediate type is removed and the raw value is annotated with the target type. Otherwise, a run-time error is raised. For example, when \(1 : \star \) is cast to type \(\textsf{Bool}\), the dynamic type of 1 is \( \textsf{Int} \), which is not consistent with \(\textsf{Bool}\), so blame is raised. In contrast, when \(1 : \star \) is cast to type \( \textsf{Int} \), the dynamic type \( \textsf{Int} \) is consistent with \( \textsf{Int} \); thus, type \( \star \) is erased and 1 is annotated with type \( \textsf{Int} \). Since a location \( o \) is a raw value, obtaining its dynamic type requires the store \(\mu \); therefore, casting uses the store. Casting by two types is shown at the bottom of Figure 8: it simply casts by the types one by one, using the basic casting relation.
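The first-order cases of casting can be sketched as follows; the value encoding (an annotated value is a pair of raw value and annotation) and the restriction to base types are our own simplifications:

```python
STAR = '*'

def consistent(A, B):
    # minimal consistency for this sketch: '*' is consistent with
    # everything; base types only with themselves
    return A == STAR or B == STAR or A == B

def cast(v, A):
    """Cast annotated value v = (u, B) to target type A: compare the
    dynamic type of the raw value u with A; on success, drop the
    intermediate type B and re-annotate u with A; otherwise blame."""
    u, _B = v
    dyn = {bool: 'Bool', int: 'Int'}[type(u)]   # dynamic type of u
    if consistent(dyn, A):
        return (u, A)        # intermediate annotation erased
    return 'blame'
```

The two examples from the text behave accordingly: `cast((1, '*'), 'Int')` yields `(1, 'Int')`, while `cast((1, '*'), 'Bool')` yields blame.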

Reduction. The reduction rules of the \(\lambda ^{G}_{gpr}\) calculus are shown in Figure 9. Raw values reduce to values by being annotated with their dynamic type, using rule step-u. Due to this rule, annotations are not included in frames. Annotated expressions are further handled by rules step-anno and step-annop. From typing rules Typ-app, Typ-tapp, Typ-assign, and Typ-deref, type \( \star \) is allowed to match a dynamic function type, a dynamic polymorphic type or a dynamic reference type, respectively. Moreover, \( \star \) is consistent with any type. Therefore, we must check at run-time whether the internal value actually matches the expected type structure. For example, the application \(( ( 1 : \star ) \, 2 )\) is ill-formed because the internal value 1 is not a lambda abstraction. There are similar examples for type applications and assignments: \( ( 1 : \star ) ~ \textsf{Bool} \) and \( ( \textsf{True} : \star ) := 2 \), where 1 is not a type abstraction and \(\textsf{True}\) is not a location. Using rules vstep-betad, vstep-tapd, vstep-derefp, and vstep-assignd, we cast the value to the corresponding dynamic type and filter out erroneous programs. To apply a value to a functional value (rules vstep-beta and vstep-betap), the argument type must be consistent with the function input type \(A_{{\textrm{2}}}\). Moreover, the expected type of the substituted value is \(A_{{\textrm{1}}}\). Thus, the argument value is cast by \(A_{{\textrm{2}}}\) and \(A_{{\textrm{1}}}\), which may return a \(\textsf{blame}\) error. To preserve the type, the substituted body is annotated with \(B_{{\textrm{1}}}\) and \(B_{{\textrm{2}}}\). When a value v is annotated with a type A, the type of the value must be consistent with A, and run-time checking is needed to validate consistency (rule vstep-annov). A reference value \( \text {ref} ~ v \) is bound in the store with a fresh location \( o \) (rule vstep-refv).
To obtain a value from the store at its location, we use rule vstep-deref. Note that in the typing rule for references:

Fig. 9.
figure 9

Reduction rules for \(\lambda ^{G}_{gpr}\).

figure l

The expected type is A, but the type of the bound value is only consistent with A. Thus we annotate v with type A. When assigning a value to replace the bound value in a reference, using rules vstep-assign and vstep-assignp:

figure m

The value bound at location \( o \) has type \(A_{{\textrm{1}}}\), while the type of \(v_{{\textrm{2}}}\) is consistent with type \(A_{{\textrm{2}}}\), and \(A_{{\textrm{2}}}\) is consistent with \(A_{{\textrm{1}}}\). The expected type of the replacement is \(A_{{\textrm{1}}}\); therefore \(v_{{\textrm{2}}}\) is cast by type \(A_{{\textrm{1}}}\) and \(A_{{\textrm{2}}}\). Note that the cast may result in blame. If a type is applied to a polymorphic value (rule vstep-tap):

figure n

The expected type is \(( B_{{\textrm{2}}} [ X \mapsto C ] )\) but the substituted expression \(( e [ X \mapsto C ] : A [ X \mapsto C ] )\) has type \(( A [ X \mapsto C ] )\), so it is annotated with type \(( B_{{\textrm{2}}} [ X \mapsto C ] )\).
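The run-time filtering performed by rules such as vstep-betad can be illustrated with a small sketch; the encoding of values as (raw value, annotation) pairs and the use of Python closures for lambda abstractions are our own assumptions:

```python
STAR = '*'

def apply_dynamic(v, arg):
    """Apply a value annotated with '*' (rule vstep-betad, sketched):
    the value is first cast to the dynamic function type '*' -> '*',
    which fails with blame when the raw value is not a lambda
    abstraction, e.g. in the ill-formed application (1 : '*') 2."""
    u, _anno = v
    if not callable(u):          # raw value is not a lambda abstraction
        return 'blame'
    return (u(arg), STAR)        # result of a dynamic application
```

Thus `apply_dynamic((1, '*'), 2)` blames, whereas applying an actual lambda succeeds.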

Properties of \(\lambda ^{G}_{gpr}\) . \(\lambda ^{G}_{gpr}\) is deterministic (Theorem 4) and type sound (Theorem 5 and Theorem 6).

Theorem 4

(Determinism of \(\lambda ^{G}_{gpr}\) ). If \(\varSigma ; \cdot \vdash e \, \Leftrightarrow \, A\), \(\mu ; e \hookrightarrow \mu _{{\textrm{1}}} ; r_{{\textrm{1}}}\) and \(\mu ; e \hookrightarrow \mu _{{\textrm{2}}} ; r_{{\textrm{2}}}\) then \(r_1=r_2\) and \(\mu _{{\textrm{1}}} = \mu _{{\textrm{2}}}\).

Theorem 5

(Type Preservation of \(\lambda ^{G}_{gpr}\) ). If \(\varSigma ; \cdot \vdash e \, \Leftrightarrow \, A\), \( \varSigma \vdash \mu \), and \(\mu ; e \hookrightarrow \mu ' ; e'\) then \(\varSigma ' ; \cdot \vdash e' \, \Leftrightarrow \, A\), \( \varSigma ' \vdash \mu ' \) and \( \varSigma ' ~\supseteq ~ \varSigma \).

Theorem 6

(Progress of \(\lambda ^{G}_{gpr}\) ). If \(\varSigma ; \cdot \vdash e \, \Leftrightarrow \, A\) then e is a value or \(\exists r ~ \mu '\), \(\mu ; e \hookrightarrow \mu ' ; r\).

Fig. 10.
figure 10

Precision Relation.

Fig. 11.
figure 11

Reduction rules for \(\lambda _{gpr}\).

4.3 Gradual Typing Criteria

Siek et al. [31, 32] proposed a set of criteria for gradual typing systems. At one end of the spectrum, a fully annotated gradually typed program should behave as a statically typed program. Conversely, a gradually typed program without annotations should behave as a dynamically typed program. Siek et al. also proposed the gradual guarantee, which states that making annotations more or less precise should not change the behavior of a program. Here we show that \(\lambda ^{G}_{gpr}\) satisfies the gradual guarantee.

To prove the gradual guarantee, we define precision for types, expressions and stores. At the top of Figure 10 is type precision \( A \sqsubseteq B \), which states that type A is more precise than type B. The unknown type \( \star \) is less precise than any other type. Every type is more precise than itself. Precision for function, polymorphic and reference types holds if precision holds for their sub-components. Note that the precision of function types is “covariant” in the argument types: comparing the two programs

$$\begin{aligned}&\lambda x .\, 1 : \textsf{Int} \rightarrow \textsf{Int} \\&\lambda x .\, 1 : \star \rightarrow \textsf{Int} \end{aligned}$$

we simply deem the first one more precise than the second, because the input type of the second is fully dynamic. Expression precision is shown in the middle of Figure 10. The rules mostly follow from type precision. Every expression is in the precision relation with itself. Structural expressions are related if their sub-expressions are related. Lastly, store precision, shown at the bottom of Figure 10, holds if the values in the stores are pointwise related.
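Type precision can be sketched directly from this description; the tuple encoding of types is again a hypothetical assumption:

```python
STAR = '*'

def precision(A, B):
    """A ⊑ B: A is at least as precise as B. The unknown type is the
    least precise; constructors (including function argument types,
    covariantly) compare component-wise."""
    if B == STAR:
        return True              # every type is more precise than '*'
    if A == B:
        return True              # reflexivity
    if isinstance(A, tuple) and isinstance(B, tuple) and A[0] == B[0]:
        return all(precision(a, b) for a, b in zip(A[1:], B[1:]))
    return False
```

The two programs above are related because `precision(('->', 'Int', 'Int'), ('->', '*', 'Int'))` holds, while the converse direction fails.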

Static criteria. We show that the fully static type system of \(\lambda ^{G}_{gpr}\) is equivalent to the \(\lambda _{gpr}\) calculus (Theorem 7). We use the subscript s to denote a relation from the static system in case of ambiguity. Theorem 8 shows the static gradual guarantee of \(\lambda ^{G}_{gpr}\): if a more precise program is well-typed, then a less precise program is also well-typed, with a less precise type.

Theorem 7

(Equivalence for \(\lambda _{gpr}\) (statics)). \( \cdot ; \cdot \vDash _s e \, \Leftrightarrow \, A\) if and only if \( \cdot ; \cdot \vdash e \, \Leftrightarrow \, A\).

Theorem 8 (Static Gradual Guarantee)

If \( e_{{\textrm{1}}} \sqsubseteq e_{{\textrm{2}}} \) and \( \cdot ; \cdot \vdash e_{{\textrm{1}}} \, \Leftrightarrow \, A\), then \( \cdot ; \cdot \vdash e_{{\textrm{2}}} \, \Leftrightarrow \, B\) for some B with \( A \sqsubseteq B \).

Dynamic criteria. Theorem 9 states that fully static programs of the \(\lambda ^{G}_{gpr}\) calculus behave the same as in \(\lambda _{gpr}\) at run-time. To simplify the proofs, the reduction rules of the \(\lambda _{gpr}\) calculus carry extra annotations to follow \(\lambda ^{G}_{gpr}\) (denoted \(s*\)). That is, there are extra identical annotations, shown in the gray parts of Figure 4. However, these annotations are identical and can be removed without affecting the final reduction result. In addition, as in \(\lambda ^{G}_{gpr}\): values carry annotations; raw values step to annotated values; and annotations are not included in frames. This requires a few extra rules, shown in Figure 11.

Notably, \(\lambda ^{G}_{gpr}\) satisfies the dynamic gradual guarantee (Theorem 10). The proof is simple compared to the original proof by Siek et al. [32], and the theorem is formalized following the work of Garcia et al. [12]. It states that if a more precise program with a more precise store can reduce, then the less precise program with a less precise store can also reduce, and the resulting programs and stores remain in the precision relation.

Theorem 9

(Equivalence for \(\lambda _{gpr}\) (dynamic)). For all e such that \( \cdot ; \cdot \vDash _s e \, \Leftrightarrow \, A\):

  • If \(\mu ; e \hookrightarrow _{s*} \mu ' ; e'\) then \(\mu ; e \hookrightarrow \mu ' ; e'\).

  • If \(\mu ; e \hookrightarrow \mu ' ; e'\) then \(\mu ; e \hookrightarrow _{s*} \mu ' ; e'\).

Theorem 10 (Dynamic Gradual Guarantee)

If \( e_{{\textrm{1}}} \sqsubseteq e_{{\textrm{2}}} \) , \( \mu _{{\textrm{1}}} \sqsubseteq \mu _{{\textrm{2}}} \), \( \cdot ; \cdot \vdash e_{{\textrm{1}}} \, \Leftrightarrow \, A\), \( \cdot ; \cdot \vdash e_{{\textrm{2}}} \, \Leftrightarrow \, B\) and \(\mu _{{\textrm{1}}} ; e_{{\textrm{1}}} \hookrightarrow \mu '_{{\textrm{1}}} ; e'_{{\textrm{1}}}\) then there exists \(e'_{{\textrm{2}}}\) and \(\mu '_{{\textrm{2}}}\) such that \(\mu _{{\textrm{2}}} ; e_{{\textrm{2}}} \hookrightarrow \mu '_{{\textrm{2}}} ; e'_{{\textrm{2}}}\) , \( e'_{{\textrm{1}}} \sqsubseteq e'_{{\textrm{2}}} \) and \( \mu '_{{\textrm{1}}} \sqsubseteq \mu '_{{\textrm{2}}} \).

5 Discussion

In this section, we briefly discuss alternative designs and possible extensions.

Preserving relational parametricity. An alternative design is a gradual polymorphic calculus with a direct semantics that preserves parametricity. We employ an eager semantics similar to the AGT methodology, as applied in the GSF calculus. Toro et al. [37] analyzed the following example to show how parametricity is broken by a naive use of dynamic sealing under the eager semantics:

$$\begin{aligned} (\varLambda X.(\lambda x: X. \textsf{let} \; y: \star = x \; \textsf{in} \; \textsf{let} \; z : \star = y \; \textsf{in} \; z + 1)) \; \textsf{Int} \; 1 \end{aligned}$$

The polymorphic function with type (\( \forall X .\, X \rightarrow \star \)) breaks parametricity, which should be detected at run-time by raising an error. However, the application of the function reduces to 2. A fresh name variable \(\alpha \) is generated and bound to the type \( \textsf{Int} \). The flow from x to y goes from type \( \textsf{Int} \) to type \(\alpha \); from y to z, from type \( \star \) to type \( \star \); and from x to z, from \( \textsf{Int} \) to \( \star \). All of these type flows are safe. Thus the loss of parametricity is related to the loss of precise type information, and dynamic sealing alone is not enough to enforce relational parametricity. For the above example, GSF detects the error through refined evidences such as \((\langle {\alpha ^{E_1}}, {\alpha ^{E_2}} \rangle )\). Importantly, in the type flow from y to z, more precise types (\( \textsf{Int} \) and \(\alpha ^{Int}\)) are obtained instead of \( \star \) and \( \star \), so when moving from x to z the type changes from \( \textsf{Int} \) to \(\alpha ^{Int}\). When the addition is performed, the run-time error is detected, since the flow from \(\alpha ^{Int}\) to \( \textsf{Int} \) is not defined. A potential approach for us is to use tracked types \((A^{<B_{{\textrm{1}}},B_{{\textrm{2}}}>})\), which are similar to the refined evidences of the GSF calculus. Because \(\lambda ^{G}_{gpr}\) is a source language, we do not have evidences, so a possible approach is to record this information in types. For the above example, tracked types would refine the unknown type from y to z with the more precise types \( \textsf{Int} \) and \(\alpha ^{Int}\), giving \( \star ^{( \textsf{Int} ,\alpha ^{Int})}\), and likewise from x to z, playing the role of the refined evidences, so that a run-time error is detected when the addition is performed.

A space-efficient gradual polymorphic calculus. Ozaki et al. [27] explored the space-efficiency problem for gradual polymorphic calculi. They extended the coercion calculus (\(\lambda C\)) [29] with parametric polymorphism (called \(\lambda C^{\forall }\)). Dynamic sealing was applied in \(\lambda C^{\forall }\) to enforce relational parametricity. Consequently, a sequence of coercions is allowed, and they showed that it cannot be normalized to a smaller coercion; in other words, the size of such sequences is unbounded. Notably, they stated and proved that \(\lambda C^{\forall }\) cannot be space-efficient when dynamic sealing is supported. Furthermore, they conjectured that gradual polymorphic calculi with dynamic sealing cannot be space-efficient. Our \(\lambda ^{G}_{gpr}\) calculus substitutes types directly, following the traditional semantics, without employing dynamic sealing. Moreover, the eager semantics is applied. Thus we believe that it is possible for our \(\lambda ^{G}_{gpr}\) calculus to be a space-efficient gradual polymorphic calculus. Two tentative and promising rules are as follows:

figure o

With the above two rules, annotations are removed or an error is raised, to achieve the space-efficient goal. Surprisingly, with these two rules, it seems possible to have a space-efficient gradual references calculus naturally. We intend to explore this in the future.

Implicit polymorphic references. Implicit (higher-rank) polymorphism [10, 19, 26] is pervasive in theoretical and practical programming languages. Existing gradual polymorphic calculi are mainly explicitly polymorphic; one exception is the work of Xie et al. [41]. Explicit polymorphism means that a polymorphic type is not related to any of its instantiations, whereas in implicit polymorphism they are related. Xie et al. [41] designed a source calculus for gradual implicit polymorphism with consistent subtyping, but its dynamic semantics is defined by translation to the well-known polymorphic blame calculus (\(\lambda B^{\forall }\)) [3], without a proof of the dynamic gradual guarantee. A possible extension of Xie et al.'s work is to support implicit polymorphism with a direct dynamic semantics, and to explore the dynamic gradual guarantee and parametricity. However, it is well known that a naive combination of implicit polymorphism and references leads to an unsound language. A possible solution is to limit polymorphism to syntactic let-bound values, as adopted by Standard ML [40].

Alternative forms of values. In our calculus, all values are annotated, such as \( 1 : \textsf{Int} \) or \(( \lambda x .\, x : \textsf{Int} \rightarrow \textsf{Int} ) : \textsf{Int} \rightarrow \textsf{Int} \). This introduces some overhead as some annotations are redundant. We can have an alternative and workable form of values as follows:

figure p

The above value form removes redundant annotations, such as those on integers (\( 1 : \textsf{Int} \)). This is good for performance, but it would make the proof of the dynamic gradual guarantee harder. However, the resulting calculus with fewer annotations should have a semantics equivalent to ours, and would be a better candidate for guiding an implementation.

6 Related Work

Gradual typing. Gradual typing is a term coined by Siek et al. [31]. The unknown type ?, which we represent as \( \star \), is the new notion introduced to a gradual type system to integrate dynamic and static typing. By using the unknown type \( \star \), equality on types is lifted to consistency. Any type is consistent with type \( \star \). Therefore, run-time type checking is needed for a gradually typed lambda calculus. Traditionally, the dynamic semantics of a gradual language is defined by elaborating to a target language, which includes cast calculi [3, 11, 29, 34, 39] and coercion calculi [13, 14, 27, 29, 30].

Garcia et al. [12] proposed the abstracting gradual typing (AGT) approach, which allows deriving a gradual type system by lifting a static type system. They argue about the weaknesses of elaborating to a target language, and avoid a target language in their calculus by using intrinsic terms. Our \(\lambda ^{G}_{gpr}\) also defines the dynamic semantics directly without using intrinsic terms, employing instead an approach based on type-directed operational semantics (TDOS). TDOS was proposed by Huang et al. [15] to design calculi with the merge operator and intersection types, and Ye et al. [42] explored its use in gradual typing. In a TDOS, type annotations are relevant at run-time and can affect the semantics, unlike in many traditional calculi where types are not run-time relevant. With a TDOS we can design a gradually typed calculus without elaboration to a cast calculus, since the semantics is given directly. Our \(\lambda ^{G}_{gpr}\) employs the eager semantics for higher-order values, following an approach similar to AGT. Ye et al. only consider a TDOS for a simply typed, purely functional language; our work shows that the TDOS approach can be extended to important features such as polymorphism and references.

Gradual typing with references. Many languages with static and dynamic typing, employing some form of optional typing, support references. These include Flow [8], Dart [6] and TypeScript [5]. However, with optional typing, run-time checking is not performed for fully dynamic programs, leading to unsoundness with respect to the static type system. Siek et al. [31] already considered mutable references, but in a very simple setting without annotation expressions. Furthermore, their gradually typed lambda calculus is elaborated to a target language to define the dynamic semantics. Herman et al. [14] designed a space-efficient coercion calculus with references. The gradualizer, introduced by Cimini and Siek [9], can systematically derive a gradual static type system and cast insertion with references. Toro et al. [38] designed a source gradual typing system with references, \(\lambda _{\widetilde{REF}}\), and a corresponding target language, \(\lambda ^{\epsilon }_{\widetilde{REF}}\), using the Abstracting Gradual Typing (AGT) methodology. They designed \(\lambda ^{\epsilon }_{\widetilde{REF}}\) as a space-efficient calculus and proved the gradual guarantee. Our \(\lambda ^{G}_{gpr}\) is the first polymorphic gradually typed language with references.

Existing gradual polymorphic calculi. In the following we summarize some of the solutions to the problem of preserving parametricity and the gradual guarantee in gradual polymorphic calculi, and the changes that these solutions entail.

Dynamic sealing. Ahmed et al. [3] solved the problem in Section 2 by using dynamic sealing, inspired by the work of Matthews et al. [21]. They proposed the polymorphic blame calculus [3] (which we write as \(\lambda B^{\forall }\)), a widely used cast calculus with dynamic sealing. The most interesting construct of \(\lambda B^{\forall }\) is the named type binding \(\nu X := A. t\), which is introduced to record the instantiated type of a type variable. The programs in Section 2 behave as expected in \(\lambda B^{\forall }\):

$$\begin{aligned}&(K^{\star }: \star \Rightarrow \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow X ) \, \textsf{Int} \, \textsf{Int} \, 2 \, 3 \\ \hookrightarrow ^{*} \,\,&\nu Y := \textsf{Int} . \nu X := \textsf{Int} . (2 : X \Rightarrow \star : \star \Rightarrow X ) \\ \hookrightarrow ^{*} \,\,&2 \\&(K^{\star }: \star \Rightarrow \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow Y ) \, \textsf{Int} \, \textsf{Int} \, 2 \, 3 \\ \hookrightarrow ^{*} \,\,&\nu Y := \textsf{Int} . \nu X := \textsf{Int} . (2 : X \Rightarrow \star : \star \Rightarrow Y ) \\ \hookrightarrow ^{*} \,\,&blame \end{aligned}$$

The first program succeeds and returns the first argument. The second program fails, since the polymorphic information is recorded as \(X:= \textsf{Int} \) and \(Y:= \textsf{Int} \) in type bindings, and the original type-variable names are preserved in the casts. Notably, for higher-order values, \(\lambda B^{\forall }\) follows the lazy semantics of the blame calculus [29, 39]: for a function value, checking is delayed until an argument is applied. This, unfortunately, results in unbounded space consumption for higher-order casts [13, 14].

As Xie et al. [41] pointed out, the compatibility relation of \(\lambda B^{\forall }\) mixes explicit and implicit polymorphism to some extent, since they employ the following rule:

figure q

This compatibility rule of \(\lambda B^{\forall }\) allows \( \forall X .\, X \rightarrow X \) to be compatible with any static instantiated type, such as \( \textsf{Int} \rightarrow \textsf{Int} \) and \( \textsf{Bool} \rightarrow \textsf{Bool} \). These types are not related in System F, so \(\lambda B^{\forall }\) is not a conservative extension of System F. The gradual guarantee has not been discussed for \(\lambda B^{\forall }\), but parametricity is proved.

The \(F_G\) and \(F_C\) calculi. Igarashi et al. [17] improved on \(\lambda B^{\forall }\). They designed a source calculus (\(F_G\)) and a target calculus (\(F_C\)), which is a conservative extension of System F. The dynamic semantics of \(F_G\) is indirect, defined by translation to \(F_C\). \(F_G\) does not relate \( \forall X .\, X \rightarrow X \) with static instantiations, but only with the dynamic instantiation \( \star \rightarrow \star \). The type \( \star \rightarrow \star \) is called quasi-polymorphic, since it is an instantiation of \( \forall X .\, X \rightarrow X \) similarly to what happens with implicit polymorphism; a type such as \( \textsf{Int} \rightarrow \textsf{Int} \) is not quasi-polymorphic. Instead of binding types locally with (\(\nu X := A. t\)), they made the type bindings global: their reduction judgment \(\varSigma \triangleright f \hookrightarrow \varSigma '\triangleright f' \) is augmented with a store, which records the bound type variables \(X := A\). The above example reduces in \(F_C\) as follows.

$$\begin{aligned}&\varSigma \triangleright (K^{\star }: \star \Rightarrow \forall X .\, \forall Y .\, X \rightarrow Y \rightarrow X ) \; \textsf{Int} \; \textsf{Int} \; 2 \; 3 \\ \hookrightarrow ^{*} \,\,&\varSigma \triangleright (\varLambda X. \varLambda Y . K^{\star }: \star \Rightarrow X \rightarrow Y \rightarrow X ) \; \textsf{Int} \; \textsf{Int} \; 2 \; 3 \\ \hookrightarrow ^{*} \,\,&\varSigma , X := \textsf{Int} , Y := \textsf{Int} \triangleright (K^{\star }: \star \Rightarrow X \rightarrow Y \rightarrow X ) \; \textsf{Int} \; 2 \; 3 \\ \hookrightarrow ^{*} \,\,&2 \end{aligned}$$

Furthermore, they argue that locally generated type bindings lead to run-time overheads. Their observation is that type bindings are not required for every substitution, but only for casts involving the dynamic type (\( \star \)). Therefore they employ two kinds of type variables, distinguished by labels: static type variables (X::S) and gradual type variables (X::G). Type application for a static type abstraction does not generate type bindings; they are generated only for gradual type abstractions. Parametricity and the static gradual guarantee are proved, although the proofs are not mechanized, and the dynamic gradual guarantee is left as a conjecture. In addition, their static gradual guarantee is proved under some constraints in the type precision relation: in their precision, \( \forall X .\, X \rightarrow X \) is more precise than \( \forall X .\, X \rightarrow \star \) but not than \( \forall X .\, \star \rightarrow X \).

The GSF calculus. Toro et al. [37] presented a gradual polymorphic calculus named GSF, which employs the Abstracting Gradual Typing (AGT) methodology. In AGT, casting of higher-order values is eager compared with \(\lambda B^{\forall }\) and \(F_C\). This avoids the space-consumption problem although, as New et al. [25] pointed out, the \(\eta \) principle (which ensures \(V \equiv \lambda x.V x\) in call-by-value languages) is broken. To preserve parametricity, global dynamic sealing, which does not distinguish between static and gradual variables, is used. They also refine the presentation of evidence, which witnesses that the consistency judgement holds. Instead of simple evidences such as (\(\langle \alpha , Int \rangle \)), they employ sealing evidences (\(\langle {\alpha ^{E}}, Int \rangle \)). GSF satisfies parametricity but not the gradual guarantee. Importantly, they proved that the gradual guarantee is incompatible with parametricity.

Parametricity with the Gradual Guarantee. To achieve both parametricity and the gradual guarantee, New et al. [24] designed the \(PolyG^{v}\) calculus, which gives up the syntax of System F and requires users to provide sealing annotations. They introduced the sealing syntax \(seal_{X}~M\), which explicitly seals terms. With this user-provided syntax, the gradual guarantee and parametricity are proved. More recently, Labrada et al. [20] improved on GSF. They do not change the syntax of System F, but insert plausible sealing forms during the elaboration from a gradual source language, named Funk, to a target cast calculus. They proved the gradual guarantee and parametricity for the target language, but for the source language (Funk) the gradual guarantee comes with a restriction: type applications can only be instantiated with base and variable types. Some of the main theorems are proved in Agda.

Table 1. Comparison among gradual polymorphism calculi. A \(\times \) denotes no, a \(\checkmark \) denotes yes, and a partially filled mark denotes partial support.

Summary. In order to keep parametricity, several compromises are needed. For instance, a dynamic sealing mechanism must be used instead of direct type substitution, causing extra space and time consumption. In many of the earlier calculi, the gradual guarantee is not obtained; in the later calculi, the gradual guarantee is either restricted or the syntax of System F must be given up. Traditionally, many works on gradual typing are based on two different calculi: a source gradually typed language, and a target cast/coercion calculus where casts/coercions are explicit. The dynamic semantics is defined by elaborating the source language into the target calculus; in other words, the semantics of the gradually typed language is given indirectly via a second, target language. All the works discussed above give the semantics of a gradually typed source language in this indirect way.

Furthermore, none of the gradually typed polymorphic calculi supports references. Even for a static polymorphic calculus extended with mutable references, obtaining parametricity is highly non-trivial. As Ahmed et al. [2] stated: “combining mutable references with polymorphism can be extremely tricky.” From the analysis of Jaber and Tzevelekos [18], we know that naively extending a polymorphic calculus with mutable references breaks parametricity, because a shared reference can be instantiated with differently typed variables. Therefore, extending a gradual polymorphic calculus with mutable references is non-trivial, and none of the existing gradual languages with polymorphism supports references.

Table 1 summarizes several features and differences in existing gradually polymorphic calculi.

7 Conclusion

In this paper, we designed a static calculus \(\lambda _{gpr}\) with polymorphism and references, and its gradual counterpart \(\lambda ^{G}_{gpr}\). \(\lambda ^{G}_{gpr}\) has a direct semantics, without resorting to a cast calculus. For \(\lambda ^{G}_{gpr}\), the gradual guarantee is proved, but we give up parametricity. In exchange, our calculus is simplified, since sophisticated mechanisms such as dynamic sealing are not needed. Our calculus follows the original semantics of System F, based on direct type substitution, avoiding the extra space and time overhead required by mechanisms such as dynamic sealing. In the future, we would like to find out whether there is a way to keep both the gradual guarantee and relational parametricity for the source language, and to explore more efficient formulations of \(\lambda ^{G}_{gpr}\).