The unrestricted language
is a run-of-the-mill idealized ML language with functions, pairs, sums, iso-recursive types and polymorphism. It is presented in its explicitly typed form—we will not discuss type inference in this work. The full syntax is described in Fig. 1, and the typing rules in Fig. 2. The dynamic semantics is completely standard. Having binary sums, binary products and iso-recursive types lets us express algebraic datatypes in the usual way.
The novelty lies in the linear language
, which we present in several steps. As is common in \(\lambda \)-calculi with references, the small-step operational semantics is given for a language that is not exactly the surface language in which programs are written, because memory allocation returns locations
that are not in the grammar of surface terms. Reductions are defined on configurations: a local store paired with a term of a slightly larger internal language. We have two type systems: one on surface terms, which does not mention locations and stores (this is the one a programmer needs to know), and one on configurations, which contains enough static information to reason about the dynamics of our language and to prove subject reduction. Again, this follows the standard structure of syntactic soundness proofs for languages with a mutable store.
2.1 The Core of 
Figure 3 presents the surface syntax of our linear language
. For the syntactic categories of types
, and expressions
, the last line contains the constructions related to the linear store that we only discuss in Sect. 2.2.
In technical terms, our linear type system is exactly propositional intuitionistic linear logic, extended with iso-recursive types. For simplicity, and because we did not need them, our current system has neither polymorphism nor additive/lazy pairs
. Additive pairs would be a trivial addition, but polymorphism would require more work when we define the multi-language semantics in Sect. 3.
In less technical terms, our type system can enforce that values be used linearly: they cannot be duplicated or erased, and must be deconstructed exactly once. Only some types carry this linearity restriction; others allow values to be duplicated and shared at will. We can think of linear values as resources to be spent wisely: for any linear value somewhere in a term, there can be only one way to access it, so we can read the language as enforcing an ownership discipline in which whoever points to a linear value owns it.
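As a loose analogy (ours, not part of the paper's formal development), Rust's move semantics gives a feel for this ownership reading, with the caveat that Rust values are affine (usable at most once) rather than linear (exactly once); all names below are illustrative:

```rust
// A resource type with neither Copy nor Clone: the compiler tracks its
// unique owner, and ownership moves on each use.
struct Resource(u32);

// Consuming the resource transfers ownership into `spend`; afterwards the
// caller can no longer mention it.
fn spend(r: Resource) -> u32 {
    r.0
}

fn main() {
    let r = Resource(42);
    let n = spend(r);
    // spend(r); // rejected at compile time: `r` has been moved
    assert_eq!(n, 42);
}
```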
In particular, linear functions of type
must be called exactly once, and their results must in turn be consumed – they can safely capture linear resources. On the other hand, the non-linear, duplicable values are those at types of the form
— the exponential modality of linear logic. If the term
has duplicable type
, then the term
has type
: this creates a local copy of the value, which is uniquely owned by its receiver and must be consumed linearly.
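Continuing the Rust analogy (again ours, not the paper's), a duplicable value behaves like a shared handle from which uniquely-owned local copies can be extracted on demand:

```rust
use std::rc::Rc;

// A reference-counted value, loosely playing the role of a duplicable
// `!A`. Extracting a copy yields a locally owned value that the receiver
// must then consume itself.
fn copy<A: Clone>(shared: &Rc<A>) -> A {
    (**shared).clone()
}

fn main() {
    let shared = Rc::new(vec![1, 2, 3]);
    let mine = copy(&shared); // a uniquely-owned local copy
    assert_eq!(mine, *shared);
}
```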
This resource-usage discipline is enforced by the surface typing rules of
, presented in Fig. 4. They are exactly the standard (two-sided) logical rules of intuitionistic linear logic, annotated with program terms. The non-duplicability of linear values is enforced by the way contexts are merged by the inference rules: if
is type-checked in the context
and
in
, then the linear pair
is only valid in the combined context
. The
operation is partial; this combined context is defined only if the variables shared by
and
are duplicable—their type is of the form
. In other words, a variable at a non-duplicable type in
cannot possibly appear in both
and
: it must appear exactly once (see Footnote 3).
The expression
takes a term at some type
and creates a “shared” term, whose value will be duplicable. Its typing rule uses a context of the form
, which is defined as the pointwise application of the
connectives to all the types in
. In other words, the context of this rule must only have duplicable types: a term can only be made duplicable if it does not depend on linear resources from the context. Otherwise, duplicating the shared value could break the unique-ownership discipline on these linear resources.
Finally, the linear isomorphism notation for
and
in Fig. 4 defines them as primitive functions, at the given linear function type, in the empty context – using them does not consume resources. This notation also means that, operationally, these two operations shall be inverses of each other. The rules for the linear store type
and
are described in Sect. 2.2.
2.2 Linear Memory in 
The surface typing rules for the linear store are given at the end of Fig. 4. The linear type
represents a memory location that holds a value of type
. The type
represents a location that has been allocated, but does not currently hold a value. The primitive operations to act on this type are given as linear isomorphisms:
allocates, turning a unit value into an empty location; conversely,
reclaims an empty location. Putting a value into the location and taking it out are expressed by
and
, which convert between a pair of an empty location and a value, of type
, and a full location, of type
.
For example, the following program takes a full reference and a value, and swaps the value with the content of the reference:
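The listing itself is omitted here; as a hypothetical illustration, the four primitives and the swap function can be sketched in Rust (types and names are ours; note that, unlike the actual semantics, this sketch re-allocates on `put` instead of reusing the cell):

```rust
// An allocated-but-empty location, and a full location holding a value of
// type A. `Box` supplies the heap indirection.
struct Empty(Box<()>);
struct Full<A>(Box<A>);

// alloc: turn a unit value into an empty location.
fn alloc() -> Empty {
    Empty(Box::new(()))
}

// free: reclaim an empty location.
fn free(_e: Empty) {}

// put: combine an empty location and a value into a full location.
fn put<A>(e: Empty, v: A) -> Full<A> {
    free(e); // the sketch drops the old cell rather than reusing it
    Full(Box::new(v))
}

// get: split a full location into an empty location and its value.
fn get<A>(f: Full<A>) -> (Empty, A) {
    (alloc(), *f.0)
}

// swap: exchange a value with the content of a full reference.
fn swap<A>(f: Full<A>, x: A) -> (Full<A>, A) {
    let (e, old) = get(f);
    (put(e, x), old)
}

fn main() {
    let r = put(alloc(), 1);
    let (r, old) = swap(r, 2);
    assert_eq!(old, 1);
    let (e, v) = get(r);
    assert_eq!(v, 2);
    free(e);
}
```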
The programming style following from this presentation of linear memory is functional, or applicative, rather than imperative. Rather than insisting on the mutability of references—which is allowed by the linear discipline—we may think of the type
as representing the indirection through the heap that is implicit in functional programs. In a sense, we are not writing imperative programs with a mutable store, but rather making explicit the allocations and dereferences happening in a higher-level, purely functional language. In this view, empty cells allow memory reuse.
This view that
represents indirection through the memory suggests we can encode lists of values of type
by the type
. The placement of the box inside the sum mirrors the fact that the empty list is represented as an immediate value in functional languages. From this type definition, one can write an in-place reverse function on lists of
as follows:
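The listing is omitted; as a hedged sketch of the same idea in Rust (our encoding: the box sits inside the `Cons`, matching the list type above, and `mem::replace` performs the in-place reuse of each cons cell):

```rust
// list A = 1 ⊕ Box (A ⊗ list A): the box sits inside the sum, so the
// empty list is an immediate value.
enum List<A> {
    Nil,
    Cons(Box<(A, List<A>)>),
}

// rev_append reuses each cons cell of `xs` to build the reversed list:
// no cons cell is allocated or deallocated.
fn rev_append<A>(xs: List<A>, acc: List<A>) -> List<A> {
    match xs {
        List::Nil => acc,
        List::Cons(mut cell) => {
            // Take the tail out of the cell and store `acc` in its place.
            let tail = std::mem::replace(&mut cell.1, acc);
            rev_append(tail, List::Cons(cell))
        }
    }
}

fn reverse<A>(xs: List<A>) -> List<A> {
    rev_append(xs, List::Nil)
}

// Helpers for demonstration only, not part of the encoding.
fn from_vec<A>(v: Vec<A>) -> List<A> {
    let mut xs = List::Nil;
    for a in v.into_iter().rev() {
        xs = List::Cons(Box::new((a, xs)));
    }
    xs
}

fn to_vec<A>(mut xs: List<A>) -> Vec<A> {
    let mut out = Vec::new();
    while let List::Cons(cell) = xs {
        let (a, tail) = *cell;
        out.push(a);
        xs = tail;
    }
    out
}

fn main() {
    let xs = from_vec(vec![1, 2, 3]);
    assert_eq!(to_vec(reverse(xs)), vec![3, 2, 1]);
}
```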
Our linear language
is a formal language that is not terribly convenient to program in directly. We will not present a full surface language in this work, but one could easily define syntactic sugar to write the exact same function as follows:
One can read this function as the usual functional rev_append function on lists, annotated with memory reuse information: if we assume we are the unique owner of the input list and won’t need it anymore, we can reuse the memory of its cons cells (given in this example the name
) to store the reversed list. On the other hand, if you read the
and
as imperative operations, this code expresses the usual imperative pointer-reversal algorithm.
This double view of linear state occurs in other programming systems as well. It was recently emphasized in O’Connor et al.
(2016), where the functional point of view is seen as easing formal verification, while the imperative view is used as a compilation technique to produce efficient C code from linear programs.
2.3 Internal
Syntax and Typing
To give a dynamic semantics for
and prove it sound, we need to extend the language with explicit stores and store locations. Indeed, the allocating term
should reduce to a “fresh location”
allocated in some store
, and neither is part of the surface-language syntax. The corresponding internal typing judgment is more complex, but note that users do not need to know about it to reason about the correctness of surface programs. The internal typing is essential for the soundness proof, but it is also useful for defining the multi-language semantics in Sect. 3.
We work with configurations
, which are pairs of a store
and a term
. Our internal typing judgment
checks configurations, not just terms, and relies not only on a typing context for variables
but also on a store typing
, which maps the locations of the configuration to typing assumptions.
Unfortunately, due to space limits, we will not present this part of the type system – it is not directly exposed to users of the language. See Fig. 5 for some example reduction rules, and the long version of this work for the full system.
2.4 Reduction of Internal Terms
In the long version of this work we give a reduction relation between linear configurations
and prove a subject reduction result.
Theorem 1
(Subject reduction for
). If
and
, then there exists a (unique)
such that
.