A Verified Compiler from Isabelle/HOL to CakeML
Abstract
Many theorem provers can generate functional programs from definitions or proofs. However, this code generation needs to be trusted. An exception is the HOL4 system, which has a proof-producing code generator for a subset of ML. We go one step further and provide a verified compiler from Isabelle/HOL to CakeML. More precisely, we combine a simple proof-producing translation of recursion equations in Isabelle/HOL into a deeply embedded term language with a fully verified compilation chain to the target language CakeML.
Keywords
Isabelle · CakeML · Compiler · Higher-order term rewriting
1 Introduction
Many theorem provers have the ability to generate executable code in some (typically functional) programming language from definitions, lemmas and proofs (e.g. [6, 8, 9, 12, 16, 27, 37]). This makes code generation part of the trusted kernel of the system. Myreen and Owens [30] closed this gap for the HOL4 system: they have implemented a tool that translates from HOL4 into CakeML, a subset of SML, and proves a theorem stating that a result produced by the CakeML code is correct w.r.t. the HOL functions. They also have a verified implementation of CakeML [24, 40]. We go one step further and provide a once-and-for-all verified compiler from (deeply embedded) function definitions in Isabelle/HOL [32, 33] into CakeML, proving partial correctness of the generated CakeML code w.r.t. the original functions. This is like the step from dynamic to static type checking. It also means that preconditions on the input to the compiler are explicitly given in the correctness theorem rather than implicitly by a failing translation. To the best of our knowledge this is the first verified (as opposed to certifying) compiler from function definitions in a logic into a programming language.

We erase types right away. Hence the type system of the source language is irrelevant.

We merely assume that the source language has a semantics based on equational logic.
1. The preprocessing phase eliminates features that are not supported by our compiler. Most importantly, dictionary construction eliminates occurrences of type classes in HOL terms. It introduces dictionary datatypes and new constants and proves the equivalence of old and new constants (Sect. 7).
2. The deep embedding lifts HOL terms into terms of type \(\mathsf {term}\), a HOL model of HOL terms. For each constant c (of arbitrary type) it defines a constant \(c'\) of type \(\mathsf {term}\) and proves a theorem that expresses equivalence (Sect. 3).
3. There are multiple compiler phases that eliminate certain constructs from the \(\mathsf {term}\) type, until we arrive at the CakeML expression type. Most phases target a different intermediate term type (Sect. 5).
The first two stages are preprocessing steps; they are implemented in ML and produce certificate theorems. Only these stages are specific to Isabelle. The third (and main) stage is implemented completely in the logic HOL, without recourse to ML. Its correctness is verified once and for all.^{1}
2 Related Work
There is existing work in the Coq [2, 15] and HOL [30] communities for proof-producing or verified extraction of functions defined in the logic. Anand et al. [2] present work in progress on a verified compiler from Gallina (Coq’s specification language) via untyped intermediate languages to CompCert C light. They plan to connect their extraction routine to the CompCert compiler [26].
Translation of type classes into dictionaries is an important feature of Haskell compilers. In the setting of Isabelle/HOL, this has been described by Wenzel [44] and Krauss et al. [23]. Haftmann and Nipkow [17] use this construction to compile HOL definitions into target languages that do not support type classes, e.g. Standard ML and OCaml. In this work, we provide a certifying translation that eliminates type classes inside the logic.
Compilation of pattern matching is well understood in the literature [3, 36, 38]. In this work, we contribute a transformation of sets of equations with pattern matching on the left-hand side into a single equation with nested pattern matching on the right-hand side. This is implemented and verified inside Isabelle.
Besides CakeML, there are many projects for verified compilers for functional programming languages of various degrees of sophistication and realism (e.g. [4, 11, 14]). Particularly modular is the work by Neis et al. [31] on a verified compiler for an ML-like imperative source language. The main distinguishing feature of our work is that we start from a set of higher-order recursion equations with pattern matching on the left-hand side rather than a lambda calculus with pattern matching on the right-hand side. On the other hand, we stand on the shoulders of CakeML, which allows us to bypass all complications of machine code generation. Note that much of our compiler is not specific to CakeML and that it would be possible to retarget it to, for example, Pilsner abstract syntax with moderate effort.
Finally, Fallenstein and Kumar [13] have presented a model of HOL inside HOL using large cardinals, including a reflection proof principle.
3 Deep Embedding
Starting with a HOL definition, we derive a new, reified definition in a deeply embedded term language depicted in Fig. 1a. This term language corresponds closely to the term datatype of Isabelle’s implementation (using de Bruijn indices [10]), but without types and schematic variables.
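The shape of this term language can be sketched in a few lines. The following is an illustrative Python model: the constructor names mirror Fig. 1a, but the encoding itself is ours, not the Isabelle definition.

```python
from dataclasses import dataclass
from typing import Union

# A HOL-like term language with de Bruijn indices (cf. Fig. 1a):
# constants, free variables, bound variables (indices), abstraction, application.

@dataclass(frozen=True)
class Const:
    name: str

@dataclass(frozen=True)
class Free:
    name: str

@dataclass(frozen=True)
class Bound:
    index: int  # de Bruijn index

@dataclass(frozen=True)
class Abs:
    body: "Term"

@dataclass(frozen=True)
class App:
    fun: "Term"
    arg: "Term"

Term = Union[Const, Free, Bound, Abs, App]

# The identity function \x. x, and the application (\x. x) $ c:
ident = Abs(Bound(0))
example = App(ident, Const("c"))
```

Note that, as in the paper, there is no type information and no schematic variables; `frozen=True` gives the structural equality one expects of terms.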
To establish a formal connection between the original and the reified definitions, we use a logical relation, a concept that is well understood in the literature [20] and can be nicely implemented in Isabelle using type classes. Note that the use of type classes here is restricted to correctness proofs; it is not required for the execution of the compiler itself. That way, there is no contradiction with the elimination of type classes occurring in a previous stage.
Notation. We abbreviate \(\mathsf {App}\;t\;u\) to t $ u and \(\mathsf {Abs}\;t\) to \(\varLambda \;t\). Other term types introduced later in this paper use the same conventions. We reserve \(\lambda \) for abstractions in HOL itself. Typing judgments are written with a double colon: \(t\, {:}{:}\, \tau \).
Small-Step Semantics. Figure 1b specifies the small-step semantics for \(\mathsf {term}\). It is reminiscent of higher-order term rewriting, and modelled closely after equality in HOL. The basic idea is that if the proposition \(t = u\) can be proved equationally in HOL (without symmetry), then \(R \vdash {\left\langle t\right\rangle } \longrightarrow ^* {\left\langle u\right\rangle }\) holds (where \(\textit{R}\, {:}{:}\, (\mathsf {term} \times \mathsf {term})\;\mathsf {set}\)). We call \(\textit{R}\) the rule set. It is the result of translating a set of defining equations \( lhs = rhs \) into pairs \((\left\langle lhs \right\rangle , \left\langle rhs \right\rangle ) \in \textit{R}\).
Rule Step performs a rewrite step by picking a rewrite rule from R and rewriting the term at the root. For that purpose, \(\mathsf {match}\) and \(\mathsf {subst}\) are (mostly) standard first-order matching and substitution (see Sect. 4 for details).
Our semantics does not constitute a fully general higher-order term rewriting system, because we do not allow substitution under binders. For de Bruijn terms, this would pose no problem, but as soon as we introduce named bound variables, substitution under binders requires dealing with capture. To avoid this altogether, all our semantics expect terms that are substituted into abstractions to be closed. However, this does not mean that we restrict ourselves to any particular evaluation order: both call-by-value and call-by-name can be used in the small-step semantics. Later on, however, the target semantics will only use call-by-value.
Embedding Relation. We denote the concept that an embedded term t corresponds to a HOL term a of type \(\tau \) w.r.t. rule set \(\textit{R}\) with the syntax \(\textit{R} \vdash t \approx a\). If we want to be explicit about the type, we index the relation: \(\approx _\tau \).
The induction principle for the proof arises from the use of the function command that is used to define recursive functions in HOL [22]. But the user is also allowed to specify custom equations for functions, in which case we will use heuristics to generate and prove the appropriate induction theorem. For simplicity, we will use the term (defining) equation uniformly to refer to any set of equations, either default ones or ones specified by the user. Embedding partially specified functions – in particular, proving the certificate theorem about them – is currently not supported. In the future, we plan to leverage the domain predicate as produced by the function command to generate conditional theorems.
4 Terms, Matching and Substitution
The compiler transforms the initial \(\mathsf {term}\) type (Fig. 1a) through various intermediate stages. This section gives an overview and introduces necessary terminology.
Preliminaries. The function arrow in HOL is \(\Rightarrow \). The cons operator on lists is the infix \(\#\).
Throughout the paper, the concept of mappings is pervasive: We use the type notation \(\alpha \rightharpoonup \beta \) to denote a function \(\alpha \Rightarrow \beta \;\mathsf {option}\). In certain contexts, a mapping may also be called an environment. We write mapping literals using brackets: \([a \Rightarrow x, b \Rightarrow y, \ldots ]\). If it is clear from the context that \(\sigma \) is defined on a, we often treat the lookup \(\sigma \;a\) as returning an \(x\, {:}{:}\, \beta \).
The functions \(\mathsf {dom}\, {:}{:}\, (\alpha \rightharpoonup \beta ) \Rightarrow \alpha \;\mathsf {set}\) and \(\mathsf {range}\, {:}{:}\, (\alpha \rightharpoonup \beta ) \Rightarrow \beta \;\mathsf {set}\) return the domain and range of a mapping, respectively.
Dropping entries from a mapping is denoted by \(\sigma - k\), where \(\sigma \) is a mapping and k is either a single key or a set of keys. We use \(\sigma ' \subseteq \sigma \) to denote that \(\sigma '\) is a submapping of \(\sigma \), that is, \(\mathsf {dom}\;\sigma ' \subseteq \mathsf {dom}\;\sigma \) and \(\forall a \in \mathsf {dom}\;\sigma '.\; \sigma '\;a = \sigma \;a\).
Merging two mappings \(\sigma \) and \(\rho \) is denoted with \(\sigma \mathbin {+\!\!+}\rho \). It constructs a new mapping with the union domain of \(\sigma \) and \(\rho \). Entries from \(\rho \) override entries from \(\sigma \). That is, \(\rho \subseteq \sigma \mathbin {+\!\!+}\rho \) holds, but not necessarily \(\sigma \subseteq \sigma \mathbin {+\!\!+}\rho \).
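For intuition, the merge operation behaves like a right-biased dictionary union; a small Python sketch (not the Isabelle definition):

```python
def merge(sigma, rho):
    """Right-biased merge: entries of rho override entries of sigma."""
    out = dict(sigma)
    out.update(rho)
    return out

sigma = {"a": 1, "b": 2}
rho = {"b": 3, "c": 4}
combined = merge(sigma, rho)

# rho is a submapping of the result ...
assert all(combined[k] == v for k, v in rho.items())
# ... but sigma need not be: the entry for "b" was overridden.
assert combined == {"a": 1, "b": 3, "c": 4}
```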
All mappings and sets are assumed to be finite. In the formalization, this is enforced by using subtypes of \(\rightharpoonup \) and \(\mathsf {set}\). Note that one cannot define datatypes by recursion through sets for cardinality reasons. However, for finite sets, it is possible. This is required to construct the various term types. We leverage facilities of Blanchette et al.’s datatype command to define these subtypes [7].
Standard Functions. All type constructors that we use (\(\rightharpoonup \), \(\mathsf {set}\), \(\mathsf {list}\), \(\mathsf {option}\), ...) support the standard operations \(\mathsf {map}\) and \(\mathsf {rel}\). For lists, \(\mathsf {map}\) is the regular covariant map. For mappings, the function has the type \((\beta \Rightarrow \gamma ) \Rightarrow (\alpha \rightharpoonup \beta ) \Rightarrow (\alpha \rightharpoonup \gamma )\). It leaves the domain unchanged, but applies a function to the range of the mapping.
Function \(\mathsf {rel}_\tau \) lifts a binary predicate \(P\, {:}{:}\, \alpha \Rightarrow \alpha \Rightarrow \mathsf {bool}\) to the type constructor \(\tau \). We call this lifted relation the relator for a particular type.
Definition 1 (Set relator)
\(\mathsf {rel}_{\mathsf {set}}\;P\;A\;B \longleftrightarrow (\forall x \in A.\;\exists y \in B.\;P\;x\;y) \wedge (\forall y \in B.\;\exists x \in A.\;P\;x\;y)\)
Definition 2 (Mapping relator)
\(\mathsf {rel}\;P\;\sigma \;\rho \longleftrightarrow (\forall a.\;(\sigma \;a = \mathsf {None} \wedge \rho \;a = \mathsf {None}) \vee (\exists x\;y.\;\sigma \;a = \mathsf {Some}\;x \wedge \rho \;a = \mathsf {Some}\;y \wedge P\;x\;y))\)
Term Types. There are four distinct term types: \(\mathsf {term}\), \(\mathsf {nterm}\), \(\mathsf {pterm}\), and \(\mathsf {sterm}\). All of them support the notions of free variables, matching and substitution. Free variables are always a finite set of strings. Matching a term against a pattern yields an optional mapping of type \(\mathsf {string} \rightharpoonup \alpha \) from free variable names to terms.
Note that the type of patterns is itself \(\mathsf {term}\) instead of a dedicated pattern type. The reason is that we have to subject patterns to a linearity constraint anyway and may use this constraint to carve out the relevant subset of terms:
Definition 3
A term is linear if there is at most one occurrence of any variable, it contains no abstractions, and in an application \(f\mathbin {\$}x\), f must not be a free variable. The HOL predicate is called \(\mathsf {linear}\, {:}{:}\, \mathsf {term} \Rightarrow \mathsf {bool}\).
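A direct reading of this definition as a Python sketch (the tagged-tuple term encoding is ours, not the paper's):

```python
# Patterns are plain terms: tagged tuples ("const", c), ("free", x),
# ("app", f, u). Abstractions are simply absent from patterns.

def linear(t, seen=None):
    """A pattern is linear if no variable occurs twice, it contains no
    abstractions, and the head of an application is never a free variable."""
    seen = set() if seen is None else seen
    tag = t[0]
    if tag == "free":
        if t[1] in seen:
            return False          # second occurrence of the same variable
        seen.add(t[1])
        return True
    if tag == "const":
        return True
    if tag == "app":
        f, u = t[1], t[2]
        if f[0] == "free":        # head of an application must not be a variable
            return False
        return linear(f, seen) and linear(u, seen)
    return False                  # abstractions are not allowed in patterns

# Cons x xs is linear; Cons x x is not.
cons = ("const", "Cons")
assert linear(("app", ("app", cons, ("free", "x")), ("free", "xs")))
assert not linear(("app", ("app", cons, ("free", "x")), ("free", "x")))
```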
Because of the similarity of operations across the term types, they are all instances of the \(\mathsf {term}\) type class. Note that in Isabelle, classes and types live in different namespaces. The \(\mathsf {term}\) type and the \(\mathsf {term}\) type class are separate entities.
Definition 4
- \(\mathsf {matchs}\) matches a list of patterns and terms sequentially, producing a single mapping
- \(\mathsf {closed}\;t\) is an abbreviation for \(\mathsf {frees}\;t = \emptyset \)
- \(\mathsf {closed}\;\sigma \) is an overloading of \(\mathsf {closed}\), denoting that all values in a mapping are closed
Additionally, some (obvious) axioms have to be satisfied. We do not strive to fully specify an abstract term algebra. Instead, the axioms are chosen according to the needs of this formalization.
A notable deviation from matching as discussed in the term rewriting literature is that the result of matching is only well-defined if the pattern is linear.
Definition 5
An equation is a pair of a pattern (left-hand side) and a term (right-hand side). The pattern is of the form \(f\mathbin \$p_1\mathbin \$\ldots \mathbin \$p_n\), where f is a constant (i.e. of the form \(\mathsf {Const}\; name \)). We refer to f and \( name \) interchangeably as the function symbol of the equation.
Following term rewriting terminology, we sometimes refer to an equation as a rule.
4.1 De Bruijn terms (\(\mathsf {term}\))
The definition of \(\mathsf {term}\) is almost an exact copy of Isabelle’s internal term type, with the notable omissions of type information and schematic variables (Fig. 1a). The implementation of \(\beta \)-reduction is straightforward via index shifting of bound variables.
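Index-shifting \(\beta\)-reduction can be sketched as follows. This is the textbook construction on de Bruijn terms, not the paper's Isabelle implementation; the tagged-tuple encoding and function names are ours.

```python
# Terms as tagged tuples: ("bound", i), ("abs", body), ("app", f, x), ("const", c).

def shift(t, d, cutoff=0):
    """Shift dangling de Bruijn indices >= cutoff by d."""
    tag = t[0]
    if tag == "bound":
        return ("bound", t[1] + d) if t[1] >= cutoff else t
    if tag == "abs":
        return ("abs", shift(t[1], d, cutoff + 1))
    if tag == "app":
        return ("app", shift(t[1], d, cutoff), shift(t[2], d, cutoff))
    return t

def subst_bound(t, u, j=0):
    """Substitute u for index j in t (the core of beta-reduction)."""
    tag = t[0]
    if tag == "bound":
        return u if t[1] == j else ("bound", t[1] - 1 if t[1] > j else t[1])
    if tag == "abs":
        return ("abs", subst_bound(t[1], shift(u, 1), j + 1))
    if tag == "app":
        return ("app", subst_bound(t[1], u, j), subst_bound(t[2], u, j))
    return t

def beta(redex):
    """Reduce (Abs body) $ arg to body with index 0 replaced by arg."""
    (_, (_, body), arg) = redex
    return subst_bound(body, arg)

# (\x. \y. x) $ c  reduces to  \y. c
redex = ("app", ("abs", ("abs", ("bound", 1))), ("const", "c"))
assert beta(redex) == ("abs", ("const", "c"))
```

Since the paper's semantics only substitutes closed terms, the `shift` in the abstraction case is in fact a no-op there; it is included to keep the sketch correct in general.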
4.2 Named Bound Variables (\(\mathsf {nterm}\))
The \(\mathsf {nterm}\) type is similar to \(\mathsf {term}\), but removes the distinction between bound and free variables. Instead, there are only named variables. As mentioned in the previous section, we forbid substitution of terms that are not closed in order to avoid capture. This is also reflected in the syntactic side conditions of the correctness proofs (Sect. 5.1).
4.3 Explicit Pattern Matching (\(\mathsf {pterm}\))
Functions in HOL are usually defined using implicit pattern matching, that is, the terms \(p_i\) occurring on the left-hand side \(\left\langle \mathsf {f}\;p_1\;\ldots \;p_n\right\rangle \) of an equation must be constructor patterns. This is also common among functional programming languages like Haskell or OCaml. CakeML only supports explicit pattern matching using case expressions. A function definition consisting of multiple defining equations must hence be translated to the form \(f = \lambda x.\;\mathsf {\mathbf {case}}\;x\;\mathsf {\mathbf {of}}\;\ldots \). The elimination proceeds by iteratively removing the last parameter in the block of equations until none are left.
In our formalization, we opted to combine the notion of abstraction and case expression, yielding case abstractions, represented as the \(\mathsf {Pabs}\) constructor. This is similar to the fn construct in Standard ML, which denotes an anonymous function that immediately matches on its argument [28]. The same construct also exists in Haskell with the LambdaCase language extension. We chose this representation mainly for two reasons: First, it allows for a simpler language grammar because there is only one (shared) constructor for abstraction and case expression. Second, the elimination procedure outlined above does not have to introduce fresh names in the process. Later, when translating to CakeML syntax, fresh names are introduced and proved correct in a separate step.
The set of pairs of pattern and righthand side inside a case abstraction is referred to as clauses. As a shorthand notation, we use \(\varLambda \{ p_1 \Rightarrow t_1, p_2 \Rightarrow t_2, \ldots \}\).
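The iterative elimination of the last parameter can be sketched as follows. This is a deliberately simplified Python model with patterns as opaque strings; it assumes pattern compatibility (Sect. 5.3) so that grouping equations by their remaining patterns is well-defined, and the names `eliminate_last` and `eliminate` are ours.

```python
from collections import OrderedDict

# Equations: (patterns, rhs) with patterns a tuple of pattern strings.
# One elimination step strips the last pattern of every equation and folds it
# into a case abstraction ("case", [(pat, rhs), ...]), grouping equations
# whose remaining patterns coincide.

def eliminate_last(equations):
    groups = OrderedDict()
    for pats, rhs in equations:
        *init, last = pats
        groups.setdefault(tuple(init), []).append((last, rhs))
    return [(init, ("case", clauses)) for init, clauses in groups.items()]

def eliminate(equations):
    while equations and equations[0][0]:   # parameters left to remove
        equations = eliminate_last(equations)
    return equations[0][1]                 # a nest of case abstractions

# map f []     = []
# map f (x#xs) = f x # map f xs
eqs = [(("f", "[]"), "[]"), (("f", "x#xs"), "f x # map f xs")]
result = eliminate(eqs)
assert result == ("case",
    [("f", ("case", [("[]", "[]"), ("x#xs", "f x # map f xs")]))])
```

The resulting nested case abstraction corresponds to \(\varLambda \{ \mathsf{f} \Rightarrow \varLambda \{ [] \Rightarrow \ldots , x\#xs \Rightarrow \ldots \}\}\); note that no fresh names were introduced in the process.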
4.4 Sequential Clauses (\(\mathsf {sterm}\))
In the term rewriting fragment of HOL, the order of rules is not significant. If a rule matches, it can be applied, regardless of when it was defined or proven. This is reflected by the use of sets in the rule and term types. For CakeML, the rules need to be applied in a deterministic order, i.e. sequentially. The \(\mathsf {sterm}\) type only differs from \(\mathsf {pterm}\) by using \(\mathsf {list}\) instead of \(\mathsf {set}\). Hence, case abstractions use list brackets: \(\varLambda [p_1 \Rightarrow t_1, p_2 \Rightarrow t_2, \ldots ]\).
4.5 Irreducible Terms (\(\mathsf {value}\))
CakeML distinguishes between expressions and values. Whereas expressions may contain free variables or \(\beta \)redexes, values are closed and fully evaluated. Both have a notion of abstraction, but values differ from expressions in that they contain an environment binding free variables.
Consider the expression \((\lambda x. \lambda y. x)\,(\lambda z. z)\), which is rewritten (by \(\beta \)reduction) to \(\lambda y. \lambda z. z\). Note how the bound variable x disappears, since it is replaced. This is contrary to how programming languages are usually implemented: evaluation does not happen by substituting the argument term t for the bound variable x, but by recording the binding \(x \mapsto t\) in an environment [24]. A pair of an abstraction and an environment is usually called a closure [25, 41].
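A minimal sketch of this environment-based evaluation, assuming plain named variables and ignoring pattern matching (the encoding and the `eval_` name are ours):

```python
# Terms: ("var", x), ("abs", x, body), ("app", f, u).
# Values: closures ("clos", x, body, env) pairing an abstraction with
# the environment in which it was evaluated.

def eval_(t, env):
    tag = t[0]
    if tag == "var":
        return env[t[1]]
    if tag == "abs":
        return ("clos", t[1], t[2], dict(env))   # capture the environment
    if tag == "app":
        f = eval_(t[1], env)
        arg = eval_(t[2], env)
        _, x, body, cenv = f
        return eval_(body, {**cenv, x: arg})     # bind instead of substituting
    raise ValueError(tag)

# (\x. \y. x) (\z. z): x is not substituted away but recorded in the
# environment of the resulting closure, which itself contains a closure.
t = ("app", ("abs", "x", ("abs", "y", ("var", "x"))), ("abs", "z", ("var", "z")))
v = eval_(t, {})
assert v[0] == "clos" and v[1] == "y"
assert v[3]["x"] == ("clos", "z", ("var", "z"), {})
```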
Note the nested structure of the closure, whose environment itself contains a closure.
5 Intermediate Semantics and Compiler Phases
5.1 Side Conditions
- Patterns must be linear, and constructors in patterns must be fully applied.
- Definitions must have at least one parameter on the left-hand side (Sect. 5.6).
- The right-hand side of an equation refers only to free variables occurring in patterns on the left-hand side and contains no dangling de Bruijn indices.
- There are no two defining equations \( lhs = rhs _1\) and \( lhs = rhs _2\) such that \( rhs _1 \ne rhs _2\).
- For each pair of equations that define the same constant, their arity must be equal and their patterns must be compatible (Sect. 5.3).
- There is at least one equation.
- Variable names occurring in patterns must not overlap with constant names (Sect. 5.7).
- Any occurring constants must either be defined by an equation or be a constructor.
The conditions for the subsequent phases are sufficiently similar that we do not list them again.
In the formalization, we use named contexts to fix the rules and assumptions on them (locales in Isabelle terminology). Each phase has its own locale, together with a proof that after compilation, the preconditions of the next phase are satisfied. Correctness proofs assume the above conditions on R and similar conditions on the term that is reduced. For brevity, this is usually omitted in our presentation.
5.2 Naming Bound Variables: From \(\mathsf {term}\) to \(\mathsf {nterm}\)
Isabelle uses de Bruijn indices in the term language for the following two reasons: for substitution, there is no need to rename bound variables, and \(\alpha \)-equivalent terms are equal. In implementations of programming languages, these advantages are not required: typically, substitutions do not happen inside abstractions, and there is no notion of equality of functions. Therefore CakeML uses named variables, and in this compilation step we get rid of de Bruijn indices.
The “named” semantics is based on the \(\mathsf {nterm}\) type. The rules that are changed from the original semantics (Fig. 1b) are given in Fig. 3 (Fun and Arg remain unchanged). Notably, \(\beta \)reduction reuses the substitution function.
For the correctness proof, we need to establish a correspondence between \(\mathsf {term}\)s and \(\mathsf {nterm}\)s. Translation from \(\mathsf {nterm}\) to \(\mathsf {term}\) is trivial: replace each bound variable by the number of abstractions between its occurrence and the abstraction that binds it, and keep free variables as they are. This function is called \(\mathsf {nterm\_to\_term}\).
The other direction is not unique and requires introduction of fresh names for bound variables. In our formalization, we have chosen to use a monad to produce these names. This function is called \(\mathsf {term\_to\_nterm}\). We can also prove the obvious property \(\mathsf {nterm\_to\_term}\;(\mathsf {term\_to\_nterm}\;t) = t\), where t is a \(\mathsf {term}\) without dangling de Bruijn indices.
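The two conversions can be sketched in Python, with a simple name counter standing in for the fresh-name monad. This is our own model, not the formalization; it assumes source terms contain no free variables of the form v0, v1, …, so that the round-trip property holds.

```python
import itertools

# term_to_nterm: replace de Bruijn indices by freshly generated names,
# threading a name supply; nterm_to_term inverts this.

def term_to_nterm(t, bound=(), fresh=None):
    fresh = fresh or map("v{}".format, itertools.count())
    tag = t[0]
    if tag == "bound":
        return ("var", bound[t[1]])              # index -> name
    if tag == "abs":
        name = next(fresh)
        return ("nabs", name, term_to_nterm(t[1], (name,) + bound, fresh))
    if tag == "app":
        return ("app", term_to_nterm(t[1], bound, fresh),
                       term_to_nterm(t[2], bound, fresh))
    return t  # constants and free variables unchanged

def nterm_to_term(t, bound=()):
    tag = t[0]
    if tag == "var" and t[1] in bound:
        # count the abstractions between occurrence and binder
        return ("bound", bound.index(t[1]))
    if tag == "nabs":
        return ("abs", nterm_to_term(t[2], (t[1],) + bound))
    if tag == "app":
        return ("app", nterm_to_term(t[1], bound), nterm_to_term(t[2], bound))
    return t

# \ \ (1 $ 0), i.e. \x. \y. x y, survives the round trip:
t = ("abs", ("abs", ("app", ("bound", 1), ("bound", 0))))
assert nterm_to_term(term_to_nterm(t)) == t
```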
Generation of fresh names in general can be thought of as picking a string that is not an element of a (finite) set of already existing names. For Isabelle, the Nominal framework [42, 43] provides support for reasoning over fresh names, but unfortunately, its definitions are not executable.
Theorem 1 (Correctness of compilation)
5.3 Explicit Pattern Matching: From \(\mathsf {nterm}\) to \(\mathsf {pterm}\)
Usually, functions in HOL are defined using implicit pattern matching, that is, the left-hand side of an equation is of the form \(\left\langle \mathsf {f}\;p_1\;\ldots \;p_n\right\rangle \), where the \(p_i\) are patterns over datatype constructors. For any given function \(\mathsf {f}\), there may be multiple such equations. In this compilation step, we transform sets of equations for \(\mathsf {f}\) defined using implicit pattern matching into a single equation for \(\mathsf {f}\) of the form \(\left\langle \mathsf {f}\right\rangle = \varLambda \;\textit{C}\), where \(\textit{C}\) is a set of clauses.
Semantics. The target semantics is given in Fig. 4 (the Fun and Arg rules from previous semantics remain unchanged). We start out with a rule set \(\textit{R}\) that allows only implicit pattern matching. After elimination, only explicit pattern matching remains. The modified Step rule merely replaces a constant by its definition, without taking arguments into account.
This compatibility constraint ensures that any two overlapping patterns (of the same column) \(p_{i,k}\) and \(p_{j,k}\) are equal and are thus appropriately grouped together in the elimination procedure. We require all defining equations of a constant to be mutually compatible. Equations violating this constraint will be flagged during embedding (Sect. 3), whereas the pattern elimination algorithm always succeeds.
While this rules out some theoretically possible pattern combinations (e.g. the diagonal function [36, Sect. 5.5]), in practice, we have not found this to be a problem: All of the function definitions we have tried (Sect. 8) satisfied pattern compatibility (after automatic renaming of pattern variables). As a last resort, the user can manually instantiate function equations. Although this will always lead to a pattern compatible definition, it is not done automatically, due to the potential blowup.
5.4 Sequentialization: From \(\mathsf {pterm}\) to \(\mathsf {sterm}\)
The semantics of \(\mathsf {pterm}\) and \(\mathsf {sterm}\) differ only in rule Step and Beta. Figure 5 shows the modified rules. Instead of any matching clause, the first matching clause in a case abstraction is picked.
5.5 Big-Step Semantics for \(\mathsf {sterm}\)
This big-step semantics for \(\mathsf {sterm}\) is not a compiler phase but a step towards the desired evaluation semantics. As a first step, we reuse the \(\mathsf {sterm}\) type for evaluation results, instead of evaluating to the separate type \(\mathsf {value}\). This allows us to ignore environment capture in closures for now.
All previous \(\longrightarrow \) relations were parametrized by a rule set. Now the big-step predicate is of the form \(\textit{rs}, \sigma \vdash t \downarrow t'\) where \(\sigma \, {:}{:}\, \mathsf {string}\rightharpoonup \mathsf {sterm}\) is a variable environment.
This semantics also introduces the distinction between constructors and defined constants. If \(\mathsf {C}\) is a constructor, the term \(\left\langle \mathsf {C}\;t_1\;\ldots \;t_n\right\rangle \) is evaluated to \(\left\langle \mathsf {C}\;t'_1\;\ldots \;t'_n\right\rangle \) where the \(t_i'\) are the results of evaluating the \(t_i\).
The full set of rules is shown in Fig. 6. They deserve a short explanation:
- Const. Constants are retrieved from the rule set \(\textit{rs}\).
- Var. Variables are retrieved from the environment \(\sigma \).
- Abs. In order to achieve the intended invariant, abstractions are evaluated to their fully substituted form.
- Comb. Function application \(t \;\$\; u\) first requires evaluation of t into an abstraction \(\varLambda \;\textit{cs}\) and evaluation of u into an arbitrary term \(u'\). Afterwards, we look for a clause matching \(u'\) in \(\textit{cs}\), which produces a local variable environment \(\sigma '\), possibly overwriting existing variables in \(\sigma \). Finally, we evaluate the right-hand side of the clause with the combined global and local variable environment.
- Constr. For a constructor application \(\left\langle \mathsf {C}\;t_1\;\ldots \right\rangle \), evaluate all \(t_i\). The set of constructors is an implicit parameter of the semantics.
Lemma 1 (Closedness invariant)
If \(\sigma \) contains only closed terms, \(\mathsf {frees}\;t \subseteq \mathsf {dom}\;\sigma \) and \(\textit{rs}, \sigma \vdash t \downarrow t'\), then \(t'\) is closed.
Correctness of the big-step w.r.t. the small-step semantics is proved easily by induction on the former:
Lemma 2
By setting \(\sigma = []\), we obtain:
Theorem 2 (Correctness)
\(\textit{rs}, [] \vdash t \downarrow u \wedge \mathsf {closed}\;t \rightarrow \textit{rs}\vdash t \longrightarrow ^* u\)
5.6 Evaluation Semantics: Refining \(\mathsf {sterm}\) to \(\mathsf {value}\)
At this point, we introduce the concept of values into the semantics, while still keeping the rule set (for constants) and the environment (for variables) separate. The evaluation rules are specified in Fig. 7 and represent a departure from the original rewriting semantics: a term does not evaluate to another term but to an object of a different type, a \(\mathsf {value}\). We still use \(\downarrow \) as notation, because big-step and evaluation semantics can be disambiguated by their types.
The evaluation model itself is fairly straightforward. As explained in Sect. 4.5, abstraction terms are evaluated to closures capturing the current variable environment. Note that at this point, recursive closures are not treated differently from nonrecursive closures. In a later stage, when \(\textit{rs}\) and \(\sigma \) are merged, this distinction becomes relevant.
- Abs. Abstraction terms are evaluated to a closure capturing the current environment.
- Comb. As before, in an application \(t\mathbin {\$}u\), t must evaluate to a closure \(\mathsf {Vabs}\;\textit{cs}\;\sigma '\). The evaluation result of u is then matched against the clauses \(\textit{cs}\), producing an environment \(\sigma ''\). The right-hand side of the clause is then evaluated using \(\sigma '\mathbin {+\!\!+}\sigma ''\); the original environment \(\sigma \) is effectively discarded.
- RecComb. Similar to the above. Finding the matching clause is a two-step process: first, the appropriate clause list is selected by the name of the currently active function; then, matching is performed.
- Constr. As before, for an n-ary application \(\left\langle \mathsf {C}\;t_1\;\ldots \right\rangle \), where \(\mathsf {C}\) is a data constructor, we evaluate all \(t_i\). The result is a \(\mathsf {Vconstr}\) value.
Conversion Between \(\mathsf {value}\) and \(\mathsf {sterm}\). To establish a correspondence between evaluating a term to an \(\mathsf {sterm}\) and to a \(\mathsf {value}\), we apply the same trick as in Sect. 5.2. Instead of specifying a complicated relation, we translate \(\mathsf {value}\) back to \(\mathsf {sterm}\): simply apply the substitutions in the captured environments to the clauses.
The translation rules for \(\mathsf {Vabs}\) and \(\mathsf {Vrecabs}\) are kept similar to the Abs rule from the big-step semantics (Fig. 6). Roughly speaking, the big-step semantics always keeps terms fully substituted, whereas the evaluation semantics defers substitution.
Similarly to Sect. 5.2, we can also define a function \(\mathsf {sterm\_to\_value}\, {:}{:}\, \mathsf {sterm} \Rightarrow \mathsf {value}\) and prove that one function is the inverse of the other.
Matching. The \(\mathsf {value}\) type, unlike all other term types, uses n-ary instead of binary constructor application. This introduces a conceptual mismatch between (binary) patterns and values. To make the proofs easier, we introduce an intermediate type of n-ary patterns. This intermediate type can be optimized away by fusion.
Correctness. The correctness proof requires a number of interesting lemmas.
Lemma 3 (Substitution before evaluation)
Assuming that a term t can be evaluated to a value u given a closed environment \(\sigma \), it can be evaluated to the same value after substitution with a subenvironment \(\sigma '\). Formally: \(\textit{rs}, \sigma \vdash t \downarrow u \wedge \sigma ' \subseteq \sigma \rightarrow \textit{rs}, \sigma \vdash \mathsf {subst}\;\sigma '\;t \downarrow u\)
This justifies the “pre-substitution” exhibited by the Abs rule in the big-step semantics, in contrast to the environment-capturing Abs rule in the evaluation semantics.
Theorem 3 (Correctness)
Let \(\sigma \) be a closed environment and t a term which only contains free variables in \(\mathsf {dom}\;\sigma \). Then, an evaluation to a value \(\textit{rs}, \sigma \vdash t \downarrow v\) can be reproduced in the big-step semantics as \(\textit{rs}', \mathsf {map}\;\mathsf {value\_to\_sterm}\;\sigma \vdash t \downarrow \mathsf {value\_to\_sterm}\;v\), where \(\textit{rs}' = [( name , \mathsf {value\_to\_sterm}\; rhs ).\;( name , rhs ) \leftarrow \textit{rs}]\).
Instantiating the Correctness Theorem. The correctness theorem states that, for any given evaluation of a term t with a given environment \(\textit{rs}, \sigma \) containing \(\mathsf {value}\)s, we can reproduce that evaluation in the big-step semantics using a derived list of rules \(\textit{rs}'\) and an environment \(\sigma '\) containing \(\mathsf {sterm}\)s that are generated by the \(\mathsf {value\_to\_sterm}\) function. But recall the diagram in Fig. 2. In our scenario, we start with a given rule set of \(\mathsf {sterm}\)s (that has been compiled from a rule set of \(\mathsf {term}\)s). Hence, the correctness theorem only deals with the opposite direction.
It remains to construct a suitable \(\textit{rs}\) such that applying \(\mathsf {value\_to\_sterm}\) to it yields the given \(\mathsf {sterm}\) rule set. We can exploit the side condition (Sect. 5.1) that all bindings define functions, not constants:
Definition 6 (Global clause set)
The mapping \(\mathsf {global\_css}\, {:}{:}\, \mathsf {string} \rightharpoonup ((\mathsf {term} \times \mathsf {sterm})\;\mathsf {list})\) is obtained by stripping the \(\mathsf {Sabs}\) constructors from all definitions and converting the resulting list to a mapping.
For each definition with name f we define a corresponding value \(v_f = \mathsf {Vrecabs}\;\mathsf {global\_css}\;f\;[]\). In other words, each function is now represented by a recursive closure bundling all functions. Applying \(\mathsf {value\_to\_sterm}\) to \(v_f\) returns the original definition of f. Let \(\textit{rs}\) denote the original \(\mathsf {sterm}\) rule set and \(\textit{rs}_\text {v}\) the environment mapping all f’s to the \(v_f\)’s.
The variable environments \(\sigma \) and \(\sigma '\) can safely be set to the empty mapping, because top-level terms are evaluated without any free variable bindings.
Corollary 1 (Correctness)
\(\textit{rs}_\text {v}, [] \vdash t \downarrow v \rightarrow \textit{rs}, [] \vdash t \downarrow \mathsf {value\_to\_sterm}\;v\)
Note that this step is not part of the compiler (although \(\textit{rs}_\text {v}\) is computable); rather, it is a refinement of the semantics that supports a more modular correctness proof.
5.7 Evaluation with Recursive Closures
Const/Var. Constant definitions and variable values are both retrieved from the same environment \(\sigma \). We have opted to keep the distinction between constants and variables in the \(\mathsf {sterm}\) type to avoid introducing yet another term type.
Abs. Identical to the previous evaluation semantics. Note that evaluation never creates recursive closures at run time (only at compile time, see Sect. 5.6). Anonymous functions, e.g. in the term \(\left\langle \mathsf {map}\;(\lambda x.\;x)\right\rangle \), are evaluated to non-recursive closures.
Comb. Identical to the previous evaluation semantics.
RecComb. Almost identical to the evaluation semantics. Additionally, for each function \(( name , cs ) \in \textit{css}\), a new recursive closure \(\mathsf {Vrecabs}\;\textit{css}\; name \;\sigma '\) is created and inserted into the environment. This ensures that after the first call to a recursive function, the function itself is present in the environment to be called recursively, without having to introduce coinductive environments.
Constr. Identical to the evaluation semantics.
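The RecComb bookkeeping can be sketched as follows (a simplified Python model of our own; the value and environment encodings are hypothetical): applying a recursive closure inserts a closure for every bundled function into the environment, so later recursive calls find their definitions without coinduction:

```python
def Vrecabs(css, name, env):
    # A recursive closure: clause-set bundle, selected function, captured env.
    return ("Vrecabs", css, name, env)

def extend_with_bundle(css, closure_env, sigma):
    """Insert a recursive closure for every function of the bundle into sigma."""
    out = dict(sigma)
    for name in css:
        out[name] = Vrecabs(css, name, closure_env)
    return out

css = {"even": ["<clauses>"], "odd": ["<clauses>"]}
sigma = extend_with_bundle(css, {}, {"n": "<value>"})
assert set(sigma) == {"even", "odd", "n"}
assert sigma["odd"] == Vrecabs(css, "odd", {})
```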
Conflating Constants and Variables. By merging the rule set \(\textit{rs}\) with the variable environment \(\sigma \), it becomes necessary to discuss possible clashes. Previously, the syntactic distinction between \(\mathsf {Svar}\) and \(\mathsf {Sconst}\) meant that \(\left\langle x\right\rangle \) and \(\left\langle \mathsf {x}\right\rangle \) are not ambiguous: all semantics up to the evaluation semantics clearly specify where to look for the substitute. This is not the case in functional languages where functions and variables are not distinguished syntactically.
Instead, we rely on the fact that the initial rule set only defines constants. All variables are introduced by matching before \(\beta \)-reduction (that is, in the Comb and RecComb rules). The Abs rule does not change the environment. Hence it suffices to assume that variables in patterns must not overlap with constant names (see Sect. 5.1).
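The non-overlap side condition can be sketched as a simple check (our own Python model; the pattern and rule-set encodings are hypothetical):

```python
# Hypothetical pattern encoding: ("PVar", x) or ("PConstr", name, args).

def pattern_vars(p):
    if p[0] == "PVar":
        return {p[1]}
    _, _name, args = p
    vars_ = set()
    for a in args:
        vars_ |= pattern_vars(a)
    return vars_

def no_overlap(rule_set):
    """Variables occurring in patterns must be disjoint from constant names."""
    consts = {name for name, _clauses in rule_set}
    return all(pattern_vars(pat).isdisjoint(consts)
               for _name, clauses in rule_set
               for pat, _rhs in clauses)

good = [("map", [(("PConstr", "Cons", [("PVar", "x"), ("PVar", "xs")]), "...")])]
bad  = [("map", [(("PVar", "map"), "...")])]
assert no_overlap(good)
assert not no_overlap(bad)
```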
Correspondence Relation. Both constant definitions and values of variables are recorded in a single environment \(\sigma \). This also applies to the environment contained in a closure. The correspondence relation thus needs to take the differing sets of bindings in closures into account.
Hence, we define a relation \(\approx _\text {v}\) that is implicitly parametrized on the rule set \(\textit{rs}\) and compares environments. We call it right-conflating because, in a correspondence \(v \approx _\text {v} u\), any environment bound in u is taken to contain both variables and constants, whereas in v, any bound environment contains only variables.
Definition 7 (Right-conflating correspondence)
Consequently, \(\approx _\text {v}\) is not reflexive.
Correctness. The correctness lemma is straightforward to state:
Theorem 4 (Correctness)
Let \(\sigma \) be an environment, t be a closed term and v a value such that \(\sigma \vdash t \downarrow v\). If, for all constants x occurring in t, \(\textit{rs}\;x \approx _\text {v} \sigma \;x\) holds, then there is a u such that \(\textit{rs}, [] \vdash t \downarrow u\) and \(u \approx _\text {v} v\).
As usual, the rather technical proof proceeds via induction over the semantics (Fig. 8). It is important to note that the global clause set construction (Sect. 5.6) satisfies the preconditions of this theorem:
Lemma 4
Because \(\approx _\text {v}\) is defined coinductively, the proof of this precondition proceeds by coinduction.
5.8 CakeML
CakeML is a verified implementation of a subset of Standard ML [24, 40]. It comprises a parser, type checker, formal semantics and backend for machine code. The semantics has been formalized in Lem [29], which allows export to Isabelle theories.
Our compiler targets CakeML’s abstract syntax tree. However, we do not make use of certain CakeML features; notably mutable cells, modules, and literals. We have derived a smaller, executable version of the original CakeML semantics, called CupCakeML, together with an equivalence proof. The correctness proof of the last compiler phase establishes a correspondence between CupCakeML and the final semantics of our compiler pipeline.
For the correctness proof of the CakeML compiler, its authors have extracted the Lem specification into HOL4 theories [1]. In our work, we directly target CakeML abstract syntax trees (thereby bypassing the parser) and use its bigstep semantics, which we have extracted into Isabelle.^{2}

CakeML does not combine abstraction and pattern matching. For that reason, we have to translate \(\varLambda \;[p_1 \Rightarrow t_1, \ldots ]\) into \(\varLambda x.\;\mathsf {\mathbf {case}}\;x\;\mathsf {\mathbf {of}}\;p_1 \Rightarrow t_1 \mid \ldots \), where x is a fresh variable name. We reuse the \(\mathsf {fresh}\) monad to obtain such a name. Note that it is not necessary to thread through the variable names created so far, only the names already existing in the term. The reason is simple: a generated variable is bound and then immediately used in the body, so shadowing it somewhere later in the body is not problematic.
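The translation can be sketched as follows (a minimal Python model of our own; \(\mathsf {fresh}\) is modelled as a plain function over a set of existing names rather than a monad):

```python
def fresh(used):
    """Return the first of x0, x1, ... that does not occur in `used`."""
    n = 0
    while f"x{n}" in used:
        n += 1
    return f"x{n}"

def compile_abs(existing_names, clauses):
    """Turn a case abstraction into Fun(x, Mat(Var x, clauses)), x fresh."""
    x = fresh(existing_names)
    return ("Fun", x, ("Mat", ("Var", x), clauses))

assert compile_abs({"x0", "y"}, [("p1", "t1")]) == \
    ("Fun", "x1", ("Mat", ("Var", "x1"), [("p1", "t1")]))
```

The generated variable is bound and used immediately, mirroring the argument in the text that no threading of created names is needed.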

CakeML has two distinct syntactic categories for identifiers (which can represent variables or functions) and data constructors. Our term types, however, have two distinct syntactic categories for constants (which can represent functions or data constructors) and variables. The necessary prerequisites for dealing with this are already present in the ML-style evaluation semantics (Sect. 5.7), which conflates constants and variables but has a dedicated Constr rule for data constructors.
Types. During embedding (Sect. 3), all type information is erased. Yet, CakeML performs some limited form of type checking at runtime: constructing and matching data must always be fully applied. That is, data constructors must always occur with all arguments supplied on right-hand and left-hand sides.
Fully applied constructors in terms can be easily guaranteed by simple preprocessing. For patterns however, this must be ensured throughout the compilation pipeline; it is (like other syntactic constraints) another side condition imposed on the rule set (Sect. 5.1).
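A minimal sketch of such a check (our own Python model, assuming a global arity map for constructors; non-constructor constants are unconstrained):

```python
def fully_applied(arities, term, nargs=0):
    """Check that every constructor occurrence carries exactly its arity."""
    tag = term[0]
    if tag == "App":
        _, f, x = term
        # The head gains one argument; the argument starts its own spine.
        return fully_applied(arities, f, nargs + 1) and fully_applied(arities, x)
    if tag == "Const":
        name = term[1]
        return arities.get(name, nargs) == nargs  # constructors must match arity
    return True  # variables are unconstrained

arities = {"Cons": 2, "Nil": 0}
ok  = ("App", ("App", ("Const", "Cons"), ("Const", "Nil")), ("Const", "Nil"))
bad = ("App", ("Const", "Cons"), ("Const", "Nil"))   # partially applied Cons
assert fully_applied(arities, ok)
assert not fully_applied(arities, bad)
```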
The shape of datatypes and constructors is managed in CakeML’s environment. This particular piece of information is allowed to vary between closures, since ML supports local type definitions. Tracking this would greatly complicate our proofs. Hence, we fix a global set of constructors and enforce that all values use exactly that set.
Correspondence Relation. We define two different correspondence relations: One for values and one for expressions.
Definition 8 (Expression correspondence)
We will explain each of the rules briefly here.
Var. Variables are directly related by identical name.
Const. As described earlier, constructors are treated specially in CakeML. In order not to confuse functions or variables with data constructors, we require that the constant name is not a constructor.
Constr. Constructors are directly related by identical name and recursively related arguments.
App. CakeML supports not only general function application but also unary and binary operators. In fact, function application is the binary operator \(\mathsf {Opapp}\). We never generate other operators, hence the correspondence is restricted to \(\mathsf {Opapp}\).
Fun/Mat. Observe the symmetry between these two cases: in our term language, matching and abstraction are combined, which is not the case in CakeML. This means we relate a case abstraction to a CakeML function containing a match, and a case abstraction applied to a value to just a CakeML match.
There is no separate relation for patterns, because their translation is simple.
The value correspondence (\(\mathsf {rel\_v}\)) is structurally simpler. In the case of constructor values (\(\mathsf {Vconstr}\) and \(\mathsf {Cake.Conv}\)), arguments are compared recursively. Closures and recursive closures are compared extensionally, i.e. only bindings that occur in the body are checked recursively for correspondence.
Correctness. We use the same trick as in Sect. 5.6 to obtain a suitable environment for CakeML evaluation based on the rule set \(\textit{rs}\).
Theorem 5 (Correctness)
If the compiled expression \(\mathsf {sterm\_to\_cake}\;t\) terminates with a value u in the CakeML semantics, there is a value v such that \(\mathsf {rel\_v}\;v\;u\) and \(\textit{rs}\vdash t \downarrow v\).
6 Composition
The complete compiler pipeline consists of multiple phases. The correctness of each phase is justified against intermediate semantics and correspondence relations, most of which are rather technical. While the compiler itself may be complex and impenetrable, the trustworthiness of the construction hinges on the obviousness of those correspondence relations.
Fortunately, under the assumption that terms to be evaluated and the resulting values do not contain abstractions – or closures, respectively – all of the correspondence relations collapse to simple structural equality: two terms are related if and only if one can be converted to the other by consistent renaming of term constructors.
This theorem directly relates the evaluation of a term t in the full CakeML semantics (including mutability and exceptions) to its evaluation in the initial higher-order term rewriting semantics. The evaluation of t happens in the environment produced from the initial rule set. Hence, the theorem can be read as the correctness of the pseudo-ML expression \(\mathsf {\mathbf {let\ rec}}\;\textit{rs}\;\mathsf {\mathbf {in}}\;t\).
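The collapsed correspondence can be sketched as equality up to a consistent, injective renaming of constructors (our own Python model; the term encoding is hypothetical):

```python
def equal_up_to_renaming(t, u, ren=None):
    """Abstraction-free terms agree up to a consistent constructor renaming."""
    ren = {} if ren is None else ren
    (c, ts), (d, us) = t, u        # a term is (constructor, argument list)
    if len(ts) != len(us):
        return False
    if c in ren:
        if ren[c] != d:            # c was already renamed to something else
            return False
    else:
        if d in ren.values():      # keep the renaming injective
            return False
        ren[c] = d
    return all(equal_up_to_renaming(a, b, ren) for a, b in zip(ts, us))

assert equal_up_to_renaming(("Nil", []), ("nil", []))
assert not equal_up_to_renaming(("Cons", [("Nil", [])]),
                                ("cons", [("cons", [])]))
```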
7 Dictionary Construction
Isabelle’s type system supports type classes (or simply classes) [18, 44], whereas CakeML does not. In order not to complicate the correctness proofs, type classes are not supported by our embedded term language either. Instead, we eliminate classes and instances by a dictionary construction [19] before embedding into the term language. Haftmann and Nipkow give a pen-and-paper correctness proof of this construction [17, Sect. 4.1]. We have augmented the dictionary construction with the generation of a certificate theorem that shows the equivalence of the two versions of a function: with type classes and with dictionaries. This section briefly explains our dictionary construction.
Figure 9 shows a simple example of a dictionary construction. Type variables may carry class constraints (e.g. \(\alpha \, {:}{:}\, \mathsf {add}\)). The basic idea is that classes become dictionaries containing the functions of that class; class instances become dictionary definitions. Dictionaries are realized as datatypes. Class constraints become additional dictionary parameters for that class. In the example, class \(\mathsf {add}\) becomes \(\mathsf {dict\_add}\); function f is translated into \(f'\) which takes an additional parameter of type \(\mathsf {dict\_add}\). In reality our tool does not produce the Isabelle source code shown in Fig. 9b but performs the constructions internally. The correctness lemma \(\mathsf {f'\_eq}\) is proved automatically. Its precondition expresses that the dictionary must contain exactly the function(s) of class \(\mathsf {add}\). For any monomorphic instance, the precondition can be proved outright based on the certificate theorems proved for each class instance as explained next.
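The essence of the construction can be sketched in Python (our own rendering of the Fig. 9 idea; the names f and add follow the example, the concrete bodies are illustrative): the class constraint becomes an explicit dictionary argument.

```python
def add(a, b):       # stands in for the class operation `add`
    return a + b

def f(x):            # before: the call to `add` is resolved implicitly
    return add(x, x)

# After the construction: the class becomes a dictionary of its operations,
# and the constraint becomes an explicit parameter of f'.
def f_dict(dict_add, x):
    return dict_add["add"](x, x)

dict_add_int = {"add": lambda a, b: a + b}   # the instance for int
assert f(3) == f_dict(dict_add_int, 3) == 6
```

The certificate theorem \(\mathsf {f'\_eq}\) corresponds to the final assertion: both versions agree whenever the dictionary holds exactly the class operations.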
8 Evaluation
We have tried out our compiler on examples from existing Isabelle formalizations. This includes an implementation of Huffman encoding, lists and sorting, string functions [39], and various data structures from Okasaki’s book [34], including binary search trees, pairing heaps, and leftist heaps. These definitions can be processed with slight modifications: functions need to be totalized (see the end of Sect. 3). However, parts of the tactics required for deep embedding proofs (Sect. 3) are too slow on some functions and hence still need to be optimized.
9 Conclusion
For this paper we have concentrated on the compiler from Isabelle/HOL to CakeML abstract syntax trees. Partial correctness is proved w.r.t. the bigstep semantics of CakeML. In the next step we will link our work with the compiler from CakeML to machine code. Tan et al. [40, Sect. 10] prove a correctness theorem that relates their semantics with the execution of the compiled machine code. In that paper, they use a newer iteration of the CakeML semantics (functional bigstep [35]) than we do here. Both semantics are still present in the CakeML source repository, together with an equivalence proof. Another important step consists of targeting CakeML’s native types, e.g. integer numbers and characters.
Evaluation of our compiled programs is already possible via Isabelle’s predicate compiler [5], which allows us to turn CakeML’s bigstep semantics into an executable function. We have used this execution mechanism to establish for sample programs that they terminate successfully. We also plan to prove that our compiled programs terminate, i.e. total correctness.
The total size of this formalization, excluding theories extracted from Lem, is currently approximately 20,000 lines of proof text (90%) and ML code (10%). The ML code itself produces relatively simple theorems, which means that there are fewer opportunities for it to go wrong. This constitutes an improvement over certifying approaches that prove complicated properties in ML.
Footnotes
1. All Isabelle definitions and proofs can be found on the paper website: https://lars.hupel.info/research/codegen/, or archived as https://doi.org/10.5281/zenodo.1167616.
2. Based on a repository snapshot from March 27, 2017 (0c48672).
References
1. The HOL System Description (2014). https://hol-theorem-prover.org/
2. Anand, A., Appel, A.W., Morrisett, G., Paraskevopoulou, Z., Pollack, R., Bélanger, O.S., Sozeau, M., Weaver, M.: CertiCoq: a verified compiler for Coq. In: CoqPL 2017: Third International Workshop on Coq for Programming Languages (2017)
3. Augustsson, L.: Compiling pattern matching. In: Jouannaud, J.P. (ed.) Functional Programming Languages and Computer Architecture, pp. 368–381. Springer, Heidelberg (1985)
4. Benton, N., Hur, C.: Biorthogonality, step-indexing and compiler correctness. In: Hutton, G., Tolmach, A.P. (eds.) ICFP 2009, pp. 97–108. ACM (2009)
5. Berghofer, S., Bulwahn, L., Haftmann, F.: Turning inductive into equational specifications. In: Berghofer, S., Nipkow, T., Urban, C., Wenzel, M. (eds.) TPHOLs 2009. LNCS, vol. 5674, pp. 131–146. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03359-9_11
6. Berghofer, S., Nipkow, T.: Executing higher order logic. In: Callaghan, P., Luo, Z., McKinna, J., Pollack, R. (eds.) TYPES 2000. LNCS, vol. 2277, pp. 24–40. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45842-5_2
7. Blanchette, J.C., Hölzl, J., Lochbihler, A., Panny, L., Popescu, A., Traytel, D.: Truly modular (co)datatypes for Isabelle/HOL. In: Klein, G., Gamboa, R. (eds.) ITP 2014. LNCS, vol. 8558, pp. 93–110. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08970-6_7
8. Boespflug, M., Dénès, M., Grégoire, B.: Full reduction at full throttle. In: Jouannaud, J.P., Shao, Z. (eds.) CPP 2011. LNCS, vol. 7086, pp. 362–377. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25379-9_26
9. Boyer, R.S., Strother Moore, J.: Single-threaded objects in ACL2. In: Krishnamurthi, S., Ramakrishnan, C.R. (eds.) PADL 2002. LNCS, vol. 2257, pp. 9–27. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45587-6_3
10. de Bruijn, N.G.: Lambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the Church-Rosser theorem. Indag. Math. (Proceedings) 75(5), 381–392 (1972)
11. Chlipala, A.: A verified compiler for an impure functional language. In: Hermenegildo, M.V., Palsberg, J. (eds.) POPL 2010, pp. 93–106. ACM (2010)
12. Crow, J., Owre, S., Rushby, J., Shankar, N., Stringer-Calvert, D.: Evaluating, testing, and animating PVS specifications. Technical report, Computer Science Laboratory, SRI International, Menlo Park, CA, March 2001
13. Fallenstein, B., Kumar, R.: Proof-producing reflection for HOL. In: Urban, C., Zhang, X. (eds.) ITP 2015. LNCS, vol. 9236, pp. 170–186. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-22102-1_11
14. Flatau, A.D.: A verified implementation of an applicative language with dynamic storage allocation. Ph.D. thesis, University of Texas at Austin (1992)
15. Forster, Y., Kunze, F.: Verified extraction from Coq to a lambda-calculus. In: The 8th Coq Workshop (2016)
16. Greve, D.A., Kaufmann, M., Manolios, P., Moore, J.S., Ray, S., Ruiz-Reina, J., Sumners, R., Vroon, D., Wilding, M.: Efficient execution in an automated reasoning environment. J. Funct. Program. 18(1), 15–46 (2008)
17. Haftmann, F., Nipkow, T.: Code generation via higher-order rewrite systems. In: Blume, M., Kobayashi, N., Vidal, G. (eds.) FLOPS 2010. LNCS, vol. 6009, pp. 103–117. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12251-4_9
18. Haftmann, F., Wenzel, M.: Constructive type classes in Isabelle. In: Altenkirch, T., McBride, C. (eds.) TYPES 2006. LNCS, vol. 4502, pp. 160–174. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74464-1_11
19. Hall, C.V., Hammond, K., Jones, S.L.P., Wadler, P.L.: Type classes in Haskell. ACM Trans. Program. Lang. Syst. 18(2), 109–138 (1996)
20. Hermida, C., Reddy, U.S., Robinson, E.P.: Logical relations and parametricity – a Reynolds programme for category theory and programming languages. Electron. Notes Theoret. Comput. Sci. 303, 149–180 (2014)
21. Hupel, L.: Dictionary construction. Archive of Formal Proofs, May 2017. Formal proof development. http://isa-afp.org/entries/Dict_Construction.html
22. Krauss, A.: Partial and nested recursive function definitions in higher-order logic. J. Autom. Reason. 44(4), 303–336 (2010)
23. Krauss, A., Schropp, A.: A mechanized translation from higher-order logic to set theory. In: Kaufmann, M., Paulson, L.C. (eds.) ITP 2010. LNCS, vol. 6172, pp. 323–338. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14052-5_23
24. Kumar, R., Myreen, M.O., Norrish, M., Owens, S.: CakeML: a verified implementation of ML. In: POPL 2014, pp. 179–191. ACM (2014)
25. Landin, P.J.: The mechanical evaluation of expressions. Comput. J. 6(4), 308–320 (1964)
26. Leroy, X.: Formal verification of a realistic compiler. Commun. ACM 52(7), 107–115 (2009). http://doi.acm.org/10.1145/1538788.1538814
27. Letouzey, P.: A new extraction for Coq. In: Geuvers, H., Wiedijk, F. (eds.) TYPES 2002. LNCS, vol. 2646, pp. 200–219. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-39185-1_12
28. Milner, R., Tofte, M., Harper, R., MacQueen, D.: The Definition of Standard ML (Revised). MIT Press, Cambridge (1997)
29. Mulligan, D.P., Owens, S., Gray, K.E., Ridge, T., Sewell, P.: Lem: reusable engineering of real-world semantics. In: ICFP 2014, pp. 175–188. ACM (2014)
30. Myreen, M.O., Owens, S.: Proof-producing translation of higher-order logic into pure and stateful ML. J. Funct. Program. 24(2–3), 284–315 (2014)
31. Neis, G., Hur, C.K., Kaiser, J.O., McLaughlin, C., Dreyer, D., Vafeiadis, V.: Pilsner: a compositionally verified compiler for a higher-order imperative language. In: ICFP 2015, pp. 166–178. ACM, New York (2015)
32. Nipkow, T., Klein, G.: Concrete Semantics. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10542-0
33. Nipkow, T., Wenzel, M., Paulson, L.C. (eds.): Isabelle/HOL – A Proof Assistant for Higher-Order Logic. LNCS, vol. 2283. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45949-9
34. Okasaki, C.: Purely Functional Data Structures. Cambridge University Press, Cambridge (1999)
35. Owens, S., Myreen, M.O., Kumar, R., Tan, Y.K.: Functional big-step semantics. In: Thiemann, P. (ed.) ESOP 2016. LNCS, vol. 9632, pp. 589–615. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49498-1_23
36. Peyton Jones, S.L.: The Implementation of Functional Programming Languages. Prentice-Hall Inc., Upper Saddle River (1987)
37. Shankar, N.: Static analysis for safe destructive updates in a functional language. In: Pettorossi, A. (ed.) LOPSTR 2001. LNCS, vol. 2372, pp. 1–24. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45607-4_1
38. Slind, K.: Reasoning about terminating functional programs. Ph.D. thesis, Technische Universität München (1999)
39. Sternagel, C., Thiemann, R.: Haskell’s show class in Isabelle/HOL. Archive of Formal Proofs, July 2014. Formal proof development. http://isa-afp.org/entries/Show.html
40. Tan, Y.K., Myreen, M.O., Kumar, R., Fox, A., Owens, S., Norrish, M.: A new verified compiler backend for CakeML. In: Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming – ICFP 2016. ACM (2016)
41. Turner, D.A.: Some history of functional programming languages. In: Loidl, H.W., Peña, R. (eds.) TFP 2012. LNCS, vol. 7829, pp. 1–20. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40447-4_1
42. Urban, C.: Nominal techniques in Isabelle/HOL. J. Autom. Reason. 40(4), 327–356 (2008). https://doi.org/10.1007/s10817-008-9097-2
43. Urban, C., Berghofer, S., Kaliszyk, C.: Nominal 2. Archive of Formal Proofs, February 2013. Formal proof development. http://isa-afp.org/entries/Nominal2.shtml
44. Wenzel, M.: Type classes and overloading in higher-order logic. In: Gunter, E.L., Felty, A. (eds.) TPHOLs 1997. LNCS, vol. 1275, pp. 307–322. Springer, Heidelberg (1997). https://doi.org/10.1007/BFb0028402
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.