Higher-Ranked Annotation Polymorphic Dependency Analysis

The precision of a static analysis can be improved by increasing the context-sensitivity of the analysis. In a type-based formulation of static analysis for functional languages this can be achieved by, e.g., introducing let-polyvariance or subtyping. In this paper we go one step further by defining a higher-ranked polyvariant type system so that even properties of lambda-bound identifiers can be generalized over. We do this for dependency analysis, a generic analysis that can be instantiated to a range of different analyses that in this way all can profit. We prove that our analysis is sound with respect to a call-by-name semantics and that it satisfies a so-called noninterference property. We provide a type reconstruction algorithm that we have proven to be terminating, and sound and complete with respect to its declarative specification. Our principled description can serve as a blueprint for making other analyses higher-ranked.


Introduction
The typical compiler for a statically typed functional language will perform a number of analyses for validation, optimisation, or both (e.g., strictness analysis, control-flow analysis, and binding time analysis). These analyses can be specified as a type-based static analysis so that vocabulary, implementation and concepts from the world of type systems can be reused in this setting [19,24]. In that setting the analysis properties are taken from a language of annotations which adorn the types computed for the program during type inference: the analysis is specified as an annotated type system, and the payload of the analysis corresponds to the annotations computed for a given program.
Consider for example binding-time analysis [5,7]. In this case, we have a two-value lattice of annotations containing S for static and D for dynamic (where ⊥ = S ⊑ D = ⊤), so that whenever an expression is annotated with S, it can be soundly changed to D, because that is a strictly weaker property. An expression that is known to be static may be evaluated at compile time, because the analysis has determined that all the values that determine its outcome are in fact available at compile time. All other expressions are annotated with D and must be evaluated at run time; the goal of binding-time analysis is then to (soundly) assign S to as many expressions as possible.
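The two-point binding-time lattice can be rendered concretely. The following is a minimal Haskell sketch; the names `BT`, `S`, `D`, `join` and `bottom` are ours, chosen to mirror the prose, not part of the paper's formal system:

```haskell
-- The two-point binding-time lattice: S (static) below D (dynamic).
data BT = S | D deriving (Eq, Show)

-- Join: least upper bound. The result is static only if both inputs are,
-- so weakening S to D is always sound.
join :: BT -> BT -> BT
join S S = S
join _ _ = D

-- The least element of the lattice.
bottom :: BT
bottom = S
```

Note that S ⊑ D holds precisely because `join S D == D`, which is the formal counterpart of "S can be soundly changed to D".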
Static analyses may differ in precision, e.g., a monovariant binding-time analysis lacks context-sensitivity for let-bound identifiers (although some of it can be recovered with subtyping). Assume id to be the identity function, applied in the program both to a subexpression s that is a statically known integer, which we denote as s : int^S, and to a dynamic integer d : int^D. A monovariant analysis then arrives at int^D → int^D for id, so that the property found for id s is that it is a dynamic integer. Clearly, however, if the value of s is known statically then so is that of id s! The fact that values with different properties flow to a function, forcing us to be (overly) pessimistic for some of them, is a phenomenon sometimes called poisoning [28]. Context-sensitivity reduces poisoning; it can be achieved by making the analysis polyvariant. In that case, our type for id may become ∀β. int^β → int^β, so that for the first call to id we may instantiate β with S and for the second choose D, essentially mimicking the polymorphic lambda calculus at the level of annotations.
But what about a function like foo, in which we have two calls to a lambda-bound function argument f? Can we treat these context-sensitively as well, so that we can have the most precise types for both calls, independent of each other? The answer is: yes, we can. Independence can be achieved by inferring for foo a type that associates with f an annotation-polymorphic type. Here, β0 ranges over simple annotations (such as S and D), and β1 ranges over annotation-level functions (in the terminology of this paper, these annotations are higher-sorted; see section 3). The annotation variable β0 is a placeholder for the analysis property of the actual argument to f, while β1 represents how that property propagates to the value returned by f. If the identity function ∀β. int^β → int^β is passed to foo, a pair with annotated type int^D × int^S will be returned: the types of f d and f s can be determined independently of each other, because the choice for β0 can be made separately for each call. The "price" we pay is that we have to know how the annotations on the values returned by f can be derived from the annotations on the arguments. This is exactly what β1 represents.
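The independence of the two call sites can be mimicked in Haskell's own type system with a rank-2 type, using phantom type parameters in place of annotations. This is only an analogy under our own naming (`Ann`, `S`, `D`, `foo` and `unAnn` are illustrative, not the paper's formal system), but it shows why a polymorphic type for f lets each call pick its own instantiation:

```haskell
{-# LANGUAGE RankNTypes, EmptyDataDecls #-}

-- Phantom tags standing in for the annotations S and D.
data S
data D

-- A value carrying a (phantom) binding-time annotation b.
newtype Ann b a = Ann a

unAnn :: Ann b a -> a
unAnn (Ann x) = x

-- foo demands an annotation-polymorphic argument, so each call site may
-- instantiate b independently: D for the first call, S for the second.
foo :: (forall b. Ann b Int -> Ann b Int) -> (Ann D Int, Ann S Int)
foo f = (f dyn, f stat)
  where
    dyn  = Ann 3 :: Ann D Int  -- a "dynamic" integer
    stat = Ann 4 :: Ann S Int  -- a "static" integer
```

With a monomorphic argument type for f, the two calls would be forced to share one annotation, which is exactly the poisoning described above.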
If β0 or β1 were to range over (annotated) types, then the underlying language itself would be higher-ranked, and inference in that case is known to be undecidable [14]. However, as we show in this paper, if they range only over annotations (even higher-sorted ones), then inference remains decidable. Why is that? Intuitively, this is because the underlying types provide structure to guide the analysis inference algorithm, an advantage that a higher-ranked polymorphic type system does not have.
In which situations can we expect to benefit from higher-ranked polyvariance? Generally speaking, this is when we have functions of order 2 and higher, functions that often show up in idiomatic functional code.
Languages like Haskell do support higher-rank types [13]. Decidability is not problematic then, because the compiler expects the programmer to provide the higher-rank type signatures where necessary, and the compiler only needs to verify that the provided types are consistent: type checking is decidable. In our situation this is typically not acceptable: we cannot expect programmers to provide explicit control-flow [12] or binding-time information. So we have to insist on full inference of analysis information, and this paper shows how this can be done for dependency analysis [1].
Dependency analysis is in fact a family of analyses; instances include binding-time analysis, exception analysis, secure information flow analysis and static slicing. The precision of our higher-ranked polyvariant annotated type system for dependency analysis thereby carries over immediately to these instances, and metatheoretical properties we prove, like the noninterference theorem [8], need to be proven only once.
In summary, this paper offers the following contributions. We (1) define a higher-ranked annotation polymorphic type system for a generic dependency analysis (section 4) for a call-by-name language that takes its annotations from a simply typed lambda calculus enriched with lattice operations (section 3). The analysis also supports polyvariant recursion [10] to improve precision for certain recursive functions. Due to the principled way in which the analysis is set up, it can serve as a blueprint for giving other analyses the same treatment. We (2) prove our system sound with respect to a call-by-name operational semantics. We also formulate and prove a noninterference theorem for our system (section 5). We (3) give a type reconstruction algorithm that is sound and complete with respect to the type system (section 6) and provide a prototype implementation (section 7). For reasons of space we omit many details that are available in a separate document [26].

Intuition and motivation
Before we go on to the technical details of this paper, we want to elaborate upon our intuitive description from the introduction. We do this by means of a few small examples, keeping the discussion informal. Formally discussed examples, as generated by our implementation, quickly become large and hard to read; these can be found in section 7.
We start with a few examples in which binding-time analysis is the dependency analysis instance, followed by a few examples that use security flow analysis; our implementation supports both instances. We note that our implementation supports a few more language constructs than the formal specification given in this paper, giving us a bit more flexibility. Neither, however, supports polymorphism at the type level. This substantially simplifies the technicalities.
For the following example our analysis can derive a higher-ranked polyvariant type for f , where β 1 and β 2 can be instantiated independently for each of the two calls to f in foo, and β 3 is universally bound by foo and represents how the argument f uses its function argument.
Since the argument to f is itself a function, the information that flows out of, say, the first call to f can be independent of the analysis of the function that flows into the second call (and vice versa), thereby avoiding unnecessary poisoning. This means that the binding-time of, say, the second component of the pair depends only on f and the function λx : int.0, irrespective of f also receiving λx : int.x as argument to compute the first component.
For the next example, let us consider security flow analysis, in which we have annotations L and H that designate values (call these L-values and H-values) of low respectively high confidentiality. An important scenario where additional precision can be achieved is when analyzing Haskell code in which type classes have been desugared to a dictionary-passing functional core. A function like g x y = (x + y, y + y) is then transformed into something like g (+) x y = (x + y, y + y). Now, consider the case that we pass an H-value to x and an L-value to y; the operator (+) produces an L-value if and only if both arguments are L-values. Without higher-ranked annotations, the annotation on the first argument to (+) has to be consistent with all uses of (+). Because x is an H-value, that will then also be the case for the second call to (+), leading to a pair of values of which the components are both H-values. With higher-ranked annotations, we can instantiate the two instances independently, and the second component of the pair is analyzed to produce an L-value. Haskell functions that use type classes are extremely common.
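The scenario can be made concrete by tracking security levels at run time. The sketch below (our own names: `Lvl`, `lub`, `Labelled`, `plus`, `g`) models a labelled value as a level/payload pair and the (+) dictionary as a function that joins the levels of its arguments; the second component of the pair then only depends on y's level, just as the higher-ranked analysis predicts statically:

```haskell
-- Security levels: L (low) below H (high).
data Lvl = L | H deriving (Eq, Show)

-- Least upper bound: the result is low only if both inputs are.
lub :: Lvl -> Lvl -> Lvl
lub L L = L
lub _ _ = H

-- A labelled integer: its confidentiality level together with the payload.
type Labelled = (Lvl, Int)

-- The (+) "dictionary": adds payloads and joins levels.
plus :: Labelled -> Labelled -> Labelled
plus (l1, a) (l2, b) = (lub l1 l2, a + b)

-- Dictionary-passing version of  g x y = (x + y, y + y).
g :: (Labelled -> Labelled -> Labelled) -> Labelled -> Labelled -> (Labelled, Labelled)
g (+.) x y = (x +. y, y +. y)
```

Passing an H-value for x and an L-value for y yields a first component labelled H and a second component labelled L, which is the extra precision the higher-ranked analysis recovers.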

The λ⊔-calculus
An essential ingredient of our annotated type system is the language of annotations that we use to decorate our types and to represent the dependencies resulting from evaluating an expression. Indeed, the fact that annotations are themselves "programs" in a lambda calculus is what allows us to make our analysis a higher-ranked polyvariant one. For the purpose of this paper, we generalize the λ∪-calculus of [16] to the λ⊔-calculus (λ⊔ for short), a simply typed lambda calculus extended with a lattice structure.
On the term level, we allow arbitrary elements of the underlying lattice and the taking of binary joins, in addition to the usual variables, function applications and lambda abstractions. Lattice elements are assumed to be taken from a bounded join-semilattice L = ⟨L, ⊔, ⊥⟩, an algebraic structure consisting of an underlying set L, an associative, commutative and idempotent binary operation ⊔, called join (we usually write ℓ ∈ L for ℓ ∈ L, conflating the structure with its underlying set), and a least element ⊥.
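The algebraic structure can be captured as a type class; the following is a sketch under our own naming (`JoinSemilattice`, `bot`, `(\/)`, `Two` are not the paper's identifiers), with the required laws stated as comments and checked on a two-point instance:

```haskell
-- A bounded join-semilattice: a join operation that is associative,
-- commutative and idempotent, together with a least element.
class JoinSemilattice a where
  bot  :: a
  (\/) :: a -> a -> a
  -- Laws (not enforced by the compiler):
  --   x \/ (y \/ z) == (x \/ y) \/ z   (associativity)
  --   x \/ y        == y \/ x          (commutativity)
  --   x \/ x        == x               (idempotence)
  --   bot \/ x      == x               (least element)

-- The two-point lattice as a minimal instance.
data Two = Lo | Hi deriving (Eq, Show)

instance JoinSemilattice Two where
  bot = Lo
  Lo \/ Lo = Lo
  _  \/ _  = Hi
```

Any instance of this class could serve as the annotation lattice L; the binding-time and security lattices from the earlier examples are both two-point instances of exactly this shape.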
The sorting rules of λ⊔ are straightforward (see [26]). Values of the underlying lattice are always of the base sort ⋆, and the join operator is defined on arbitrary terms of the same sort. The sorting rules use sort environments, denoted by the letter Σ, that map annotation variables β to sorts κ. We denote the set of sort environments by SortEnv. More precisely, a sort environment or sort context Σ is a finite list of bindings from annotation variables β to sorts κ. The empty context is written as ∅ (in code as []), and the context Σ extended with the binding of the variable β to the sort κ is written Σ, β : κ. We denote the set of annotation variables in the context Σ by dom(Σ). When we write Σ(β) = κ, this means that β ∈ dom(Σ) and the rightmost occurrence of β binds it to κ. Moreover, Σ \ B, where B ⊆ AnnVar, denotes the context Σ with all bindings of annotation variables in B removed. In the remainder of this paper, we shall overload this notation for the various other kinds of environments we shall be needing, including type environments and annotated type environments. The λ⊔-calculus enjoys a number of properties, many of which are what one might expect; we have put these and their proofs in [26].
A substitution is a map from variables to terms usually denoted by the letter θ. The application of a substitution θ to a term ξ is written θξ and replaces all free variables in ξ that are also in the domain of θ with the corresponding terms they are mapped to. A concrete substitution replacing the variables β 1 , . . . , β n with terms ξ 1 , . . . , ξ n is written [ξ 1 /β 1 , . . . , ξ n /β n ].
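A substitution over a small annotation-term language can be sketched as follows. The datatype and function names (`Tm`, `subst`) are ours, and for simplicity we assume the substituted terms are closed, which is the typical situation when instantiating annotation variables, so no capture-avoiding renaming is needed:

```haskell
-- A fragment of the annotation language: variables, lattice literals,
-- abstraction, application, and binary join.
data Tm
  = Var String
  | Lit Int            -- stand-in for a lattice element
  | Lam String Tm
  | App Tm Tm
  | Join Tm Tm
  deriving (Eq, Show)

-- Apply a substitution [t1/b1, ..., tn/bn], given as an association list,
-- to a term. Bindings shadowed by a lambda are dropped; since we assume
-- the substituted terms are closed, no variable capture can occur.
subst :: [(String, Tm)] -> Tm -> Tm
subst th (Var b)      = maybe (Var b) id (lookup b th)
subst _  (Lit l)      = Lit l
subst th (Lam b t)    = Lam b (subst [p | p@(b', _) <- th, b' /= b] t)
subst th (App t1 t2)  = App (subst th t1) (subst th t2)
subst th (Join t1 t2) = Join (subst th t1) (subst th t2)
```

For example, applying [Lit 1/β1] to β1 ⊔ β2 replaces only the free occurrence of β1, and a lambda-bound variable shadows any binding for the same name.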
Assuming the usual definitions for the pointwise extension of a lattice L, and for monotone (order-preserving) functions between lattices, Figure 2 shows the denotational semantics of λ⊔, where we employ the pointwise lifting of ⊔ to functions to give semantics to the join of λ⊔. The universe Vκ denotes the lattice that is represented by the sort κ. The base sort ⋆ represents the underlying lattice L, and the function sort κ1 ⇒ κ2 represents the lattice constructed by pointwise extension of the lattice Vκ2, restricted to monotone functions.
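The pointwise lifting that gives function sorts their lattice structure is a one-liner; here is a sketch fixed to a concrete two-point lattice for the codomain (the names `Two`, `joinTwo` and `joinFn` are ours):

```haskell
-- A concrete base lattice for the codomain.
data Two = Lo | Hi deriving (Eq, Show)

joinTwo :: Two -> Two -> Two
joinTwo Lo Lo = Lo
joinTwo _  _  = Hi

-- Pointwise extension: the join of two functions joins their results at
-- every argument. This is how a function sort k1 => k2 inherits the
-- lattice structure of its codomain.
joinFn :: (a -> Two) -> (a -> Two) -> (a -> Two)
joinFn f g = \x -> joinTwo (f x) (g x)
```

The join of two monotone functions built this way is again monotone, which is why the restriction to monotone functions in Vκ1⇒κ2 is well-defined.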
The denotation function ⟦·⟧ρ is parameterized by an environment ρ of the given type that provides the values of variables. The denotation of a lambda term is simply an element of the corresponding function space; applications are therefore mapped directly to function application in the metatheory. This is unlike the λ∪-calculus of [16], where lambda terms are mapped to singleton sets of functions and function application is defined in terms of the union of the results of individually applying each function. The crucial difference is that we have offloaded this complexity into the definition of the pointwise extension of lattices. It is therefore important to note that the join operator used in the denotation of a term ξ1 ξ2 depends on the sort κ of this term and belongs to the lattice Vκ.

The declarative type system
The types and syntax of our source language are given in figure 3 (the types and terms of the source language). The types consist of a unit type, and product, sum and function types. As mentioned earlier, let-polymorphism at the type level is not part of the type system. The language itself is then hardly surprising and includes variables, a unit constant, lambda abstraction, function application, projection functions for product types, sum constructors, a sum eliminator (case), fixpoints, seq for explicitly forcing evaluation in our call-by-name language, and, finally, a special operation ann_ℓ(t) that raises the annotation level of t to ℓ ∈ L. We omit the underlying type system for the source language since it consists mostly of the standard rules (see [26]). A notable exception is the rule for ann_ℓ(t): such an explicitly annotated term has the same underlying type as t; the annotation imposed on t only becomes relevant in the annotated type system that we discuss next. In the following, we assume the usual definition for computing the set of free term variables of a term, ftv(t).
The annotated type system The source language is simply a desugared variant of the functional language a programmer deals with. The target language has the same structure, but adds dependency annotations to the source syntax. These annotations are the payload of the dependency analysis and are computed by the algorithm given in section 6, so that the analysis results can be employed in the back-end of a compiler. In other words, the algorithm elaborates a source-level term into a target term.
The syntax of the target language is shown in figure 4. Annotated types of the target language are denoted by τ̂ and annotated terms are denoted by t̂. The annotations that we put on compound types, as well as on their components, are not just there for uniformity: because of our non-strict semantics and the presence of seq, we can observe a pair constructor independently of its components, so we have separate annotations to represent these. On the type level, there is an additional construct ∀β :: κ. τ̂ quantifying over an annotation variable β of sort κ. Furthermore, the recursive occurrences in the sum, product and arrow types now each carry an annotation. On the term level, the explicit type annotations of lambda expressions and fixpoints are now annotated types and also include a dependency annotation. Moreover, dependency abstraction and application have been added to reflect the quantification over dependency variables on the type level. We denote the set of free (term) variables in a target term t̂ by ftv(t̂).
The formal definition of well-formedness for annotated types can be found in [26]. Informally, a type is well-formed only if all annotations are of sort ⋆ and all annotation variables that are used have previously been bound.
Below, we assume the unsurprising recursive definitions for computing the underlying terms t and underlying types τ that correspond to annotated terms t̂ and annotated types τ̂. We also straightforwardly extend the definition of free annotation variables to annotated types, denoted by fav(τ̂).
Subtyping To define subtyping we need an auxiliary relation that says when two annotated types τ̂1 and τ̂2 have the same shape. The unsurprising formal definition is in [26]; essentially, they must have the same syntactic structure and, in the forall case, quantify over the same annotation variable. It can quite easily be proven that if two types have the same shape, then they have the same underlying type. The converse is not true: the annotated types ∀β1. ∀β2. int^β1 → int^(β1 ⊔ β2) and ∀β1. int^β1 → int^β1 have the same underlying type, int → int, but do not have the same shape. Figure 5 shows the rules defining the subtyping relation on annotated types of the same shape, which allows us to weaken the annotations on a type to a less demanding one. Intuitively, a type τ̂1 is a subtype of τ̂2 under a sort environment Σ, written Σ ⊢sub τ̂1 ⊑ τ̂2, if a value of type τ̂1 can be used in places where a value of type τ̂2 is required. The subtyping relation only relates the annotations inside the types, using the subsumption relation Σ ⊢sub ξ1 ⊑ ξ2 between dependency terms. Moreover, the subtyping relation implicitly demands that both types are well-formed under the environment. The [Sub-Forall] rule requires that the quantified variable has the same name in both types. This is not a restriction, as we can simply rename the variables in one or both of the types to make them match and prevent unintentional capture of previously free variables. Note that [Sub-Arr] is contravariant in argument positions. We omit [Sub-Sum], which can be derived from [Sub-Prod] by replacing × with +.
The annotated type rules An annotated type environment Γ̂ is defined analogously to sort environments, but instead maps term variables x to pairs of an annotated type τ̂ and a dependency term ξ. We extend the definition of the set of free annotation variables to annotated environments by taking the union of the free annotation variables of all annotated types and dependency terms occurring in the environment, denoted by fav(Γ̂). We denote the set of annotated type environments by AnnTyEnv.
We now have all the definitions in place to define the declarative annotated type system shown in figure 6. It consists of judgments of the form Σ | Γ̂ ⊢te t̂ : τ̂ & ξ, expressing that under the sort environment Σ and the annotated type environment Γ̂, the annotated term t̂ has the annotated type τ̂ and the dependency term ξ. The dependency term in this context is also called the dependency term of t̂. It is implicitly assumed that every type τ̂ is also well-formed under Σ, i.e. Σ ⊢wft τ̂, and that the resulting dependency annotation ξ is of sort ⋆, i.e. Σ ⊢s ξ : ⋆.
We now discuss some of the more interesting rules of figure 6. In [T-Var], both the annotated type and the dependency annotation are looked up in the environment. The dependency annotation of the unit value defaults to the least annotation in [T-Unit]. While we could admit an arbitrary dependency annotation here, the same can be achieved by using the subtyping rule [T-Sub]. We employ this principle more often, e.g., in [T-Abs], and [T-Pair]. This essentially means that the context in which such a term is used completely determines the annotation.
The rule [T-App] may seem overly restrictive by requiring that the types and dependency annotations of the arguments match, and that the dependency annotations of the return value and the function itself are the same. However, in combination with the subtyping rule [T-Sub], this effectively does not restrict the analysis in any way. We see the same happening in other rules, such as [T-Case] and [T-Proj]. Note that the dependency annotation of the argument does not play a role in the resulting dependency annotation of the application. This is because we are dealing with a call-by-name semantics, which means that the argument is not necessarily evaluated before the function call. It should be noted that this does not mean that the dependency annotations of arguments are ignored completely: if the body of a function makes use of an argument, the type system makes sure that its dependency annotation is also incorporated into the result.
When constructing a pair (rule [T-Pair]), the dependency annotations of the components are stored in the type while the pair itself is assigned the least dependency annotation. When accessing a component of a pair (rule [T-Proj]), we require that the dependency annotation of the pair matches the dependency annotation of the projected component. Again, this is no restriction due to the subtyping rule.
In [T-Inl/Inr], the argument to the injection constructor only determines the type and annotation of one component of the sum type while the other component can be chosen arbitrarily as long as the underlying type matches the annotation on the constructor. The destruction of sum types happens in a case statement that is handled by rule [T-Case]. Again, to keep the rule simple and without loss of precision due to judicious use of rule [T-Sub], we may demand that the types of both branches match, and that additionally the dependency annotations of both branches and the scrutinee are equal.
The annotation rule [T-Ann] requires that the dependency annotation of the term being annotated is at least as large as the lattice element ℓ. In the fixpoint rule, [T-Fix], not only the types but also the dependency annotations of the term itself and the bound variable must match. Note that this rule also admits polyvariant recursion [23], since quantification can occur anywhere in an annotated type. Since seq t1 t2 forces the evaluation of its first argument, it requires that t1's dependency annotation is part of the final result. This is justified, because the result depends on the termination behavior of t1.
The subtyping rule [T-Sub] allows us to weaken the annotations nested inside a type through the subtyping relation (see figure 5), as well as the dependency annotation itself through the subsumption relation. The rule [T-AnnAbs] introduces an annotation variable β of sort κ in the body t̂ of the abstraction. The second premise ensures that the annotation variable does not escape the scope determined by the quantification on the type level. The annotation application rule [T-AnnApp] allows the instantiation of an annotation variable with an arbitrary well-sorted dependency term.

Metatheory
In this section we develop a noninterference proof for our declarative type system, based on a small-step operational call-by-name semantics for the target language. Figure 7 defines the values of the target language, i.e. those terms that cannot be evaluated further. Apart from a technicality related to annotations, they correspond exactly to the weak head normal forms of terms. The distinction between Nf and its subclass of unannotated normal forms is made to ensure that there is at most one annotation at top level.
The semantics itself is largely straightforward, except for the handling of annotations. These are moved just as far outwards as necessary in order to reach a normal form, thereby computing the least "permission" an evaluator must possess for computing a certain output. Figure 8 shows two rules: a lifting rule (for applications) and the rule for merging adjacent annotations (see the supplemental material for the others).
In the remainder of this section we state the standard progress and subject reduction theorems that ensure that our small-step semantics is compatible with the annotated type system (figure 8, small-step semantics t → t′, excerpt; e.g. [E-JoinAnn]: ann_ℓ1(ann_ℓ2(v)) → ann_(ℓ1 ⊔ ℓ2)(v)). The following progress theorem states that any well-typed term either is in normal form or can perform an evaluation step.
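The [E-JoinAnn] rule can be sketched over a toy value syntax; the names (`Lvl`, `lub`, `Val`, `mergeAnn`) are ours, and the point is only to show how repeated application of the rule leaves at most one top-level annotation:

```haskell
-- A two-point annotation lattice.
data Lvl = L | H deriving (Eq, Show)

lub :: Lvl -> Lvl -> Lvl
lub L L = L
lub _ _ = H

-- A toy value syntax: a bare unit value, possibly under annotations.
data Val = VUnit | VAnnot Lvl Val deriving (Eq, Show)

-- Repeatedly apply [E-JoinAnn]:  ann_l1 (ann_l2 v) --> ann_(l1 \/ l2) v,
-- until at most one annotation remains at the top level.
mergeAnn :: Val -> Val
mergeAnn (VAnnot l1 (VAnnot l2 v)) = mergeAnn (VAnnot (lub l1 l2) v)
mergeAnn v                         = v
```

The merged annotation is the join of everything that was stacked on the value, i.e. the least "permission" an evaluator must possess to observe it.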
Theorem 1 (Progress). If ∅ | ∅ ⊢te t̂ : τ̂ & ξ, then either t̂ ∈ Nf or there is a t̂′ such that t̂ → t̂′.
The subject reduction property says that the reduction of a well-typed term results in a term of the same type.
As expected, subject reduction extends naturally to a sequence of reductions, by induction on the length of the reduction sequence. Here, as usual, we write t →* v if there is a finite sequence of terms (t_i) with 0 ≤ i ≤ n such that t_0 = t and t_n = v ∈ Nf, together with reductions t_i → t_(i+1) for 0 ≤ i < n between them. If there is no such sequence, this is denoted by t ⇑ and t is said to diverge.
Finally, if a term evaluates to an annotated value, this annotation is compatible with the dependency annotation that has been assigned to the term. The noninterference property An important theorem for the safety of program transformations/optimizations using the results of dependency analysis is noninterference. It guarantees that if there is a target term t̂ depending on some variable x such that ∅ | x : τ̂′ & ξ′ ⊢te t̂ : τ̂ & ξ holds and the dependency annotation ξ′ of the variable is not encompassed by the resulting dependency annotation ξ (i.e. ∅ ⊬sub ξ′ ⊑ ξ), then t̂ will always evaluate to the same normal form, regardless of the value of x.
Since we are in a non-strict setting, our noninterference property only applies to the topmost constructors of values. This is because the dependency annotations derived in the annotated type system only provide information about evaluation to weak head normal form. Nested terms might possess lower as well as higher classifications. In particular, subterms with greater dependency annotations than their enclosing constructors prevent us from making a more general statement, because those can still depend on the context whereas the top-level constructor cannot. In the noninterference theorem presented for the SLam calculus, this problem is circumvented by restricting the statement to so-called transparent types, where the annotations of nested components are decreasing when moving further inward [9].
In the following we consider two normal forms v1, v2 ∈ Nf to be similar, denoted v1 ∼ v2, if their top-level constructors (and annotations, if present) match (see the supplemental material for the unsurprising definition of ∼). So, v1 ∼ v2 implies that these two values are indistinguishable without further evaluation, which is the property guaranteed by the noninterference theorem.
Theorem 4 (Noninterference). Let t̂ be a target term such that ∅ | x : τ̂′ & ξ′ ⊢te t̂ : τ̂ & ξ and ∅ ⊬sub ξ′ ⊑ ξ. Let v be a value. If there is a t̂1 with ∅ | ∅ ⊢te t̂1 : The noninterference proofs crucially rely on the fact that the source term is well-typed, and on the additional assumption ∅ ⊬sub ξ′ ⊑ ξ, stating that the dependency annotation of the variable in the context is not encompassed by the dependency annotation of the term being evaluated.
By introducing the restriction to transparent types, we can recover the notion of noninterference used for the SLam calculus. For example, if we have a transparent type τ̂1^ξ1 × τ̂2^ξ2 & ξ (i.e. ∅ ⊢sub ξ1 ⊑ ξ and ∅ ⊢sub ξ2 ⊑ ξ) and ∅ ⊬sub ξ′ ⊑ ξ holds, then we also know ∅ ⊬sub ξ′ ⊑ ξ1 and ∅ ⊬sub ξ′ ⊑ ξ2. Otherwise, we would get ∅ ⊢sub ξ′ ⊑ ξ by transitivity, contradicting the assumption. This means all prerequisites of the noninterference theorem are still fulfilled.
Hence, it is possible in these cases to apply the noninterference theorem to the nested (possibly unevaluated) subterms of a constructor in weak head normal form. As in the work of [1], our noninterference theorem is restricted to deal with terms depending on exactly one variable.

The type reconstruction algorithm
Modularity considerations When designing the type reconstruction algorithm we have two goals: it should be a conservative extension of the underlying type system, and the types assigned by the analysis should be as general as possible. Concretely, a function's type must be general enough to be able to adapt to arguments with arbitrary annotations. These two goals give rise to the notions of fully flexible and fully parametric types defined by [12]; [16] calls these types conservative and pattern types, respectively. Informally, an annotated type is a pattern type if it can be instantiated to any conservative type of the same shape, and a conservative type is an analysis of an expression that is able to cope with any arguments it might depend on. These types are conservative in the sense that they make the least assumptions about their arguments and therefore are a conservative estimate compared to other typings with fewer degrees of freedom.
For a pattern type to be instantiable to any conservative type, we first need to make sure that all dependency annotations occurring in it can be instantiated to the corresponding dependency terms in a matching conservative type. This leads to the following definition of a pattern in the λ⊔-calculus. It is based on the similar definition by [16], which in turn is a special case of a pattern in higher-order unification theory [4,21]. A λ⊔-term is a pattern if it is of the form f β1 ⋯ βn, where f is a free variable and β1, ..., βn are distinct bound variables. A unification problem of the form ∀β1 ⋯ βn. f β1 ⋯ βn = ξ, where the left-hand side is a pattern, is called pattern unification. Such a problem has a unique most general solution, namely the substitution [f ↦ λβ1. ⋯ λβn. ξ] [4]. The definition of a pattern is then extended to annotated types using the rules from figure 9 (which include rules such as [P-Arr] for product and arrow types). Our definition is more precise than the one from previous work in that it makes explicit which variables are expected to be bound and which are free. We require that all differently named variables in the definition of these rules are distinct from each other.
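The most general solution of a pattern unification problem can be computed directly, since it is just the right-hand side wrapped in one lambda per pattern variable. The following sketch (our own names: `Tm`, `solvePattern`, `apply`, `substVar`) also includes a small beta-reducer so the solution can be checked by applying it back to the pattern variables; the substitution is naive and assumes the arguments do not capture bound variables, which suffices for this check:

```haskell
-- A minimal lambda-term syntax for annotation terms.
data Tm
  = Var String
  | Lam String Tm
  | App Tm Tm
  deriving (Eq, Show)

-- Solve the pattern problem  f b1 ... bn = rhs  (f free, the bi distinct
-- bound variables): the unique most general solution maps f to
-- \b1. ... \bn. rhs.
solvePattern :: String -> [String] -> Tm -> (String, Tm)
solvePattern f bs rhs = (f, foldr Lam rhs bs)

-- Naive substitution of a single variable (no capture avoidance;
-- adequate here because we only substitute for the pattern variables).
substVar :: String -> Tm -> Tm -> Tm
substVar b arg (Var x)
  | x == b    = arg
  | otherwise = Var x
substVar b arg (Lam x t)
  | x == b    = Lam x t
  | otherwise = Lam x (substVar b arg t)
substVar b arg (App t1 t2) = App (substVar b arg t1) (substVar b arg t2)

-- One step of beta reduction, used to check the solution.
apply :: Tm -> Tm -> Tm
apply (Lam b body) arg = substVar b arg body
apply t            arg = App t arg
```

Applying the computed solution for f back to the pattern variables beta-reduces to the original right-hand side ξ, which is the sense in which the substitution solves the problem.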
An annotated type and dependency pair τ̂ & ξ is a pattern type under the sort environment Σ if the judgment Σ ⊢p τ̂ & ξ ⇝ Σ′ holds for some Σ′. We call the variables in Σ argument variables and the variables in Σ′ pattern variables. Example 1. A simple pattern type, with two higher-sorted pattern variables of sorts ⋆ ⇒ ⋆ and ⋆ ⇒ ⋆ ⇒ ⋆ (call them β3 and β4), is ∀β1 :: ⋆. unit^β1 → (∀β2 :: ⋆. unit^β2 → unit^(β4 β1 β2))^(β3 β1). Note that since β1 is quantified on the function arrow chain, it is passed on to the second function arrow. However, it is not propagated into the second argument. In general, annotations on the return type may depend on the annotations of all previous arguments, while annotations of the arguments may not. This prevents any dependency between the annotations of arguments and guarantees that they are as permissive as possible. This is also why pattern variables in a covariant position are passed on to the next higher level, while pattern variables in arguments are quantified in the enclosing function arrow. This allows the caller of a function to instantiate the dependency annotations of the parameters to the actual arguments.
As we stated earlier, a conservative function type makes the least assumptions about its arguments. Formally, this means that the arguments of conservative functions are pattern types. We will later see that a pattern type can be instantiated to any conservative type of the same shape. Non-functional conservative types, on the other hand, are not constrained in their annotations. These characteristics are captured by the following definition, based on conservative types [16] and fully flexible types [12].
Moreover, an annotated type and dependency pair τ̂ & ξ is conservative if τ̂ is conservative, and an annotated type environment Γ̂ is conservative if all types bound in it are conservative. The following type signature for the function f is a conservative type that takes the function type from example 1 as an argument.
(The displayed signature of f, embedding the pattern type of example 1 as its argument, is elided here.) Note that the pattern variables of the argument have been bound in the top-level function type. This allows callers of f to instantiate these patterns.
We can extend the previous definition of pattern types to the type completion relation shown in figure 10. It relates every underlying type τ to a pattern type τ̂ such that τ̂ erases to τ. It is defined through judgments Σ ⊢c τ : τ̂ & ξ ⇝ Σ′, with the meaning that, under the sort environment Σ, τ is completed to the annotated type τ̂ and the dependency annotation ξ containing the pattern variables Σ′. The completion relation can also be interpreted as a function taking Σ and τ as arguments and returning τ̂, ξ and Σ′.
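To make the functional reading concrete, here is a rough Python sketch of completion for first-order underlying types only. The type encoding, the variable-naming scheme, and the representation of a pattern variable's sort by its arity (the number of ⋆-arguments it takes) are assumptions for illustration, not the paper's figure-10 rules:

```python
import itertools

def complete(ty, args, fresh):
    """ty: 'unit' or ('->', t1, t2); args: argument variables in scope.
    Returns (annotated_type, pattern_vars): each covariant position is
    annotated by a fresh pattern variable applied to the in-scope
    argument variables; pattern_vars maps pattern variables to their
    arity.  Sketch: base-type arguments only."""
    if ty == 'unit':
        b = f"p{next(fresh)}"
        return ('unit', (b, *args)), {b: len(args)}
    _, t1, t2 = ty
    v = f"a{len(args) + 1}"                 # fresh argument variable for this arrow
    assert t1 == 'unit', "sketch handles base-type arguments only"
    arg_ann = ('unit', (v,))                # the argument is annotated by v itself
    body, pv = complete(t2, args + [v], fresh)
    b = f"p{next(fresh)}"                   # pattern variable annotating this arrow
    return ('forall', v, arg_ann, body, (b, *args)), {b: len(args), **pv}
```

Completing unit → unit → unit from an empty scope produces three pattern variables of arities 2, 1 and 0, mirroring the sorts ⋆ ⇒ ⋆ ⇒ ⋆, ⋆ ⇒ ⋆ and ⋆ of example 1.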
Lastly, we revisit the examples from the previous sections and show how a pattern type can be mechanically derived from an underlying type.
In example 1 we presented a pattern type for the underlying type unit → unit → unit. Using the type completion relation, we can derive this pattern type without having to guess: the components τ̂, ξ and Σ′ in a judgment Σ ⊢c τ : τ̂ & ξ ⇝ Σ′ are uniquely determined by Σ and τ from the syntax alone. The resulting pattern type contains three pattern variables, β̂1 :: ⋆ ⇒ ⋆, β̂2 :: ⋆ ⇒ ⋆ ⇒ ⋆ and β̂3 :: ⋆. If the initial sort environment is empty, these are also the only free variables of the pattern type.
Based on the type completion relation we can define least type completions. These are conservative types that are subtypes of all other conservative types of the same shape. Therefore, all annotations occurring in positive positions on the top-level function arrow chain must also be least. We do not need to consider arguments here because those are by definition equal up to α-conversion, due to being pattern types. We define the least annotation term ⊥κ of sort κ by ⊥⋆ = ⊥ and ⊥κ1⇒κ2 = λβ :: κ1. ⊥κ2. These least annotation terms correspond to the least elements of our bounded lattice for a given sort κ. This in turn leads us to the definition of the least completion of a type τ (see figure 10), obtained by substituting, for every free variable in the completion, the least annotation term of the corresponding sort.
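The least annotation terms can be transcribed directly, under the assumption that sorts are encoded as '*' for the base sort and ('=>', κ1, κ2) for arrow sorts:

```python
def least(sort):
    """Least annotation term of a sort:
    bot_* = bot;  bot_{k1 => k2} = \\b :: k1. bot_{k2}."""
    if sort == '*':
        return 'bot'                    # bottom of the dependency lattice
    _, k1, k2 = sort
    return ('lam', k1, least(k2))       # eta-long lambda returning the bottom of k2
```

For instance, the least annotation term of sort ⋆ ⇒ ⋆ ⇒ ⋆ is λβ :: ⋆. λβ′ :: ⋆. ⊥.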

The algorithm
We can now move on to the type reconstruction algorithm that performs the actual analysis. At its core lies algorithm R, shown in figure 11. The input of the algorithm is a triple (Γ̂, Σ, t) consisting of a well-typed source term t, an annotated type environment Γ̂ providing the types and dependency annotations of the free term variables in t, and a sort environment Σ mapping each free annotation variable in scope to its sort. It returns a triple t̂ : τ̂ & ξ consisting of an elaborated term t̂ in the target language (that erases to the source term t), an annotated type τ̂ and a dependency annotation ξ such that Σ | Γ̂ ⊢te t̂ : τ̂ & ξ holds. In the definition of R, to avoid clutter, we write Γ instead of Γ̂, because we are only dealing with one kind of type environment.
The algorithm relies on the invariant that all types in the type environment, as well as the inferred type, are conservative. In the version of [16], all inferred dependency annotations (including those nested as annotations in types) had to be canonically ordered as well. But it turned out that this canonically ordered form was not enough for deciding semantic equality, so we lifted this requirement. We still mark those places in the algorithm where canonicalization would have occurred; the actual result of this operation does not matter as long as the dependency terms remain equivalent.
The algorithm for computing the least upper bound of types (⊔ in figure 12) requires that both types are conservative, have the same shape, and use the same names for bound variables. The latter can be ensured by α-conversion, while the former two requirements are fulfilled by how this function is used in R.
The restriction to conservative types allows us to ignore function arguments, because these are always required to be pattern types, which are unique up to α-equivalence. This alleviates the need for computing a corresponding greatest lower bound of types, because the algorithm only traverses covariant positions. The handling of λ-abstractions uses the type completion algorithm C of figure 12, which defers its work to the type completion relation defined earlier, interpreted in a functional way (see figure 10). The underlying type of the function argument is completed to a pattern type, and the function body is analyzed in the presence of the newly introduced pattern variables. Note that this pattern type is also conservative, thereby preserving the invariant that the context only holds conservative types. The inferred annotated type of the lambda abstraction universally quantifies over all pattern variables, and the quantification is reflected on the term level through annotation abstractions Λβ :: κ. t.
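The covariant-only traversal can be sketched as follows for the binding-time instance, assuming ground annotations S/D rather than general annotation terms and a tuple encoding ('unit', ann) and ('forall', v, arg, body, ann) of types; both the encoding and the names are illustrative. The argument position is simply carried over from the first type, since the invariant above makes it α-equivalent to that of the second:

```python
def lub_ann(x, y):
    # join in the two-point binding-time lattice, with S below D
    return 'D' if 'D' in (x, y) else 'S'

def lub_type(t1, t2):
    """Least upper bound of two conservative types of the same shape:
    join annotations in covariant positions only; the pattern-typed
    argument of the first type is kept as-is, so no greatest lower
    bound of types is ever needed."""
    if t1[0] == 'unit':
        return ('unit', lub_ann(t1[1], t2[1]))
    _, v, arg, body1, a1 = t1
    _, _, _, body2, a2 = t2
    return ('forall', v, arg, lub_type(body1, body2), lub_ann(a1, a2))
```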

Soundness and Completeness
An annotated type environment Γ̂ is well-formed under a sort environment Σ if Γ̂ is conservative and for all bindings x : τ̂ & ξ in Γ̂ we have Σ ⊢wft τ̂ and Σ ⊢s ξ : ⋆.
In order to demonstrate the correctness of the reconstruction algorithm presented in this section, we have to show that for every well-typed underlying term it produces an analysis (i.e., annotated types and dependency annotations) that can be derived in the annotated type system (see figure 6). That is to say, algorithm R is sound w.r.t. the annotated type system.

Theorem 5 (Soundness). Let t be a source term, Σ a sort environment and Γ̂ an annotated type environment well-formed under Σ such that R(Γ̂; Σ; t) = t̂ : τ̂ & ξ for some t̂, τ̂ and ξ. Then Σ | Γ̂ ⊢te t̂ : τ̂ & ξ.
The next step is to show that our analysis succeeds in deriving an annotated type and dependency annotation for any well-typed source term: it is complete.
The crucial part here is the termination of the fixpoint iteration. In order to show the convergence of the fixpoint iteration, we start by defining an equivalence relation on annotated type and dependency pairs.
Our type reconstruction algorithm handles polymorphic recursion through Kleene-Mycroft iteration. Such an algorithm is based on fixpoint iteration and needs a way to decide whether two dependency terms are equal according to the denotational semantics of the annotation λ-calculus.
A straightforward way to decide semantic equivalence is to enumerate all possible environments and compare the denotations of the two terms in all of these (possibly after some semantics preserving normalization). This only works if the dependency lattice L is finite.
For some analyses, e.g., when L is built from the set of all program locations in a slicing analysis, L is finite but large, and deciding equality in this fashion becomes impractical. To alleviate this problem, our prototype implementation applies a partial canonicalization procedure which, while not complete, can serve as an approximation of equality: if two canonicalized dependency terms become syntactically equal, then we can be assured that they are semantically equal; if they are not, we can still apply the above procedure to the canonicalized dependency terms. We omit the formal details from the paper.
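The brute-force equality check over a finite lattice can be sketched as follows; denotations are modeled as plain Python functions over environments, which is illustrative rather than the prototype's actual representation:

```python
from itertools import product

def equivalent(f1, f2, free_vars, lattice):
    """Decide semantic equality of two dependency denotations over a
    finite lattice by comparing them in every environment, i.e. every
    assignment of a lattice element to each free variable.  Cost grows
    as |lattice| ** |free_vars|, hence only feasible for small lattices."""
    for values in product(lattice, repeat=len(free_vars)):
        env = dict(zip(free_vars, values))
        if f1(env) != f2(env):
            return False
    return True
```

For the two-point binding-time lattice, for example, the terms x ⊔ x and x are identified, while x and the constant S are not.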
We can now state our completeness result for the type reconstruction algorithm. Here, we write Γ ⊢t t : τ to say that term t has type τ under the environment Γ in the underlying type system.

Theorem 6 (Completeness). Given a source term t, a sort environment Σ, an annotated type environment Γ̂ well-formed under Σ, and an underlying type τ such that Γ ⊢t t : τ, there are t̂, τ̂ and ξ such that R(Γ̂; Σ; t) = t̂ : τ̂ & ξ, where τ̂ erases to τ and t̂ erases to t.
As a corollary of the foregoing theorems, our analysis is a conservative extension of the underlying type system.

Corollary 2 (Conservative Extension). Let t be a source term, τ a type and Γ a type environment such that Γ ⊢t t : τ. Then there are Σ, Γ̂, t̂, τ̂ and ξ such that Σ | Γ̂ ⊢te t̂ : τ̂ & ξ, where t̂, τ̂ and Γ̂ erase to t, τ and Γ, respectively.

Implementation and Examples
Beyond the definition of the annotated system and the development of the associated algorithm and meta-theory, we also have a REPL prototype implementation of our analysis in Haskell. Compared to the annotated type system in the paper, the prototype provides support for booleans and integers, including literals and conditionals if c then t1 else t2, for which the type rules can be straightforwardly derived. Concrete lattice implementations are provided only for binding-time analysis and security analysis, but the reconstruction algorithm abstracts away from the choice of a particular lattice, so it is easy to add new instances. The implementation is available at http://www.staff.science.uu.nl/~hage0101/prototype-hrp.zip. Below we walk through a few examples, taking advantage of the slightly extended source language that our implementation supports. More (detailed) examples are discussed in [26].

Construction and Elimination
Whenever something is constructed, be it a product, a sum or a lambda abstraction, the outermost dependency annotation is ⊥. This is because the analysis aims to produce the best possible and thereby least annotations for a given source program.
Consider the case of binding-time analysis, and suppose we have a variable of function type f : (∀β. int^β → int^β) & D. We can see that it preserves the annotations of its arguments: if we apply f to a static value, the return annotation is also instantiated to be static. The function itself, however, is dynamic, and therefore the whole result of the function application must also be dynamic, because we cannot know which particular function has been assigned to f.
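The reasoning in this paragraph amounts to joining the function's own annotation into the instantiated result annotation of an application. A minimal sketch for the two-point binding-time lattice S ⊑ D (the encoding and names are assumptions, not the paper's rules):

```python
def join(a, b):
    # join in the binding-time lattice: S is bottom, D is top
    return 'D' if 'D' in (a, b) else 'S'

def apply_result(fun_ann, instantiated_result_ann):
    """Observable annotation of an application: the result annotation
    obtained by instantiating the function's annotated type, joined
    with the annotation of the function itself, because depending on
    an unknown (dynamic) function taints the result."""
    return join(fun_ann, instantiated_result_ann)
```

Applying the dynamic f above to a static value instantiates the result annotation to S, but the join with D makes the application as a whole dynamic.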
Elimination always introduces a dependency in the program, and this can uncover subtleties arising when functions differ only in their termination behavior. For example, compare λp : int × int. p with λp : int × int. (proj1(p), proj2(p)). In a call-by-value language, these two functions would be (extensionally) equivalent. However, with non-strict evaluation, p might be a non-terminating computation. In that case, applying the former function would diverge, while the latter function at least produces the pair constructor. This is also reflected in the annotated types that are inferred for the two functions. In particular, the annotation of the product in the second type signature is S; therefore, it cannot depend on the input of the function.
Polymorphic Recursion

One class of functions where the analysis benefits from polymorphic recursion are those that permute their arguments on recursive calls. Our example is a slightly modified version of an example from [5]:

μf : bool → bool → bool. λx : bool. λy : bool. if x then true else f y x

With monomorphic recursion, the analysis assigns the same annotation to both parameters, large enough to accommodate both arguments. This is due to the permutation of the arguments in the else branch. An analysis with polymorphic recursion is allowed to use a different instantiation for f in that case. Our algorithm hence infers a most general type in which both arguments are completely unrestricted and unrelated in their annotations. In contrast, a type system with monomorphic recursion would only admit a weaker type. A real-world example of this kind is Euclid's algorithm for computing the greatest common divisor (see [26]).
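The shape of the Kleene-Mycroft iteration used to handle such recursive bindings can be sketched generically. Here `analyze` stands for re-analysis of the recursive body under an assumed type for f, and `equal` for the semantic-equality check discussed below; least completions and the concrete type representation are abstracted away, so this is a structural sketch only:

```python
def kleene_mycroft(analyze, bottom_type, equal):
    """Fixpoint iteration for polymorphic recursion: start from the
    least completion of the recursive binding's type, re-analyze the
    body under that assumption, and stop once the inferred type no
    longer changes.  Termination relies on there being only finitely
    many equivalence classes of annotated types to climb through."""
    current = bottom_type
    while True:
        new = analyze(current)
        if equal(new, current):
            return current
        current = new
```

As a degenerate illustration, iterating a monotone step function on a small chain of "types" 0 < 1 < 2 < 3 converges to the fixpoint 3.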
Higher-Ranked Polyvariance

This section discusses several examples for the binding-time analysis instance of our dependency analysis, comparing our outcomes with those of a let-polyvariant analysis [29].
A simple example to start with is a function both that applies a function to both components of a pair. Suppose, in the context of binding-time analysis, that both is used to apply a statically known function to a pair whose first component is always computable at compile time, but whose second component is dynamic. For simplicity's sake, the function is the identity on integers.
id : int → int
id = λx : int. x

A non-higher-ranked analysis would assign monomorphic annotated types to both and id. The annotation on the function argument to both must be large enough to accommodate both components of the pair as input. When we consider the call both id p for some pair p : int^S × int^D & S, the whole call therefore has the type int^D × int^D.
Our higher-ranked analysis infers the following conservative types for id and both.
Λβ3 :: ⋆. Λβ4 :: ⋆. Λβ5 :: ⋆. λp : int^{β3} × int^{β4}. (f β3 β5 (proj1(p)), f β4 β5 (proj2(p)))

In the case of both, the function parameter f can be instantiated separately for each component, because our analysis assigns it a type that universally quantifies over the annotation of its argument. It is evident from the type signature that the components of the resulting pair only depend on the corresponding components of the input pair, and on the function and the input pair itself. They do not depend on the respective other component of the input. If we again consider the call both id p, we obtain β2 = λβ :: ⋆. β, β1 = β3 = β5 = S and β4 = D through pattern unification. Normalization of the resulting dependency terms yields the expected return type int^S × int^D.
The generality provided by the higher-ranked analysis extends to an arbitrarily deep nesting of function arrows. The following example demonstrates this for two levels of arrows. Functions with more than two levels of arrows can arise directly in actual programs, but even more so in desugared code, e.g., when type classes in Haskell are implemented via explicit dictionary passing. Due to limitations of our source language, the examples are syntactically heavily restricted.
Consider the following function that takes a function argument which again requires a function.
The higher-ranked analysis infers the following type and target term (where we omitted the type in the argument of the lambda term because it essentially repeats what is already visible in the top-level type signature).
(f S (λβ0 :: ⋆. β0) (Λβ5 :: ⋆. λx : int & β5. x), f S (λβ0 :: ⋆. S) (Λβ6 :: ⋆. λx : int & β6. 1))

Since the type of f is a pattern type, the argument to f is also a pattern type by definition. Therefore, the analysis of f depends on the analysis of the function passed to it. This gives rise to the higher-order effect operator β3 [12]. Thus, f can be applied to any function with a conservative type of the right shape. As our algorithm always infers conservative types, the type of f is as general as possible. This is reflected in the body of the lambda, where in both cases f is instantiated with the dependency annotation corresponding to the function passed to it. The result of this instantiation can be observed in the returned product type, where β3 is applied to the effect operators λβ0 :: ⋆. β0 and λβ0 :: ⋆. S corresponding to the respective functions used as arguments to f. Only when we finally apply foo can the resulting annotations be evaluated.

Related Work
The basis for most type systems of functional programming languages is the Hindley-Milner type system [22]. Our algorithm R strongly resembles the well-known type inference algorithm for the Hindley-Milner type system, Algorithm W [3], a distinct advantage of our approach. The idea to define an annotated type system as a means to design static analyses for higher-order languages is attributed to [19]. The major technical difference compared to a let-polyvariant analysis is that our annotations form a simply typed lambda-calculus.
Full reconstruction for a higher-ranked polyvariant annotated type system was first considered by [12] in the context of a control-flow analysis. However, we found that the (constraint-based) algorithm as presented in [12] generates constraints free of cycles. Therefore, it cannot faithfully reflect the constraints necessary for the fixpoint combinator. The algorithm incorrectly concludes for the following example that only the first and third 'False' term flow into the condition x , but not the second one.
(fix (λf . λx . λy. λz . if x then True else f z x y)) False False False We reproduced this mistake with their implementation and verified that the mistake was not a simple bug in that implementation.
Close to our formulation is the (unpublished) work of [16], which deals with exception analysis and uses a simply typed lambda-calculus with sets to represent annotations. We have chosen a more modular approach in which we offload much of the complexity of dealing with lattice values to the lattice. In [16], terms from the simply typed lambda-calculus with sets are canonicalized and then checked for alpha-equivalence during Kleene-Mycroft iteration. We found, however, that two terms can have different canonical forms even though they are actually semantically equivalent. This causes Koot's reconstruction algorithm to diverge on a particular class of programs, because the inferred annotations continue to grow. The simplest such program we found is the following.
µf : (unit → unit) → unit → unit.λg : unit → unit.λx : unit.g (f g x ) Our solution is to apply canonicalization to simplify terms as much as possible, and then compare the outcomes for all possible inputs.
The Dependency Core Calculus was introduced by [1] as a unifying framework for dependency analyses. Instances include binding-time analysis (see, e.g., [29]), exception analysis [17,16], secure information flow analysis [9] and static slicing [27]. They devised the Dependency Core Calculus (DCC) to which each instance of a dependency analysis can be mapped. This allowed them to compare different dependency analyses, uncover problems with existing instance analyses and to simplify proofs of noninterference [8,20]. The instance analyses in [1] were defined as a monovariant type and effect system with subtyping, for a monomorphic call-by-name language. An implicit, let-polymorphic implementation of DCC, FlowCaml, was developed by [25]. It is not higher-ranked.
The difference between DCC and our analysis is to a large extent a different focus: the DCC is a calculus defined in a way that any calculus that elaborates to DCC has the noninterference property and any other properties proven for the calculus. On the other hand, our analysis is meant to be implemented in a compiler (with the added precision), and that implementation (and its associated meta-theory) can then be reused inside the compiler for a variety of analyses. Comparable to DCC, we have proven a noninterference property for our generic higher-rank polyvariant dependency analysis, so that all its instances inherit it.
The Haskell community supports an implementation of DCC in which the (security) annotations are lifted to the Haskell type level [2]. Since the GHC compiler supports higher-rank types, the code written with this library can in fact model security flows with higher rank. Because of the general undecidability of full reconstruction for higher-rank types [14], the programmer must, however, provide explicit type information. In [18], the authors introduce dependent flow types, which allow them to express a large variety of security policies. An essential difference with our work is that our approach is fully automated.
Early on in our research, we observed that the approach of [11] may lead to similar precision gains as higher-ranked annotations do. Since they deal with a different analysis, a direct comparison is impossible to make at this time.

Conclusion and Future Work
We have defined a higher-rank annotation polymorphic type system for a generic dependency analysis, established its soundness and provided a sound and complete reconstruction algorithm. Examples show that we can achieve higher precision than plain let-polyvariance. The analysis we have defined is for a call-by-name language. We expect the results to hold as well for a lazy language, but chose call-by-name for reduced bookkeeping in the proofs. We also believe the analysis can be adapted relatively easily to one for a call-by-value language, by letting the annotation on the argument flow into the effect of the call. However, we would need to re-examine the metatheory.
In future work we want to consider whether we can further refine the canonicalization of annotation λ-terms so that syntactic equality up to α-equivalence can completely replace our current approach.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.