Synthesis with Asymptotic Resource Bounds

We present a method for synthesizing recursive functions that satisfy both a functional specification and an asymptotic resource bound. Prior methods for synthesis with a resource metric require the user to specify a concrete expression exactly describing resource usage, whereas our method uses big-O notation to specify the asymptotic resource usage. Our method can synthesize programs with complex resource bounds, such as a sort function that has complexity O(n log(n)). Our synthesis procedure uses a type system that is able to assign an asymptotic complexity to terms, and can track recurrence relations of functions. These typing rules are justified by theorems used in the analysis of algorithms, such as the Master Theorem and the Akra-Bazzi method. We implemented our method as an extension of prior type-based synthesis work. Our tool, SynPlexity, was able to synthesize complex divide-and-conquer programs that cannot be synthesized by prior solvers.


Introduction
Program synthesis is the task of automatically finding programs that meet a given behavioral specification, such as input-output examples or complete formal specifications. Most of the work on program synthesis has been devoted to qualitative synthesis, i.e., finding some correct solution. However, programmers often want more than just a correct solution: they may want the program that is smallest, most likely, or most efficient. While there are some techniques for adding a quantitative syntactic objective in program synthesis [11], e.g., finding a smallest solution, or a most likely solution with respect to some distribution, little attention has been devoted to quantitative semantic objectives, e.g., synthesizing a program that has a certain asymptotic complexity.
Recently, Knoth et al. [15] studied the problem of resource-guided program synthesis, where the goal is to synthesize programs with limited resource usage. Their approach, which combines refinement-type-directed synthesis [17] and automatic amortized resource analysis (AARA) [9], is restricted to concrete resource bounds, where the user must specify the exact resource usage of the synthesized program as a linear expression. This limitation has two drawbacks: (i) the user must have insights about the coefficients to put in the supplied bound, which means that the user has to provide details about the complexity of code that does not yet exist; (ii) the limitation to linear bounds means that the user cannot specify resource bounds that involve logarithms, such as O(log n) and O(n log n), which are common in problems based on divide and conquer.
In this paper, we introduce SynPlexity, a type system paired with a type-directed synthesis technique that addresses these issues. In SynPlexity, the user provides as input a refinement type that describes both the functionality and the asymptotic (big-O) resource usage of a program. For example, a user might ask SynPlexity to synthesize an implementation of a sorting function with resource usage O(n log n), where n is the length of the input list. As in prior work, SynPlexity also takes as input a set of auxiliary functions that the synthesized program can use. SynPlexity then uses a type-directed synthesis algorithm to search for a program that has the desired functionality and satisfies the asymptotic resource bound. SynPlexity's synthesis algorithm uses a new type system that can reason about the asymptotic complexity of functions. To achieve this goal, the type system uses two ideas.
1. The type system uses recurrence relations instead of concrete resource potentials [9] to reason about the asymptotic complexity of functions. For example, the recurrence relation T(u) ≤ 2T(⌊u/2⌋) + O(u) denotes that on an input of size u, the function will perform at most two recursive calls on inputs of size at most ⌊u/2⌋, and will use at most O(u) resources outside of the recursive calls. For a given recurrence relation, our type system uses refinement types to guarantee that a function typed with this recurrence relation performs the correct number of recursive calls on parameters of the appropriate sizes.
2. The typing rules are justified by classic theorems from the field of analysis of algorithms, such as the Master Theorem [5], the Akra-Bazzi method [1], and C-finite-sequence analysis [12].
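The connection between the shape of a recurrence relation and an asymptotic bound can be made concrete. The following Python helper is a hypothetical illustration (not part of SynPlexity) that classifies recurrences of the form T(u) ≤ a·T(u/b) + O(u^k) using the Master Theorem:

```python
import math

def master_theorem(a, b, k):
    """Classify T(u) <= a*T(u/b) + O(u^k) via the Master Theorem.

    Assumes the standard applicability conditions: a >= 1, b > 1, k >= 0.
    Returns a string describing the asymptotic bound on T.
    """
    crit = math.log(a, b)           # critical exponent log_b(a)
    if k < crit:
        return f"O(u^{crit:g})"     # recursion-dominated
    elif k == crit:
        if k == 0:
            return "O(log u)"       # e.g. T(u) <= T(u/2) + O(1)
        return f"O(u^{k:g} log u)"  # e.g. T(u) <= 2T(u/2) + O(u)
    else:
        return f"O(u^{k:g})"        # work-dominated
```

For instance, the recurrence T(u) ≤ 2T(⌊u/2⌋) + O(u) from the text falls in the middle case and yields O(u log u).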
Guéneau et al. observed that reasoning with O-notation can be tricky, and exhibited a collection of plausible-sounding, but flawed, inductive proofs [8, §2]. We avoid this pitfall via SynPlexity's type system, which establishes whether a term satisfies a given recurrence relation. SynPlexity uses theorems that connect the form of a recurrence relation (e.g., the number of recursive calls, and the argument sizes in the subproblems) to its asymptotic complexity. In particular, the SynPlexity type system does not encode inductive proofs of the kind that Guéneau et al. show can go astray.
SynPlexity can synthesize functions with complexities that cannot be handled by existing type-directed tools [17,15], and compares favorably with existing tools on their benchmarks. Furthermore, for some domains, SynPlexity's type system allows us to discover auxiliary functions automatically (e.g., the split function of a merge sort), instead of requiring the user to provide them.

Contributions. The contributions of our work are as follows:
- A type system that uses refinement types to check whether a program satisfies a recurrence relation over a specified resource (§3).
- A type-directed algorithm that uses our type system to synthesize functions with given resource bounds (§4, §5).
- SynPlexity, an implementation of our algorithm that, unlike prior tools, can synthesize programs with desired asymptotic complexities (§6).
Complete proofs and details of the type system can be found in the appendices.

Overview
In this section, we illustrate the main components of our algorithm through an example. Consider the problem of synthesizing a function prod that implements the multiplication of two natural numbers, x and y. We want an efficient solution whose time complexity is O(log x) with respect to the value of the first argument x. In §2.1, we show how existing type-directed synthesizers solve this problem in the absence of a complexity-bound constraint. In §2.2, we illustrate how to specify asymptotic bounds in type-directed synthesis problems. In §2.3, we show how the tracking of recurrence relations can be used to establish complexity bounds as well as guide the synthesis search.

Type-Directed Synthesis
We first review one of the state-of-the-art type-directed synthesizers, Synquid, through the aforementioned example, i.e., synthesizing a program prod that computes the product of two natural numbers. In Synquid, the specification is given as a refinement type that describes the desired behavior of the synthesized function. We specify the behavior of prod using the following refinement type:

prod :: x:{Int | v ≥ 0} → y:{Int | v ≥ 0} → {Int | v = x * y}

Here the types of the inputs x and y, as well as the return type of prod, are refined with predicates. The refinement {Int | v ≥ 0} declares x and y to be non-negative, and the refinement {Int | v = x * y} of the return type declares the output value to be an integer that is equal to the product of the inputs x and y.
In addition to the specification, the synthesizer receives as input some signatures of auxiliary functions it can use. The specifications of auxiliary functions are also given as refinement types. In our example, we have the following functions:

plus :: x:Int → y:Int → {Int | v = x + y}
dec :: x:Int → {Int | v = x - 1}

With the above specification and auxiliary functions, Synquid will output the implementation of prod shown in Eqn. (3).

prod = λx.λy.if x==0 then x else plus y (prod (dec x) y)    (3)

Synquid uses a sophisticated type system to guarantee that the synthesized term has the desired type. Furthermore, Synquid uses its type system to prune the search space by only enumerating terms that can possibly be typed, and thus meet the specification. Terms are enumerated in a top-down fashion, and appropriate specifications are propagated to sub-terms. As an example, let us see how Synquid synthesizes the function body of Eqn. (3), an if-then-else term of refinement type {Int | v = x * y}. Synquid will first enumerate an integer term for the then branch: the variable term x. Then, with the then branch fixed, the condition guard must be refined by some predicate ϕ under which the then branch (the term x) satisfies the goal type {Int | v = x * y}. With this constraint, Synquid identifies the term x == 0 as the condition. Finally, Synquid propagates the negation of the condition to the else branch: the else branch should be a term of type {Int | v = x * y} with the path condition x ≠ 0. Synquid enumerates the term plus y (prod (dec x) y) as the else branch, which has the desired type.
The program in Eqn. (3) is correct, but inefficient. Let us count each call to an auxiliary function as one step, and let T(x) denote the number of steps in which the program runs with input x. The implementation in Eqn. (3) runs in Θ(x) steps because T(x) satisfies the recurrence T(x) = T(x − 1) + 2, implying T(x) ∈ Θ(x). Because Synquid does not provide a way to specify resource bounds, such as O(log x), one cannot ask Synquid to find a more efficient implementation.
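The linear cost model just described can be checked by direct transliteration. The following Python sketch (a hypothetical rendering, not tool output) mirrors Eqn. (3) and counts one step per auxiliary call (dec and plus):

```python
def prod_linear(x, y, count=None):
    """Transliteration of the linear prod in Eqn. (3).

    Each auxiliary call (dec, plus) counts as one step, matching the
    text's cost model: T(x) = T(x - 1) + 2, so T(x) is Theta(x).
    Returns the product together with the total step count.
    """
    steps = [0] if count is None else count
    if x == 0:
        return x, steps[0]
    steps[0] += 1                       # one step for: dec x
    r, _ = prod_linear(x - 1, y, steps)
    steps[0] += 1                       # one step for: plus y ...
    return y + r, steps[0]
```

Running `prod_linear(5, 7)` yields the product 35 in exactly 2·5 = 10 steps, consistent with the recurrence.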

Adding Resource Bounds
In our tool, SynPlexity, one can specify a synthesis problem with an asymptotic resource bound, and can ask SynPlexity to find an O(log x) implementation of prod.To express this intent, the user needs to specify (1) the asymptotic resource-usage bound the synthesized program should satisfy, (2) the cost of each provided auxiliary function, and (3) the size of the input to the program.
Asymptotic Resource Bound. We extend refinement types with resource annotations. The annotated refinement types are of the form ⟨τ; α⟩, where τ is a regular refinement type, and α is a resource annotation. The following example asks the synthesizer to find a solution with the resource-usage bound O(log u):

prod :: x:{Int | v ≥ 0} → y:{Int | v ≥ 0} → {Int | v = x * y}; O(log u)

Cost of Auxiliary Functions. The auxiliary functions supplied by the user serve as the API in terms of which the synthesized program is programmed. Thus, the resource usage of the synthesized program is the sum of the costs of all auxiliary calls made during execution. We allow users to assign a polynomial cost O(u^a), for some constant a, or a constant cost O(1), to each auxiliary function. Here, u is a free variable that represents the size of the problem on which the auxiliary function is called.
In the prod example, all auxiliary functions are assigned constant cost; e.g., even is given the signature even :: x:Int → {Bool | x mod 2 = 0}, O(1).
Size of Problems. The user needs to specify a size function, size : τ → Int, that maps inputs to their sizes; e.g., when synthesizing a sorting function for an input of type list, the size function can be λl.|l|, the length of the input list. In the prod example, the size function is size = λx.λy.x.
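As a small concrete sketch, the two size functions just mentioned could be rendered in Python as follows (hypothetical illustrations of the concept, not SynPlexity syntax):

```python
# A size function maps a problem (the tuple of inputs) to a
# non-negative integer measuring how "big" the problem is.

size_prod = lambda x, y: x      # for prod: the value of the first argument
size_sort = lambda l: len(l)    # for sorting: the length of the input list
```

The synthesizer then interprets bounds like O(log u) with u instantiated to this size.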

Checking Recurrence Relations
We extend Synquid's refinement-type system with resource annotations, so that the extended type system enforces the resource usage of terms. The idea of the type system is to check whether the given function satisfies some recurrence relation. If so, it can infer that the function also satisfies the corresponding resource bound. For example, according to the Master Theorem [3], if a function f satisfies the recurrence relation T(u) ≤ T(⌊u/2⌋) + O(1), where u is the size of the input, then the resource usage of f is bounded by O(log u). Checking whether a function satisfies a given recurrence relation can be performed by checking whether the function contains appropriate recursive calls; e.g., if a function contains one recursive call to a sub-problem of half size, and consumes only a constant amount of resources in its body, then it satisfies T(u) ≤ T(⌊u/2⌋) + O(1). The following rule is an example of how we connect recurrence annotations and resource bounds.
The rule instantiates the Master Theorem example above. Note that the annotation ([1, ⌊u/2⌋], O(1)) states that the function body contains up to one recursive call to a problem of size ⌊u/2⌋, and that the resource usage in the body of t (aside from calls to f itself) is bounded by O(1). The rule states that if the function body t of type τ2 contains one recursive call to a sub-problem of size ⌊u/2⌋, then the function will be bounded by O(log u).
The implementation of prod shown in Eqn. (4) runs in O(log x) steps.

prod = λx.λy.if x==0 then x
       else if even x then double (prod (div2 x) y)
       else plus y (double (prod (div2 x) y))    (4)

To check that, SynPlexity's type system counts the number of recursive calls along any path of the function. There are three paths (two nested if-then-else terms) in the program, and at most one recursive call along each path. Also, one can check that the problem size of each recursive call is no more than ⌊x/2⌋. For example, the recursive call prod (div2 x) y calls to a problem of size div2 x, which is consistent with [1, ⌊u/2⌋], and u is x because size x y = x. In addition, the condition that the resource usage of the body is bounded by O(1) is satisfied because only auxiliary functions with constant cost are called.
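The logarithmic step count of Eqn. (4) can also be checked by transliteration. The following Python sketch (hypothetical; it assumes the base case x = 0 shown in Eqn. (4)) counts one step per auxiliary call (even, div2, double, plus):

```python
def prod_fast(x, y, count=None):
    """Transliteration of the O(log x) prod in Eqn. (4).

    Each path makes at most one recursive call, on a problem of size
    floor(x/2), so T(x) <= T(floor(x/2)) + O(1), i.e. O(log x) steps.
    """
    c = [0] if count is None else count
    if x == 0:
        return 0, c[0]
    c[0] += 1                          # even x
    if x % 2 == 0:
        c[0] += 2                      # div2 x, double
        r, _ = prod_fast(x // 2, y, c)
        return 2 * r, c[0]
    c[0] += 3                          # div2 x, double, plus
    r, _ = prod_fast(x // 2, y, c)
    return y + 2 * r, c[0]
```

For x = 1024 the recursion halves x eleven times, so at most 4·11 = 44 steps are taken, versus roughly 2·1024 for the linear version.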

The SynPlexity Type System
In this section, we present our type system. First, we give the surface language and the types, which extend the Synquid liquid-types framework with resource annotations.

Syntax and Types
Syntax. Consider the language shown in Fig. 1. In the language, we distinguish between two kinds of terms: elimination terms (E-terms) and introduction terms (I-terms). E-terms consist of variable terms, constant values c, and application terms. Condition guards and match scrutinees can only be E-terms. I-terms are branching terms and function terms. The key property of I-terms is that if the type of an I-term is known, the types of its sub-terms are also known (which is not the case for E-terms).

Types. Our language of types, presented in Fig. 2, extends that of Synquid [17] with recurrence annotations, which are used to track recurrence relations on functions. To simplify the presentation, we ignore some features of the type system of Synquid [17] that do not affect our algorithm. In particular, we do not discuss polymorphic types or the enumeration strategy that ensures that only terminating programs are synthesized. However, our implementation is built on top of Synquid, and supports both of those features. Logical expressions are built from variables, constants, arithmetic operators, and other user-defined logical functions. Logical expressions in our type system can be used as refinements ϕ, size expressions φ, or bound expressions ψ. Refinements ϕ are logical predicates used to refine ordinary types in refinement types {B | ϕ}. We usually use a reserved symbol v as the free variable in ϕ, and let v represent the inhabitants; i.e., inhabitants of the type {B | ϕ} are valuations of v that satisfy ϕ. For example, the type {Int | v mod 2 = 0} represents the even integers. Size expressions and bound expressions are used in recurrence annotations, and are explained later.
Ordinary types include primitive types and user-defined algebraic datatypes D. Datatype constructors C are functions of type τ1 → … → τn → D. For example, the datatype List(Int) has two constructors: Cons : Int → List(Int) → List(Int), and Nil : List(Int). Refinement types are ordinary types refined with some predicate ϕ, or arrow types. Note that, unlike Synquid's type system, SynPlexity's type system does not support higher-order functions; i.e., arguments of functions have to be non-arrow types. All occurrences of τi and τ in arrow types x1 : τ1 → … → xn : τn → y : τ have to be ordinary types or refined ordinary types. We will discuss this limitation in §7.
We use recFun to denote the name of the function for which we are performing type-checking, and args to denote the tuple of arguments to recFun. For example, in the function prod shown in Eqn. (3), recFun = prod and args = x y. An environment Γ is a sequence of variable bindings x : γ, path conditions ϕ, and assignments for the variables recFun and args.

Recurrence Annotations.
Annotated types are refinement types annotated with recurrence annotations. A recurrence annotation is a pair consisting of (1) a set of recursive-call costs of the form [ci, φi]f, and (2) a resource-usage bound of the form O(ψ). Intuitively, a recurrence annotation tracks the number ci of recursive calls to f of size φi in the first element [c1, φ1]f, …, [cn, φn]f of the pair, as well as the asymptotic resource usage of the body of the function (the second element O(ψ)). Using these quantities, we can compute a recurrence relation describing the resource usage of the function recFun. For example, the recurrence annotation ([1, ⌊u/2⌋], O(1)) corresponds to the recurrence relation T(u) ≤ T(⌊u/2⌋) + O(1).
A recursive-call cost [c, φ]f associated with a function f denotes that the body of f can contain up to c recursive calls to subproblems whose sizes are at most the one specified by the size expression φ. A size expression φ is a polynomial over a reserved variable symbol u that represents the size of the top-level problem. In our paper, a problem with respect to a function g :: x1 : τ1 → … → xn : τn → y : τ is a tuple of terms e1 … en to which g can be applied; i.e., ei has type τi for all i from 1 to n. For the problems of function g, the size of each problem is defined by a size function size_g, a user-defined logical function of type τ1 → … → τn → Int; i.e., it takes a problem of g as input and outputs a non-negative integer. In the body of g, we say that a recursive-call term g e1 … en satisfies a size expression φ if for all x1, …, xn, size_g ⟦e1⟧ … ⟦en⟧ ≤ [(size_g x1 … xn)/u]φ, where the xi's are the arguments of g and the ⟦ei⟧'s are the evaluations of the ei on input x1 … xn. (See §3.2 for the formal definition of ⟦•⟧.) Note that one annotation can contain multiple recursive-call costs, which allows the function to make recursive calls to sub-problems with different sizes. We often abbreviate ⟨τ, ([], O(1))⟩ as τ, and omit f in recursive-call costs when it is clear from context.
A resource bound O(ψ) of a non-arrow type specifies a bound on the resource usage strictly within the top-level function body. A resource bound in the signature of an auxiliary function f specifies the resource usage of f. Bound expressions ψ in O(ψ) are of the form u^a log^b u + c, where a, b, and c are all non-negative constants, and u represents the size of the top-level problem.
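The size-expression condition on a recursive call can be sketched directly: the size of the sub-problem must not exceed the size expression with the top-level size substituted for u. The helper below is hypothetical (the actual check is discharged symbolically, not by evaluation):

```python
def satisfies_size_expr(sub_size, top_size, phi):
    """Check one recursive call against a size expression phi.

    Mirrors: size_g(e1..en) <= [(size_g x1..xn)/u] phi,
    with phi given as a Python function of u.
    """
    return sub_size <= phi(top_size)

half = lambda u: u // 2   # the size expression floor(u/2)
```

For example, the call prod (div2 x) y inside prod, with top-level size x = 10, yields a sub-problem of size 5, which satisfies ⌊u/2⌋; a sub-problem of size 6 would not.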

Semantics and Cost Model
We introduce the concrete-cost semantics of our language here. The semantics serves two goals: (1) it defines the evaluation of terms (i.e., how to obtain values), which can be used to compute the sizes of problems in application expressions, and (2) it defines the resource usage of terms.
Besides the syntax shown in Fig. 1, implementations of auxiliary functions can contain calls to a tick function tick(c, t), which specifies that c units of a resource are used, and that the overall value is the value of t. Note that in our synthesis language, we are not actually synthesizing programs with tick functions. We assume that tick functions are called only in the implementations of auxiliary functions. In the concrete-cost semantics, a configuration ⟨t, C⟩ consists of a term t and a non-negative integer C denoting the resource usage so far. The evaluation judgment ⟨t, C⟩ ↪ ⟨t′, C + C∆⟩ states that a term t can be evaluated in one step to a term (or a value) t′, with resource usage C∆. We write ⟨t, C⟩ ↪* ⟨t′, C + C∆⟩ to indicate the reduction from t to t′ in zero or more steps. All of the evaluation judgments are standard, and are shown in §A. Here we show the judgment for the tick function, where resource usage happens.

Sem-Tick: ⟨tick(c, t), C⟩ ↪ ⟨t, C + c⟩
For a term t, ⟦t⟧ denotes the evaluation result of t, i.e., ⟨t, ·⟩ ↪* ⟨⟦t⟧, ·⟩.

Example 2. Let t_body denote the function body if x = 0 then 0 else tick(1, 2 + double(x − 1)), and let double be the function fix double.λx.t_body. The result of evaluating double on input 5 is 10, with resource usage 5.
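The tick-based cost semantics of this example can be sketched in Python, with a one-element list playing the role of the resource counter C in the configuration:

```python
def tick(c, v, cost):
    """Sketch of Sem-Tick: consume c units of the resource, return v."""
    cost[0] += c
    return v

def double(x, cost):
    # t_body from Example 2: if x = 0 then 0 else tick(1, 2 + double(x - 1))
    if x == 0:
        return 0
    return tick(1, 2 + double(x - 1, cost), cost)
```

Running double on 5 produces the value 10 and leaves the counter at 5, one tick per recursive unfolding, matching the example.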

Definition 1 (Complexity).
Given a function fix f.λy.t of type y : τ1 → τ2, with size function size_f : τ1 → N, suppose that for any possible input x, the configuration ⟨(fix f.λy.t) x, 0⟩ can be reduced to ⟨v, C_x⟩ for some value v. Then the complexity of f is the function T_f that maps each size u to the maximum resource usage C_x over all inputs x with size_f(x) = u. Note that Defn. 1 assumes that the top-level term (fix f.λy.t) x can be reduced to some value; thus, Defn. 1 applies only to terminating programs.

Definition 2 (Big-O notation). Given two integer functions f and g, we say that f ∈ O(g) if there exist a positive constant c and a threshold N such that f(n) ≤ c · g(n) for all n > N.
In the rest of the paper, we use T_f to denote the complexity function of the function f, and we say the complexity of f is bounded by a function g if T_f ∈ O(g). As an example, the complexity of the double function shown in Ex. 2 is T_double(u) := u, and hence T_double(u) ∈ O(u).

Auxiliary functions. We allow users to supply signatures for auxiliary functions, instead of implementations. It is an obligation on users that such signatures be sensible; in particular, when the user gives the signature τ1 → {B | ϕ(v, y)}, O(ψ(u)) for an auxiliary function f, the user asserts that there exists some implementation fix f.λy.t of f such that: (1) for any input x, the output of f on x satisfies ϕ, i.e., ϕ(⟦(fix f.λy.t) x⟧, x) is valid; and (2) for any input x, the complexity of f is bounded by ψ(u), i.e., T_f(u) ∈ O(ψ(u)). Signatures always over-approximate their implementations, as illustrated by the following example.
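The witness-based reading of Defn. 2 can be sanity-checked numerically. The helper below (a hypothetical finite check over sampled sizes, not a proof) tests a candidate witness (c, N) for f ∈ O(g):

```python
def is_big_o(f, g, c, N, samples=range(1, 200)):
    """Check the witness (c, N) for f in O(g): f(n) <= c*g(n) for all
    sampled n > N. A finite sanity check, not a proof of membership."""
    return all(f(n) <= c * g(n) for n in samples if n > N)
```

For instance, T_double(u) = u is in O(u) with witness c = 1, N = 0, while u² is not bounded by c·u for any fixed c once u grows past c.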
Example 3. The signature doubleRelaxed :: x:Int → {Int | v ≤ 3 * x}, O(u²) describes an auxiliary function that computes a value no more than three times the input, and has quadratic resource usage. Note that the function double shown in Ex. 2 can be an implementation of this signature, because double(x) = 2 * x ≤ 3 * x, and the complexity function T_double(u) = u is in O(u²).

Typing Rules
The typing rules of SynPlexity are inspired by bidirectional type checking [16] and type checking with cost sharing [15].Recall that we use recFun to denote the name of the function for which we are performing type-checking, and args to denote the tuple of arguments to recFun.
An environment Γ is a sequence of variable bindings of the form x : γ, path conditions ϕ, and assignments of the form x = ϕ for recFun and the components of args. SynPlexity's typing rules use three judgments: (1) Γ ⊢ t :: γ states that t has type γ; (2) Γ ⊢ γ1 <: γ2 states that γ2 is a subtype of γ1; and (3) Γ ⊢ γ ⇝ γ1 | γ2 states that γ1 and γ2 share the costs in γ.

Subtyping.
Subtyping judgments are shown in Fig. 6 in App. A. The rules <:-Fun, <:-Sc, and <:-Refl are standard subtyping rules for refinement types. The remaining rules allow us to compare the resource consumption of recurrence annotations. For example, if one branch of some branching term has type ⟨τ, ([1, ⌊u/3⌋], O(ψ))⟩, it can be over-approximated by a supertype ⟨τ, ([1, ⌊u/2⌋], O(ψ))⟩. The idea is that the resource usage of an application calling a problem of size ⌊u/2⌋ will be larger than that of an application calling a smaller problem of size ⌊u/3⌋ (assuming all resource usages are monotonic). Subtyping rules also allow the type system to compare branches with different numbers of recursive calls. For example, base cases of recursive procedures have no recursive calls, and thus have types of the form ⟨τ, ([], O(ψ))⟩. With subtyping, these types can be over-approximated by types of the form ⟨τ, ([c, φ], O(ψ))⟩.

Cost sharing. When a term has more than one sub-term in the same path, e.g., the condition guard and the then branch are in the same path in an ite term, the recursive-call costs of the term are shared among its sub-terms. The sharing operator α ⇝ α1 | α2 partitions the recursive-call costs of α into α1 and α2; i.e., the sum of the costs in α1 and α2 equals the costs in α. Sharing rules are shown in Fig. 8. The idea is that a single cost c can be shared into two costs c1 and c2 such that their sum is no more than c. Annotations can also be shared into more than two parts.

Example 4. There are multiple ways to share a recurrence annotation with two recursive-call costs: one sharing puts both recursive-call costs in one annotation and none in the other; another gives each annotation one recursive-call cost.
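The combinatorics of sharing a single recursive-call cost can be sketched directly. The following hypothetical helper enumerates every way to split one cost c into two budgets whose sum does not exceed c, mirroring the sharing rules:

```python
def share_cost(c):
    """All ways to share one recursive-call cost c into (c1, c2)
    with c1 + c2 <= c, as in the sharing rules (sketch)."""
    return [(c1, c2) for c1 in range(c + 1) for c2 in range(c + 1 - c1)]
```

For c = 1 this yields the three sharings (0, 0), (0, 1), and (1, 0); the synthesis algorithm later tries such sharings one by one when decomposing branching terms.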

T-Abs
Table 1.Annotations that can be used to instantiate the rule T-Abs.
For example, if the annotation of the function body is ([1, ⌊u/2⌋], O(1)), then the resource bound in the function type will be O(log u); i.e., the resource usage of the function is bounded by O(log u). At the same time, the rule stores the name f of the recursive function into recFun, and its arguments as a tuple into args.
Example 5. We use the function fix bar.λx.if x = 1 then 1 else 1 + bar(div2 x) to illustrate the first pattern in Tab. 1. The body of bar has the annotated type ⟨Int, ([1, ⌊u/2⌋], O(1))⟩ because (i) there exists only one recursive call, to a sub-problem whose size is half of the top-level problem size u, and (ii) the resource usage inside the body is constant (under the assumption that all auxiliary functions have constant resource usage). This type appears in row 1, column 4 of Tab. 1. Consequently, the recurrence relation of bar is T(u) ≤ T(⌊u/2⌋) + O(1), where T(u) is the resource usage of bar on problems of size u. Finally, according to the Master Theorem, the resource usage of bar is bounded by O(log u) (row 1, column 2).
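Example 5's function can be transliterated to confirm its logarithmic behavior. The sketch below also returns the number of recursive calls, which is the number of halvings of the problem size:

```python
def bar(x, depth=0):
    """Example 5's function: if x = 1 then 1 else 1 + bar(div2 x).

    Returns (value, number of recursive calls). One call per halving,
    so the call count is floor(log2 x), consistent with
    T(u) <= T(floor(u/2)) + O(1), i.e. O(log u)."""
    if x == 1:
        return 1, depth
    v, d = bar(x // 2, depth + 1)
    return 1 + v, d
```

For x = 8 the recursion makes exactly 3 calls (8 → 4 → 2 → 1) and returns 4.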

Branching terms.
In rule T-If, the condition has type Bool with refinement ϕe. The two branches have different types: the then branch follows the path condition ϕe and the refinement ϕ of the branch term, while the else branch follows the path condition ¬ϕe. By having both branches share the same recurrence annotation, T-If can introduce some imprecision. In particular, if the branches belong to different complexity classes, the annotation of the conditional term will be the upper bound of both branches.
The rule T-Match (§B) is slightly different: (1) there can be more than two branches, (2) all branches have the same type ⟨τ, α2⟩, and (3) the variables in each case Ci(x1 … xn) are introduced in the corresponding branch.

E-terms. The typing rules for E-terms are shown in Fig. 3. The two rules for application terms are the key rules of our type system. Let us first look at the E-RecApp rule for recursive-call terms. Recall that the recursive-call annotation tracks the number of recursive calls and the sizes of sub-problems. If the term f e1 … en is a recursive call, i.e., Γ(recFun) = f, then the number of recursive calls in one of the recursive-call costs increases by one. In the prod example, the first argument of the recursive call has type {Int | v = ⌊x/2⌋}, the second argument y has type {Int | v = y}, the size function is size_prod = λz.λw.z, and the arguments in the context are Γ(args) = x y; therefore, the size of the sub-problem is consistent with the recursive-call cost [1, ⌊u/2⌋]. The rule E-App states that callees have types τi, and that the resource usage does not exceed the bound O(ψ) in the annotation. Similar to the E-RecApp rule, the size of the problem on which g is called is obtained by the substitution [size_g y1 … ym / u]; the premise in the rule states that, for any instance of Γ, the size of the problem in the application term is in the big-O class O([size Γ(args)/u]ψ). Note that membership in big-O classes can be encoded as an ∃∀ query. The query is non-linear, and hence undecidable in general. However, we observed in our experiments that for many benchmarks the query stays linear. Furthermore, even when the query is non-linear, existing SMT solvers are capable of handling many such checks in practice.

Soundness
We assume that the resource-usage functions ψ and the complexity functions T of each function are all non-negative and monotonic integer functions (both the input and the output are integers). We show soundness of the type system with respect to the resource model. The soundness theorem states that if we derive a bound O(ψ) for a function f, then the complexity of f is bounded by ψ.
Our type system is incomplete with respect to resource usage. That is, there are functions in our programming language that are actually in a complexity class O(p(x)) but cannot be typed in our type system. The main reason our type system is incomplete is that it ignores condition guards when building recurrence relations, and over-approximates if-then-else terms by choosing the largest complexity among all paths, including even unreachable ones.

The SynPlexity Synthesis Algorithm
In this section, we present the SynPlexity synthesis algorithm, which uses annotated types to guide the search of terms of given types.

Overview of the Synthesis Algorithm
The algorithm takes as input a goal type ⟨f : τ, O(ψ)⟩, an environment Γ that includes the type information of auxiliary functions, and the size functions for f and all auxiliary functions. The goal is to find a function term of type ⟨τ, O(ψ)⟩.
The algorithm uses the rules of the SynPlexity type system to decompose goal types into sub-goals, and then applies itself recursively on the sub-goals to synthesize sub-terms. Concretely, given a goal γ, the algorithm tries all of the rules shown in Figs. 7 and 3 whose conclusion type matches γ to construct sub-goals: for each sub-term t in the conclusion, there must be a judgment Γ ⊢ t :: γ′ in the premise; thus, we construct the sub-goal γ′, the desired type of t. For each I-term rule (Fig. 7), the type of each sub-term is always known, and thus a fixed set of sub-goals is generated. For each E-term rule (Fig. 3), the algorithm enumerates E-terms up to a certain depth (the depth can be given as a parameter, or it can automatically increase throughout the search). If the algorithm fails to solve some sub-goal using some E-term rule, it backtracks to an earlier choice point and tries another rule.
Because the top-level goal is always a function type, the algorithm always starts by applying the rule T-Abs, which matches the resource bound O(ψ) against Tab. 1 to infer a possible recurrence annotation for the type of the function body. T-Abs also constructs a sub-goal type for the function body. In the rest of this section, we assume that goals are not function types.

Variables. Given a goal γ, the algorithm first tries to apply rule E-Var, which simply checks whether any variable in the environment is of type γ, and hence is a solution. If no variable in the environment can be a solution, the algorithm starts enumerating E-terms up to a certain depth.

Synthesizing Application Terms. To enumerate application terms, the algorithm first enumerates a term t that satisfies the base type B in the goal. Then, the algorithm checks whether the total number of recursive calls in the term t exceeds the bound Σi ci. If so, the term t is rejected. Otherwise, the sizes of the sub-problems of recursive calls are checked. Formally, to check whether a recursive application term f(t1, …, tm) is consistent with some [ck, φk], the algorithm queries the validity of a predicate stating that the size of the sub-problem is at most φk with the top-level size substituted for u, where fresh variables yi stand for the arguments and the ϕi's are the refinements of the ti terms. If the sizes of the sub-problems are not consistent with the recursive-call costs [c1, φ1] … [cn, φn], the term t is rejected. Note that one recursive call can possibly satisfy more than one [ck, φk]; the algorithm enumerates all possible matches.
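The application-term check just described can be sketched as a matching problem between the candidate's recursive calls and the recursive-call costs. The helper below is hypothetical and greedy; the actual tool enumerates all matches and discharges the size conditions with an SMT solver:

```python
def check_recursive_calls(call_sizes, costs, top_size):
    """Match each recursive call's sub-problem size against the
    recursive-call costs [(c_k, phi_k)] without exceeding any budget c_k.

    call_sizes: sizes of the candidate's recursive calls.
    costs: list of (budget, size-expression) pairs; each size expression
           is a function of the top-level size u.
    """
    budgets = [[c, phi] for c, phi in costs]
    for s in call_sizes:
        for b in budgets:
            if b[0] > 0 and s <= b[1](top_size):
                b[0] -= 1            # consume one unit of this cost's budget
                break
        else:
            return False             # no cost can absorb this call: reject
    return True
```

With the single cost [1, ⌊u/2⌋] and top-level size 10, one call of size 5 is accepted, while two such calls, or one call of size 6, are rejected.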
Checking the validity of auxiliary application terms is similar: the algorithm checks a predicate stating that the resource usage of an auxiliary function call does not exceed the bound O(ψ).
Recall that the above query is undecidable in general, and is checked using an SMT solver in SynPlexity. An enumerated term is accepted if its refinement implies the goal refinement ϕ.

Rules for Branching Terms. When the algorithm chooses to apply the rule T-If to synthesize a term of the form if e then t1 else t2 for a given goal ⟨{B | ϕ}, α⟩, there are three steps to construct sub-goals for the sub-terms e, t1, and t2: (1) sharing the recursive-call costs in α, (2) enumerating the condition guard e, and (3) propagating sub-goals to the two branches t1 and t2. Note that there can be multiple ways to share α into α1 and α2, and the algorithm tries them one by one (because the numbers of recursive calls are natural numbers, SynPlexity simply enumerates all possible ways to split them). Once a sharing α1, α2 is chosen, the algorithm constructs a goal ⟨Bool, α1⟩ for the condition guard e. For a candidate e of type ⟨{Bool | ϕe}, α1⟩, the algorithm constructs two sub-goals: ⟨{B | ϕ}, α2⟩ with the path condition ϕe, and ⟨{B | ϕ}, α2⟩ with the path condition ¬ϕe, for the two branches t1 and t2, respectively. Applying the rule T-Match is similar to T-If.
Example 7. We illustrate in Fig. 4 how the algorithm synthesizes the O(log x) implementation of prod presented in Eqn. (4). We omit the type contexts in the example, and use "??" to denote intermediate terms being synthesized (i.e., holes in the program). At the beginning, the type of ??_1 (i.e., the term we are synthesizing) is an arrow type with resource bound O(log u) specified by the input goal. In this example, SynPlexity applies to the arrow type the rule T-Abs, parameterized according to the first rule in Tab. 1. This step produces the sub-problem of synthesizing the function body ??_2, whose annotation is ([1, ⌊u/2⌋]; O(log u)), which means that ??_2 should contain at most one recursive call, to a sub-problem of size ⌊u/2⌋. Next, SynPlexity chooses to fill ??_2 with an if-then-else term (by applying the T-If rule), with three sub-problems: the condition guard ??_3, the then branch ??_4, and the else branch ??_5. Note that here the recursive-call cost [1, ⌊u/2⌋] is shared as follows: 0 recursive calls in the condition guard, and 1 in the then branch and the else branch. The left arrow E-App shows how SynPlexity enumerates terms and checks them against the goal types of sub-problems. For example, to fill ??_4, SynPlexity enumerates terms whose goal type carries the annotation [1, ⌊u/2⌋], i.e., terms that are restricted to contain at most one recursive call to prod. In Fig. 4, SynPlexity has picked the term x to fill ??_4. The refinement type of the variable term x is {Int | v = x ∧ x = 0}, where x = 0 is the path condition. To check that x also satisfies the type of ??_4, the algorithm applies rule E-SubType, and checks that, for any v and x, v = x ∧ x = 0 implies v = x * y ∧ x = 0, and that [0, ⌊u/2⌋] is approximated by [1, ⌊u/2⌋]. After applying another T-If rule for ??_5, SynPlexity produces three new sub-problems ??_6, ??_7, and ??_8. When enumerating terms to fill ??_7, SynPlexity finds an application term double (prod (div2 x) y) that satisfies the goal. To check that the size of the problem in the recursive call prod (div2 x) y satisfies the recursive-call cost [1, ⌊u/2⌋], the type system checks the refinement of the callee: the refinement of the first argument div2 x bounds the size of the sub-problem by ⌊u/2⌋ (recall that the size function for prod is size := λz.λw.z). The algorithm is sound because it only enumerates well-typed terms.
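Eqn. (4) itself is not reproduced in this excerpt; the following is a plausible reconstruction of the synthesized O(log x) prod, assuming the auxiliary functions div2 and double named in the text and the three paths mentioned in Example 9. It is a sketch for illustration, not the verbatim output of SynPlexity:

```python
def div2(x):
    # auxiliary function: halve the input, rounding down (cost O(1))
    return x // 2

def double(x):
    # auxiliary function: double the input (cost O(1))
    return 2 * x

def prod(x, y):
    # computes x * y with one recursive call per step, each on a
    # sub-problem of size floor(x/2), matching T(u) <= T(u//2) + O(1)
    if x == 0:
        return x
    if x % 2 == 0:
        return double(prod(div2(x), y))
    return y + double(prod(div2(x), y))
```

Each call halves x, so the recursion depth is ⌈log₂ x⌉ + 1, matching the O(log x) bound in the goal type.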

Theorem 2 (Soundness of the synthesis algorithm). Given a goal type τ, O(ψ) and an environment Γ, if a term fix f.λx_1..λx_n.t is synthesized by SynPlexity, then the complexity of f is bounded by ψ.

Extensions to the SynPlexity Type System
In this section, we introduce two extensions to the SynPlexity type system.
Recurrence Relations with Correlated Sizes. The type system shown in §3 only tracks sub-problems with independent sizes. For example, consider the recurrence relation T(u) = T(l) + T(r) + O(1), where the variables l and r are correlated by the constraint l + r < u. This relation is needed to reason about programs that manipulate binary trees or binary heaps, where l and r represent the sizes of the two children. To support such recurrence relations, we extend SynPlexity's type system with recursive-call costs of the form [1, l], where l is a free variable. When correlated recurrence relations are present, the synthesis algorithm will: (1) match the first enumerated recursive-call term to [1, l], and instantiate the size l with s, where s is the size of the recursive-call term (s should be smaller than the size u of the top-level function); and (2) use the size s of the recursive-call term computed in step 1 to constrain the algorithm to enumerate only recursive-call terms of sizes at most u − 1 − s.
Synthesis of Auxiliary Functions. Most existing type-directed approaches require the input to the problem to contain all needed auxiliary functions. With SynPlexity, some of the auxiliary functions needed to solve synthesis problems with resource annotations can be synthesized automatically. For example, consider the problem prod described in §2. In this problem, we observe that one of the provided auxiliary functions, div2, strongly resembles one of the elements of the recurrence relation, T(u) ≤ T(⌊u/2⌋) + O(1), needed to synthesize a program with the desired resource usage. In particular, we know that one needs an auxiliary function that can take an input of size u and produce an output of size ⌊u/2⌋. In this example, the required auxiliary function div2 merely needs to divide the input by 2 (and round down), but in certain cases the auxiliary function might need a more precise refinement type than one that merely changes the size of the input. For example, the auxiliary function split used by merge sort needs to split the input list xs into two lists v1 and v2 that are half the length of the input and such that elems(v1) ⊎ elems(v2) = elems(xs). However, all we know from the refinement is that the output lists must be half the length of the original list.
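One function satisfying the split refinement described above (halved lengths, element multiset preserved) alternates elements between the two output lists. This is our own illustrative sketch of one inhabitant of that refinement type, not SynPlexity's synthesized result:

```python
def split(xs):
    """Split xs into two lists of (nearly) equal length whose
    elements together are exactly the elements of xs."""
    v1, v2 = [], []
    for i, x in enumerate(xs):
        # send even-indexed elements to v1, odd-indexed to v2
        (v1 if i % 2 == 0 else v2).append(x)
    return v1, v2
```

Note that the weaker refinement mentioned in the text (only the length constraint) is also satisfied by functions that drop or duplicate elements, which is exactly why SynAuxRef must search over candidate auxiliary refinements.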
Although we do not know what this auxiliary function should do exactly, we can use the size constraint appearing in the recurrence relation to define part of the refinement type we want the auxiliary function to satisfy. SynPlexity builds on this idea and incorporates an (optionally enabled) algorithm, SynAuxRef, that, while trying to synthesize a solution to the top-level synthesis problem, also tries in parallel to synthesize auxiliary functions that can create sub-problems with the size constraints needed in the recurrence relation. To address the problem mentioned above, i.e., that we do not know the exact refinement type the auxiliary function should satisfy, SynAuxRef enumerates auxiliary refinements, which are possible specifications that the auxiliary function aux we are trying to synthesize might satisfy.

Evaluation
In this section, we evaluate the effectiveness and performance of SynPlexity, and compare it to existing tools. We implemented SynPlexity in Haskell on top of Synquid by extending its type system with the recurrence annotations presented in §3. Detailed results are reported in §D.

Comparison to Prior Tools
We compared SynPlexity against two related tools: Synquid [17] and ReSyn [15], which are also based on refinement types.
Benchmarks. We considered a total of 77 synthesis problems: 56 synthesis problems from ReSyn (each of which specifies a concrete linear-time resource annotation), 16 synthesis problems from Synquid (which do not include resource annotations) that are not included in ReSyn, and 5 new synthesis problems involving non-linear resource annotations. In these synthesis problems, the synthesis specifications and auxiliary functions are all given as refinement types. For 3 of the new benchmarks, the auxiliary function required to split the input into smaller inputs is not given, i.e., the synthesizer needs to identify it automatically.
The three solvers (SynPlexity, Synquid, and ReSyn) have different features, and hence not all synthesis problems can be encoded as benchmarks for a single solver. In the rest of this section, we describe which benchmarks we considered for each tool, and how we modified the benchmarks when needed.
Synquid: Synquid does not support resource bounds, so we encoded all 77 synthesis problems as Synquid benchmarks by dropping the resource annotations. Synquid returns the first program that meets the synthesis specification, and cannot provide any guarantees about the resource usage of the returned program. Synquid solved 75 benchmarks, taking 3.3s on average. For 10 benchmarks Synquid synthesized a non-optimal program, i.e., there exists another program with better concrete resource usage. For example, on the ReSyn-triple-2 benchmark (where the input is a list xs), Synquid found a solution with resource usage O(|xs|²), while both SynPlexity and ReSyn can synthesize a more efficient implementation with resource usage O(|xs|). The two benchmarks that Synquid failed to solve include the new benchmark SynPlexity-mergesort'. In this benchmark, the auxiliary function required to break the input into smaller inputs is not given, without which solutions become much larger; Synquid therefore times out.
ReSyn: We ran ReSyn on the 56 ReSyn benchmarks with the corresponding concrete resource bounds. We could not encode 16 problems because ReSyn does not support non-linear resource bounds, e.g., the bound log |y| in the AVL-insert Synquid benchmark. ReSyn solved all 56 benchmarks with an average running time of 18.3s.
SynPlexity: We manually added resource usages and resource bounds to existing problems to encode them for SynPlexity. For the Synquid benchmarks without concrete resource bounds, we chose well-known time complexities as the bounds; e.g., we added the resource bound O(u log u) to the Sort-merge-sort problem. For the ReSyn benchmarks, we translated the concrete resource usages and resource bounds to the corresponding asymptotic ones; e.g., for the ReSyn-common' benchmark with the concrete resource bound |ys| + |zs|, we constructed a SynPlexity variant with the asymptotic bound O(u) and the size function λys.λzs.|ys| + |zs|. We could not encode 9 synthesis problems as SynPlexity benchmarks because they involved either higher-order functions, which are not supported by SynPlexity, or the resource bound O(2^u) (the Tree-create-balanced problem from Synquid).
SynPlexity solved 68 benchmarks with an average running time of 8.1s. Unlike Synquid, SynPlexity guarantees that the synthesized program satisfies the given resource bounds. For 10 benchmarks, SynPlexity found programs with better resource usage than those synthesized by Synquid. Furthermore, SynPlexity can encode and solve 9 problems that ReSyn cannot, because their resource bounds involve logarithms. However, SynPlexity cannot encode the 8 benchmarks that involve higher-order functions. SynPlexity solved 3 problems that required synthesizing both the main function (e.g., SynPlexity-merge-sort) and its auxiliary function (e.g., the function splitting a given list into two balanced partitions). No other tool could solve the SynPlexity-merge-sort' benchmark.
Finding. SynPlexity can express and solve 68/77 benchmarks. SynPlexity has performance comparable to existing tools, and can synthesize programs with resource bounds that are not supported by prior tools.

Pruning the Search Space with Annotated Types
SynPlexity uses recurrence annotations to guide the search and avoid enumerating terms that are guaranteed not to match the specified complexity. We compared the number of E-terms enumerated by SynPlexity and Synquid on the 56 benchmarks for which both tools produced the same solutions. Synquid always enumerated at least as many E-terms as SynPlexity, and SynPlexity enumerated strictly fewer E-terms on 26/56 benchmarks. For these 26 benchmarks, SynPlexity pruned the search space by 6.2% on average. For example, in one case (BST-delete) SynPlexity enumerated 2,059 E-terms, while Synquid enumerated 2,202.
Finding. On average, SynPlexity reduces the size of the search space by 6.2% for approximately half of the benchmarks.

Related Work
Resource-Bound Analysis. Rather than determining whether a given program satisfies a specification, a synthesizer determines whether there exists a program that inhabits a given specification. The branch of verification that we draw upon for resource-based synthesis is resource-bound analysis [19].
Within the literature on automated resource-bound analysis, there are methods that extract and solve recurrence relations for imperative code [2,7,4,14]. However, these methods are unlike the type system presented in this work because they extract concrete complexity bounds as recurrence relations, and then solve the recurrences to find a concrete upper bound on resource usage. The dominant terms of the resulting concrete bounds can then be used to state a big-O complexity bound. In contrast, we want to synthesize programs with respect to a big-O complexity directly, which is more similar to the manual reasoning of [6,8]. Thus, if we were to use these techniques for our problem, the first step of our synthesis algorithm would be to pick a concrete complexity function given a big-O complexity, and then reverse the verification problem with respect to that concrete complexity. However, for any big-O complexity, there are infinitely many functions that satisfy that complexity, which presents a significant challenge at the outset. Our design choice also has some drawbacks. As noted in [8], reasoning compositionally with big-O complexity is challenging due to the hidden quantifier structure of big-O notation. Thus, to maintain soundness our type system has to sacrifice precision and generality in some places. For example, when a function has multiple paths, our type system over-approximates by choosing the largest complexity among all the paths. Another set of methods to generate resource bounds are type-based [9,10,18,13]. As we discussed throughout the paper, the complexities generated by these methods are concrete functions and not expressed with big-O notation, although [18] is sometimes able to pattern-match a case of the Master Theorem. These type systems differ from ours in a few ways. The AARA line of research [9,10,13] is able to assign amortized complexity to programs, but is not able to generate logarithmic bounds. [18] is also able to perform amortized analysis;
however, the technique is not fully automated, and instead requires the user to provide type annotations on terms, which are then checked by the type system.
Type- and Resource-Aware Synthesis. The SynPlexity implementation is built on top of Synquid [17], a type-directed synthesis tool based on refinement types and polymorphism. The work that most closely resembles ours is ReSyn [15]. As in our work, it combines the type-directed synthesizer Synquid with a type system that is able to assign complexity bounds to functional programs. The type system used in ReSyn is based on one originally used in the context of verification [10]. That work uses a sophisticated type system to assign amortized resource-usage bounds to a given program. The type system of ReSyn differs from the one presented in §3 in a few significant ways.
As highlighted earlier, ReSyn's technique for automatically inferring bounds on recursive functions is based on amortized analysis, and is restricted to linear bounds, whereas our system is able to synthesize complexities of the form O(n^a log^b n + c).
Another difference is that ReSyn synthesizes programs with a concrete complexity bound. This approach has advantages and disadvantages. On one hand, it places an extra burden on the user to provide the correct bound with precise coefficients. On the other hand, the user might want an implementation whose complexity has a small coefficient, whereas our system provides no guarantee that the complexity of an implementation will have a small coefficient in the dominant term: SynPlexity only guarantees asymptotic behavior.
ReSyn can synthesize programs with higher-order functions, which are not supported by SynPlexity. To handle higher-order functions, ReSyn attaches resource units to types, which gives it resource polymorphism. Moreover, the costs of inputs with function types can be written generically as polymorphic types (i.e., costs can be polymorphic with respect to the sizes of the specific input types). SynPlexity does not have asymptotic resource polymorphism because it cannot directly compose unknown big-O functions (i.e., the complexities of higher-order inputs). We envision that with carefully crafted restrictions on the resource annotations of higher-order functions, SynPlexity could handle synthesis problems involving such functions, e.g., by assuming that the complexity of input functions is known and that the refinements of input functions are precise enough. Because big-O functions cannot be directly composed, developing a more general extension of SynPlexity that supports higher-order functions is a challenging direction for future work.

A Semantics
In this section, we present two kinds of semantics: (1) a concrete small-step semantics, which defines the concrete complexity of functions, and (2) a loose semantics, which over-approximates the concrete semantics and is used in the proof of the soundness theorem.
Concrete semantics. The evaluation rules of the concrete-cost semantics are shown in Fig. 5. In the concrete-cost semantics, a configuration ⟨t, C⟩ consists of a term t and a nonnegative integer C denoting the resource usage so far. The evaluation judgment ⟨t, C⟩ ֒→ ⟨t′, C + C_∆⟩ states that a term t can be evaluated in one step to a term (or a value) t′, with resource usage C_∆. We write ⟨t, C⟩ ֒→* ⟨t′, C + C_∆⟩ to indicate the reduction from t to t′ in zero or more steps.
Polynomially Bounded Refinement Types. Before introducing the loose semantics, we introduce a class of refinement types that can be over-approximated by polynomials.
The resource usage of a function call depends on the sizes of the problems it makes calls to. However, the predicates used to refine functions can be imprecise, and thus we may have to reason about the sizes of problems with imprecise refinements. For example, consider a function square whose refinement only states that its output is non-negative. In the nested application term square (square x), we know that the size of the problem of the inner application is x, but we do not know the size of the problem square x of the outer application (all we know about square x is that it is non-negative). To reason about input sizes, we introduce a class of refinement types with which we can infer upper bounds on the sizes of problems.
Let us first assume that all terms and variables are integers or integer functions (we discuss the case where terms and variables hold values other than integers later). For an integer refinement type τ = {Int | ϕ} refined by a predicate ϕ(v, x) over v and a tuple of variables x, we say that ϕ is bounded by a polynomial term p(x), written ϕ ⊏ p, if ∃c. ∀x > c. ∀v. ϕ(v, x) ⟹ |v| < p(x). That is, there exists a tuple of positive constants c such that, for any x (point-wise) greater than c and for any v, if v and x satisfy ϕ, the absolute value |v| is always less than p(x).
For each datatype D other than Int, we assume that there is an intrinsic measure with output type Int, denoted by |·|_D (we omit the subscript when it is clear from the context). Intrinsic measures are specified by users for user-defined datatypes. For example, the intrinsic measure of lists can be defined as a function that computes the length of a list, i.e., |l| = len l for any list l.
A loose cost model. The semantics given previously yields a standard notion of complexity. However, we face two challenges in connecting these semantics to our synthesis algorithm. First, we allow users to supply auxiliary functions as signatures involving big-O notation, as opposed to implementations. Second, our synthesis algorithm ensures complexity through the tracking of recursive calls, which are not present in the concrete semantics given above. To address these challenges, we introduce an intermediate semantics that uses recurrence relations and big-O notation. We then show in Thm. 3 that this intermediate semantics approximates complexity in the sense of Defns. 1 and 2. The signatures of auxiliary functions g are of the form τ_1 → {B | ϕ(v, y)}, O(ψ(u)) . Although we do not have an implementation of g, we assume that there exists some implementation fix g.λy.t of g such that: for any input x, the output of g on x satisfies the signature, i.e., ⟨(fix g.λy.t) x, 0⟩ ֒→* ⟨v_x, C_x⟩ implies ϕ(v_x, x); and for any input x, the complexity of g is bounded by ψ(u), i.e., T_g(n) ∈ O(ψ(n)).
For the top-level function f that we are evaluating, we assume that its signature τ_1 → {B | ϕ(v, y)}, O(ψ(u)) is also given, and the semantics of f is over-approximated by its refinement, i.e., for any input x, ⟨(fix f.λy.t) x, 0⟩ ֒→* ⟨v_x, C_x⟩ implies ϕ(v_x, x). We now introduce our intermediate loose semantics. Formally, reductions are defined between configurations. Each configuration ⟨t, R⟩# is a pair of an extended term t and a recurrence parameter R. Extended terms t ::= t | ϕ are either terms t or formula expressions ϕ. Recurrence parameters R ::= φ | ⊥ | R ∥ R are either size expressions φ, or parameters combined by a parallel operator ∥; i.e., R is a collection of size expressions. The parallel operator distributes over plus, i.e., (R_1 ∥ R_2) + R_3 = (R_1 + R_3) ∥ (R_2 + R_3). Intuitively, a parameter R without parallelism denotes the recurrence relation of the function along one path. When the function contains more than one path, the overall parameter is the parallel combination of the per-path sub-parameters.
We use ⟨t, R⟩# → ⟨ϕ, R′⟩# to denote a step of a reduction. The goal is to reduce a term t in a function f to a predicate ϕ such that ϕ describes the behavior of t, i.e., ϕ is a refinement of t. At the same time, recurrence relations are built by incrementally appending expressions representing resource usage to the recurrence parameter R.
Because we are building recurrence relations for a function f, the reduction always starts from a fix-term and the empty parameter ⊥. The result configuration is the refinement ϕ of the function body t together with the recurrence parameter R. We use the function T : Int → Int to denote the resource usage of the function f; hence, the recurrence parameter we build for f is the recurrence relation of the resource usage T.
In our loose semantics, each auxiliary function g has a resource annotation O(ψ_g) denoting the resource usage of g, a logical signature ϕ_g denoting the behavior of g, and a size function size_g. Resource usage is incurred when an auxiliary function is called.
That is, if each callee e_i can be reduced to a predicate ϕ_i bounded by v_i with the recurrence-parameter change R_i, the non-recursive application term g e_1 .. e_n is reduced to the predicate ϕ with resource usage [size_g v_1 .. v_n / u]ψ_g (the resource usage of g) plus Σ_{i=1}^n R_i (the resource usage used to evaluate the e_i's). We over-approximate the size of the problem e_1 .. e_n using the upper bounds v_i of the callees' behavior predicates ϕ_i. The result predicate ϕ is the behavior ϕ_g of g with each argument x_i instantiated with the semantic predicate [x_i/v]ϕ_i of the callee e_i.
The semantics of performing a recursive call is slightly different. The resource usage is instead T(size_f v_1 .. v_n), the resource usage T on a sub-problem of size size_f v_1 .. v_n, where the v_i's are over-approximations of the callees e_i.
The reduction of if-terms results in ite predicates. The resulting recurrence parameter (R_e + R_1) ∥ (R_e + R_2) uses the parallel operator because there are two paths through an ite term.
The rules for match terms and variable terms are similar.
With the above rules, we can define the complexity of a function to be any expression that satisfies the recurrence parameter of the function.

Theorem 3 (Complexity bounds).
Given a function term fix f.λx.t_f, the signature type of f, and the signature types of all auxiliary functions used in f, if
- the refinements of the auxiliary functions and of f are all bounded by some monotonic polynomials;
- the function body t_f can be reduced to ⟨•, R_f⟩#, where R_f is of the form R_{f,1} ∥ .. ∥ R_{f,m}, and none of the R_{f,i}'s contains an occurrence of the parallel operator; and
- there exists a function ψ that satisfies the recurrence relation given by each R_{f,i},
then the complexity T_f of f is bounded by the function ψ, i.e., T_f ∈ O(ψ).
Proof. We first show, by induction on the loose-semantics derivation, that for any term t the loose semantics over-approximates the concrete semantics.
Base case. The base case is a variable term. The predicate ϕ for a variable term is precise, and the resource usage is 0 in both semantics.
Non-recursive application terms. When t = g e_1 .. e_n is a non-recursive application term, by the induction hypothesis we have that each e_i's loose semantics ϕ_i over-approximates its concrete semantics v_{e_i}, because each v_i is the bound of ϕ_i and hence a bound of the concrete value v_{e_i}.
Recursive application terms. When t = f e_1 .. e_n is a recursive application term, the proof of the behavior part is similar to the above, because we have the same behavior assumption on the signature of the top-level function. The concrete cost C_t of t is bounded by T_f(size_f(|v_{e_1}|, .., |v_{e_n}|)), where T_f is the complexity function of f, plus the concrete cost of evaluating each e_i (which is bounded by R_i according to the induction hypothesis). Note that here T_f is an uninterpreted monotonic non-negative function. The loose semantics T(size_f(v_1 .. v_n)) also contains an uninterpreted monotonic non-negative function T. In this proof, we generalize the comparison symbol ≤ to uninterpreted function applications.
Branching terms. When t = if e then t_1 else t_2 is a conditional term, its concrete cost is bounded by the loose cost R_e + R_1 or R_e + R_2, respectively, according to the induction hypothesis. The concrete semantics is either v_{t_1} or v_{t_2}. Any bound p of the loose semantics ϕ_1 ∨ ϕ_2 is also a bound on both ϕ_1 and ϕ_2. According to the induction hypothesis, p is therefore also a bound on the concrete semantics v_t.
The case of a match term is similar to the conditional term. Now, for any input in, the complexity function T_f of the top-level function f should satisfy the recurrence parameter along one of the paths, i.e., ∃k ∀in ∃j. T_f(size_f(in)) ≤ k · [T_f/T]R_{f,j}. So, if a bound ψ dominates the loose cost on every path, it also dominates the complexity T_f of f.
Example 9. For the program shown in Eqn. (4), there are three paths. At the beginning, fix prod.λx.λy.t is reduced to ⟨ϕ, T(size_prod x y) ≤ 0 + R⟩#, where R is the reduction result of the function body, and size_prod x y = x, since the size function for prod is λz.λw.z. The first ite term if x == 0 then t_1 else t_2 is reduced to an ite predicate where the condition contains one equivalence operator (R_e = 1), the then branch has 0 resource usage because it is a variable term (R_1 = 0), and the else branch has resource usage R_2, which we learn from the reduction of t_2.
The application term div2 x is reduced to ⟨v = ⌊z/2⌋ ∧ z = x, 1⟩#, since the resource usage of div2 is O(1).
The recursive-application term prod (div2 x) y is reduced to a configuration whose recurrence parameter records T(⌊x/2⌋). Thus, the complexity of prod is bounded by log x, since T(x) ≤ 1 implies T(x) ∈ O(log x), and T(x) ≤ 5 + T(⌊x/2⌋) implies T(x) ∈ O(log x) according to the Master Theorem.
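The second recurrence can also be discharged without invoking the Master Theorem, by unrolling it directly; a sketch (with T(1) a constant):

```latex
T(x) \le 5 + T(\lfloor x/2 \rfloor)
     \le 5 \cdot 2 + T(\lfloor x/4 \rfloor)
     \le \dots
     \le 5\lceil \log_2 x \rceil + T(1)
     \in O(\log x)
```

Each unrolling step halves the argument, so at most ⌈log₂ x⌉ steps reach the base case, and each step contributes the constant 5.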

B Typing rules
Subtyping. Subtyping judgments are shown in Fig. 6, and are standard. The most notable rules are <:-Rec and <:-Bound, which state that the bound ψ and the sub-problem sizes φ in the annotations of a subtype must be no larger than those in the supertype.
For example, if one branch of a branching term has type τ, ([1, ⌊u/3⌋], O(ψ)) , it can be over-approximated by the supertype τ, ([1, ⌊u/2⌋], O(ψ)) . The idea is that the resource usage of a call on a problem of size ⌊u/2⌋ is larger than that of a call on a smaller problem of size ⌊u/3⌋, under the assumption that all resource usages are monotonic.
Cost sharing. The sharing operator, written α ▷ α_1 | α_2, partitions the recursive-call cost of α into α_1 and α_2, i.e., the sum of the costs in α_1 and α_2 equals the cost in α. Sharing rules are shown in Fig. 8. S-Pot shares a single cost c into two costs c_1 and c_2 such that their sum is no more than c. An annotation can be shared into two parts (S-Anno) if every recursive cost [c_i, φ_i] in it can be shared into two parts [c_i^1, φ_i] and [c_i^2, φ_i]. Finally, annotations can also be shared into more than two parts (S-Mul).

Soundness theorem.
The proof of the soundness theorem uses the following crucial lemma, connecting the recurrence annotations with the actual recurrence relations of functions. Proof. The proof of the lemma consists of two parts: (1) the recursive calls are tracked correctly, and (2) the bound on the function body is tracked correctly. We assume that the function fix f.λx_1..λx_n.t can be evaluated to the recurrence parameter R_1 ∥ .. ∥ R_m, and show that ([c_1, φ_1], .., [c_m, φ_m]; O(ψ)) over-approximates the recurrence parameter in the following sense: along any path R_i, the number of terms T(φ) with φ ≤ φ_i is no more than c_i, and the sum of the non-T(·) terms in R_i is bounded accordingly.
First, note that if Γ ⊢ t :: {B | ϕ_t}, α and ⟨t, 0⟩# → ⟨ϕ′_t, R_t⟩#, then ϕ′_t ⟹ ϕ_t, because they are built in the same bottom-up way. The difference is that the type ϕ_t inferred by the type system can be arbitrarily over-approximated by the E-SubType rule.
For a recursive application term t := f(e_1, .., e_n), it is a subtype of some type containing a recursive-call cost [1, φ_k] if (⋀_{i=1}^m [y_i/v]ϕ_{e_i}) ⟹ (size y_1 .. y_m ≤ [size_Γ(args)/u]φ_k). Similarly, the term t will be evaluated, in the loose semantics, to a recurrence parameter containing a term T(φ) with φ ≤ φ_k.

Fig. 4. Trace of the synthesis of an O(log x) implementation of prod.
and none of the R_i's contains an occurrence of the parallel operator; and [(fix f.λx.t_f)/f][(fix g.λy.t_g)/g][in/x]t, 0 ֒→* v_t, C_t for all in, then we have, for any in and polynomial p_t, ϕ_t ⊏ p_t ⟹ |v_t| ≤ p_t(|in|). Assume that p_g is the tightest bound of ϕ_g and the p_i's are the tightest bounds of the ϕ_i's, respectively, i.e., ∃c ∀v, y. (size_g(y) > c ∧ ϕ_g(v, y) ⟹ |v| ≤ p_g(|y|)), and, for any i, ∃c ∀in, y_i. (size_f(in) > c ∧ ϕ_i(y_i, in) ⟹ |y_i| ≤ p_i(|in|)).
Then size_g(|y_1|, .., |y_n|) ≤ p_g(p_1(|in|), .., p_n(|in|)); that is, the polynomial p_g(p_1, .., p_n) is the tightest bound of ϕ_t. At the same time, according to the first assumption on the signatures of the auxiliary functions, ∀in. ϕ_g(v_t, v_{e_i}), which implies that any bound of ϕ_g is also a bound of v_t; hence, |v_t| is bounded by p_g(|v_{e_i}|). Each v_{e_i} is bounded by p_i according to the induction hypothesis, that is, |v_{e_i}| ≤ p_i(|in|) for sufficiently large in. Therefore, p_g(p_1, .., p_n) is also a bound on v_t. According to the second assumption on the signatures of auxiliary functions, we have T_g(n) ∈ O(ψ_g(n)); that is, the comparison between uninterpreted functions is the result of the comparison between their inputs. With this generalization, T_f(size_f(|v_{e_1}|, .., |v_{e_n}|)) ≤ T(size_f(v_1 .. v_n)), because each v_{e_i} is bounded by v_i according to the induction hypothesis. Branching terms. When t = if e then t_1 else t_2 is a conditional term, its concrete cost C_{t_1} + C_e or C_{t_2} + C_e is bounded by the loose cost R_e + R_1

Table 2. Evaluation results of Synquid, ReSyn, and SynPlexity on benchmarks that can be encoded as SynPlexity benchmarks. T denotes running time. B denotes the given resource bound. TO denotes a timeout. Benchmarks that cannot be encoded by some tool are shown as "-". Rec. rel. denotes the recurrence-relation pattern SynPlexity chose to use: C = C-finite sequence; M = Master Theorem; A = Akra-Bazzi method; T = the tree recurrence introduced in §5; N = non-recursive.