Overfitting in Synthesis: Theory and Practice

In syntax-guided synthesis (SyGuS), a synthesizer's goal is to automatically generate a program belonging to a grammar of possible implementations that meets a logical specification. We investigate a common limitation across state-of-the-art SyGuS tools that perform counterexample-guided inductive synthesis (CEGIS). We empirically observe that as the expressiveness of the provided grammar increases, the performance of these tools degrades significantly. We claim that this degradation is not only due to a larger search space, but also due to overfitting. We formally define this phenomenon and prove no-free-lunch theorems for SyGuS, which reveal a fundamental tradeoff between synthesizer performance and grammar expressiveness. A standard approach to mitigate overfitting in machine learning is to run multiple learners with varying expressiveness in parallel. We demonstrate that this insight can immediately benefit existing SyGuS tools. We also propose a novel single-threaded technique called hybrid enumeration that interleaves different grammars and outperforms the winner of the 2018 SyGuS competition (Inv track), solving more problems and achieving a $5\times$ mean speedup.


Introduction
The syntax-guided synthesis (SyGuS) framework [ ] provides a unified format to describe a program synthesis problem by supplying (1) a logical specification for the desired functionality, and (2) a grammar of allowed implementations. Given these two inputs, a SyGuS tool searches through the programs that are permitted by the grammar to generate one that meets the specification. Today, SyGuS is at the core of several state-of-the-art program synthesizers [ , , , , ], many of which compete annually in the SyGuS competition [ , ].
We demonstrate empirically that five state-of-the-art SyGuS tools are very sensitive to the choice of grammar. Increasing grammar expressiveness allows the tools to solve some problems that are unsolvable with less-expressive grammars. However, it also causes them to fail on many problems that the tools are able to solve with a less expressive grammar. We analyze the latter behavior both theoretically and empirically and present techniques that make existing tools much more robust in the face of increasing grammar expressiveness.
We restrict our investigation to a widely used approach [ ] to SyGuS called counterexample-guided inductive synthesis (CEGIS) [ , § ]. In this approach, the synthesizer is composed of a learner and an oracle. The learner iteratively identifies a candidate program that is consistent with a given set of examples (initially empty) and queries the oracle to either prove that the program is correct, i.e., meets the given specification, or obtain a counterexample that demonstrates that the program does not meet the specification. The counterexample is added to the set of examples for the next iteration. The iterations continue until a correct program is found or resource/time budgets are exhausted.
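To make the loop concrete, here is a minimal sketch of CEGIS in Python. This is our own illustration, not code from any SyGuS tool: the learner brute-forces a finite candidate list, the oracle verifies candidates by exhaustively checking the specification over a small finite domain, and (for simplicity) counterexamples are recorded as inputs only, whereas the formal definitions later record input-output pairs.

```python
def cegis(candidates, spec, domain):
    """Counterexample-guided inductive synthesis over a finite candidate set.

    candidates: finite iterable of expressions (here, Python functions).
    spec(e, x): does expression e meet the specification on input x?
    domain:     finite input domain that the oracle checks exhaustively.
    """
    examples = []  # counterexamples gathered so far (initially empty)
    while True:
        # Learner: first candidate consistent with all examples so far.
        candidate = next((e for e in candidates
                          if all(spec(e, x) for x in examples)), None)
        if candidate is None:
            return None  # no expression in the grammar fits the examples
        # Oracle: verify the candidate, or produce a counterexample.
        cex = next((x for x in domain if not spec(candidate, x)), None)
        if cex is None:
            return candidate  # verified: spec holds on the entire domain
        examples.append(cex)  # refuted: remember the counterexample

# A toy problem: synthesize f with f(x) >= x, f(x) >= 0, and
# f(x) equal to either x or 0 -- i.e., f(x) = max(x, 0).
grammar = [lambda x: x, lambda x: 0, lambda x: abs(x),
           lambda x: x * x, lambda x: max(x, 0)]
spec = lambda f, x: f(x) >= x and f(x) >= 0 and (f(x) == x or f(x) == 0)
solution = cegis(grammar, spec, range(-5, 6))
```

Each refuted candidate fails on some recorded example, so the learner never proposes it again; with a finite candidate set the loop therefore terminates.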
Overfitting. To better understand the observed performance degradation, we instrumented one of these SyGuS tools ( § . ). We empirically observe that for a large number of problems, the performance degradation on increasing grammar expressiveness is often accompanied by a significant increase in the number of counterexamples required. Intuitively, as grammar expressiveness increases so does the number of spurious candidate programs, which satisfy a given set of examples but violate the specification. If the learner picks such a candidate, then the oracle generates a counterexample, the learner searches again, and so on.
In other words, increasing grammar expressiveness increases the chances for overfitting, a well-known phenomenon in machine learning (ML). Overfitting occurs when a learned function explains a given set of observations but does not generalize correctly beyond it. Since SyGuS is indeed a form of function learning, it is perhaps not surprising that it is prone to overfitting. However, we identify its specific source in the context of SyGuS -the spurious candidates induced by increasing grammar expressiveness -and show that it is a significant problem in practice. We formally define the potential for overfitting (Ω), in Definition , which captures the number of spurious candidates.
No Free Lunch. In the ML community, this tradeoff between expressiveness and overfitting has been formalized for various settings as no-free-lunch (NFL) theorems [ , § . ]. Intuitively such a theorem says that for every learner there exists a function that cannot be efficiently learned, where efficiency is defined by the number of examples required. We have proven corresponding NFL theorems for the CEGIS-based SyGuS setting (Theorems and ).
A key difference between the ML and SyGuS settings is the notion of m-learnability. In the ML setting, the learned function may differ from the true function, as long as this difference (expressed as an error probability) is relatively small. However, because the learner is allowed to make errors, it is in turn required to learn from an arbitrary set of m examples (drawn from some distribution). In contrast, the SyGuS learning setting is all-or-nothing: either the tool synthesizes a program that meets the given specification or it fails. Therefore, it would be overly strong to require the learner to handle an arbitrary set of examples.
Instead, we define a much weaker notion of m-learnability for SyGuS, which only requires that there exist a set of m examples for which the learner succeeds. Yet, our NFL theorem shows that even this weak notion of learnability can always be thwarted: given an integer m ≥ 0 and an expressive enough (as a function of m) grammar, for every learner there exists a SyGuS problem that cannot be learned without access to more than m examples. We also prove that overfitting is inevitable with an expressive enough grammar (Theorems and ) and that the potential for overfitting increases with grammar expressiveness (Theorem ).
Mitigating Overfitting. Inspired by ensemble methods [ ] in ML, which aggregate results from multiple learners to combat overfitting (and underfitting), we propose PLearn -a black-box framework that runs multiple parallel instances of a SyGuS tool with different grammars. Although prior SyGuS tools run multiple instances of learners with different random seeds [ , ], to our knowledge, this is the first proposal to explore multiple grammars as a means to improve the performance of SyGuS. However, running parallel instances of a synthesizer is computationally expensive. Hence, we also devise a white-box approach, called hybrid enumeration, that extends the enumerative synthesis technique [ ] to efficiently interleave exploration of multiple grammars in a single SyGuS instance. We implement hybrid enumeration within LoopInvGen and show that the resulting single-threaded learner, LoopInvGen+HE, has negligible overhead but achieves performance comparable to that of PLearn for LoopInvGen. Moreover, LoopInvGen+HE significantly outperforms the winner [ ] of the invariant-synthesis (Inv) track of the SyGuS competition [ ] -a variant of LoopInvGen specifically tuned for the competition -including a 5× mean speedup and solving two SyGuS problems that no tool in the competition could solve.
Contributions. In summary, we present the following contributions: ( § ) We empirically observe that, in many cases, increasing grammar expressiveness degrades the performance of existing SyGuS tools due to overfitting. ( § ) We formally define overfitting and prove no-free-lunch theorems for the SyGuS setting, which indicate that overfitting with increasing grammar expressiveness is a fundamental characteristic of SyGuS. ( § ) We propose two mitigation strategies: (1) a black-box technique that runs multiple parallel instances of a synthesizer, each with a different grammar, and (2) a single-threaded enumerative technique, called hybrid enumeration, that interleaves exploration of multiple grammars. ( § ) We show that incorporating these mitigating measures in existing tools significantly improves their performance.

Motivation
In this section, we first present empirical evidence that existing SyGuS tools are sensitive to changes in grammar expressiveness. Specifically, we demonstrate that as we increase the expressiveness of the provided grammar, every tool starts failing on some benchmarks that it was able to solve with less-expressive grammars. We then investigate one of these tools in detail.

Grammar Sensitivity of SyGuS Tools
Fig. . Grammars of quantifier-free predicates over integers. Each of the six grammars (Equalities, Intervals, Octagons, Polyhedra, Polynomials, and Peano) extends the previous one with additional rules.
We ran these five tools on 180 invariant-synthesis benchmarks, which we describe in § . We ran the benchmarks with each of the six grammars of quantifier-free predicates shown in Fig. . The *S operator denotes scalar multiplication, e.g., (*S 2 x), and *N denotes nonlinear multiplication, e.g., (*N x y). In Fig. , we report our findings on running each benchmark on each tool with each grammar, with a 30-minute wall-clock timeout. For each (tool, grammar) pair, the y-axis shows the number of failing benchmarks that the same tool is able to solve with a less-expressive grammar. We observe that, for each tool, the number of such failures increases with grammar expressiveness. For instance, introducing the scalar multiplication operator (*S) causes CVC to fail on 21 benchmarks that it is able to solve with Equalities (4 of 21), Intervals (18 of 21), or Octagons (10 of 21). Similarly, adding nonlinear multiplication causes LoopInvGen to fail on 10 benchmarks that it can solve with a less-expressive grammar.

Fig. . For each grammar and each tool, the ordinate shows the number of benchmarks that fail with that grammar but are solvable with a less-expressive grammar.
We use the |= + operator to append new rules to previously defined nonterminals.

Evidence for Overfitting
To better understand this phenomenon, we instrumented LoopInvGen [ ] to record the candidate expressions that it synthesizes and the number of CEGIS iterations (called rounds henceforth). For each of our 180 benchmarks, we compare each pair of successful runs on distinct grammars. In 65 % of such pairs, we observe performance degradation with the more expressive grammar. In Fig. , we also report the correlation between performance degradation and the number of rounds for the more expressive grammar in each pair.
In 67 % of the cases with degraded performance upon increased grammar expressiveness, the number of rounds remains unaffected -indicating that this slowdown is mainly due to a larger search space. However, there is significant evidence of performance degradation due to overfitting as well. We note an increase in the number of rounds for 27 % of the cases with degraded performance. Moreover, we notice performance degradation in 79 % of all cases that required more rounds on increasing grammar expressiveness.
Thus, a more expressive grammar not only increases the search space, but also makes it more likely for LoopInvGen to overfit: it selects a spurious expression, which the oracle rejects with a counterexample, hence requiring more rounds. In the remainder of this section, we demonstrate this overfitting phenomenon on the verification problem fib_19, shown in Fig. . For this problem, we require an inductive invariant that is strong enough to prove that the assertion always holds. In the SyGuS setting, we need to synthesize a predicate I : Z 4 → B, defined on a symbolic state σ = m, n, x, y , that satisfies ∀σ : ϕ(I, σ) for the specification ϕ, where σ = m , n , x , y denotes the new state after one iteration and T is a transition relation that describes the loop body; ϕ requires that I hold on the initial states, be preserved by T , and entail the assertion. (We use B, N, and Z to denote the sets of all Boolean values, all natural numbers (positive integers), and all integers respectively.) In Fig. (a), we report the performance of LoopInvGen on fib_19 (Fig. ) with our six grammars (Fig. ); we ignore failing runs since they require an unknown number of rounds. It succeeds with all but the least-expressive grammar. However, as grammar expressiveness increases, the number of rounds increases significantly -from 19 rounds with Intervals to 88 rounds with Peano.
LoopInvGen converges to the exact same invariant with both Polyhedra and Peano but requires 30 more rounds in the latter case. In Figs. (b) and (c), we list some expressions synthesized with Polyhedra and Peano respectively. These expressions are solutions to intermediate subproblems -the final loop invariant is a conjunction of a subset of these expressions [ , § . ]. Observe that the expressions generated with the Peano grammar are quite complex and unlikely to generalize well. Peano's extra expressiveness leads to more spurious candidates, increasing the chances of overfitting and making the benchmark harder to solve.

SyGuS Overfitting in Theory
In this section, first we formalize the counterexample-guided inductive synthesis (CEGIS) approach [ ] to SyGuS, in which examples are iteratively provided by a verification oracle. We then state and prove no-free-lunch theorems, which show that there can be no optimal learner for this learning scheme. Finally, we formalize a natural notion of overfitting for SyGuS and prove that the potential for overfitting increases with grammar expressiveness.

Preliminaries
We borrow the formal definition of a SyGuS problem from prior work [ ]: Definition (SyGuS Problem). Given a background theory T, a function symbol f : X → Y , and constraints on f : (1) a semantic constraint, also called a specification, φ(f, x) over the vocabulary of T along with f and a symbolic input x, and (2) a syntactic constraint, also called a grammar, given by a (possibly infinite) set E of expressions over the vocabulary of the theory T; find an expression e ∈ E such that the formula ∀x ∈ X : φ(e, x) is valid modulo T.
We denote this SyGuS problem as f X→Y | φ, E T and say that it is satisfiable iff there exists such an expression e, i.e., ∃ e ∈ E : ∀x ∈ X : φ(e, x). We call e a satisfying expression for this problem, denoted as e |= f X→Y | φ, E T . Recall that we focus on a common class of SyGuS learners, namely those that learn from examples. We call a pair x, y an input-output (IO) example for a specification φ iff it is consistent with φ, i.e., some expression that satisfies φ maps x to y. The next two definitions respectively formalize the two key components of a CEGIS-based SyGuS tool: the verification oracle and the learner.

Definition
(Verification Oracle). Given a specification φ defined on a function f : X → Y over theory T, a verification oracle O φ is a partial function that, given an expression e, either returns ⊥, indicating that ∀x ∈ X : φ(e, x) holds, or gives a counterexample x, y against e, denoted as e × φ x, y , such that ¬φ(e, x) and x, y is an IO example for φ. We omit φ from the notations O φ and × φ when it is clear from the context.

Definition (CEGIS-based Learner). A CEGIS-based learner
is a partial function that, given an integer q ≥ 0, a set E of expressions, and access to an oracle O for a specification φ defined on f : X → Y , queries O at most q times and either fails with ⊥ or generates an expression e ∈ E. The trace e 0 × x 0 , y 0 , . . . , e p−1 × x p−1 , y p−1 , e p , where 0 ≤ p ≤ q, summarizes the interaction between the oracle and the learner: each e i denotes the i th candidate for f , and x i , y i is a counterexample against e i , i.e., e i × x i , y i . Note that we have defined oracles and learners as (partial) functions, and hence as deterministic. In practice, many SyGuS tools are deterministic, and this assumption simplifies the subsequent theorems. However, we expect that these theorems can be appropriately generalized to randomized oracles and learners.

Learnability and No Free Lunch
In the machine learning (ML) community, the limits of learning have been formalized for various settings as no-free-lunch theorems [ , § . ]. Here, we provide a natural form of such theorems for CEGIS-based SyGuS learning. In SyGuS, the learned function must conform to the given grammar, which may not be fully expressive. Therefore we first formalize grammar expressiveness:

Definition
(k-Expressiveness). Given a domain X and range Y , a grammar E is said to be k-expressive iff E can express exactly k distinct X → Y functions.
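For intuition, k-expressiveness can be measured by brute force on a small finite domain: enumerate the grammar's expressions and count the semantically distinct value tables they induce. The two toy grammars below are our own illustrations, loosely modeled on the Intervals and Equalities flavors from § .

```python
def expressiveness(exprs, domain):
    """Count semantically distinct X -> Y functions in a grammar,
    identifying each function with its table of outputs over the domain."""
    return len({tuple(e(x) for x in domain) for e in exprs})

domain = [0, 1, 2, 3]
# An Intervals-like grammar: upper-bound predicates x <= c.
intervals = [lambda x, c=c: x <= c for c in range(-1, 5)]
# A richer grammar that additionally has equality predicates x == c.
richer = intervals + [lambda x, c=c: x == c for c in range(0, 4)]

# x <= -1 is constantly False and x <= 4 coincides with x <= 3 on this
# domain, so only 5 of the 6 interval predicates are semantically distinct;
# the equalities add 3 new tables (x == 0 coincides with x <= 0).
```

So `expressiveness(intervals, domain)` is 5 while `expressiveness(richer, domain)` is 8: syntactic growth of the grammar translates into strictly more expressible functions.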
A key difference from the ML setting is our notion of m-learnability, which formalizes the number of examples that a learner requires in order to learn a desired function. In the ML setting, a function is considered to be m-learnable by a learner if it can be learned using an arbitrary set of m i.i.d. examples (drawn from some distribution). This makes sense in the ML setting since the learned function is allowed to make errors (up to some given bound on the error probability), but it is much too strong for the all-or-nothing SyGuS setting.
Instead, we define a much weaker notion of m-learnability for CEGIS-based SyGuS, which only requires that there exist a set of m examples that allows the learner to succeed. The following definition formalizes this notion.

Definition (CEGIS-based m-Learnability). Given a SyGuS problem
Finally we state and prove the no-free-lunch (NFL) theorems, which make explicit the tradeoff between grammar expressiveness and learnability. Intuitively, given an integer m and an expressive enough (as a function of m) grammar, for every learner there exists a SyGuS problem that cannot be solved without access to at least m + 1 examples. This is true despite our weak notion of learnability.
Put another way, as grammar expressiveness increases, so does the number of examples required for learning. On one extreme, if the given grammar is 1-expressive, i.e., can express exactly one function, then all satisfiable SyGuS problems are 0-learnable -no examples are needed because there is only one function to learn -but there are many SyGuS problems that cannot be satisfied by this function. On the other extreme, if the grammar is |Y | |X| -expressive, i.e., can express all functions from X to Y , then for every learner there exists a SyGuS problem that requires all |X| examples in order to be solved.
Below we first present the NFL theorem for the case when the domain X and range Y are finite. We then generalize to the case when these sets may be countably infinite. The proofs of these theorems can be found in Appendix A.

Theorem (NFL in CEGIS-based SyGuS on Finite Sets).
Let X and Y be two arbitrary finite sets, T be a theory that supports equality, E be a grammar over T, and m be an integer such that 0 ≤ m < |X|. Then, either:

Theorem (NFL in CEGIS-based SyGuS on Countably Infinite Sets).
Let X be an arbitrary countably infinite set, Y be an arbitrary finite or countably infinite set, T be a theory that supports equality, E be a grammar over T, and m be an integer such that m ≥ 0. Then, either:

Overfitting
Last, we relate the above theory to the notion of overfitting from ML. In the context of SyGuS, overfitting can potentially occur whenever multiple candidate expressions are consistent with a given set of examples. Some of these expressions may not generalize to satisfy the specification, but the learner has no way to distinguish among them (using just the given set of examples) and so can "guess" incorrectly. We formalize this idea through the following measure: the potential for overfitting Ω(S, Z) of a SyGuS problem S with respect to a set Z of IO examples is the number of spurious candidates, i.e., expressions in the grammar that are consistent with every example in Z but do not satisfy S. Intuitively, a zero potential for overfitting means that overfitting is not possible on the given problem with respect to the given set of examples, because there is no spurious candidate. A positive potential for overfitting means that overfitting is possible, and higher values imply more spurious candidates and hence more potential for a learner to choose the "wrong" expression.
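On small finite instances this measure can be computed directly: filter the grammar down to the expressions consistent with the examples, then count those that violate the specification somewhere. The grammar, specification, and examples below are our own toy instance, targeting f(x) = max(x, 0).

```python
def omega(exprs, spec, examples, domain):
    """Potential for overfitting: the number of spurious candidates,
    i.e., expressions consistent with every IO example in `examples`
    but violating the specification somewhere on `domain`."""
    consistent = [e for e in exprs
                  if all(e(x) == y for x, y in examples)]
    return sum(1 for e in consistent
               if any(not spec(e, x) for x in domain))

# Toy instance: the desired function is f(x) = max(x, 0).
grammar = [lambda x: x, lambda x: 0, lambda x: abs(x),
           lambda x: x * x, lambda x: max(x, 0)]
spec = lambda f, x: f(x) == max(x, 0)
domain = range(-3, 4)

# With only positive-input examples, both x and abs(x) fit spuriously.
assert omega(grammar, spec, [(1, 1), (2, 2)], domain) == 2
# One negative-input example eliminates every spurious candidate.
assert omega(grammar, spec, [(1, 1), (2, 2), (-1, 0)], domain) == 0
```

Note that adding expressions to the grammar can never decrease this count for a fixed example set, which matches the monotonicity result stated in this section.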
The following theorems connect our notion of overfitting to the earlier NFL theorems by showing that overfitting is inevitable with an expressive enough grammar. We provide their proofs in Appendix A.

Theorem
(Overfitting in SyGuS on Finite Sets). Let X and Y be two arbitrary finite sets, m be an integer such that 0 ≤ m < |X|, T be a theory that supports equality, and E be a k-expressive grammar over T for a sufficiently large k. Then, there exists a satisfiable SyGuS problem whose potential for overfitting is positive on some set of m IO examples.

Theorem (Overfitting in SyGuS on Countably Infinite Sets). Let X be an arbitrary countably infinite set, Y be an arbitrary finite or countably infinite set, T be a theory that supports equality, and E be a k-expressive grammar over T for some k > ℵ 0 . Then, there exists a satisfiable SyGuS problem whose potential for overfitting is positive on some finite set of IO examples.

Finally, it is straightforward to show that as the expressiveness of the grammar provided in a SyGuS problem increases, so does its potential for overfitting.

Theorem
(Overfitting Increases with Expressiveness). Let X and Y be two arbitrary sets, T be an arbitrary theory, E 1 and E 2 be grammars over T such that E 1 ⊆ E 2 , φ be an arbitrary specification over T and a function symbol f : X → Y , and Z be a set of IO examples for φ. Then, we have Ω( f X→Y | φ, E 1 T , Z) ≤ Ω( f X→Y | φ, E 2 T , Z).

Algorithm . The PLearn framework for SyGuS tools.

Mitigating Overfitting
Ensemble methods [ ] in machine learning (ML) are a standard approach to reduce overfitting. These methods aggregate predictions from several learners to make a more accurate prediction. In this section we propose two approaches, inspired by ensemble methods in ML, for mitigating overfitting in SyGuS. Both are based on the key insight from § that synthesis over a subgrammar has a smaller potential for overfitting as compared to that over the original grammar.

Parallel SyGuS on Multiple Grammars
Our first idea is to run multiple parallel instances of a synthesizer on the same SyGuS problem but with grammars of varying expressiveness. This framework, called PLearn, is outlined in Algorithm . It accepts a synthesis tool T , a SyGuS problem f X→Y | φ, E T , and subgrammars E 1...p , such that E i ⊆ E. The parallel for construct creates a new thread for each iteration. The loop in PLearn creates p copies of the SyGuS problem, each with a different grammar from E 1...p , and dispatches each copy to a new instance of the tool T . PLearn returns the first solution found or ⊥ if none of the synthesizer instances succeed.
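As a sketch, PLearn is a thin dispatch layer over any synthesizer: launch one instance per subgrammar and return the first solution. The snippet below (our own illustration) uses a thread pool as a stand-in for separate tool processes; `solve` abstracts the underlying SyGuS tool.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def plearn(solve, problem, subgrammars):
    """Run solve(problem, E_i) for each subgrammar E_1..E_p in parallel
    and return the first non-None result (or None if all instances fail).

    Since every E_i is a subset of the problem's grammar E, any returned
    expression is also a solution to the original SyGuS problem.
    """
    with ThreadPoolExecutor(max_workers=max(1, len(subgrammars))) as pool:
        futures = [pool.submit(solve, problem, g) for g in subgrammars]
        for done in as_completed(futures):
            result = done.result()
            if result is not None:
                return result  # a real implementation would cancel the rest
    return None
```

For example, with a stand-in `solve` that succeeds only on grammars of at least two components, `plearn` returns the solution from the smallest succeeding subgrammar instance that finishes first.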
Since each grammar in E 1...p is subsumed by the original grammar E, any expression found by PLearn is a solution to the original SyGuS problem. Moreover, from Theorem it is immediate that PLearn indeed reduces overfitting.

Theorem (PLearn Reduces Overfitting). Given a SyGuS problem
A key advantage of PLearn is that it is agnostic to the synthesizer's implementation. Therefore, existing SyGuS learners can immediately benefit from PLearn, as we demonstrate in § . However, running p parallel SyGuS instances can be prohibitively expensive, both computationally and memory-wise. The problem is worsened by the fact that many existing SyGuS tools already use multiple threads, e.g., the SketchAC [ ] tool spawns 9 threads. This motivates our hybrid enumeration technique described next, which is a novel synthesis algorithm that interleaves exploration of multiple grammars in a single thread.

Hybrid Enumeration
Hybrid enumeration extends the enumerative synthesis technique, which enumerates expressions within a given grammar in order of size and returns the first candidate that satisfies the given examples [ ]. Our goal is to simulate the behavior of PLearn with an enumerative synthesizer in a single thread. However, a straightforward interleaving of multiple PLearn threads would be highly inefficient because of redundancies: the same expression, which is contained in multiple grammars, would be enumerated multiple times. Instead, we propose a technique that (1) enumerates each expression at most once, and (2) reuses previously enumerated expressions to construct larger expressions.
To achieve this, we extend a widely used [ , , ] synthesis strategy, called component-based synthesis [ ], wherein the grammar of expressions is induced by a set of components, each of which is a typed operator with a fixed arity. For example, the grammars shown in Fig. are induced by integer components (such as 1, +, mod, =, etc.) and Boolean components (such as true, and, or, etc.). Below, we first formalize the grammar that is implicit in this synthesis style.

Definition
(Component-Based Grammar). Given a set C of typed components, we define the component-based grammar E as the set of all expressions formed by well-typed component application over C, i.e., E = { c(e 1 , . . . , e a ) | c ∈ C has arity a and argument types τ 1 , . . . , τ a , {e 1 , . . . , e a } ⊂ E, and e 1 : τ 1 ∧ · · · ∧ e a : τ a }, where e : τ denotes that the expression e has type τ .
We denote the set of all components appearing in a component-based grammar E as components(E). Henceforth, we assume that components(E) is known (explicitly provided by the user) for each E. We also use values(E) to denote the subset of nullary components (variables and constants) in components(E), and operators(E) to denote the remaining components with positive arities.
The closure property of component-based grammars significantly reduces the overhead of tracking which subexpressions can be combined together to form larger expressions. Given a SyGuS problem over a grammar E, hybrid enumeration requires a sequence E 1...p of grammars such that each E i is a component-based grammar and that E 1 ⊂ · · · ⊂ E p ⊆ E. Next, we explain how the subset relationship between the grammars enables efficient enumeration of expressions.
Given grammars E 1 ⊂ · · · ⊂ E p , observe that an expression of size k in E i may only contain subexpressions of size {1, . . . , (k − 1)} belonging to E 1...i . This allows us to enumerate expressions in an order such that each subexpression e is synthesized (and cached) before any expressions that have e as a subexpression. We call an enumeration order that ensures this property a well order.
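The caching scheme underlying hybrid enumeration can be sketched as follows. Expressions are cached by (grammar level, size); an expression is assigned to the lowest level at which all of its parts are available, so each expression is enumerated exactly once. This sketch is our own simplification: it is untyped and fixes all operators to arity 2, and the grammar levels and components are illustrative rather than the grammars of Fig. .

```python
def hybrid_enumerate(levels, values, max_size):
    """levels[j]: operator names newly added at grammar level j (arity 2).
    values:      nullary components (variables/constants), all at level 0.
    Returns cache[(j, k)]: expressions of size k first expressible at
    level j, represented as nested tuples (op, lhs, rhs)."""
    p = len(levels)
    cache = {(j, k): [] for j in range(p) for k in range(1, max_size + 1)}
    cache[(0, 1)] = list(values)  # size-1 expressions: the values
    for k in range(2, max_size + 1):        # sizes, smallest first
        for j in range(p):                  # then levels, least expressive first
            for jo in range(j + 1):         # operators available at level j
                for op in levels[jo]:
                    for k1 in range(1, k - 1):
                        k2 = k - 1 - k1     # argument sizes sum to k - 1
                        for j1 in range(j + 1):
                            for j2 in range(j + 1):
                                # skip combinations already enumerated at a
                                # strictly lower level (uniqueness)
                                if jo < j and j1 < j and j2 < j:
                                    continue
                                for e1 in cache[(j1, k1)]:
                                    for e2 in cache[(j2, k2)]:
                                        cache[(j, k)].append((op, e1, e2))
    return cache
```

With values {x, 1}, level 0 providing + and level 1 adding *, all 8 expressions of size 3 are produced exactly once: 4 at level 0 (using only +) and 4 at level 1 (using *).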

Definition
(Well Order). Given arbitrary grammars E 1...p , we say that a strict partial order ≺ on E 1...p × N is a well order iff pairs with smaller expression sizes always precede pairs with larger ones, so that every subexpression is enumerated before any expression that contains it.

Motivated by Theorem , our implementation of hybrid enumeration uses a particular well order that incrementally increases the expressiveness of the space of expressions. As a rough measure of the expressiveness (Definition ) of a pair (E, k), i.e., the set of expressions of size k in a given grammar E, we simply overapproximate the number of syntactically distinct expressions.

Theorem . Let E 1...p be component-based grammars and C i = components(E i ). Then, ordering the pairs in E 1...p × N by this overapproximate count of syntactically distinct expressions yields a strict partial order ≺ * that is a well order.

We now describe the main hybrid enumeration algorithm, which is listed in Algorithm . The HEnum function accepts a SyGuS problem f X→Y | φ, E T , a set E 1...p of component-based grammars such that E 1 ⊂ · · · ⊂ E p ⊆ E, a well order ≺, and an upper bound q ≥ 0 on the size of expressions to enumerate. First, we enumerate all values and cache them as expressions of size one; in general, C[j, k][τ ] contains expressions of type τ and size k from E j \ E j−1 . We then sort the (grammar, size) pairs in some total order consistent with ≺. Finally, we iterate over each pair (E j , k) and each operator from E 1...j , and invoke the Divide procedure (Algorithm ) to carefully choose the operator's argument subexpressions, ensuring (1) correctness - their sizes sum up to k − 1, (2) efficiency - expressions are enumerated at most once, and (3) completeness - all expressions of size k in E j are enumerated.
The Divide algorithm generates a set of locations for selecting arguments to an operator. Each location is a pair (x, y) indicating that any expression cached in C[x, y] (of the appropriate type) may be selected as the corresponding argument. First, the size budget is recursively divided among the last a − 1 locations. In each recursive step, the upper bound (q − a + 1) on v ensures that we retain a size budget of at least q − (q − a + 1) = a − 1 for the remaining a − 1 locations. This results in a call tree such that the accumulator α at each leaf node contains the locations from which to select the last a − 1 arguments, and we are left with some size budget q ≥ 1 for the first argument e 1 . Finally, we carefully select the locations for e 1 to ensure that o(e 1 , . . . , e a ) has not been synthesized before: either o must be a component newly added in E j , or at least one argument must belong to E j \ E j−1 . We conclude this section by stating some desirable properties satisfied by HEnum. The proofs of the following theorems can be found in Appendix A.
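The size-budget division at the heart of Divide amounts to enumerating all compositions of the budget into `arity` positive parts. This sketch (ours) covers only the sizes; the actual Divide additionally tracks which grammar level each argument location may draw from.

```python
def divide(budget, arity):
    """All ways to split `budget` into `arity` argument sizes, each >= 1,
    in the order produced by a Divide-style recursive descent."""
    if arity == 1:
        return [(budget,)] if budget >= 1 else []
    splits = []
    # the upper bound budget - arity + 1 keeps at least arity - 1 units
    # of budget in reserve for the remaining argument locations
    for v in range(1, budget - arity + 2):
        for rest in divide(budget - v, arity - 1):
            splits.append((v,) + rest)
    return splits

# e.g., a ternary operator in an expression of size 7 leaves 6 size units
# for its three arguments, giving C(5, 2) = 10 possible splits:
assert len(divide(6, 3)) == 10
```

For a binary operator with 4 units of argument budget, the splits are (1, 3), (2, 2), and (3, 1), each summing to the budget as the correctness condition requires.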

Theorem (HEnum is Complete up to Size q). Given a SyGuS problem S = f X→Y | φ, E T , let E 1...p be component-based grammars over theory T such that E 1 ⊂ · · · ⊂ E p = E, ≺ be a well order on E 1...p × N, and q ≥ 0 be an upper bound on the size of expressions. Then, HEnum(S, E 1...p , ≺, q) will eventually find a satisfying expression if there exists one with size ≤ q.

Theorem
(HEnum is Efficient). Given a SyGuS problem S = f X→Y | φ, E T , let E 1...p be component-based grammars over theory T such that E 1 ⊂ · · · ⊂ E p ⊆ E, ≺ be a well order on E 1...p × N, and q ≥ 0 be an upper bound on the size of expressions. Then, HEnum(S, E 1...p , ≺, q) will enumerate each distinct expression at most once.

Experimental Evaluation
In this section we empirically evaluate PLearn and HEnum. Our evaluation uses a set of 180 synthesis benchmarks, consisting of all 127 official benchmarks from the Inv track of the SyGuS competition [ ], augmented with benchmarks from the Software Verification competition (SV-Comp) [ ] and challenging verification problems proposed in prior work [ , ]. All benchmarks are available at https://github.com/SaswatPadhi/LoopInvGen. All these synthesis tasks are defined over integer and Boolean values, and we evaluate them with the six grammars described in Fig. . We have omitted benchmarks from other tracks of the SyGuS competition as they either require us to construct E 1...p ( § ) by hand or lack verification oracles. All our experiments use an 8-core Intel® Xeon® E machine clocked at . GHz with GB memory running Ubuntu®.

Robustness of PLearn

For five state-of-the-art SyGuS tools, including EUSolver [ ], we have compared the performance across various grammars, with and without the PLearn framework (Algorithm ). In this framework, to solve a SyGuS problem at the p th expressiveness level from our six integer-arithmetic grammars (see Fig. ), we run p independent parallel instances of a SyGuS tool, each with one of the first p grammars. For example, to solve a SyGuS problem with the Polyhedra grammar, we run four instances of a solver with the Equalities, Intervals, Octagons and Polyhedra grammars. We evaluate these runs for each tool, for each of the 180 benchmarks, and for each of the six expressiveness levels. Fig. summarizes our findings. Without PLearn, the number of failures initially decreases and then increases across all solvers as grammar expressiveness increases. However, with PLearn the tools incur fewer failures at a given level of expressiveness, and there is a trend of decreased failures with increased expressiveness. Thus, we have demonstrated that PLearn is an effective measure to mitigate overfitting in SyGuS tools and significantly improve their performance.

Performance of Hybrid Enumeration
To evaluate the performance of hybrid enumeration, we augment an existing synthesis engine with HEnum (Algorithm ). We modify our LoopInvGen tool [ ], which is the best-performing SyGuS synthesizer in Fig. . As we show below, LoopInvGen+HE is not only significantly more robust against increasing grammar expressiveness, but it also has a much smaller total-time cost than PLearn and a negligible overhead over LoopInvGen.
LoopInvGen leverages Escher [ ], an enumerative synthesizer, which we replace with HEnum. We make no other changes to LoopInvGen. We evaluate the performance and resource usage of this solver, LoopInvGen+HE, relative to the original LoopInvGen with and without PLearn (Algorithm ).
Performance. In Fig. (a), we show the number of failures across our six grammars for LoopInvGen, LoopInvGen+HE and LoopInvGen with PLearn, over our 180 benchmarks. LoopInvGen+HE has a significantly lower failure rate than LoopInvGen, and the number of failures decreases with grammar expressiveness. Thus, hybrid enumeration is a good proxy for PLearn.
Resource Usage. To estimate how computationally expensive each solver is, we compare their total-time cost (τ). Since LoopInvGen and LoopInvGen+HE are single-threaded, for them we simply use the wall-clock time for synthesis as the total-time cost. However, for PLearn with p parallel instances of LoopInvGen, we consider the total-time cost as p times the wall-clock time for synthesis.
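The total-time cost τ described above can be stated as a one-line helper; the timings below are invented purely for illustration.

```python
def total_time_cost(wall_clock_seconds, parallel_instances=1):
    """tau: wall-clock synthesis time, scaled by the number of parallel
    solver instances (p = 1 for single-threaded LoopInvGen and LoopInvGen+HE)."""
    return wall_clock_seconds * parallel_instances

# Hypothetical timings for one benchmark at expressiveness level p = 4:
tau_lig    = total_time_cost(10.0)     # LoopInvGen (single-threaded)
tau_he     = total_time_cost(11.0)     # LoopInvGen+HE (slight overhead)
tau_plearn = total_time_cost(9.0, 4)   # PLearn with 4 parallel instances
```

Even when PLearn's wall-clock time is smaller, its total-time cost is multiplied by p, which is why it grows quickly with expressiveness.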
In Fig. (b), we show the median overhead (ratio of τ) incurred by PLearn over LoopInvGen+HE, and by LoopInvGen+HE over LoopInvGen, at various expressiveness levels. As we move to grammars of increasing expressiveness, the total-time cost of PLearn increases significantly, while the total-time cost of LoopInvGen+HE essentially matches that of LoopInvGen.

Competition Performance
Finally, we evaluate the performance of LoopInvGen+HE on the benchmarks from the Inv track of the SyGuS competition [ ], against the official winning solver, which we denote LIG [ ], a version of LoopInvGen [ ] that has been extensively tuned for this track. The competition includes some invariant-synthesis problems where the postcondition itself is a satisfying expression. LIG starts with the postcondition as its first candidate and is extremely fast on such programs. For a fair comparison, we added this heuristic to LoopInvGen+HE as well; no other change was made to LoopInvGen+HE.
LIG solves 115 benchmarks in a total of 2191 seconds, whereas LoopInvGen+HE solves 117 benchmarks in 429 seconds, for a mean speedup of over 5×. Moreover, no entrant to the competition could solve [ ] the two additional benchmarks (gcnr_tacas08 and fib_20) that LoopInvGen+HE solves.

Related Work
The work most closely related to ours investigates overfitting in verification tools [ ]. Our work differs from theirs in several respects. First, we address the problem of overfitting in CEGIS-based synthesis. Second, we formally define overfitting and prove that all synthesizers must suffer from it, whereas they only observe overfitting empirically. Third, while they use cross-validation to combat overfitting when tuning a specific hyperparameter of a verifier, our approach is to search for solutions at different expressiveness levels.
The general problem of efficiently searching a large space of programs for synthesis has been explored in prior work. Lee et al. [ ] use a probabilistic model, learned from known solutions to synthesis problems, to enumerate programs in order of their likelihood. Other approaches employ type-based pruning of large search spaces [ , ]. These techniques are orthogonal to, and may be combined with, our approach of exploring grammar subsets.
Our results are widely applicable to existing SyGuS tools, but some tools fall outside our purview. For instance, in programming-by-example (PBE) systems [ , § ], the specification consists of a set of input-output examples. Since any program that meets the given examples is a valid satisfying expression, our notion of overfitting does not apply to such tools. However, in recent work, Inala and Singh [ ] show that incrementally increasing expressiveness can also aid PBE systems. They report that searching within increasingly expressive grammar subsets requires significantly fewer examples to find expressions that generalize better over unseen data. Other instances where synthesizers can have a free lunch, i.e., always generate a solution with a small number of counterexamples, include systems that use grammars of limited expressiveness [ , , ].
Our paper falls in the category of formal results about SyGuS. In one such result, Jha and Seshia [ ] analyze the effects of different kinds of counterexamples and of providing bounded versus unbounded memory to learners. Notably, they do not consider variations in "concept classes" or "program templates," which are precisely the focus of our study. Therefore, our results are complementary: we treat counterexamples and learners as opaque and instead focus on grammars.

Conclusion
Program synthesis is a vibrant research area; new and better synthesizers are being built each year. This paper investigates a general issue that affects all CEGIS-based SyGuS tools. We recognize the problem of overfitting, formalize it, and identify the conditions under which it must occur. Furthermore, we provide mitigating measures for overfitting that significantly improve existing tools.

A. No-Free-Lunch Theorems ( § . )

Proof. First, note that there are t = Σ_{i=0}^{m} |X|! |Y|^i / (|X| − i)! distinct traces (sequences of counterexamples) of length at most m over X and Y. Now, consider some CEGIS-based learner L, and suppose E is k-expressive for some k > t. Then, since the learner can deterministically choose at most t candidates across the t traces, there must be at least one function f that is expressible in E but does not appear in the trace of L_O(m, E) for any oracle O.

Let e be an expression in E that implements the function f. Then, we can define the specification φ(f, x) ≝ (f(x) = e(x)) and the SyGuS problem S = ⟨f_{X→Y} | φ, E⟩_T. By construction, S is satisfiable since e |= S, but we have that L_O(m, E) ⊭ S for all oracles O. So, by Definition , S is not m-learnable by L.
However, we can construct a learner L′ such that S is m-learnable by L′. We construct L′ such that L′ always produces e as its first candidate expression for any trace. The result then follows by Definition .

Theorem
(NFL in CEGIS-based SyGuS on Countably Infinite Sets). Let X be an arbitrary countably infinite set, Y be an arbitrary finite or countably infinite set, T be a theory that supports equality, E be a grammar over T, and m be an integer such that m ≥ 0. Then, either:
- E is not k-expressive for any k > ℵ₀, where ℵ₀ ≝ |N|, or
- for every CEGIS-based learner L, there exists a satisfiable SyGuS problem S = ⟨f_{X→Y} | φ, E⟩_T such that S is not m-learnable by L. Moreover, there exists a different CEGIS-based learner for which S is m-learnable.
Proof. Consider some CEGIS-based learner L, and suppose E is k-expressive for some k > ℵ₀. Note that there are Σ_{i=0}^{m} |X|! |Y|^i / (|X| − i)! distinct traces of length at most m over X and Y. Let us overapproximate each term |X|! |Y|^i / (|X| − i)! as (|X| |Y|)^m, and thus the number of distinct traces as (m + 1) (|X| |Y|)^m. We have two cases for Y:
1. Y is finite, i.e., |X| = ℵ₀ and |Y| < ℵ₀. Then, the number of distinct traces is at most (m + 1) (|X| |Y|)^m = (m + 1) (ℵ₀ |Y|)^m = ℵ₀. Or,
2. Y is countably infinite, i.e., |X| = |Y| = ℵ₀. Then, the number of distinct traces is at most (m + 1) (|X| |Y|)^m = (m + 1) (ℵ₀ ℵ₀)^m = ℵ₀.
Thus, the number of distinct traces is at most ℵ₀, i.e., countably infinite. Since the number of distinct expressible functions is k > ℵ₀, the claim follows using a construction similar to the proof of Theorem .

A. Overfitting Theorems ( § . )

Theorem (Overfitting in SyGuS on Finite Sets). Let X and Y be two arbitrary finite sets, m be an integer such that 0 ≤ m < |X|, T be a theory that supports equality, and E be a k-expressive grammar over T for some k > t, where t = |X|! |Y|^m / (m! (|X| − m)!).

Proof. First, note that there are t = |X|! |Y|^m / (m! (|X| − m)!) distinct ways of constructing a set of m IO examples over X and Y. Now, suppose E is k-expressive for some k > t. Then, there must be at least one function f that is expressible in E, but every set of m IO examples that f is consistent with is also satisfied by some other expressible function.
Let e be an expression in E that implements the function f. Then, we can define the specification φ(f, x) ≝ (f(x) = e(x)) and the SyGuS problem S = ⟨f_{X→Y} | φ, E⟩_T. The claim then immediately follows from Definition .

Theorem (Overfitting in SyGuS on Countably Infinite Sets).
Let X be an arbitrary countably infinite set, Y be an arbitrary finite or countably infinite set, T be a theory that supports equality, and E be a k-expressive grammar over T for some k > ℵ₀. Then, there exists a satisfiable SyGuS problem S = ⟨f_{X→Y} | φ, E⟩_T.

Theorem
(Overfitting Increases with Expressiveness). Let X and Y be two arbitrary sets, T be an arbitrary theory, E_1 and E_2 be grammars over T such that E_1 ⊆ E_2, φ be an arbitrary specification over T and a function symbol f : X → Y, and Z be a set of IO examples for φ. Then, the overfitting of E_2 on Z is at least as large as that of E_1.

Proof. If E_1 ⊆ E_2, then for any set Z ⊆ X × Y of IO examples, we have
{e ∈ E_1 | ∀ ⟨x, y⟩ ∈ Z : e(x) = y} ⊆ {e ∈ E_2 | ∀ ⟨x, y⟩ ∈ Z : e(x) = y}.
The claim immediately follows from this observation and Definition .
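The containment argument in this proof can be seen concretely in a toy model, where grammars are finite lists of Python functions; the grammar contents and the example set Z below are hypothetical.

```python
# Two nested "grammars": E1 ⊆ E2, modeled as lists of unary functions.
E1 = [lambda x: x, lambda x: x + 1]
E2 = E1 + [lambda x: 2 * x, lambda x: x * x]

def consistent(grammar, Z):
    """Expressions in `grammar` that agree with every IO example in Z."""
    return [e for e in grammar if all(e(x) == y for x, y in Z)]

Z = [(0, 0), (1, 1)]  # satisfied by both x and x*x, but not by x+1 or 2*x

# The set of consistent expressions can only grow with expressiveness:
assert set(map(id, consistent(E1, Z))) <= set(map(id, consistent(E2, Z)))
# Here the more expressive grammar fits strictly more expressions.
assert len(consistent(E1, Z)) < len(consistent(E2, Z))
```

More expressions consistent with the same examples means more opportunities to pick one that fails the full specification, which is exactly the overfitting risk the theorem formalizes.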

A. Properties of Hybrid Enumeration ( § . )
Lemma . Let E 1 and E 2 be two arbitrary component-based grammars. Then, if E 1 ⊆ E 2 , it must also be the case that components(E 1 ) ⊆ components(E 2 ), where components(E i ) denotes the set of all components appearing in E i .
Proof. Let C_1 = components(E_1), C_2 = components(E_2), and E_1 ⊆ E_2. Suppose C_1 ⊈ C_2. Then, there must be at least one component c such that c ∈ C_1 \ C_2. By definition of components(E_1), the component c must appear in at least one expression e ∈ E_1. However, since c ∉ C_2, it must be the case that e ∉ E_2, thus contradicting E_1 ⊆ E_2. Hence, our assumption C_1 ⊈ C_2 must be false.
Theorem . Given component-based grammars E_1...p, the following strict partial order ≺ on E_1...p × N is a well order:
∀ E_a, E_b ∈ E_1...p : ∀ m, n ∈ N : (E_a, m) ≺ (E_b, n) ⟺ |C_a|^m < |C_b|^n
where C_i = components(E_i) denotes the set of all components appearing in E_i.
Proof. Let E_a and E_b be two component-based grammars in E_1...p. By Lemma , we have that E_a ⊆ E_b =⇒ components(E_a) ⊆ components(E_b). The claim then immediately follows from Definition .
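In a toy model, this well order amounts to sorting (grammar, size) pairs by the key |components(E_j)|^k; the component sets below are hypothetical.

```python
from itertools import product

# Hypothetical component sets for nested grammars E_1 ⊆ E_2 ⊆ E_3.
COMPONENTS = {
    1: {"0", "1", "+"},
    2: {"0", "1", "+", "-"},
    3: {"0", "1", "+", "-", "*"},
}

def weight(pair):
    """Order (E_j, k) by |components(E_j)|^k, as in the theorem above."""
    j, k = pair
    return len(COMPONENTS[j]) ** k

# Enumerate (grammar index, expression size) pairs in well-order position.
order = sorted(product(COMPONENTS, range(1, 4)), key=weight)
```

Note how small sizes of every grammar precede larger sizes of any grammar: the order interleaves the grammars rather than exhausting one before the next, which is the behavior hybrid enumeration relies on.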

Definition (j_k-Uniqueness).
Given grammars E_1 ⊆ · · · ⊆ E_p, we say that an expression e of size k is j_k-unique with respect to E_1...p if it is contained in E_j but not in E_(j−1). We define U[E_1...p]^k_j as the maximal such set of expressions, i.e.,
U[E_1...p]^k_j ≝ { e ∈ E_j | size(e) = k ∧ e ∉ E_(j−1) }

Lemma . Let E_1 ⊆ · · · ⊆ E_p be p component-based grammars. Then, for any expression o(e_1, . . . , e_a) ∈ U[E_1...p]^k_j, if the operator o belongs to operators(E_q) for some q < j, then at least one argument must belong to E_j but not to E_(j−1), i.e.,
o ∈ operators(E_q) ∧ q < j =⇒ ∃ e ∈ e_{1...a} : e ∈ E_j ∧ e ∉ E_(j−1)

Proof. Consider an arbitrary expression e* = o(e_1, . . . , e_a) ∈ U[E_1...p]^k_j such that o ∈ operators(E_q) ∧ q < j. Suppose ∀ e ∈ e_{1...a} : e ∉ E_j ∨ e ∈ E_(j−1). Then, for any argument subexpression e, we have the following three possibilities:
- e ∉ E_j ∧ e ∈ E_(j−1) is impossible, since E_(j−1) ⊆ E_j.
- e ∉ E_j ∧ e ∉ E_(j−1) is also impossible, by Definition , due to the closure property of component-based grammars.
- e ∈ E_j ∧ e ∈ E_(j−1) must be false for at least one argument subexpression. Otherwise, since o ∈ operators(E_(j−1)) and E_(j−1) is closed under operator application by Definition , e* ∈ E_(j−1) would hold. However, by Definition , we have that e* ∈ U[E_1...p]^k_j =⇒ e* ∉ E_(j−1).
Therefore, our assumption ∀ e ∈ e_{1...a} : e ∉ E_j ∨ e ∈ E_(j−1) must be false.
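The j_k-unique sets can be computed directly in a small model where grammars are explicit sets of expression strings; the expressions and the size measure here are made up for illustration.

```python
# Levels of nested grammars, with E_0 = {} as the base level.
E1 = {"x", "0", "x+0"}
E2 = E1 | {"1", "x+1", "x*x"}
LEVELS = [set(), E1, E2]  # LEVELS[j] is grammar E_j

def size(expr):
    """Toy size measure: count of leaves and operators (tokens)."""
    return len(expr.replace("+", " + ").replace("*", " * ").split())

def U(j, k):
    """U[E_1..p]^k_j: expressions of size k in E_j but not in E_(j-1)."""
    return {e for e in LEVELS[j] if size(e) == k and e not in LEVELS[j - 1]}
```

For instance, U(2, 3) here is {"x+1", "x*x"}: both have size 3 and first become expressible at level 2, while "x+0" of the same size already belongs to level 1.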
Lemma . Let E_0 = {} and E_1 ⊆ · · · ⊆ E_p be p component-based grammars. Then, for any l ≥ 1 and any operator o ∈ operators(E_l) \ operators(E_(l−1)) of arity a, Divide(a, k − 1, l, j, ⟨⟩) generates the set L of all possible distinct locations for selecting the arguments for o such that o(e_1, . . . , e_a) ∈ U[E_1...p]^k_j.

Theorem . Given a SyGuS problem S = ⟨f_{X→Y} | φ, E⟩_T, let E_1...p be component-based grammars over theory T such that E_1 ⊂ · · · ⊂ E_p ⊆ E, ≺ be a well order on E_1...p × N, and q ≥ 0 be an upper bound on the size of expressions. Then, HEnum(S, E_1...p, ≺, q) will enumerate each distinct expression at most once.
Proof. As shown in the proof of Theorem , C[j, k] in HEnum (Algorithm ) stores U[E_1...p]^k_j. Then, by Lemma , we immediately have that any two sets C[j, k] and C[j′, k′] of synthesized expressions are disjoint when j ≠ j′ or k ≠ k′.
Furthermore, although each C[j, k] is implemented as a list, we show that any two expressions within any C[j, k] list must be syntactically distinct. The base cases C[i, 1] are straightforward. For the inductive case, observe that if each of the lists C[j_1, k_1], . . . , C[j_a, k_a] contains only syntactically distinct expressions, then all tuples within C[j_1, k_1] × · · · × C[j_a, k_a] must also be distinct. Thus, if an operator o with arity a is applied to subexpressions drawn from the cross product, i.e., ⟨e_1, . . . , e_a⟩ ∈ C[j_1, k_1] × · · · × C[j_a, k_a], then all resulting expressions of the form o(e_1, . . . , e_a) must be syntactically distinct. Thus, by structural induction, all expressions contained in any list C[j, k] are syntactically distinct.