Abstract
We present an approach to program repair and its application to programs with recursive functions over unbounded data types. Our approach formulates program repair in the framework of deductive synthesis that uses the existing program structure as a hint to guide synthesis. We introduce a new specification construct for symbolic tests. We rely on such user-specified tests, as well as automatically generated ones, to localize the fault and speed up synthesis. Our implementation is able to eliminate errors within seconds from a variety of functional programs, including symbolic computation code and implementations of functional data structures. The resulting programs are formally verified by the Leon system.
V. Kuncak—This work is supported in part by the European Research Council (ERC) Project Implicit Programming and Swiss National Science Foundation Grant Constraint Solving Infrastructure for Program Analysis.
1 Introduction
This paper explores the problem of automatically repairing programs written as a set of mutually recursive functions in a purely functional subset of Scala. We consider a function to be subject to repair if it does not satisfy its specification, expressed in the form of a pre- and postcondition. The task of repair consists of automatically generating an alternative implementation that meets the specification. The repair problem has been studied in the past for reactive and pushdown systems [8, 10, 11, 19, 20, 26]. We view repair as generalizing, for example, the choose construct of complete functional synthesis [15], sketching [21, 22], and program templates [23], because the exact location and nature of the expressions to be synthesized are left to the algorithm. Repair is thus related to the localization of error causes [12, 14, 27]. To speed up our repair approach, we do use coarse-grained error localization based on derived test inputs. However, the precise nature of the fault is in fact an outcome of our tool, because the repair identifies a particular change that makes the program correct. Using tests alone as a criterion for correctness is appealing for performance reasons [7, 17, 18], but it can lead to erroneous repairs. We therefore leverage prior work [13] on verifying and synthesizing recursive functional programs with unbounded data types (trees, lists, integers) to provide strong correctness guarantees, while at the same time optimizing our technique to use automatically derived tests. By phrasing the problem of repair as one of synthesis and introducing tailored deduction rules that use the original implementation as a guide, we allow the repair-oriented synthesis procedure to automatically find correct fixes, in the worst case resorting to re-synthesizing the desired function from scratch.
To make the repair approach practical, we found it beneficial to extend the power and generality of the synthesis engine itself, as well as to introduce explicit support for symbolic tests in the specification language and the repair algorithm.
Contributions. The overall contribution of this paper is a new repair algorithm and its implementation inside a deductive synthesis framework for recursive functional programs. The specific new techniques we contribute are the following.

Exploration of similar expressions. We present an algorithm for expression repair based on a grammar for generating expressions similar to a given expression (according to an error model we propose). We use such grammars within our new generic symbolic term exploration routine, which leverages test inputs as well as an SMT solver, and efficiently explores the space of expressions that contain recursive calls whose evaluation depends on the expression being synthesized.

Fault localization. To narrow down repair to a program fragment, we localize the error through dynamic analysis using test inputs generated automatically from specifications. We combine two automatic sources of inputs: enumeration techniques and SMT-based techniques. We collect traces leading to erroneous executions and compute common prefixes of branching decisions. We show that this localization is in practice sufficiently precise to repair sizeable functions efficiently.

Symbolic examples. We propose an intuitive way of specifying possibly symbolic input-output examples using Scala's pattern matching. This allows the user to partially specify a function without necessarily having to provide full inputs and outputs. Additionally, it enables the developer to easily describe properties of generic (polymorphic) functions. We present an algorithm for deriving new examples from existing ones, which improves the usefulness of example sets for fault localization and repair.
In our experience, the combination of formal specification and symbolic examples gives the user significant flexibility when specifying functions, and increases success rates when discovering and repairing program faults.

Integration into a deductive synthesis and verification framework. Our repair system is part of a deductive verification system, so it can automatically produce new inputs from specification, prove correctness of code for all inputs ranging over an unbounded domain, and synthesize program fragments using deductive synthesis rules that include common recursion schemas.
The repair approach offers significant improvements compared with synthesis from scratch. Synthesis alone scales poorly when the expression to synthesize is large. Fault localization focuses synthesis on the smaller, invalid portions of the program and thus results in considerable performance gains. The source code of our tool and additional details are available at http://leon.epfl.ch as well as http://lara.epfl.ch/w/leonrepair.
Example. Consider the following functionality inspired by a part of a compiler. We wish to transform (desugar) an abstract syntax tree of a typed expression language into a simpler untyped language, simplifying some of the constructs and changing the representation of some of the types, while preserving the semantics of the transformed expression. In Fig. 1, the original syntax trees are represented by a class of typed expressions and its subclasses, whereas the resulting untyped language trees are given by a separate class hierarchy. A typed syntax tree either evaluates to an integer, to a boolean, or to no value if it is not well typed. We capture this by defining a type-checking function, along with two separate semantic functions, one for integer-valued and one for boolean-valued trees. An untyped tree, on the other hand, always evaluates to an integer, as defined by its own semantic function. For brevity, most subclass definitions are omitted.
The desugaring function translates a typed syntax tree into an untyped one. We expect the function to preserve the semantics of the tree: originally integer-valued trees evaluate to the same value; boolean-valued trees now evaluate to 0 and 1, representing true and false, respectively; and mistyped trees are left unconstrained. This is expressed in the desugaring function's postcondition.
The implementation in Fig. 1 contains a bug: the then and else branches of the if-then-else case have been accidentally switched. Using tests automatically generated by generic enumeration of small values, as well as tests obtained from a verification attempt of the function, our tool is able to find a coarse-grained location of the bug: the body of the relevant case of the match expression. During repair, one of the rules performs a semantic exploration of expressions similar to the invalid one. It discovers a replacement expression that makes the discovered tests pass. The system can then formally verify that the repaired program meets the specification for all inputs. If we try to introduce similar bugs in the correct function, or to replace the entire body of a case with a dummy value, the system successfully recovers the intended case of the transformation. In some cases our system can repair multiple simultaneous errors; the mechanism behind that is explained in Sect. 2.2. Note that the developer communicates with our system only by writing code and specifications, both of which are functions in an existing functional programming language. This illustrates the potential of repair as a scalable and developer-friendly deployment of synthesis in software development.
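As an illustration of the scenario above, the following self-contained sketch mimics the Fig. 1 setup. All type, constructor, and function names here (TypedExpr, UntypedExpr, desugar, evalU) are illustrative stand-ins, not the paper's exact definitions, and only three constructors are modeled:

```scala
// Typed source language (sketch): integer literals, boolean literals, if-then-else.
sealed trait TypedExpr
case class Lit(i: Int) extends TypedExpr
case class BoolLit(b: Boolean) extends TypedExpr
case class Ite(c: TypedExpr, thn: TypedExpr, els: TypedExpr) extends TypedExpr

// Untyped target language: everything evaluates to an integer.
sealed trait UntypedExpr
case class ULit(i: Int) extends UntypedExpr
case class UIte(c: UntypedExpr, thn: UntypedExpr, els: UntypedExpr) extends UntypedExpr

// Correct translation: booleans become 0/1, branch order is preserved.
def desugar(e: TypedExpr): UntypedExpr = e match {
  case Lit(i)        => ULit(i)
  case BoolLit(b)    => ULit(if (b) 1 else 0)
  case Ite(c, t, el) => UIte(desugar(c), desugar(t), desugar(el))
  // the kind of bug discussed above would swap the last two arguments:
  // UIte(desugar(c), desugar(el), desugar(t))
}

// Semantics of the untyped language: nonzero condition selects the then-branch.
def evalU(e: UntypedExpr): Int = e match {
  case ULit(i)        => i
  case UIte(c, t, el) => if (evalU(c) != 0) evalU(t) else evalU(el)
}
```

With the bug injected, a test such as `Ite(BoolLit(true), Lit(3), Lit(4))` would evaluate to 4 instead of 3, which is exactly the kind of counterexample the tool uses to localize the faulty case.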
2 Deductive Guided Repair
We next describe our deductive repair framework. The framework currently works under several assumptions, which we consider reasonable given the state of the art in the repair of infinite-state programs. We consider the specifications of functions as correct; the code is assumed wrong if it cannot be proven correct with respect to this specification for all of the infinitely many inputs. If the specification includes input-output tests, it follows that the repaired function must have the same behavior on these tests. We do not guarantee that the output of the function is the same as the original one on tests not covered by the specification, though the repair algorithm tends to preserve some of the existing behaviors due to the local nature of repair. It is the responsibility of the developer to sufficiently specify the function being repaired. Although underspecified benchmarks may produce unexpected expressions as repair solutions, we found that even partial specifications often yield the desired repairs. A particularly effective specification style in our experience is to give a partial specification that depends on all components of the structure (for example, one that describes a property of the set of stored elements), and then additionally provide a finite number of symbolic input-output tests. We assume that only one function of the program is invalid; the implementations of all other functions are considered valid as far as the repair of interest is concerned. Finally, we assume that all functions of the program, even the invalid one, terminate.
Stages of the Repair Algorithm. The function being repaired passes through the following stages, which we describe in the rest of the paper:

Test generation and verification. We combine enumeration and SMT-based techniques to either verify the validity of the function or, if it is not valid, discover counterexamples (examples of misbehavior).

Fault localization. Our localization algorithm then selects the smallest expression executed in all failing tests, modulo recursion.

Synthesis of similar expressions. The erroneous expression is replaced by a "program hole". The now-incomplete function is sent to synthesis, with the previous expression used as a synthesis hint. (Neither the notion of holes nor that of synthesis hints is present in prior work on deductive synthesis [13].)

Verification of the solution. Lastly, the system attempts to prove the validity of the discovered solution. Our results in Sect. 5, Fig. 4 indicate in which cases the synthesized function passed the verification.
Repair Framework. Our starting point is the deductive synthesis framework first introduced in [13]. We show how this framework can be applied to program repair by introducing dedicated rules as well as special predicates. We reuse the notation for synthesis tasks \( \left[ \!\left[ \bar{a} \ \left\langle \varPi \rhd \phi \right\rangle \ \bar{x}\right] \!\right] \): \(\bar{a}\) denotes the set of input variables, \(\bar{x}\) denotes the set of output variables, \(\phi \) is the synthesis predicate, and \(\varPi \) is the path condition of the synthesis problem. The framework relies on deduction rules that take such an input synthesis problem and either (1) solve it immediately by returning the tuple \(\langle {P} \mid {T} \rangle \), where \(P\) corresponds to the precondition under which the term T is a solution, or (2) decompose it into subproblems and define a way to compute the overall solution from the subsolutions. We illustrate these rules, as well as their notation, with a rule for splitting a problem containing a top-level disjunction:
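The displayed rule itself is missing here; the following rendering is a reconstruction based solely on the interpretation given in the next paragraph:

\[
\frac{\left[ \!\left[ \bar{a} \ \left\langle \varPi \rhd \phi_1 \right\rangle \ \bar{x}\right] \!\right] \vdash \langle {P_1} \mid {T_1} \rangle
\qquad
\left[ \!\left[ \bar{a} \ \left\langle \varPi \rhd \phi_2 \right\rangle \ \bar{x}\right] \!\right] \vdash \langle {P_2} \mid {T_2} \rangle}
{\left[ \!\left[ \bar{a} \ \left\langle \varPi \rhd \phi_1 \vee \phi_2 \right\rangle \ \bar{x}\right] \!\right] \vdash \langle {P_1 \vee P_2} \mid {{\mathsf{if( }}P_1{\mathsf{) \{ }}T_1{\mathsf{\} \ \mathsf{else}\ \{}}T_2{\mathsf{\} }}} \rangle}
\]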
This rule should be interpreted as follows: an input synthesis problem \(\left[ \!\left[ \bar{a} \ \left\langle \varPi \rhd \phi _1 \vee \phi _2 \right\rangle \ \bar{x}\right] \!\right] \) is decomposed into two subproblems, \(\left[ \!\left[ \bar{a} \ \left\langle \varPi \rhd \phi _1 \right\rangle \ \bar{x}\right] \!\right] \) and \(\left[ \!\left[ \bar{a} \ \left\langle \varPi \rhd \phi _2 \right\rangle \ \bar{x}\right] \!\right] \). Given corresponding solutions \(\langle {P_1} \mid {T_1} \rangle \) and \(\langle {P_2} \mid {T_2} \rangle \), the rule solves the input problem with \(\langle {P_1 \vee P_2} \mid {{\mathsf{if( }}P_1{\mathsf{) \{ }}T_1{\mathsf{\} \mathsf{ else }\{}}T_2{\mathsf{\} }}} \rangle \).
To track the original (incorrect) implementation across instantiations of our deductive synthesis rules, we introduce a guiding predicate into the path condition of the synthesis problem. We refer to this guiding predicate as \(\odot \left[ {\mathsf{expr }}\right] \), where expr represents the original expression. This predicate has no logical meaning in the path condition (it is equivalent to true), but it provides syntactic information that can be used by repair-dedicated rules. These rules are covered in detail in Sects. 2.1, 2.2 and 3.
2.1 Fault Localization
A contribution of our system is the ability to focus the repair problem on a small subpart of the function's body that is responsible for its erroneous behavior. The underlying hypothesis is that most of the original implementation is correct. This technique allows us to reuse as much of the original implementation as possible, and it minimizes the size of the expression handed to the subsequent, more expensive techniques. Focusing also has the profitable side effect of making repair more predictable, even in the presence of weak specifications: the repaired implementation tends to preserve some of the existing branches, and thus has the same behavior on the executions that use only these preserved branches. We rely on the list of examples that fail the function's specification to lead us to the source of the problem: if all failing examples use only one branch of some branching expression in the program, then we assume that the error is contained in that branch. We define \(\mathcal {F}\) as the set of inputs of all collected failing tests (see Sect. 4). We describe focusing using the following rules.
IfFocus. Given the input problem \(\left[ \!\left[ \bar{a} \ \left\langle \odot \left[ {{\mathsf{if( }}c{\mathsf{) \{ }}t{\mathsf{\} \mathsf{ else }\{}}e{\mathsf{\} }}}\right] \rhd \phi \right\rangle \ \bar{x}\right] \!\right] \), we first check whether there is an alternative condition expression under which all failing tests succeed. Instead of solving this higher-order hypothesis directly, we execute the function and non-deterministically consider both branches of the if (doing so within recursive invocations as well). If a valid execution exists for each failing test, the hypothesis is considered satisfiable, enabling us to focus on the condition \(c\). Otherwise, we check whether \(c\) evaluates to either true or false on all failing inputs, allowing us to focus on the corresponding branch (\(t\) or \(e\), respectively).
We use analogous rules to repair match expressions, which are ubiquitous in our programs. Here, if all failing tests lead to one particular branch of the match, we focus on that particular branch.
The above rules use tests to locally approximate the validity of branches. They are sound only if \(\mathcal {F}\) is sufficiently large. Our system therefore performs an end-to-end verification of the complete solution, ensuring overall soundness.
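The branch-focusing idea behind these rules can be sketched as follows, under the simplifying assumption that each failing run is summarized as the list of branch decisions it took; the trace representation and the helper `commonPrefix` are hypothetical, not Leon's actual data structures:

```scala
// Deepest branch shared by all failing traces: the error is attributed there.
def commonPrefix(traces: List[List[String]]): List[String] =
  traces.reduce((a, b) => a.zip(b).takeWhile { case (x, y) => x == y }.map(_._1))

// Two failing runs that diverge only after taking the same `then` branch:
val failingTraces = List(
  List("match:Ite", "then", "call:left"),
  List("match:Ite", "then", "call:right"))
// both traces agree up to the `then` branch, so repair focuses on that branch
val focus = commonPrefix(failingTraces)
```

If the failing traces diverged already at the branching expression itself, the common prefix would end before it, and focusing would fall back to the whole expression, mirroring the behavior described for the guided decompositions of Sect. 2.2.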
2.2 Guided Decompositions
In case the focusing rules fail to identify a single branch of an if or match expression as responsible, we might still benefit from reusing most of the expression. In the case of if, reuse is limited to the condition, but for a match expression it may extend to multiple valid cases. To this end, we introduce rules, analogous to the focusing rules, that perform decompositions based on the guide.

To reuse the valid branches of an if or a match expression on which focusing failed, we introduce a rule that solves the problem immediately when the guiding expression satisfies the specification.
2.3 Generating Recursive Calls
Our purely functional language often requires us to synthesize recursive implementations. Consequently, the synthesizer must be able to generate calls to the function currently being synthesized. However, we must take special care to avoid introducing calls that result in a non-terminating implementation. (Such an erroneous implementation would be considered valid if it trivially satisfied the specification due to an inductive hypothesis over a non-well-founded relation.)
Our technique consists of recording the arguments at the entry point of the function and keeping track of these arguments through the decompositions. We represent this information with a syntactic predicate \(\Downarrow \left[ \mathsf{f(a) }\right] \), similar to the guiding predicate from the previous sections. We then heuristically assume that calls whose arguments are reduced with respect to the recorded ones will not introduce non-termination.
We illustrate this mechanism on the desugaring function shown in Fig. 1. We start by injecting the entry call information into the initial synthesis problem. This synthesis problem is then decomposed by the various deduction rules available in the framework. An interesting case to consider is a decomposition by pattern matching on the input tree, which specializes the problem to the known variants of its type. In the problem specialized for a given variant, the recorded entry arguments are refined into that variant's subtrees. As a result, recursive calls applied to these subtrees are assumed likely to terminate, so they are considered as candidate expressions when symbolically exploring terms, as explained in Sect. 3.
This relatively simple technique allows us to introduce recursive calls while filtering out trivially non-terminating ones. In the cases where it still introduces infinite recursion, we can discard the solution using a more expensive termination checker, though we found this to be seldom needed in practice.
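A minimal sketch of this filtering heuristic, under the simplifying assumption that "reduced arguments" means strict structural subterms of the recorded entry argument; the names `Lst`, `subterms`, and `safeCandidate` are hypothetical helpers, not Leon's implementation:

```scala
// A toy inductive type to stand in for the data types under repair.
sealed trait Lst
case object Nl extends Lst
case class Cons(h: Int, t: Lst) extends Lst

// All strict subterms of a list value (here: every proper tail).
def subterms(l: Lst): Set[Lst] = l match {
  case Nl         => Set.empty[Lst]
  case Cons(_, t) => Set(t) ++ subterms(t)
}

// A candidate recursive call f(callArg) is kept only if its argument is a
// strict subterm of the entry argument recorded in the ⇓[f(a)] predicate.
def safeCandidate(entryArg: Lst, callArg: Lst): Boolean =
  subterms(entryArg).contains(callArg)
```

For an entry call on `Cons(h, t)`, a recursive call on `t` is accepted, while a call on the entry argument itself is rejected, filtering out the trivially non-terminating self-call.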
2.4 Synthesis Within Repair
The repair-specific rules described earlier aim at solving repair problems that fit the error model. Thanks to the integration into the Leon synthesis framework, general synthesis rules also apply, which enables the repair of more intricate errors. This achieves an appealing combination of fast repairs for predictable errors and expressive, albeit slower, repairs for more complicated ones.
3 Counterexample-Guided Similar-Term Exploration
After following the overall structure of the original problem, it is often the case that the remaining erroneous branches can be fixed by applying small changes to their implementations. For instance, an expression calling a function might be wrong in only one of its arguments, or have two of its arguments swapped. We exploit this assumption by considering different variations of the original expression. Due to the lack of a large code base in the pure Scala subset that Leon handles, we cannot use statistically informed techniques such as [9], so we define an error model following our intuition and experience from previous work.
We use the notation \(G(\mathsf{expr })\) to denote the space of variations of an expression expr, defined in the form of a grammar with the following forms of variations.
Swapping Arguments. We consider all variants of swapping two arguments that are compatible type-wise; for instance, for an operation with three operands of the same type, all three pairwise swaps are considered.
Generalizing One Argument. This variation corresponds to making a mistake in only one argument of the operation being generalized.
Bounded Arbitrary Expression. We consider a grammar of interesting expressions of the given type and of limited depth. This grammar considers all operations in scope as well as all input variables. It also considers the safe recursive calls discovered in Sect. 2.3. Finally, it includes the guiding expression as a terminal, which corresponds to possibly wrapping the source expression in an operation. For example, given a predicate \(\Downarrow \left[ \mathsf{listSum(Cons(h,t)) }\right] \) and a mod function \(\mathsf{Int } \times \mathsf{Int } \rightarrow \mathsf{Int }\) in scope, an integer operation op(a, b, c) is generalized accordingly.
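For concreteness, the argument-swapping variation alone can be sketched as follows; the helper `swaps` is hypothetical and elides the type-compatibility check that a real implementation would perform:

```scala
// All variants of a call's argument list obtained by swapping one pair
// of arguments; for op(a, b, c) this yields the three pairwise swaps.
def swaps[A](args: List[A]): List[List[A]] =
  (for {
    i <- args.indices
    j <- args.indices if i < j
  } yield args.updated(i, args(j)).updated(j, args(i))).toList

// For three operands this produces exactly the three swapped variants.
val variants = swaps(List("a", "b", "c"))
```

The full grammar \(G(\mathsf{expr})\) additionally includes the one-argument generalizations and the bounded arbitrary expressions described above.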
Our grammars cover a range of variations corresponding to common errors. During synthesis, the system generates a specific grammar for each invocation of this repair rule and symbolically explores the space of all expressions in the grammar. We rely on a CEGIS loop, bootstrapped with our test inputs, to explore these expressions; this exploration can be represented abstractly by a dedicated deduction rule.
Even though this rule is inherently incomplete, it is able to fix common errors efficiently. Our deductive approach allows us to introduce such tailored rules without loss of generality: errors that go beyond this model may be repaired using more general, albeit slower synthesis rules.
Precise Handling of Recursive Calls in CEGIS. Our system uses a symbolic approach to avoid enumerating expressions explicitly [13]. When considering recursive calls among the possible expressions within CEGIS, the interpretation of such calls needs to refer back to the very expression being synthesized. Our previous approach [13] treated recursive invocations of the function under synthesis as satisfying only its postcondition, leading to spurious counterexamples. Our new solution first constructs a parametrized program that explicitly represents the search space: given a grammar G at a certain unfolding level, we construct a function in which nonterminals are described as values, with each production guarded by a distinct entry of a boolean array.
In this new program, the function under repair is defined using the partial solution corresponding to the current deduction tree, calling the constructed function at the point of the CEGIS invocation. Other unsolved branches of the deduction tree become synthesis holes. We augment transitive callers with the additional array argument, passing it along accordingly. This ensures that a specific valuation of the array corresponds exactly to a program in which the point of the CEGIS invocation is replaced by the corresponding expression. We rely on the tests collected in Sect. 4 to test individual valuations, removing failing expressions from the search space. Finally, we perform CEGIS using symbolic term exploration with the SMT solver to find candidate expressions [13].
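The guarded-production encoding can be illustrated on a toy listSum-style repair: a boolean array selects exactly one production of the grammar. The array shape and the three productions below are invented for illustration, not taken from the paper's benchmark:

```scala
// One nonterminal of a toy grammar for the body of a Cons(h, t) case,
// encoded as a function whose productions are gated by entries of `cs`.
// A concrete valuation of `cs` picks exactly one candidate expression.
def candidate(cs: Array[Boolean], h: Int, t: List[Int]): Int =
  if (cs(0)) h               // production 1: just the head
  else if (cs(1)) t.sum      // production 2: the (recursive) sum of the tail
  else h + t.sum             // production 3: head plus sum of the tail

// Testing valuations against collected input-output tests prunes the space:
// for input Cons(1, [2, 3]) with expected output 6, only production 3 survives.
val survives = candidate(Array(false, false), 1, List(2, 3)) == 6
```

Because the candidate function is an ordinary program parametrized by `cs`, recursive calls inside a production refer to the same parametrized definition, which is precisely what makes their interpretation exact rather than postcondition-only.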
4 Generating and Using Tests for Repair
Tests play an essential role in our framework, allowing us to gather information about the valid and invalid parts of the function. In this section we elaborate on how we select, generate, and filter examples of inputs and, possibly, outputs. Several components of our system then make use of these examples. We distinguish two kinds of tests: input tests, which provide valid inputs for the function according to its precondition, and input-output tests, which also specify the exact output corresponding to each input.
Extraction and Generation of Tests. Our system relies on three main sources for tests that are used to make the repair process more efficient.
(1) User-provided symbolic input-output tests. It is often useful for the user to specify how a function behaves by listing a few examples of inputs and their corresponding outputs. However, having to provide full inputs and outputs can be tedious and impractical. To make specifying families of tests convenient, we define a dedicated construct for expressing input-output examples, relying on the pattern matching of our language to symbolically describe sets of inputs and their corresponding outputs. This gives us an expressive way of specifying classes of input-output examples. Not only may a pattern match more than one input, but the corresponding outputs are given by an expression which may depend on the pattern's variables. Wildcard patterns are particularly useful when the function does not depend on all aspects of its inputs. For instance, a function computing the size of a generic list does not inspect the values of individual list elements. Similarly, the sum of a list of integers can be specified concisely for all lists of sizes up to 2. Both examples are illustrated in Fig. 2.

Having partially symbolic input-output examples strikes a good balance between literal examples and full functional specifications. From the symbolic tests, we generate concrete input-output examples by instantiating each pattern several times using enumeration techniques, and executing the output expression to yield an output value. For instance, from case Cons(a, Cons(b, Nil())) \(\Rightarrow \) a + b we generate tests by replacing a and b with all combinations of values from a finite set, including, for example, the test with input Cons(1, Cons(2, Nil())) and output 3. We generate up to 5 distinct tests per pattern, when possible. These symbolic specifications are the only form of tests provided by the developer; any other tests that our system uses are derived automatically.
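The instantiation step can be sketched for the case Cons(a, Cons(b, Nil())) ⇒ a + b example, assuming a simple finite pool of candidate values; the helper `instantiate` and the flat pair representation of inputs are hypothetical simplifications:

```scala
// Instantiate the symbolic example "Cons(a, Cons(b, Nil())) => a + b":
// enumerate small values for the pattern variables a and b, evaluate the
// output expression for each, and keep at most 5 concrete tests.
def instantiate(vals: List[Int]): List[((Int, Int), Int)] =
  (for { a <- vals; b <- vals } yield ((a, b), a + b)).take(5)

// e.g. the pool List(1, 2) yields the concrete test ((1, 2), 3),
// i.e. input Cons(1, Cons(2, Nil())) with expected output 3.
val tests = instantiate(List(1, 2))
```

Each generated pair then serves as an ordinary concrete input-output test for fault localization and for pruning CEGIS candidates.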
(2) Generated input tests. We rely on the same enumeration technique to generate inputs satisfying the precondition of the function. Using a generate-and-test approach, we gather up to 400 valid input tests among the first 1000 enumerated.
(3) Solver-generated tests. Lastly, we rely on Leon's underlying solvers for recursive functions [25] to generate counterexamples. Given that the function is invalid and that it terminates, the solver (which is complete for counterexamples) is guaranteed to eventually provide us with at least one failing test.
Classifying and Minimizing Traces. We partition the set of collected tests into passing and failing sets. A test is considered failing if it violates a precondition or a postcondition, or emits one of various other kinds of runtime errors, when the function under repair is executed on it. In the presence of recursive functions, a given test may fail within one of its recursive invocations. In such scenarios it is worthwhile to consider the arguments of this specific subinvocation: they are typically smaller than the original ones and are better representatives of the failure. To clarify this, consider the example in Fig. 3 (based on the program in Fig. 1):
Assume that three tests are collected and that, when the function is executed on them, it produces the graph of invocations shown on the right of Fig. 3. A trivial classification tactic would label all three tests as faulty, even though all errors can be explained by the bug exercised in the innermost invocation, due to the dependencies between tests. More generally, a failing test should also be blamed for the failure of all other tests that invoke it transitively. Our framework deploys this smarter classification: in our example, it labels only the innermost invocation as a failing example, which leads to correct localization of the problem on the faulty branch. Note that this process may discover new failing tests not present in the original test set, when they occur as recursive subinvocations.
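The classification step can be sketched as follows, under the assumption that the transitive invocation relation between tests has already been extracted from the execution graph; `minimalFailing` is a hypothetical helper, not Leon's implementation:

```scala
// Keep only the minimal failing tests: a failing test is dropped if it
// transitively invokes another failing test, since that inner failure
// already explains it (cf. the Fig. 3 scenario).
def minimalFailing(failing: Set[String],
                   transCalls: Map[String, Set[String]]): Set[String] =
  failing.filter(t => transCalls.getOrElse(t, Set.empty[String]).forall(c => !failing(c)))

// Three failing tests where t1 transitively invokes t2 and t3, and t2 invokes t3:
val calls = Map("t1" -> Set("t2", "t3"), "t2" -> Set("t3"), "t3" -> Set.empty[String])
// only the innermost failure t3 is blamed
val blamed = minimalFailing(Set("t1", "t2", "t3"), calls)
```

The surviving tests are exactly the smallest failing invocations, which is what makes the subsequent branch-focusing precise.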
Our experience with incorporating tests into the Leon system indicates that they prove, time and again, to be extremely important for the tool's efficiency, even though our system is in spirit based on verification rather than testing alone. In addition to allowing us to detect errors sooner and filter out wrong synthesis candidates, tests also allow us to quickly find the approximate error location.
Fig. 4. Automatically repaired functions using our system. For each operation we provide: a small description of the kind of error introduced, the overall program size, the size of the invalid function, the size of the erroneous expression we locate, and the size of the repaired version. We then provide the times our tool took to gather and classify tests, and to repair the erroneous expression. Finally, we indicate whether the resulting expression verifies. The source of all benchmarks can be found at http://lara.epfl.ch/w/leonrepair (see also http://leon.epfl.ch).
5 Evaluation
We evaluate our implementation on a set of benchmarks in which we manually injected errors (Fig. 4). The programs mainly focus on data structure implementations and syntax tree operations. Each benchmark comprises algebraic data type definitions and recursive functions that manipulate them, specified using strong yet still partial preconditions and postconditions. We manually introduced errors of different types in each copy of the benchmarks. We ran our tool unassisted until completion to obtain a repair, providing it only with the name of the file and the name of the function to repair (typically, the choice of the function could also have been localized automatically by running verification on the entire file). The experiments were run on an Intel(R) Core(TM) i7-2600K CPU @ 3.40 GHz with 16 GB RAM, with 2 GB given to the Java Virtual Machine. While the deductive reasoning supports parallelism in principle, our implementation is currently single-threaded.
For each benchmark of Fig. 4 we provide: (1) the name of the benchmark and the broken operation; (2) a short classification of the kind of error introduced. The error kinds include: a small variation of the original program, a completely faulty match case, a missing match case, a missing necessary if split, a missing function call, and, finally, two separate variations in the same function. We describe the relevant sizes (counted in abstract syntax tree nodes) of: (3) the overall benchmark, (4) the erroneous function, (5) the localized error, and (6) the repaired expression. The full size of the program is relevant because our repair algorithm may introduce calls to any function defined in the benchmark, and also because the verification of a function depends on other functions in the file (recall Fig. 1). We also include the time, in seconds, our tool took to: (7) collect and classify tests and (8) repair the broken expression. Finally, we report (9) whether the system could formally (and automatically) prove the validity of the repaired implementation. Our examples are challenging to verify, let alone repair. They contain both functional and test-based specifications to capture the intended behavior. Many rely on the unfolding procedure of [24, 25] to handle contracts that contain other auxiliary recursive functions. One benchmark's fast exponentiation algorithm relies on the nonlinear reasoning of the Z3 SMT solver [4].
An immediate observation is that fault localization is often able to focus the repair on a small subset of the body. Combined with symbolic term exploration, this translates into a fast repair whenever the error falls within the error model. Among the hardest benchmarks are those labeled as having "2 variations". One of them is similar to the example in Fig. 1 but contains two errors. In such cases, localization returns the entire match as the invalid expression. Our guided repair uses the existing match as the guide and successfully re-synthesizes code that repairs both erroneous branches. In another challenging example, the more elaborate IfFocusCondition rule of Sect. 2.1 kicks in to re-synthesize the condition of an if expression.
The repairs listed in the evaluation are not only valid according to their specification, but were also manually validated by us to match the intended behavior. A failing proof thus does not indicate a wrong repair, but rather that our system was not able to automatically derive a proof of its correctness, often due to insufficient inductive invariants. We identify three scenarios under which repair itself may not succeed: the assumptions mentioned in Sect. 2 are violated; the necessary repair is either too big or outside the scope of general synthesis; or test collection does not yield sufficiently many interesting failing tests to locate the error.
6 Further Related Work
Much of the prior work focuses on imperative programming, without native support for algebraic data types, making it typically infeasible to even automatically verify data structure properties of the kind that our benchmarks contain. The syntax-guided synthesis format [1, 2] supports neither algebraic data types nor a specific notion of repair (though it could be used to specify some of the subproblems that our system generates, such as those of Sect. 3).
GenProg [7] and SemFix [17] accept as input a C program along with user-provided sets of passing and failing test cases, but no formal specification. Our technique for fault localization is not applicable to a sequential program with side effects, and these tools instead employ statistical fault-localization techniques based on program executions. GenProg applies no code synthesis, but tries to repair the program by iteratively deleting, swapping, or duplicating program statements according to a genetic algorithm. SemFix, on the other hand, uses synthesis, but does not take the faulty expression into account while synthesizing. AutoFix-E/E2 [18] operates on Eiffel programs equipped with formal contracts. The contracts are used to automatically generate a set of passing and failing test cases, but not to verify candidate solutions. AutoFix-E uses an elaborate mechanism for fault localization, which combines syntactic, control-flow, and statistical dynamic analyses. It follows a synthesis approach with repair schemas, which reuse the faulty statement (e.g., as a branch of a conditional). Samanta et al. [20] propose abstracting a C program with a Boolean constraint, repairing this constraint so that all assertions in the program are satisfied by repeatedly applying update schemas according to a cost model, and then concretizing the Boolean constraint back to a repaired C program. Their approach requires developer intervention to define the cost model for each program, as well as at the concretization step. Logozzo et al. [16] present a repair-suggestion framework based on the static analysis provided by the CodeContracts static checker [5]; the properties checked are typically simpler than those in our case. In [6], Gopinath et al. repair data-structure operations by picking an input that exposes a suspicious statement, then using a SAT solver to discover a corresponding concrete output that satisfies the specification. This concrete output is then abstracted to various possible expressions to yield candidate repairs, which are filtered with bounded verification. In their approach, Chandra et al. [3] consider an expression a candidate for repair if substituting it with some concrete value fixes a failing test.
Repair has also been studied in the context of reactive and pushdown systems with otherwise finite control [8, 10, 11, 19, 20, 26]. In [26], the authors generate repairs that explicitly preserve subsets of the traces of the original program, in a way strengthening the specification automatically. We deal with the case of functions from inputs to outputs equipped with contracts. In the case of a weak contract, we provide only heuristic guarantees that existing behaviors are preserved, arising from the tendency of our algorithm to reuse existing parts of the program.
7 Conclusions
We have presented an approach to program repair of mutually recursive functional programs, built on top of a deductive synthesis framework. This starting point gives our system the ability to verify functions, find counterexamples, and synthesize small fragments of code. When doing repair, it has proven fruitful to first localize the error and then perform synthesis on a small fragment. Tests proved very useful for performing such localization, as well as for generally speeding up synthesis and repair. In addition to deriving tests by enumeration and verification, we have introduced a specification construct that uses pattern matching to describe symbolic tests, from which we efficiently derive concrete tests without invoking full-fledged verification. In the case of tests for recursive functions, we perform dependency analysis and introduce new tests to better localize the cause of the error. While localization of errors within conditional control flow can be done by analyzing test runs, the challenge remains to localize changes inside large expressions with nested function calls. We have introduced the notion of guided synthesis, which uses the previous version of the code as a guide when searching for a small change to an existing large expression. The use of a guide is very flexible and also allows us to repair multiple errors in some cases.
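As a concrete illustration of the symbolic-test construct mentioned above, the following sketch uses Leon's `passes` specification (the expression datatype and constant-folding function are hypothetical examples, not code from our benchmarks; `leon.lang._` is the Leon library import that provides `passes`). Each pattern-matching case of `passes` relates the function's input to its expected output and thus acts as a test over a whole family of inputs:

```scala
import leon.lang._  // Leon library: provides the `passes` construct

sealed abstract class Expr
case class Plus(lhs: Expr, rhs: Expr) extends Expr
case class Lit(value: BigInt) extends Expr

// Folds additions of two literals; leaves other shapes unchanged.
def fold(e: Expr): Expr = (e match {
  case Plus(Lit(a), Lit(b)) => Lit(a + b)
  case other                => other
}) ensuring { res =>
  (e, res) passes {
    // Symbolic test: for *any* literals a and b, folding their sum
    // must yield the single literal a + b.
    case Plus(Lit(a), Lit(b)) => Lit(a + b)
  }
}
```

From such a case, concrete tests are derived by instantiating the pattern variables (here `a` and `b`) without running a full-fledged verifier.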
Our experiments with benchmarks thousands of syntax-tree nodes in size, including tree transformations and data-structure operations, confirm that repair is more tractable than synthesis for functional programs. The existing (incorrect) expression provides a hint about useful code fragments from which to build a correct solution. Compared to unguided synthesis, the common case of repair remains more predictable and scalable. At the same time, the developer need not learn a notation for specifying holes or templates. We thus believe that repair is a practical way to deploy synthesis in software development.
References
Alur, R., Bodik, R., Dallal, E., Fisman, D., Garg, P., Juniwal, G., Kress-Gazit, H., Madhusudan, P., Martin, M.M.K., Raghothaman, M., Saha, S., Seshia, S.A., Singh, R., Solar-Lezama, A., Torlak, E., Udupa, A.: Syntax-guided synthesis. To appear in the Marktoberdorf NATO proceedings (2014). http://sygus.seas.upenn.edu/files/sygus_extended.pdf. Accessed 02 June 2015
Alur, R., Bodík, R., Juniwal, G., Martin, M.M.K., Raghothaman, M., Seshia, S.A., Singh, R., Solar-Lezama, A., Torlak, E., Udupa, A.: Syntax-guided synthesis. In: FMCAD, pp. 1–17. IEEE (2013)
Chandra, S., Torlak, E., Barman, S., Bodík, R.: Angelic debugging. In: Taylor, R.N., Gall, H.C., Medvidovic, N. (eds.) ICSE, pp. 121–130. ACM, New York (2011)
de Moura, L., Bjørner, N.S.: Z3: An efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008)
Fähndrich, M., Logozzo, F.: Static contract checking with abstract interpretation. In: Beckert, B., Marché, C. (eds.) FoVeOOS 2010. LNCS, vol. 6528, pp. 10–30. Springer, Heidelberg (2011)
Gopinath, D., Malik, M.Z., Khurshid, S.: Specification-based program repair using SAT. In: Abdulla, P.A., Leino, K.R.M. (eds.) TACAS 2011. LNCS, vol. 6605, pp. 173–188. Springer, Heidelberg (2011)
Goues, C.L., Nguyen, T., Forrest, S., Weimer, W.: GenProg: a generic method for automatic software repair. TSE 38(1), 54–72 (2012)
Griesmayer, A., Bloem, R., Cook, B.: Repair of boolean programs with an application to C. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 358–371. Springer, Heidelberg (2006)
Gvero, T., Kuncak, V., Kuraj, I., Piskac, R.: Complete completion using types and weights. In: PLDI, pp. 27–38 (2013)
Jobstmann, B., Griesmayer, A., Bloem, R.: Program repair as a game. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 226–238. Springer, Heidelberg (2005)
Jobstmann, B., Staber, S., Griesmayer, A., Bloem, R.: Finding and fixing faults. JCSS 78(2), 441–460 (2012)
Jose, M., Majumdar, R.: Cause clue clauses: error localization using maximum satisfiability. In: Hall, M.W., Padua, D.A. (eds.) PLDI, pp. 437–446. ACM, New York (2011)
Kneuss, E., Kuraj, I., Kuncak, V., Suter, P.: Synthesis modulo recursive functions. In: Hosking, A.L., Eugster, P.T., Lopes, C.V. (eds.) OOPSLA, pp. 407–426. ACM, New York (2013)
Könighofer, R., Bloem, R.: Automated error localization and correction for imperative programs. In: Bjesse, P., Slobodová, A. (eds.) FMCAD, pp. 91–100. FMCAD Inc, Austin (2011)
Kuncak, V., Mayer, M., Piskac, R., Suter, P.: Functional synthesis for linear arithmetic and sets. STTT 15(5–6), 455–474 (2013)
Logozzo, F., Ball, T.: Modular and verified automatic program repair. In: Leavens, G.T., Dwyer, M.B. (eds.) OOPSLA, pp. 133–146. ACM, New York (2012)
Nguyen, H.D.T., Qi, D., Roychoudhury, A., Chandra, S.: SemFix: program repair via semantic analysis. In: Notkin, D., Cheng, B.H.C., Pohl, K. (eds.) ICSE, pp. 772–781. IEEE and ACM, New York (2013)
Pei, Y., Wei, Y., Furia, C.A., Nordio, M., Meyer, B.: Code-based automated program fixing. ArXiv e-prints (2011). arXiv:1102.1059
Samanta, R., Deshmukh, J.V., Emerson, E.A.: Automatic generation of local repairs for boolean programs. In: Cimatti, A., Jones, R.B. (eds.) FMCAD, pp. 1–10. IEEE, New York (2008)
Samanta, R., Olivo, O., Emerson, E.A.: Cost-aware automatic program repair. In: Müller-Olm, M., Seidl, H. (eds.) Static Analysis. LNCS, vol. 8723, pp. 268–284. Springer, Heidelberg (2014)
Solar-Lezama, A.: Program sketching. STTT 15(5–6), 475–495 (2013)
Solar-Lezama, A., Tancau, L., Bodík, R., Seshia, S.A., Saraswat, V.A.: Combinatorial sketching for finite programs. In: ASPLOS, pp. 404–415 (2006)
Srivastava, S., Gulwani, S., Foster, J.S.: Template-based program verification and program synthesis. STTT 15(5–6), 497–518 (2013)
Suter, P.: Programming with Specifications. Ph.D. thesis, EPFL, December 2012
Suter, P., Köksal, A.S., Kuncak, V.: Satisfiability modulo recursive programs. In: Yahav, E. (ed.) Static Analysis. LNCS, vol. 6887, pp. 298–315. Springer, Heidelberg (2011)
von Essen, C., Jobstmann, B.: Program repair without regret. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 896–911. Springer, Heidelberg (2013)
Zeller, A., Hildebrandt, R.: Simplifying and isolating failure-inducing input. TSE 28(2), 183–200 (2002)
© 2015 Springer International Publishing Switzerland
Kneuss, E., Koukoutos, M., Kuncak, V. (2015). Deductive Program Repair. In: Kroening, D., Păsăreanu, C. (eds.) Computer Aided Verification. CAV 2015. LNCS, vol. 9207. Springer, Cham. https://doi.org/10.1007/978-3-319-21668-3_13