Abstract
We propose \(\delta \)-complete decision procedures for solving satisfiability of nonlinear SMT problems over real numbers that contain universal quantification and a wide range of nonlinear functions. The methods combine interval constraint propagation, counterexample-guided synthesis, and numerical optimization. In particular, we show how to handle the interleaving of numerical and symbolic computation to ensure delta-completeness in quantified reasoning. We demonstrate that the proposed algorithms can handle various challenging global optimization and control synthesis problems that are beyond the reach of existing solvers.
1 Introduction
Much progress has been made in the framework of delta-decision procedures for solving nonlinear Satisfiability Modulo Theories (SMT) problems over real numbers [1, 2]. Delta-decision procedures allow one-sided bounded numerical errors, which is a practically useful relaxation that significantly reduces the computational complexity of the problems. With such relaxation, SMT problems with hundreds of variables and highly nonlinear constraints (such as differential equations) have been solved in practical applications [3]. Existing work in this direction has focused on satisfiability of quantifier-free SMT problems. Going one level up, SMT problems with both free and universally quantified variables, which correspond to \(\exists \forall \)-formulas over the reals, are much more expressive. For instance, such formulas can encode the search for robust control laws in highly nonlinear dynamical systems, a central problem in robotics. Non-convex, multi-objective, and disjunctive optimization problems can all be encoded as \(\exists \forall \)-formulas, through the natural definition of “finding some x such that for all other \(x'\), x is better than \(x'\) with respect to certain constraints.” Many other examples from various areas are listed in [4].
Counterexample-Guided Inductive Synthesis (CEGIS) [5] is a framework for program synthesis that can be applied to solve generic exists-forall problems. The idea is to break the process of solving \(\exists \forall \)-formulas into a loop between synthesis and verification. The synthesis procedure finds candidate solutions for the existentially quantified variables and passes them to the verifier, which either validates them or falsifies them with counterexamples. The counterexamples are then used as learned constraints for the synthesis procedure to find new solutions. This method has been shown effective for many challenging problems, frequently generating more optimized programs than the best manual implementations [5].
A direct application of CEGIS to decision problems over real numbers, however, suffers from several problems. CEGIS is complete in finite domains because it can explicitly enumerate solutions, which cannot be done in continuous domains. Also, CEGIS ensures progress by avoiding duplication of solutions, while, due to numerical sensitivity, precise control over real numbers is difficult. In this paper we propose methods that bypass such difficulties.
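To make the synthesis-verification loop concrete, the following is a minimal CEGIS sketch on a toy one-dimensional problem (find x with \(x \ge y\) for all \(y \in [0,1]\)). The grid of candidates, the `cegis` and `counterexamples_of` names, and the finite sampling are all illustrative simplifications, not the procedure developed in this paper; in particular, the finite grids are exactly the enumeration that continuous domains do not admit.

```python
def cegis(candidates, counterexamples_of, max_iters=100):
    """Generic CEGIS loop: alternate synthesis and verification.

    candidates             -- finite list of candidate values for the existential variable
    counterexamples_of(x)  -- returns a falsifying y for x, or None if x is valid
    """
    learned = []  # counterexamples collected so far
    for _ in range(max_iters):
        # Synthesis: pick a candidate consistent with all learned counterexamples
        # (the constraint x >= y is hardcoded for this toy instance).
        x = next((c for c in candidates if all(c >= y for y in learned)), None)
        if x is None:
            return None  # no candidate survives the learned constraints
        # Verification: look for a y that falsifies the candidate.
        y = counterexamples_of(x)
        if y is None:
            return x     # verified: x works for all sampled y
        learned.append(y)  # learn the counterexample and retry
    return None

# Toy instance: find x in {0, 0.25, ..., 1} with x >= y for all y in [0, 1].
grid = [i / 4 for i in range(5)]

def cex(x):
    # search a finite sample of y values for a violation of x >= y
    return next((y for y in grid if not (x >= y)), None)

print(cegis(grid, cex))  # converges to the only valid candidate, x = 1.0
```

The loop terminates here only because the candidate set is finite; the paper's contribution is precisely to replace this enumeration with pruning over boxes.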
We propose an integration of the CEGIS method into the branch-and-prune framework as a generic algorithm for solving nonlinear \(\exists \forall \)-formulas over real numbers, and prove that the algorithm is \(\delta \)-complete. We achieve this goal by using CEGIS-based methods to turn universally quantified constraints into pruning operators, which are then used in the branch-and-prune framework in the search for solutions on the existentially quantified variables. In doing so, we take special care to ensure correct handling of numerical errors in the computation, so that \(\delta \)-completeness can be established for the whole procedure.
The paper is organized as follows. We first review the background, and then present the details of the main algorithm in Sect. 3. We then give a rigorous proof of the \(\delta \)-completeness of the procedure in Sect. 4. We demonstrate the effectiveness of the procedures on various global optimization and Lyapunov function synthesis problems in Sect. 5.
Related Work. Quantified formulas in real arithmetic can be solved using symbolic quantifier elimination (using cylindrical algebraic decomposition [6]), which is known to have impractically high complexity (doubly exponential [7]) and cannot handle problems with transcendental functions. State-of-the-art SMT solvers such as CVC4 [8] and Z3 [9] provide quantifier support [10,11,12,13], but they are limited to decidable fragments of first-order logic. Optimization Modulo Theories (OMT) is a new field that focuses on solving a restricted form of quantified reasoning [14,15,16], mostly over linear formulas. Generic approaches to solving exists-forall problems, such as [17], are generally based on the CEGIS framework and are not intended to achieve completeness. Solving quantified constraints has also been explored in the constraint solving community [18]. In general, existing work has not proposed algorithms that intend to achieve any notion of completeness for quantified problems in nonlinear theories over the reals.
2 Preliminaries
2.1 Delta-Decisions and \(\text {CNF}^{\forall }\)-Formulas
We consider first-order formulas over real numbers that can contain arbitrary nonlinear functions that can be numerically approximated, such as polynomials, exponential, and trigonometric functions. Theoretically, such functions are called Type 2 computable functions [19]. We write this language as \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\), formally defined as:
Definition 1
(The \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\) Language). Let \(\mathcal {F}\) be the set of Type 2 computable functions. We define \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\) to be the following first-order language:
$$ t := x \mid f(t({\varvec{x}})), \text { where } f\in \mathcal {F}; \qquad \varphi := t({\varvec{x}}) > 0 \mid t({\varvec{x}})\ge 0 \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \exists x_i\, \varphi \mid \forall x_i\, \varphi . $$
Remark 1
Negations are not needed as part of the base syntax, since they can be defined through arithmetic: \(\lnot (t>0)\) is simply \(-t\ge 0\). Similarly, an equality \(t=0\) is just \(t\ge 0\wedge -t\ge 0\). In this way we can put formulas in normal forms that are easy to manipulate.
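This rewriting can be mechanized directly. The sketch below represents atoms as pairs \((t, \circ)\) meaning \(t \circ 0\); the `negate` and `equality` helpers are hypothetical names introduced here for illustration, not part of the paper's algorithms:

```python
# Atoms are pairs (t, op) meaning "t op 0", with op in {'>', '>='} and t a
# term kept as a string for simplicity.

def negate(atom):
    """Negation by arithmetic: flip the sign of t and toggle strictness,
    e.g. not(t > 0) becomes -t >= 0."""
    t, op = atom
    flipped = {'>': '>=', '>=': '>'}[op]
    return (f"-({t})", flipped)

def equality(t):
    """Equality t = 0 expands into the conjunction t >= 0 and -t >= 0."""
    return [(t, '>='), (f"-({t})", '>=')]

print(negate(("x - 1", '>')))   # ('-(x - 1)', '>=')
print(equality("x*y - 2"))      # [('x*y - 2', '>='), ('-(x*y - 2)', '>=')]
```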
We will focus on the \(\exists \forall \)-formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\) in this paper. Decision problems for such formulas are equivalent to satisfiability of SMT problems with universally quantified variables, whose free variables are implicitly existentially quantified.
It is clear that, when the quantifier-free part of an \(\exists \forall \)-formula is in Conjunctive Normal Form (CNF), we can always push the universal quantifiers inside each conjunct, since universal quantification commutes with conjunction. Thus the decision problem for any \(\exists \forall \)-formula is equivalent to the satisfiability of formulas in the following normal form:
Definition 2
(CNF\(^{\forall }\) Formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\)). We say an \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\)-formula \(\varphi \) is in CNF\(^{\forall }\) form, if it is of the form
$$\varphi ({\varvec{x}}) := \bigwedge _{i=1}^{m} \Big (\forall {\varvec{y}}\, \big (c_{i1}({\varvec{x}}, {\varvec{y}}) \vee \cdots \vee c_{ik_i}({\varvec{x}}, {\varvec{y}})\big )\Big ),$$
where the \(c_{ij}\) are atomic constraints. Each universally quantified conjunct of the formula, i.e.,
$$\forall {\varvec{y}}\, \big (c_{i1}({\varvec{x}}, {\varvec{y}}) \vee \cdots \vee c_{ik_i}({\varvec{x}}, {\varvec{y}})\big ),$$
is called a \(\forall \)-clause. Note that \(\forall \)-clauses contain only disjunctions and no nested conjunctions. If all the \(\forall \)-clauses are vacuous, we say \(\varphi ({\varvec{x}})\) is a ground SMT formula.
The algorithms described in this paper will assume that an input formula is in CNF\(^{\forall }\) form. We can now define the \(\delta \)-satisfiability problems for \(\text {CNF}^{\forall }\)-formulas.
Definition 3
(Delta-Weakening/Strengthening). Let \(\delta \in \mathbb {Q}^+\) be arbitrary. Consider an arbitrary \(\text {CNF}^{\forall }\)-formula of the form
$$\varphi ({\varvec{x}}) := \bigwedge _{i=1}^{m} \Big (\forall {\varvec{y}}\, \bigvee _{j=1}^{k_i} f_{ij}({\varvec{x}}, {\varvec{y}}) \circ 0\Big ),$$
where \(\circ \in \{>,\ge \}\). We define the \(\delta \)-weakening of \(\varphi ({\varvec{x}})\) to be:
$$\varphi ^{\delta }({\varvec{x}}) := \bigwedge _{i=1}^{m} \Big (\forall {\varvec{y}}\, \bigvee _{j=1}^{k_i} f_{ij}({\varvec{x}}, {\varvec{y}}) \circ -\delta \Big ).$$
Namely, we weaken the right-hand sides of all atomic formulas from 0 to \(-\delta \). Note how the difference between strict and non-strict inequality becomes unimportant in the \(\delta \)-weakening. We also define its dual, the \(\delta \)-strengthening of \(\varphi ({\varvec{x}})\):
$$\varphi ^{-\delta }({\varvec{x}}) := \bigwedge _{i=1}^{m} \Big (\forall {\varvec{y}}\, \bigvee _{j=1}^{k_i} f_{ij}({\varvec{x}}, {\varvec{y}}) \circ \delta \Big ).$$
Since formulas in the normal form no longer contain negations, the relaxation of the atomic formulas is implied by the original formula (and is thus weaker), as shown in [1].
Proposition 1
For any \(\varphi \) and \(\delta \in \mathbb {Q}^+\), \(\varphi ^{\delta }\) is logically weaker, in the sense that \(\varphi \rightarrow \varphi ^{\delta }\) is always true, but not vice versa.
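A quick sample-based sanity check of this implication, with an arbitrary nonlinear term and \(\delta = 0.1\) (the term, sampling range, and sample count are purely illustrative):

```python
import random

def sat(f, x, op, delta=0.0):
    """Truth of the delta-weakening of 'f(x) op 0', i.e. 'f(x) op -delta'.
    With delta = 0 this is the original atom."""
    v = f(x)
    return v > -delta if op == '>' else v >= -delta

f = lambda x: x**3 - x          # arbitrary nonlinear term
random.seed(0)
samples = [random.uniform(-2, 2) for _ in range(1000)]

# phi -> phi^delta: whenever the original atom holds, its weakening holds too.
assert all(sat(f, x, '>=', 0.1) for x in samples if sat(f, x, '>='))
# The converse fails: some points satisfy only the weakening
# (those with f(x) in [-0.1, 0)).
assert any(sat(f, x, '>=', 0.1) and not sat(f, x, '>=') for x in samples)
print("Proposition 1 checked on samples")
```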
Example 1
Consider the formula
$$\varphi := \forall y\, f(x,y) = 0.$$
It is equivalent to the \(\text {CNF}^{\forall }\)-formula
$$\big (\forall y\, f(x,y) \ge 0\big ) \wedge \big (\forall y\, -f(x,y) \ge 0\big ),$$
whose \(\delta \)-weakening is of the form
$$\big (\forall y\, f(x,y) \ge -\delta \big ) \wedge \big (\forall y\, -f(x,y) \ge -\delta \big ),$$
which is logically equivalent to
$$\forall y\, |f(x,y)| \le \delta .$$
We see that the weakening of \(f(x,y)=0\) to \(|f(x,y)| \le \delta \) defines a natural relaxation.
Definition 4
(Delta-Completeness). Let \(\delta \in \mathbb {Q}^+\) be arbitrary. We say an algorithm is \(\delta \)-complete for \(\exists \forall \)-formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\), if for any input \(\text {CNF}^{\forall }\)-formula \(\varphi \), it always terminates and returns one of the following answers correctly:

unsat: \(\varphi \) is unsatisfiable.

\(\delta \)-sat: \(\varphi ^{\delta }\) is satisfiable.
When the two cases overlap, it can return either answer.
2.2 The Branch-and-Prune Framework
A practical algorithm that has been shown to be \(\delta \)-complete for ground SMT formulas is the branch-and-prune method developed for interval constraint propagation [20]. A description of the algorithm in the simple case of an equality constraint is given in Algorithm 1.
The procedure combines pruning and branching operations. Let \(\mathcal {B}\) be the set of all boxes (each variable assigned to an interval), and \(\mathcal {C}\) a set of constraints in the language. FixedPoint(g, B) is a procedure computing a fixed point of a function \(g : \mathcal {B} \rightarrow \mathcal {B}\) with an initial input B. A pruning operation \(\mathsf {Prune}: \mathcal {B} \times \mathcal {C} \rightarrow \mathcal {B}\) takes a box \(B\in \mathcal {B}\) and a constraint as input, and returns an ideally smaller box \(B'\in \mathcal {B}\) (Line 5) that is guaranteed to retain all solutions of all constraints, if any exist. When such pruning operations do not make progress, the Branch procedure picks a variable, divides its interval into halves, and creates two subproblems \(B_1\) and \(B_2\) (Line 8). The procedure terminates when either all boxes have been pruned to be empty (Line 15), or a small box whose maximum width is smaller than a given threshold \(\delta \) has been found (Line 11). In [2], it has been proved that Algorithm 1 is \(\delta \)-complete iff the pruning operators satisfy certain conditions for being well-defined (Definition 5).
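A minimal sketch of the branch-and-prune loop for a single one-dimensional equality constraint \(f(x)=0\), encoded as a root search. The sampling-based `prune` below is a deliberately naive stand-in for interval constraint propagation (real implementations evaluate f in interval arithmetic); the 16-slice evaluation and thresholds are illustrative only:

```python
def prune(box, f, n=16):
    """Naive pruning stand-in: slice the box, keep the hull of slices whose
    sampled values of f suggest f may reach 0 there (sign change or small |f|)."""
    lo, hi = box
    keep = []
    for i in range(n):
        a = lo + (hi - lo) * i / n
        b = lo + (hi - lo) * (i + 1) / n
        vals = [f(a), f(b), f((a + b) / 2)]
        if min(vals) <= 0 <= max(vals) or abs(min(vals, key=abs)) < 1e-2:
            keep.append((a, b))
    if not keep:
        return None                        # box pruned to empty
    return (keep[0][0], keep[-1][1])       # box hull of surviving slices

def branch_and_prune(f, box, delta):
    stack = [box]
    while stack:
        b = prune(stack.pop(), f)
        if b is None:
            continue                       # empty box: discard
        lo, hi = b
        if hi - lo < delta:
            return ('delta-sat', b)        # small box found: report delta-sat
        mid = (lo + hi) / 2
        stack += [(lo, mid), (mid, hi)]    # branch into two subproblems
    return ('unsat', None)

# Solve x^2 - 2 = 0 on [0, 4]: converges to a small box around sqrt(2).
print(branch_and_prune(lambda x: x * x - 2, (0.0, 4.0), 1e-3))
```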
3 Algorithm
The core idea of our algorithm for solving \(\text {CNF}^{\forall }\)-formulas is as follows. We view the universally quantified constraints as a special type of pruning operators, which can be used to reduce possible values for the free variables based on their consistency with the universally quantified variables. We then use these special \(\forall \)-pruning operators in an overall branch-and-prune framework to solve the full formula in a \(\delta \)-complete way. A special technical difficulty in ensuring \(\delta \)-completeness is controlling numerical errors in the recursive search for counterexamples, which we solve using double-sided error control. We also improve the quality of counterexamples using local-optimization algorithms in the \(\forall \)-pruning operations, which we call locally-optimized counterexamples.
In the following sections we describe these steps in detail. For notational simplicity we will omit vector symbols and assume all variable names can directly refer to vectors of variables.
3.1 \(\forall \)-Clauses as Pruning Operators
Consider an arbitrary \(\text {CNF}^{\forall }\)-formula^{Footnote 1}
It is a conjunction of \(\forall \)-clauses as defined in Definition 2. Consequently, we only need to define pruning operators for \(\forall \)-clauses so that they can be used in a standard branch-and-prune framework. The full algorithm for this pruning operation is described in Algorithm 2.
In Algorithm 2, the basic idea is to use special y values that witness the negation of the original constraint to prune the box assignment on x. The two core steps are as follows.

1. Counterexample generation (Lines 4 to 9). The query for a counterexample \(\psi \) is defined as the negation of the quantifier-free part of the constraint (Line 4). The method \(\mathsf {Solve}(y, \psi , \delta )\) obtains a solution for the variables \(y\) that \(\delta \)-satisfies the logic formula \(\psi \). When such a solution is found, we have a counterexample that can falsify the \(\forall \)-clause for some choice of x. We then use this counterexample to prune the domain of x, which is currently \(B_x\). The strengthening operation on \(\psi \) (Line 5), as well as the choices of \(\varepsilon \) and \(\delta '\), will be explained in the next subsection.

2. Pruning on x (Lines 10 to 13). In the counterexample generation step, we have obtained a counterexample b. The pruning operation then uses this value to prune the current box domain \(B_x\). Here we need to be careful with the logical operations. For each constraint, we take the intersection of the pruned results at the counterexample point (Line 11). Then, since the original clause is a disjunction of all the constraints, we take the box hull (\(\bigsqcup \)) of the pruned results (Line 13).
We can now put the pruning operators defined for all \(\forall \)-clauses into the overall branch-and-prune framework shown in Algorithm 1.
The pruning algorithms are inspired by the CEGIS loop, but differ in multiple ways. First, we never explicitly compute any candidate solution for the free variables. Instead, we only prune on their domain boxes. This ensures that the size of the domain boxes decreases (together with branching operations), and the algorithm terminates. Secondly, we do not explicitly maintain a collection of constraints. Each time, the pruning operation works on the previous box; i.e., the learning is done at the model level instead of the constraint level. On the other hand, being unable to maintain arbitrary Boolean combinations of constraints requires us to be more sensitive to the type of Boolean operations needed in the pruning results, which differs from the CEGIS approach that treats solvers as black boxes.
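The two steps above can be sketched for a single one-dimensional clause \(\forall y\, \bigvee_i f_i(x,y) \ge 0\). Grid search stands in both for the Solve call and for the ordinary pruning operators; both are gross simplifications of the actual \(\delta \)-complete procedures, and the function and parameters are illustrative:

```python
def forall_prune(Bx, By, fs, eps):
    """One pass of the forall-clause pruning (1-D x and y, grid-based stand-in).

    Step 1: search for a counterexample b with f_i(x, b) <= -eps for all i and
            some x in Bx (the strengthened query).
    Step 2: prune Bx against each ordinary constraint f_i(x, b) >= 0 and take
            the box hull of the results."""
    grid = lambda lo, hi, n=50: [lo + (hi - lo) * k / n for k in range(n + 1)]
    # Step 1: counterexample search over sampled (x, y) pairs.
    b = next((y for y in grid(*By) for x in grid(*Bx)
              if all(f(x, y) <= -eps for f in fs)), None)
    if b is None:
        return Bx          # no counterexample: Bx is returned unchanged
    # Step 2: prune x against each f_i(x, b) >= 0, then box-hull the results.
    pruned = []
    for f in fs:
        xs = [x for x in grid(*Bx) if f(x, b) >= 0]
        if xs:
            pruned.append((min(xs), max(xs)))
    if not pruned:
        return None        # empty: no x survives this counterexample
    return (min(p[0] for p in pruned), max(p[1] for p in pruned))

# Clause: forall y in [0,1], x - y >= 0 (only x = 1 survives in the limit).
print(forall_prune((0.0, 1.0), (0.0, 1.0), [lambda x, y: x - y], eps=0.05))
```

One pass contracts the lower end of the x-box; repeated passes interleaved with branching would keep shrinking it toward 1, mirroring the fixed-point loop in Algorithm 2.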
3.2 Double-Sided Error Control
To ensure the correctness of Algorithm 2, it is necessary to avoid spurious counterexamples that do not satisfy the negation of the quantified part of a \(\forall \)-clause. We illustrate this condition by considering a variant of Algorithm 2 in which the strengthening operation on Line 5 is omitted and a counterexample is sought by directly executing \(b \leftarrow \mathrm {Solve}(y, \psi = \bigwedge _{i=0}^k f_i(x,y)< 0, \delta ')\). Note that the counterexample query \(\psi \) can be highly nonlinear in general and need not fall into a decidable fragment. As a result, we must employ a delta-decision procedure (i.e., Solve with precision \(\delta ' \in \mathbb {Q}^{+}\)) to find counterexamples. A consequence of relying on a delta-decision procedure in the counterexample generation step is that we may obtain a spurious counterexample b such that, for some \(x = a\), \(\bigwedge _{i=0}^k f_i(a,b) < \delta '\) holds while \(\bigwedge _{i=0}^k f_i(a,b) < 0\) does not.
Consequently, the subsequent pruning operations fail to reduce their input boxes, because a spurious counterexample does not witness any inconsistency between x and y. As a result, the fixed-point loop in the \(\forall \)-clause pruning algorithm terminates immediately after the first iteration. This makes the outermost branch-and-prune framework (Algorithm 1), which employs this pruning algorithm, rely solely on branching operations. It can then claim that the problem is \(\delta \)-satisfiable while providing an arbitrary box B as a model which is small enough (\(|B| \le \delta \)) but does not include a \(\delta \)-solution.
To avoid spurious counterexamples, we directly strengthen the counterexample query with \(\varepsilon \in \mathbb {Q}^+\) to obtain
$$\psi := \bigwedge _{i=0}^k f_i(x,y) \le -\varepsilon .$$
Then we choose a weakening parameter \(\delta ' \in \mathbb {Q}^+\) for solving the strengthened formula. By analyzing the two possible outcomes of this counterexample search, we derive the constraints on \(\delta '\), \(\varepsilon \), and \(\delta \) that guarantee the correctness of Algorithm 2:

\(\delta '\)-sat case: We have a and b such that \(\bigwedge _{i=0}^k f_i(a,b) \le -\varepsilon + \delta '\). For \(y = b\) to be a valid counterexample, we need \(-\varepsilon + \delta ' < 0\). That is, we have
$$\begin{aligned} \delta ' < \varepsilon . \end{aligned}$$ (2)
In other words, the strengthening factor \(\varepsilon \) should be greater than the weakening parameter \(\delta '\) used in the counterexample search step.

unsat case: By verifying the absence of counterexamples, the solver proves that \(\forall y \bigvee _{i=0}^k f_i(x, y) \ge -\varepsilon \) holds for all \(x \in B_{x}\). Recall that we want to show that \(\forall y \bigvee _{i=0}^k f_i(x, y) \ge -\delta \) holds for some \(x = a\) when Algorithm 1 uses this pruning algorithm and returns \(\delta \)-sat. To ensure this property, we need the following constraint on \(\varepsilon \) and \(\delta \):
$$\begin{aligned} \varepsilon < \delta . \end{aligned}$$ (3)
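The role of the strengthening can be seen on a tiny example: for the valid clause \(\forall y\, y^2 \ge 0\), an unstrengthened \(\delta '\)-precision search may return a spurious counterexample, while the \(\varepsilon \)-strengthened query with \(\delta ' < \varepsilon \) cannot. The grid-based `delta_solve` below is an illustrative stand-in for a real delta-decision procedure, and the parameter ratios mirror the experimental setup reported in Sect. 5:

```python
def delta_solve(psi, box, delta_p, n=200):
    """Stand-in for a delta-decision procedure on 'exists y. psi(y) <= 0':
    returns a sampled point satisfying the delta_p-weakening psi(y) <= delta_p,
    or None if no sample does."""
    lo, hi = box
    return next((lo + (hi - lo) * k / n for k in range(n + 1)
                 if psi(lo + (hi - lo) * k / n) <= delta_p), None)

f = lambda y: y * y                       # clause: forall y. f(y) >= 0 (valid, no real cex)
delta, eps, delta_p = 0.05, 0.0495, 0.049  # delta_p < eps < delta, as in (2) and (3)

# Unstrengthened query 'f(y) < 0': the weakened solve may return a spurious
# counterexample with f(b) in [0, delta_p], which has no pruning power.
spurious = delta_solve(f, (-1.0, 1.0), delta_p)
assert spurious is not None and f(spurious) >= 0

# Strengthened query 'f(y) <= -eps' with delta_p < eps: any answer b would
# satisfy f(b) <= -eps + delta_p < 0, a genuine counterexample. None exists here.
assert delta_solve(lambda y: f(y) + eps, (-1.0, 1.0), delta_p) is None
print("strengthening rules out spurious counterexamples")
```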
3.3 Locally-Optimized Counterexamples
The performance of the pruning algorithm for \(\text {CNF}^{\forall }\)-formulas depends on the quality of the counterexamples found during the search.
Figure 1a illustrates this point by visualizing a pruning process for an unconstrained minimization problem, \(\exists x \in X_0 \forall y \in X_0\, f(x) \le f(y)\). As it finds a series of counterexamples \(\mathrm {CE}_1\), \(\mathrm {CE}_2\), \(\mathrm {CE}_3\), and \(\mathrm {CE}_4\), the pruning algorithm uses those counterexamples to contract the interval assignment on X from \(X_0\) to \(X_1\), \(X_2\), \(X_3\), and \(X_4\) in sequence. In the search for a counterexample (Line 6 of Algorithm 2), it solves the strengthened query \(f(x) > f(y) + \delta \). Note that the query only requires a counterexample \(y = b\) to be \(\delta \)-away from a candidate x, while it is clear that the further a counterexample is from the candidates, the more effective the pruning algorithm is.
Based on this observation, we present a way to improve the performance of the pruning algorithm for \(\text {CNF}^{\forall }\)-formulas. After we obtain a counterexample b, we locally optimize it with respect to the counterexample query \(\psi \) so that it “further violates” the constraints. Figure 1b illustrates this idea. The algorithm first finds a counterexample \(\mathrm {CE}_1\) and then refines it to \(\mathrm {CE}'_1\) using a local-optimization algorithm (similarly, \(\mathrm {CE}_2 \rightarrow \mathrm {CE}'_2\)). Clearly, the refined counterexample gives stronger pruning power than the original one. This refinement process also helps the performance of the algorithm by reducing the total number of iterations in the fixed-point loop.
The suggested method is based on the assumption that local-optimization techniques are cheaper than finding a global counterexample using interval propagation techniques. In our experiments, we observed that this assumption holds in practice. We report the details in Sect. 5.
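A sketch of the refinement step, using finite-difference gradient descent as a cheap stand-in for the SLSQP/COBYLA calls used in the actual implementation (the clause, step size, and iteration budget are illustrative assumptions):

```python
def refine_counterexample(f, b, box, step=0.1, iters=50):
    """Locally refine a counterexample b against the clause 'f(y) >= 0' by
    pushing f(y) further below zero with finite-difference gradient descent,
    while staying inside the counterexample search box."""
    lo, hi = box
    h = 1e-6                               # finite-difference step
    y = b
    for _ in range(iters):
        grad = (f(y + h) - f(y - h)) / (2 * h)
        y = min(max(y - step * grad, lo), hi)   # descend on f, clamp to box
    return y

f = lambda y: (y - 0.5) ** 2 - 0.2    # clause violated wherever f(y) < 0
b0 = 0.2                              # initial counterexample: f(b0) = -0.11
b1 = refine_counterexample(f, b0, (0.0, 1.0))
assert f(b1) < f(b0)                  # refined point violates the clause more
print(b0, b1, f(b1))                  # b1 approaches the local minimizer 0.5
```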
4 \(\delta \)Completeness
We now prove that the proposed algorithm is \(\delta \)-complete for arbitrary \(\text {CNF}^{\forall }\)-formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\). In [2], \(\delta \)-completeness has been proved for branch-and-prune for ground SMT problems, under the assumption that the pruning operators are well-defined. Thus, the key to our proof is to show that the \(\forall \)-pruning operators satisfy the conditions of well-definedness.
The notion of a welldefined pruning operator is defined in [2] as follows.
Definition 5
Let \(\phi \) be a constraint, and let \(\mathcal {B}\) be the set of all boxes in \(\mathbb {R}^n\). A pruning operator is a function \(\mathsf {Prune}: \mathcal {B} \times \mathcal {C} \rightarrow \mathcal {B}\). We say such a pruning operator is well-defined if, for any \(B\in \mathcal {B}\), the following conditions hold:

1. \(\mathsf {Prune}(B,\phi )\subseteq B\).

2. \(B\cap \{a\in \mathbb {R}^n: \phi (a) \text{ is } \text{ true}\} \subseteq \mathsf {Prune}(B,\phi )\).

3. Write \(\mathsf {Prune}(B,\phi ) = B'\). There exists a constant \(c \in \mathbb {Q}^+\) such that, if \(B' \ne \emptyset \) and \(|B'| < \varepsilon \) for some \(\varepsilon \in \mathbb {Q}^+\), then for all \(a\in B'\), \(\phi ^{c\varepsilon }(a)\) is true.
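Conditions 1 and 2 can be checked mechanically for a concrete operator. The sampling-based `prune_ge` below is an illustrative stand-in for an interval pruning operator (not the operators used by the actual solver); condition 3 is omitted since it depends on a modulus of continuity of the constraint function:

```python
def prune_ge(box, f, n=100):
    """Toy pruning operator for 'f(x) >= 0' on a 1-D box: keep the hull of
    sampled points satisfying the constraint."""
    lo, hi = box
    xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
    sat = [x for x in xs if f(x) >= 0]
    return (min(sat), max(sat)) if sat else None

f = lambda x: 1 - x * x          # solution set of f(x) >= 0 is [-1, 1]
B = (-2.0, 2.0)
Bp = prune_ge(B, f)

# Condition 1 (contraction): Prune(B) is a subset of B.
assert B[0] <= Bp[0] and Bp[1] <= B[1]
# Condition 2 (soundness): no solution in B is lost by pruning.
sols = [x / 50 for x in range(-50, 51)]          # sample of the solution set
assert all(Bp[0] <= x <= Bp[1] for x in sols)
print(Bp)
```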
We will explain the intuition behind these requirements in the next proof, which aims to establish that Algorithm 2 defines a welldefined pruning operator.
Lemma 1
(Well-Definedness of \(\forall \)-Pruning). Consider an arbitrary \(\forall \)-clause in the generic form
$$c({\varvec{x}}) := \forall {\varvec{y}}\, \big (f_1({\varvec{x}},{\varvec{y}})\ge 0 \vee \cdots \vee f_k({\varvec{x}},{\varvec{y}})\ge 0\big ).$$
Suppose the pruning operators for \(f_1\ge 0,...,f_k\ge 0\) are well-defined; then the \(\forall \)-pruning operation for c(x) described in Algorithm 2 is well-defined.
Proof
We prove that the pruning operator defined by Algorithm 2 satisfies the three conditions in Definition 5. Let \(B_0,...,B_k\) be a sequence of boxes, where \(B_0\) is the input box \(B_x\) and \(B_k\) is the returned box B, which is possibly empty.
The first condition requires that the pruning operation for c(x) be reductive. That is, we want to show that \(B_x \subseteq B_x^{\mathrm {prev}}\) holds in Algorithm 2. If no counterexample is found (Line 8), we have \(B_{x} = B_x^{\mathrm {prev}}\), so the condition holds trivially. Consider the case where a counterexample b is found. The pruned box \(B_{x}\) is obtained as the box hull of all the \(B_i\) boxes (Line 13), which are the results of pruning \(B_x^{\mathrm {prev}}\) using ordinary constraints of the form \(f_i(x,b)\ge 0\) (Line 11) for the counterexample b. By the assumption that the pruning operators are well-defined for each ordinary constraint \(f_i\) used in the algorithm, \(B_i \subseteq B_x^{\mathrm {prev}}\) holds as a loop invariant for the loop from Line 10 to Line 12. Thus, taking the box hull of all the \(B_i\), we obtain a \(B_{x}\) that is still a subset of \(B_x^{\mathrm {prev}}\).
The second condition requires that the pruning operation does not eliminate real solutions. Again, by assumption, the pruning operation on Line 11 does not lose any valid assignment on x that makes the \(\forall \)-clause true. In fact, since y is universally quantified, any choice of assignment \(y=b\) preserves the solutions on x, as long as the ordinary pruning operator is well-defined. Thus, this condition is easily satisfied.
The third condition is the most nontrivial to establish. It ensures that when the pruning operator does not prune a box to the empty set, the box is not “way off”: it must contain points that satisfy an appropriate relaxation of the constraint. We can say this is a notion of “faithfulness” of the pruning operator. For constraints defined by simple continuous functions, this is typically guaranteed by the modulus of continuity of the function (Lipschitz constants as a special case). Now, in the case of \(\forall \)-clause pruning, we need to prove that the faithfulness of the ordinary pruning operators translates to the faithfulness of the \(\forall \)-clause pruning results. First of all, this condition would not hold if we did not have the strengthening operation when searching for counterexamples (Line 5). As discussed in Sect. 3.2, because of the weakening that \(\delta \)-decisions introduce in the counterexample search, we may obtain a spurious counterexample that has no pruning power. In other words, if we keep using a spurious counterexample that already satisfies the clause, we are not able to rule out wrong assignments on x. Now, since we have introduced \(\varepsilon \)-strengthening in the counterexample search, we know that the b obtained on Line 6 is a true counterexample; thus, for some \(x=a\), \(f_i(a,b)<0\) for every i. By assumption, the ordinary pruning operation using b on Line 11 guarantees faithfulness. That is, suppose the pruned result \(B_i\) is not empty and \(|B_i| \le \varepsilon \); then there exists a constant \(c_i\) such that \(f_i(x,b)\ge -c_i \varepsilon \) is true. Thus, we can take \(c = \max _i c_i\) as the constant for the pruning operator defined by the full clause, and conclude that the disjunction \(\bigvee _{i=0}^k f_i(x,y)\ge -c\varepsilon \) holds when \(|B_x| \le \varepsilon \).
Using the lemma, we follow the results in [2] and conclude that the branch-and-prune method in Algorithm 1 is delta-complete:
Theorem 1
(\(\delta \)-Completeness). For any \(\delta \in \mathbb {Q}^+\), using the proposed \(\forall \)-pruning operators defined in Algorithm 2 in the branch-and-prune framework described in Algorithm 1 is \(\delta \)-complete for the class of \(\text {CNF}^{\forall }\)-formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\), assuming that the pruning operators for all the base functions are well-defined.
Proof
Following Theorem 4.2 (\(\delta \)-Completeness of \(\mathrm {ICP}_{\varepsilon }\)) in [2], a branch-and-prune algorithm is \(\delta \)-complete iff the pruning operators in the algorithm are all well-defined. Following Lemma 1, Algorithm 2 always defines well-defined pruning operators, assuming the pruning operators for the base functions are well-defined. Consequently, Algorithms 1 and 2 together define a delta-complete decision procedure for \(\text {CNF}^{\forall }\)-problems in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\).
5 Evaluation
Implementation. We implemented the algorithms on top of dReal [21], an open-source delta-SMT framework. We used IBEX [22] for interval constraint pruning and CLP [23] for linear programming. For local optimization, we used NLopt [24]. In particular, we used the SLSQP (Sequential Least-Squares Quadratic Programming) local-optimization algorithm [25] for differentiable constraints and the COBYLA (Constrained Optimization BY Linear Approximations) local-optimization algorithm [26] for non-differentiable constraints. The prototype solver is able to handle \(\exists \forall \)-formulas that involve most standard elementary functions, including power, \(\exp \), \(\log \), \(\sqrt{\cdot }\), trigonometric functions (\(\sin \), \(\cos \), \(\tan \)), inverse trigonometric functions (\(\arcsin \), \(\arccos \), \(\arctan \)), hyperbolic functions (\(\sinh \), \(\cosh \), \(\tanh \)), etc.
Experiment environment. All experiments were run on a 2017 MacBook Pro with a 2.9 GHz Intel Core i7 and 16 GB RAM running macOS 10.13.4. All code and benchmarks are available at https://github.com/dreal/CAV18.
Parameters. In the experiments, we chose the strengthening parameter \(\epsilon = 0.99 \delta \) and the weakening parameter in the counterexample search \(\delta ' = 0.98 \delta \). In each call to NLopt, we used 1e-6 for both the absolute and relative tolerances on function values, 1e-3 s for the timeout, and 100 for the maximum number of evaluations. These values are used as stopping criteria in NLopt.
5.1 Nonlinear Global Optimization
We encoded a range of highly nonlinear \(\exists \forall \)-problems from the constrained and unconstrained optimization literature [27, 28]. Note that the standard optimization problem
$$\min f({\varvec{x}}) \quad \text {s.t. } g({\varvec{x}})\ge 0,\ {\varvec{x}}\in X,$$
can be encoded as the logic formula:
$$\exists {\varvec{x}}\, \Big (g({\varvec{x}})\ge 0 \wedge \forall {\varvec{y}}\, \big (g({\varvec{y}})\ge 0 \rightarrow f({\varvec{x}})\le f({\varvec{y}})\big )\Big ).$$
As plotted in Fig. 2, these optimization problems are nontrivial: they are highly non-convex problems designed to test global optimization or genetic programming algorithms. Many of these functions have a large number of local minima. For example, the Ripple 1 function [27],
defined on \(x_i \in [0, 1]\), has 252004 local minima, with the global minimum \(f(0.1, 0.1) = -2.2\). As a result, local-optimization algorithms such as gradient descent do not work for these problems by themselves. By encoding them as \(\exists \forall \)-problems, we can perform guaranteed global optimization on these problems.
Table 1 provides a summary of the experiment results. First, it shows that we can find minimum values which are close to the known global solutions. Second, it shows that enabling the local-optimization technique speeds up the solving process significantly for 20 out of 23 instances.
5.2 Synthesizing Lyapunov Functions for Dynamical Systems
We show that the proposed algorithm is able to synthesize Lyapunov functions for nonlinear dynamical systems described by a set of ODEs, \(\frac{d{\varvec{x}}(t)}{dt} = f({\varvec{x}}(t))\):
Our approach differs from recent related work [29], where dReal was used only to verify a candidate function found by a simulation-guided algorithm. In contrast, we perform both the search and verification steps by solving a single \(\exists \forall \)-formula. Note that a Lyapunov candidate function \(v : X \rightarrow \mathbb {R}^+\) must satisfy the following conditions:
$$v({\varvec{x}}) > 0 \text { for all } {\varvec{x}} \in X \setminus \{{\varvec{0}}\}, \qquad \nabla v({\varvec{x}}) \cdot f({\varvec{x}}) \le 0 \text { for all } {\varvec{x}} \in X.$$
We assume that a Lyapunov function is a polynomial of some fixed degree over \({\varvec{x}}\), that is, \(v({\varvec{x}}) = {\varvec{z}}^T{\varvec{P}}{\varvec{z}}\), where \({\varvec{z}}\) is a vector of monomials over \({\varvec{x}}\) and P is a symmetric matrix. Then we can encode this synthesis problem into the \(\exists \forall \)-formula:
In the following sections, we show that we can handle two examples in [29].
Normalized Pendulum. Given a standard pendulum system with normalized parameters
and a quadratic template for a Lyapunov function \(v({\varvec{x}}) = {\varvec{x}}^T{\varvec{P}}{\varvec{x}} = c_1x_1x_2 + c_2x_1^2 + c_3 x_2^2\), we can encode this synthesis problem into the following \(\exists \forall \)-formula:
Our prototype solver takes 44.184 s to synthesize the following function as a solution to the problem for the bound \(\Vert {\varvec{x}}\Vert \in [0.1, 1.0]\) and \(c_{i} \in [0.1, 100]\) using \(\delta = 0.05\):
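The inner \(\forall \)-part of such synthesis problems can be falsified cheaply by sampling before invoking the full \(\delta \)-complete check. The sketch below assumes pendulum-like dynamics \(x_1' = x_2\), \(x_2' = -\sin x_1 - x_2\) and a hypothetical quadratic candidate; both the dynamics and the candidate are assumptions for illustration, not the system parameters or the solution reported above:

```python
import math, random

def lyapunov_violations(v, grad_v, flow, box, n=2000, tol=0.0):
    """Sample-based check of the inner forall-part of the synthesis formula:
    v(x) > 0 and <grad v(x), f(x)> <= 0 away from the origin. Returns the
    sampled violations (a cheap falsifier, not the delta-complete check)."""
    random.seed(1)
    bad = []
    for _ in range(n):
        # sample each coordinate from the annulus |x_i| in [lo, hi]
        x = [random.uniform(lo, hi) * random.choice([-1, 1]) for lo, hi in box]
        lie = sum(g * f for g, f in zip(grad_v(x), flow(x)))  # Lie derivative
        if not (v(x) > tol and lie <= tol):
            bad.append(x)
    return bad

# Assumed pendulum-like dynamics and a hypothetical quadratic candidate.
flow = lambda x: [x[1], -math.sin(x[0]) - x[1]]
v = lambda x: x[0] ** 2 + x[0] * x[1] + x[1] ** 2
grad_v = lambda x: [2 * x[0] + x[1], x[0] + 2 * x[1]]

print(len(lyapunov_violations(v, grad_v, flow, [(0.1, 1.0), (0.1, 1.0)])))
```

A nonzero count would refute the candidate immediately; an empty result only suggests validity, which is what the \(\delta \)-complete \(\exists \forall \) solve then certifies.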
Damped Mathieu System. The Mathieu dynamics are time-varying and defined by the following ODEs:
Using a quadratic template for a Lyapunov function \(v({\varvec{x}}) = {\varvec{x}}^T{\varvec{P}}{\varvec{x}} = c_1x_1x_2 + c_2x_1^2 + c_3 x_2^2\), we can encode this synthesis problem into the following \(\exists \forall \)-formula:
Our prototype solver takes 26.533 s to synthesize the following function as a solution to the problem for the bound \(\Vert {\varvec{x}}\Vert \in [0.1, 1.0]\), \(t \in [0.1, 1.0]\), and \(c_i \in [45, 98]\) using \(\delta = 0.05\):
6 Conclusion
We have described delta-decision procedures for solving exists-forall formulas in the first-order theory over the reals with computable real functions. These formulas can encode a wide range of hard practical problems such as general constrained optimization and nonlinear control synthesis. We use a branch-and-prune framework, and design special pruning operators for universally quantified constraints such that the procedures can be proved to be delta-complete, where suitable control of numerical errors is crucial. We demonstrated the effectiveness of the procedures on various global optimization and Lyapunov function synthesis problems.
Notes
 1.
Note that, without loss of generality, we use only non-strict inequalities here, since in the context of \(\delta \)-decisions the distinction between strict and non-strict inequalities is not important, as explained in Definition 3.
References
Gao, S., Avigad, J., Clarke, E.M.: Delta-decidability over the reals. In: LICS, pp. 305–314 (2012)
Gao, S., Avigad, J., Clarke, E.M.: \(\delta \)-complete decision procedures for satisfiability over the reals. In: Gramlich, B., Miller, D., Sattler, U. (eds.) IJCAR 2012. LNCS (LNAI), vol. 7364, pp. 286–300. Springer, Heidelberg (2012). https://doi.org/10.1007/9783642313653_23
Kong, S., Gao, S., Chen, W., Clarke, E.: dReach: \(\delta \)-reachability analysis for hybrid systems. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 200–205. Springer, Heidelberg (2015). https://doi.org/10.1007/9783662466810_15
Ratschan, S.: Applications of quantified constraint solving over the reals: bibliography. arXiv preprint arXiv:1205.5571 (2012)
SolarLezama, A.: Program Synthesis by Sketching. University of California, Berkeley (2008)
Collins, G.E.: Hauptvortrag: quantifier elimination for real closed fields by cylindrical algebraic decomposition. In: Automata Theory and Formal Languages, pp. 134–183 (1975)
Brown, C.W., Davenport, J.H.: The complexity of quantifier elimination and cylindrical algebraic decomposition. In: ISSAC2007 (2007)
Barrett, C., et al.: CVC4. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 171–177. Springer, Heidelberg (2011). https://doi.org/10.1007/9783642221101_14
de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/9783540788003_24
de Moura, L., Bjørner, N.: Efficient ematching for SMT solvers. In: Pfenning, F. (ed.) CADE 2007. LNCS (LNAI), vol. 4603, pp. 183–198. Springer, Heidelberg (2007). https://doi.org/10.1007/9783540735953_13
Bjørner, N., Phan, A.D., Fleckenstein, L.: \(\nu \)Z  an optimizing SMT solver. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 194–199. Springer, Heidelberg (2015). https://doi.org/10.1007/9783662466810_14
Ge, Y., Barrett, C., Tinelli, C.: Solving quantified verification conditions using satisfiability modulo theories. In: Pfenning, F. (ed.) CADE 2007. LNCS (LNAI), vol. 4603, pp. 167–182. Springer, Heidelberg (2007). https://doi.org/10.1007/9783540735953_12
Reynolds, A., Deters, M., Kuncak, V., Tinelli, C., Barrett, C.: Counterexampleguided quantifier instantiation for synthesis in SMT. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015. LNCS, vol. 9207, pp. 198–216. Springer, Cham (2015). https://doi.org/10.1007/9783319216683_12
Nieuwenhuis, R., Oliveras, A.: On SAT modulo theories and optimization problems. In: Biere, A., Gomes, C.P. (eds.) SAT 2006. LNCS, vol. 4121, pp. 156–169. Springer, Heidelberg (2006). https://doi.org/10.1007/11814948_18
Cimatti, A., Franzén, A., Griggio, A., Sebastiani, R., Stenico, C.: Satisfiability modulo the theory of costs: foundations and applications. In: Esparza, J., Majumdar, R. (eds.) TACAS 2010. LNCS, vol. 6015, pp. 99–113. Springer, Heidelberg (2010). https://doi.org/10.1007/9783642120022_8
Sebastiani, R., Tomasi, S.: Optimization in SMT with \({\cal{LA}}\)(Q) Cost Functions. In: Gramlich, B., Miller, D., Sattler, U. (eds.) IJCAR 2012. LNCS (LNAI), vol. 7364, pp. 484–498. Springer, Heidelberg (2012). https://doi.org/10.1007/9783642313653_38
Dutertre, B.: Solving exists/forall problems with yices. In: Workshop on Satisfiability Modulo Theories (2015)
Nightingale, P.: Consistency for quantified constraint satisfaction problems. In: van Beek, P. (ed.) CP 2005. LNCS, vol. 3709, pp. 792–796. Springer, Heidelberg (2005). https://doi.org/10.1007/11564751_66
Weihrauch, K.: Computable Analysis: An Introduction. Springer, New York (2000). https://doi.org/10.1007/9783642569999
Benhamou, F., Granvilliers, L.: Continuous and interval constraints. In: Rossi, F., van Beek, P., Walsh, T. (eds.) Handbook of Constraint Programming. Elsevier (2006)
Gao, S., Kong, S., Clarke, E.M.: dReal: an SMT solver for nonlinear theories over the reals. In: CADE, pp. 208–214 (2013)
Trombettoni, G., Araya, I., Neveu, B., Chabert, G.: Inner regions and interval linearizations for global optimization. In: Burgard, W., Roth, D. (eds.) Proceedings of the TwentyFifth AAAI Conference on Artificial Intelligence, AAAI 2011, San Francisco, California, USA, 7–11 August 2011. AAAI Press (2011)
LougeeHeimer, R.: The common optimization interface for operations research: promoting opensource software in the operations research community. IBM J. Res. Dev. 47(1), 57–66 (2003)
Johnson, S.G.: The NLopt nonlinearoptimization package (2011)
Kraft, D.: Algorithm 733: TompFortran modules for optimal control calculations. ACM Trans. Math. Softw. 20(3), 262–281 (1994)
Powell, M.: Direct search algorithms for optimization calculations. Acta Numerica 7, 287–336 (1998)
Jamil, M., Yang, X.S.: A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optimisation 4(2), 150–194 (2013)
Wikipedia contributors: Test functions for optimization – Wikipedia. The Free Encyclopedia (2017)
Kapinski, J., Deshmukh, J.V., Sankaranarayanan, S., Arechiga, N.: Simulationguided lyapunov analysis for hybrid dynamical systems. In: HSCC 2014, Berlin, Germany, 15–17 April 2014, pp. 133–142 (2014)
Copyright information
© 2018 The Author(s)
Kong, S., Solar-Lezama, A., Gao, S. (2018). Delta-Decision Procedures for Exists-Forall Problems over the Reals. In: Chockler, H., Weissenbacher, G. (eds.) Computer Aided Verification. CAV 2018. Lecture Notes in Computer Science, vol. 10982. Springer, Cham. https://doi.org/10.1007/978-3-319-96142-2_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-96141-5
Online ISBN: 978-3-319-96142-2