Delta-Decision Procedures for Exists-Forall Problems over the Reals
Abstract
We propose \(\delta \)-complete decision procedures for solving satisfiability of nonlinear SMT problems over real numbers that contain universal quantification and a wide range of nonlinear functions. The methods combine interval constraint propagation, counterexample-guided synthesis, and numerical optimization. In particular, we show how to handle the interleaving of numerical and symbolic computation to ensure delta-completeness in quantified reasoning. We demonstrate that the proposed algorithms can handle various challenging global optimization and control synthesis problems that are beyond the reach of existing solvers.
1 Introduction
Much progress has been made in the framework of delta-decision procedures for solving nonlinear Satisfiability Modulo Theories (SMT) problems over real numbers [1, 2]. Delta-decision procedures allow one-sided bounded numerical errors, which is a practically useful relaxation that significantly reduces the computational complexity of the problems. With such relaxation, SMT problems with hundreds of variables and highly nonlinear constraints (such as differential equations) have been solved in practical applications [3]. Existing work in this direction has focused on satisfiability of quantifier-free SMT problems. Going one level up, SMT problems with both free and universally quantified variables, which correspond to \(\exists \forall \)-formulas over the reals, are much more expressive. For instance, such formulas can encode the search for robust control laws in highly nonlinear dynamical systems, a central problem in robotics. Nonconvex, multi-objective, and disjunctive optimization problems can all be encoded as \(\exists \forall \)-formulas, through the natural definition of "finding some x such that for all other \(x'\), x is better than \(x'\) with respect to certain constraints." Many other examples from various areas are listed in [4].
Counterexample-Guided Inductive Synthesis (CEGIS) [5] is a framework for program synthesis that can be applied to solve generic exists-forall problems. The idea is to break the process of solving \(\exists \forall \)-formulas into a loop between synthesis and verification. The synthesis procedure finds candidate solutions for the existentially quantified variables and passes them to the verifier, which either validates them or falsifies them with counterexamples. The counterexamples are then used as learned constraints for the synthesis procedure to find new solutions. This method has been shown effective for many challenging problems, frequently generating more optimized programs than the best manual implementations [5].
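As an illustration of the synthesis/verification loop just described, here is a minimal finite-domain sketch of CEGIS; the candidate set, universe, and predicate names are our own toy abstractions, not part of [5]:

```python
def cegis(candidates, universe, holds, max_rounds=1000):
    """Generic CEGIS loop over finite domains.

    candidates : finite iterable of candidate values for x
    universe   : finite iterable of values for the universally quantified y
    holds      : predicate holds(x, y) encoding the quantifier-free body
    """
    counterexamples = []
    for _ in range(max_rounds):
        # Synthesis: find a candidate consistent with all learned counterexamples.
        survivors = [x for x in candidates
                     if all(holds(x, y) for y in counterexamples)]
        if not survivors:
            return None          # no candidate survives: formula unsatisfiable
        x = survivors[0]
        # Verification: search for a y that falsifies the candidate.
        bad = next((y for y in universe if not holds(x, y)), None)
        if bad is None:
            return x             # verified: holds(x, y) for every y
        counterexamples.append(bad)  # learn the counterexample and iterate
    raise RuntimeError("no convergence within round limit")
```

For example, `cegis(range(6), range(6), lambda x, y: x >= y)` learns counterexamples until it reaches the only verifiable candidate, `x = 5`. In finite domains this enumeration makes the loop complete, which is exactly the property that fails over the reals.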
A direct application of CEGIS to decision problems over real numbers, however, suffers from several problems. CEGIS is complete in finite domains because it can explicitly enumerate solutions, which cannot be done in continuous domains. Also, CEGIS ensures progress by avoiding duplication of solutions, but due to numerical sensitivity, such precise control over real numbers is difficult. In this paper we propose methods that bypass these difficulties.
We propose an integration of the CEGIS method into the branch-and-prune framework as a generic algorithm for solving nonlinear \(\exists \forall \)-formulas over real numbers and prove that the algorithm is \(\delta \)-complete. We achieve this goal by using CEGIS-based methods for turning universally quantified constraints into pruning operators, which are then used in the branch-and-prune framework to search for solutions over the existentially quantified variables. In doing so, we take special care to ensure correct handling of numerical errors in the computation, so that \(\delta \)-completeness can be established for the whole procedure.
The paper is organized as follows. We first review the background and then present the details of the main algorithm in Sect. 3. We then give a rigorous proof of the \(\delta \)-completeness of the procedure in Sect. 4. We demonstrate the effectiveness of the procedures on various global optimization and Lyapunov function synthesis problems in Sect. 5.
Related Work. Quantified formulas in real arithmetic can be solved using symbolic quantifier elimination (via cylindrical algebraic decomposition [6]), which is known to have impractically high complexity (doubly exponential [7]) and cannot handle problems with transcendental functions. State-of-the-art SMT solvers such as CVC4 [8] and Z3 [9] provide quantifier support [10, 11, 12, 13], but they are limited to decidable fragments of first-order logic. Optimization Modulo Theories (OMT) is a new field that targets a restricted form of quantified reasoning, focusing on linear formulas [14, 15, 16]. Generic approaches to solving exists-forall problems, such as [17], are generally based on the CEGIS framework and are not intended to achieve completeness. Solving quantified constraints has also been explored in the constraint solving community [18]. In general, existing work has not proposed algorithms that aim at any notion of completeness for quantified problems in nonlinear theories over the reals.
2 Preliminaries
2.1 Delta-Decisions and \(\text {CNF}^{\forall }\)-Formulas
We consider first-order formulas over real numbers that can contain arbitrary nonlinear functions that can be numerically approximated, such as polynomials and exponential and trigonometric functions. Theoretically, such functions are called Type 2 computable functions [19]. We write this language as \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\), formally defined as:
Definition 1
Remark 1
Negations are not needed as part of the base syntax, since they can be defined through arithmetic: \(\lnot (t>0)\) is simply \(-t\ge 0\). Similarly, an equality \(t=0\) is just \(t\ge 0\wedge -t\ge 0\). In this way we can put formulas in normal forms that are easy to manipulate.
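For illustration, the rewriting in this remark can be sketched as a small normalization function; the tuple encoding of atoms here is an assumption made for the example only, not part of the paper's formalism:

```python
def normalize(atom):
    """Rewrite an atom into a conjunction of 't >= 0' atoms, following
    the remark above.  An atom is (op, t), with t a function mapping an
    assignment to a real number."""
    op, t = atom
    neg = lambda f: (lambda v: -f(v))
    if op == "ge":                   # t >= 0 is already in normal form
        return [("ge", t)]
    if op == "not_gt":               # not(t > 0)  is  -t >= 0
        return [("ge", neg(t))]
    if op == "eq":                   # t = 0  is  t >= 0  and  -t >= 0
        return [("ge", t), ("ge", neg(t))]
    raise ValueError(f"unhandled op: {op}")
```

For instance, `normalize(("eq", lambda v: v - 2.0))` yields two `ge` atoms whose terms are `v - 2` and `2 - v`, both of which vanish exactly at the solution `v = 2`.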
We will focus on \(\exists \forall \)-formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\) in this paper. Decision problems for such formulas are equivalent to the satisfiability of SMT problems with universally quantified variables, whose free variables are implicitly existentially quantified.
It is clear that, when the quantifier-free part of an \(\exists \forall \)-formula is in Conjunctive Normal Form (CNF), we can always push the universal quantifiers inside each conjunct, since universal quantification commutes with conjunction. Thus the decision problem for any \(\exists \forall \)-formula is equivalent to the satisfiability of formulas in the following normal form:
Definition 2
The algorithms described in this paper will assume that an input formula is in \(\text {CNF}^{\forall }\) form. We can now define the \(\delta \)-satisfiability problem for \(\text {CNF}^{\forall }\)-formulas.
Definition 3
Since formulas in the normal form no longer contain negations, the relaxation on the atomic formulas is implied by the original formula (and is thus weaker), as shown in [1].
Proposition 1
For any \(\varphi \) and \(\delta \in \mathbb {Q}^+\), \(\varphi ^{\delta }\) is logically weaker, in the sense that \(\varphi \rightarrow \varphi ^{\delta }\) is always true, but not vice versa.
Example 1
Definition 4

unsat: \(\varphi \) is unsatisfiable.

\(\delta \)-sat: \(\varphi ^{\delta }\) is satisfiable.
When the two cases overlap, it can return either answer.
2.2 The Branch-and-Prune Framework
The procedure combines pruning and branching operations. Let \(\mathcal {B}\) be the set of all boxes (each variable assigned to an interval), and \(\mathcal {C}\) a set of constraints in the language. FixedPoint(g, B) is a procedure computing a fixed point of a function \(g : \mathcal {B} \rightarrow \mathcal {B}\) with an initial input B. A pruning operation \(\mathsf {Prune}: \mathcal {B} \times \mathcal {C} \rightarrow \mathcal {B}\) takes a box \(B\in \mathcal {B}\) and a constraint as input, and returns an ideally smaller box \(B'\in \mathcal {B}\) (Line 5) that is guaranteed to retain all solutions of all constraints, if any exist. When such pruning operations do not make progress, the Branch procedure picks a variable, splits its interval in half, and creates two subproblems \(B_1\) and \(B_2\) (Line 8). The procedure terminates if either all boxes have been pruned to be empty (Line 15), or a small box whose maximum width is smaller than a given threshold \(\delta \) has been found (Line 11). In [2], it has been proved that Algorithm 1 is \(\delta \)-complete iff the pruning operators satisfy certain conditions for being well-defined (Definition 5).
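The loop just described can be sketched in a few lines; this is only a rough illustration of the framework under our own simplifications (one constraint, interval-evaluation pruning), not the actual dReal implementation:

```python
def branch_and_prune(box, prune, delta):
    """Sketch of a branch-and-prune loop.  A box is a list of (lo, hi)
    intervals, one per variable; `prune` returns a contracted sub-box or
    None, and must never drop solutions.  Returns a box of width < delta
    (delta-sat) or None (unsat)."""
    stack = [box]
    while stack:
        b = prune(stack.pop())
        if b is None:
            continue                       # box pruned to the empty set
        width, i = max((hi - lo, i) for i, (lo, hi) in enumerate(b))
        if width < delta:
            return b                       # small box found: delta-sat
        lo, hi = b[i]
        mid = (lo + hi) / 2                # branch: split the widest interval
        stack.append(b[:i] + [(lo, mid)] + b[i + 1:])
        stack.append(b[:i] + [(mid, hi)] + b[i + 1:])
    return None                            # all boxes pruned away: unsat

def prune_sq2(b):
    """Toy pruning operator for the constraint x**2 - 2 = 0 with x >= 0:
    discard the box if interval evaluation of x**2 - 2 excludes zero."""
    (lo, hi), = b
    if lo * lo - 2 > 0 or hi * hi - 2 < 0:
        return None
    return b
```

Running `branch_and_prune([(0.0, 2.0)], prune_sq2, 1e-6)` narrows the box down to a tiny interval around \(\sqrt{2}\), since pruning never discards a box containing the solution and branching keeps shrinking the surviving boxes.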
3 Algorithm
The core idea of our algorithm for solving \(\text {CNF}^{\forall }\)-formulas is as follows. We view the universally quantified constraints as a special type of pruning operators, which can be used to reduce possible values for the free variables based on their consistency with the universally quantified variables. We then use these special \(\forall \)-pruning operators in an overall branch-and-prune framework to solve the full formula in a \(\delta \)-complete way. A special technical difficulty in ensuring \(\delta \)-completeness is to control numerical errors in the recursive search for counterexamples, which we solve using double-sided error control. We also improve the quality of counterexamples using local-optimization algorithms in the \(\forall \)-pruning operations, which we call locally-optimized counterexamples.
In the following sections we describe these steps in detail. For notational simplicity we will omit vector symbols and assume all variable names can directly refer to vectors of variables.
3.1 \(\forall \)-Clauses as Pruning Operators
In Algorithm 2, the basic idea is to use special y values that witness the negation of the original constraint to prune the box assignment on x. The two core steps are as follows.
 1.
Counterexample generation (Lines 4 to 9). The query for a counterexample, \(\psi \), is defined as the negation of the quantifier-free part of the constraint (Line 4). The method \(\mathsf {Solve}(y, \psi , \delta )\) obtains a solution for the variables \(y\) that \(\delta \)-satisfies the logic formula \(\psi \). When such a solution is found, we have a counterexample that falsifies the \(\forall \)-clause for some choice of x. We then use this counterexample to prune the domain of x, which is currently \(B_x\). The strengthening operation on \(\psi \) (Line 5), as well as the choices of \(\varepsilon \) and \(\delta '\), will be explained in the next subsection.
 2.
Pruning on x (Lines 10 to 13). In the counterexample generation step, we have obtained a counterexample b. The pruning operation then uses this value to prune the current box domain \(B_x\). Here we need to be careful about the logical operations. For each constraint, we take the intersection of the results of pruning with the counterexample point (Line 11). Then, since the original clause is the disjunction of all constraints, we take the box-hull (\(\bigsqcup \)) of the pruned results (Line 13).
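The two steps above can be sketched as follows; `find_ce` stands in for the \(\delta \)-SMT counterexample search and `prune_atom` for the ordinary pruning operator on a single disjunct, and both names (as well as the one-disjunct toy clause below) are illustrative assumptions, not the paper's actual procedures:

```python
def forall_prune(box_x, n_disjuncts, find_ce, prune_atom):
    """Sketch of the pruning operator for a clause  forall y. OR_i f_i(x, y) >= 0.

    find_ce(box)          -> a counterexample b, or None if none exists
    prune_atom(b, i, box) -> box pruned against f_i(x, b) >= 0, or None
    """
    b = find_ce(box_x)                    # step 1: counterexample generation
    if b is None:
        return box_x                      # nothing to prune
    results = [prune_atom(b, i, box_x) for i in range(n_disjuncts)]
    results = [r for r in results if r is not None]
    if not results:
        return None                       # every disjunct refuted by b
    # step 2: box-hull of the per-disjunct pruning results (the disjunction)
    return [(min(r[j][0] for r in results), max(r[j][1] for r in results))
            for j in range(len(box_x))]

# Toy clause  forall y in [0, 1]. x - y >= 0  (a single disjunct):
# the strongest counterexample is y = 1, and pruning against
# x - 1 >= 0 forces x >= 1.
def find_ce(box):
    lo, hi = box[0]
    return 1.0 if lo < 1.0 else None      # some x in the box violates x >= 1

def prune_atom(b, i, box):
    lo, hi = box[0]
    lo = max(lo, b)                       # enforce x - b >= 0, i.e. x >= b
    return [(lo, hi)] if lo <= hi else None
```

On the box \([-1, 1]\) this contracts x to the single point \(x = 1\), while a box already inside \([1, \infty)\) is returned unchanged because no counterexample exists.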
We can now put the pruning operators defined for all \(\forall \)-clauses into the overall branch-and-prune framework shown in Algorithm 1.
The pruning algorithms are inspired by the CEGIS loop but differ in multiple ways. First, we never explicitly compute any candidate solution for the free variables; instead, we only prune their domain boxes. This ensures that the size of the domain boxes decreases (together with branching operations) and that the algorithm terminates. Second, we do not explicitly maintain a collection of constraints: each pruning operation works on the previous box, i.e., learning is done at the model level instead of the constraint level. On the other hand, being unable to maintain arbitrary Boolean combinations of constraints requires us to be more sensitive to the type of Boolean operations needed in the pruning results, which differs from the CEGIS approach of treating solvers as black boxes.
3.2 Double-Sided Error Control
 \(\delta '\)-sat case: We have a and b such that \(\bigwedge _{i=0}^k f_i(a,b) \le -\varepsilon + \delta '\). For \(y = b\) to be a valid counterexample, we need \(-\varepsilon + \delta ' < 0\). That is, we have
$$\begin{aligned} \delta ' < \varepsilon . \end{aligned}$$
(2)
In other words, the strengthening factor \(\varepsilon \) should be greater than the weakening parameter \(\delta '\) used in the counterexample search step.
 unsat case: By checking the absence of counterexamples, it is proved that \(\forall y \bigvee _{i=0}^k f_i(x, y) \ge -\varepsilon \) for all \(x \in B_{x}\). Recall that we want to show that \(\forall y \bigvee _{i=0}^k f_i(x, y) \ge -\delta \) holds for some \(x = a\) when Algorithm 1 uses this pruning algorithm and returns \(\delta \)-sat. To ensure this property, we need the following constraint on \(\varepsilon \) and \(\delta \):
$$\begin{aligned} \varepsilon < \delta . \end{aligned}$$
(3)
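The two inequalities combine into the ordering \(\delta ' < \varepsilon < \delta \). A helper that picks admissible parameters might look as follows; the 0.99/0.98 ratios mirror the experimental settings reported later, but any values respecting the ordering would do:

```python
def pick_parameters(delta):
    """Choose the strengthening factor eps and the counterexample-search
    precision delta_prime so that  delta_prime < eps < delta,
    satisfying inequalities (2) and (3)."""
    eps = 0.99 * delta          # strengthening factor: eps < delta   (Eq. 3)
    delta_prime = 0.98 * delta  # weakening parameter: delta' < eps   (Eq. 2)
    assert delta_prime < eps < delta
    return eps, delta_prime
```

With `delta = 1e-4` this yields `eps = 9.9e-5` and `delta_prime = 9.8e-5`, so a \(\delta '\)-sat answer to the strengthened query still certifies a true counterexample, and an unsat answer still certifies the \(\delta \)-relaxed clause.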
3.3 Locally-Optimized Counterexamples
Figure 1a illustrates this point by visualizing a pruning process for the unconstrained minimization problem \(\exists x \in X_0 \forall y \in X_0\, f(x) \le f(y)\). As the algorithm finds a series of counterexamples \(\mathrm {CE}_1\), \(\mathrm {CE}_2\), \(\mathrm {CE}_3\), and \(\mathrm {CE}_4\), it uses those counterexamples to contract the interval assignment on X from \(X_0\) to \(X_1\), \(X_2\), \(X_3\), and \(X_4\) in sequence. In the search for a counterexample (Line 6 of Algorithm 2), it solves the strengthened query \(f(x) > f(y) + \delta \). Note that the query only requires a counterexample \(y = b\) to be \(\delta \)-away from a candidate x, while it is clear that the further a counterexample is from the candidates, the more effective the pruning algorithm is.
Based on this observation, we present a way to improve the performance of the pruning algorithm for \(\text {CNF}^{\forall }\)-formulas. After we obtain a counterexample b, we locally optimize it with respect to the counterexample query \(\psi \) so that it "further violates" the constraints. Figure 1b illustrates this idea. The algorithm first finds a counterexample \(\mathrm {CE}_1\), then refines it to \(\mathrm {CE}'_1\) by using a local-optimization algorithm (similarly, \(\mathrm {CE}_2 \rightarrow \mathrm {CE}'_2\)). Clearly, this refined counterexample gives stronger pruning power than the original one. The refinement process can also improve the performance of the algorithm by reducing the total number of iterations in the fixed-point loop.
The suggested method is based on the assumption that local optimization is cheaper than finding a global counterexample using interval propagation techniques. In our experiments, we observed that this assumption holds in practice. We report the details in Sect. 5.
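A minimal sketch of this refinement step, substituting plain gradient ascent for the SLSQP/COBYLA calls used in the actual implementation (the function names, step size, and toy instance are our own assumptions):

```python
def refine_counterexample(violation, y0, lr=0.1, steps=200, h=1e-6):
    """Locally optimize a counterexample y0: increase violation(y),
    i.e. make y violate the original constraint 'further', using
    gradient ascent with a central finite-difference gradient."""
    y = y0
    for _ in range(steps):
        grad = (violation(y + h) - violation(y - h)) / (2 * h)
        y += lr * grad
    return y

# Toy instance: candidate x = 0 for  forall y. f(x) <= f(y)  with
# f(y) = (y - 3)**2.  Any y with f(y) < f(0) is a counterexample;
# local optimization drives it toward the strongest violation at y = 3.
f = lambda y: (y - 3.0) ** 2
ce = refine_counterexample(lambda y: f(0.0) - f(y), y0=1.0)
```

Here the initial counterexample `y0 = 1.0` already falsifies the candidate, but the refined `ce` is close to the violation maximizer \(y = 3\), so pruning against it contracts the x-box much more aggressively.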
4 \(\delta \)-Completeness
We now prove that the proposed algorithm is \(\delta \)-complete for arbitrary \(\text {CNF}^{\forall }\)-formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\). In [2], \(\delta \)-completeness was proved for branch-and-prune on ground SMT problems, under the assumption that the pruning operators are well-defined. Thus, the key to our proof is to show that the \(\forall \)-pruning operators satisfy the conditions of well-definedness.
The notion of a well-defined pruning operator is defined in [2] as follows.
Definition 5
 1.
\(\mathsf {Prune}(B,\phi )\subseteq B\).
 2.
\(B\cap \{a\in \mathbb {R}^n: \phi (a) \text{ is } \text{ true. }\} \subseteq \mathsf {Prune}(B,\phi )\).
 3.
Write \(\mathsf {Prune}(B,\phi ) = B'\). There exists a constant \(c \in \mathbb {Q}^+\) such that, if \(B' \ne \emptyset \) and \(\Vert B'\Vert < \varepsilon \) for some \(\varepsilon \in \mathbb {Q}^+\), then for all \(a\in B'\), \(\phi ^{c\varepsilon }(a)\) is true.
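As a concrete toy instance of these three conditions, consider the ordinary pruning operator for the single atom \(x \ge 0\) over one-dimensional interval boxes; this example is ours, not from [2]:

```python
def prune_ge0(box):
    """Pruning operator for the atom x >= 0 on a box (lo, hi):
    intersect the box with [0, +inf)."""
    lo, hi = box
    if hi < 0:
        return None                  # no point of the box satisfies x >= 0
    return (max(lo, 0.0), hi)

# Condition 1 (reductive): the result is always a sub-box of the input.
# Condition 2 (no lost solutions): box /\ {x >= 0} survives intact.
# Condition 3 (faithfulness): every surviving point a satisfies a >= 0,
# hence trivially the relaxation a >= -c * eps for any c > 0.
```

For instance, `prune_ge0((-1.0, 3.0))` returns `(0.0, 3.0)`: a subset of the input that keeps every nonnegative point, and whose points all satisfy the (relaxed) atom.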
We will explain the intuition behind these requirements in the next proof, which aims to establish that Algorithm 2 defines a well-defined pruning operator.
Lemma 1
Proof
We prove that the pruning operator defined by Algorithm 2 satisfies the three conditions in Definition 5. Let \(B_0,...,B_k\) be a sequence of boxes, where \(B_0\) is the input box \(B_x\) and \(B_k\) is the returned box B, which is possibly empty.
The first condition requires that the pruning operation for c(x) be reductive. That is, we want to show that \(B_x \subseteq B_x^{\mathrm {prev}}\) holds in Algorithm 2. If no counterexample is found (Line 8), we have \(B_{x} = B_x^{\mathrm {prev}}\), so the condition holds trivially. Consider the case where a counterexample b is found. The pruned box \(B_{x}\) is obtained through the box-hull of all the \(B_i\) boxes (Line 13), which are the results of pruning \(B_x^{\mathrm {prev}}\) using ordinary constraints of the form \(f_i(x,b)\ge 0\) (Line 11) for the counterexample b. Following the assumption that the pruning operators are well-defined for each ordinary constraint \(f_i\) used in the algorithm, we know that \(B_i \subseteq B_x^{\mathrm {prev}}\) holds as a loop invariant for the loop from Line 10 to Line 12. Thus, taking the box-hull of all the \(B_i\), we obtain a \(B_{x}\) that is still a subset of \(B_x^{\mathrm {prev}}\).
The second condition requires that the pruning operation not eliminate real solutions. Again by assumption, the pruning operation on Line 11 does not lose any valid assignment on x that makes the \(\forall \)-clause true. In fact, since y is universally quantified, any choice of assignment \(y=b\) preserves the solutions on x as long as the ordinary pruning operator is well-defined. Thus, this condition is easily satisfied.
The third condition is the most nontrivial to establish. It ensures that when the pruning operator does not prune a box to the empty set, the box should not be "way off", and in fact should contain points that satisfy an appropriate relaxation of the constraint. We can say this is a notion of "faithfulness" of the pruning operator. For constraints defined by simple continuous functions, this can typically be guaranteed by the modulus of continuity of the function (Lipschitz constants as a special case). Now, in the case of \(\forall \)-clause pruning, we need to prove that the faithfulness of the ordinary pruning operators used translates to the faithfulness of the \(\forall \)-clause pruning results. First of all, this condition would not hold if we did not have the strengthening operation when searching for counterexamples (Line 5). As shown in Example 1, because of the weakening that \(\delta \)-decisions introduce in the counterexample search, we may obtain a spurious counterexample that has no pruning power. In other words, if we keep using a wrong counterexample that already satisfies the condition, then we are not able to rule out wrong assignments on x. Now, since we have introduced \(\varepsilon \)-strengthening in the counterexample search, we know that the b obtained on Line 6 is a true counterexample. Thus, for some \(x=a\), \(f_i(a,b)<0\) for every i. By assumption, the ordinary pruning operation using b on Line 11 guarantees faithfulness. That is, suppose the pruned result \(B_i\) is not empty and \(\Vert B_i\Vert \le \varepsilon \); then there exists a constant \(c_i\) such that \(f_i(x,b)\ge -c_i \varepsilon \) is true. Thus, we can take \(c = \max _i c_i\) as the constant for the pruning operator defined by the full clause, and conclude that the disjunction \(\bigvee _{i=0}^k f_i(x,y)\ge -c\varepsilon \) holds when \(\Vert B_x\Vert \le \varepsilon \).
Using the lemma, we follow the results in [2] and conclude that the branch-and-prune method in Algorithm 1 is delta-complete:
Theorem 1
(\(\delta \)-Completeness). For any \(\delta \in \mathbb {Q}^+\), using the proposed \(\forall \)-pruning operators defined in Algorithm 2 in the branch-and-prune framework described in Algorithm 1 is \(\delta \)-complete for the class of \(\text {CNF}^{\forall }\)-formulas in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\), assuming that the pruning operators for all the base functions are well-defined.
Proof
Following Theorem 4.2 (\(\delta \)-completeness of \(\mathrm {ICP}_{\varepsilon }\)) in [2], a branch-and-prune algorithm is \(\delta \)-complete iff the pruning operators in the algorithm are all well-defined. By Lemma 1, Algorithm 2 always defines well-defined pruning operators, assuming the pruning operators for the base functions are well-defined. Consequently, Algorithms 1 and 2 together define a delta-complete decision procedure for \(\text {CNF}^{\forall }\) problems in \(\mathcal {L}_{\mathbb {R}_{\mathcal {F}}}\).
5 Evaluation
Implementation. We implemented the algorithms on top of dReal [21], an open-source delta-SMT framework. We used IBEX [22] for interval constraint pruning and CLP [23] for linear programming. For local optimization, we used NLopt [24]. In particular, we used the SLSQP (Sequential Least-Squares Quadratic Programming) local-optimization algorithm [25] for differentiable constraints and the COBYLA (Constrained Optimization BY Linear Approximations) local-optimization algorithm [26] for non-differentiable constraints. The prototype solver is able to handle \(\exists \forall \)-formulas that involve most standard elementary functions, including powers, \(\exp \), \(\log \), \(\sqrt{\cdot }\), trigonometric functions (\(\sin \), \(\cos \), \(\tan \)), inverse trigonometric functions (\(\arcsin \), \(\arccos \), \(\arctan \)), hyperbolic functions (\(\sinh \), \(\cosh \), \(\tanh \)), etc.
Experimental results for nonlinear global optimization problems: the first 19 problems (Ackley 2D – Zettl) are unconstrained optimization problems and the last five problems (Rosenbrock Cubic – Simionescu) are constrained optimization problems. We ran our prototype solver over these instances with and without the local-optimization option ("LOpt." and "No LOpt." columns) and compared the results. We chose \(\delta = 0.0001\) for all instances.
| Name | Solution: Global | Solution: No LOpt. | Solution: LOpt. | Time (s): No LOpt. | Time (s): LOpt. | Speedup |
|---|---|---|---|---|---|---|
| Ackley 2D | 0.00000 | 0.00000 | 0.00000 | 0.0579 | 0.0047 | 12.32 |
| Ackley 4D | 0.00000 | 0.00005 | 0.00000 | 8.2256 | 0.1930 | 42.62 |
| Aluffi Pentini | −0.35230 | −0.35231 | −0.35239 | 0.0321 | 0.1868 | 0.17 |
| Beale | 0.00000 | 0.00003 | 0.00000 | 0.0317 | 0.0615 | 0.52 |
| Bohachevsky1 | 0.00000 | 0.00006 | 0.00000 | 0.0094 | 0.0020 | 4.70 |
| Booth | 0.00000 | 0.00006 | 0.00000 | 0.5035 | 0.0020 | 251.75 |
| Brent | 0.00000 | 0.00006 | 0.00000 | 0.0095 | 0.0017 | 5.59 |
| Bukin6 | 0.00000 | 0.00003 | 0.00003 | 0.0093 | 0.0083 | 1.12 |
| Cross in tray | −2.06261 | −2.06254 | −2.06260 | 0.5669 | 0.1623 | 3.49 |
| Easom | −1.00000 | −1.00000 | −1.00000 | 0.0061 | 0.0030 | 2.03 |
| EggHolder | −959.64070 | −959.64030 | −959.64031 | 0.0446 | 0.0211 | 2.11 |
| Holder Table 2 | −19.20850 | −19.20846 | −19.20845 | 52.9152 | 41.7004 | 1.27 |
| Levi N13 | 0.00000 | 0.00000 | 0.00000 | 0.1383 | 0.0034 | 40.68 |
| Ripple 1 | −2.20000 | −2.20000 | −2.20000 | 0.0059 | 0.0065 | 0.91 |
| Schaffer F6 | 0.00000 | 0.00004 | 0.00000 | 0.0531 | 0.0056 | 9.48 |
| Testtube holder | −10.87230 | −10.87227 | −10.87230 | 0.0636 | 0.0035 | 18.17 |
| Trefethen | −3.30687 | −3.30681 | −3.30685 | 3.0689 | 1.4916 | 2.06 |
| W Wavy | 0.00000 | 0.00000 | 0.00000 | 0.1234 | 0.0138 | 8.94 |
| Zettl | −0.00379 | −0.00375 | −0.00379 | 0.0070 | 0.0069 | 1.01 |
| Rosenbrock Cubic | 0.00000 | 0.00005 | 0.00002 | 0.0045 | 0.0036 | 1.25 |
| Rosenbrock Disk | 0.00000 | 0.00002 | 0.00000 | 0.0036 | 0.0028 | 1.29 |
| Mishra Bird | −106.76454 | −106.76449 | −106.76451 | 1.8496 | 0.9122 | 2.03 |
| Townsend | −2.02399 | −2.02385 | −2.02390 | 2.6216 | 0.5817 | 4.51 |
| Simionescu | −0.07262 | −0.07199 | −0.07200 | 0.0064 | 0.0048 | 1.33 |
Parameters. In the experiments, we chose the strengthening parameter \(\epsilon = 0.99\delta \) and the weakening parameter in the counterexample search \(\delta ' = 0.98\delta \). In each call to NLopt, we used 1e-6 for both the absolute and relative tolerances on the function value, 1e-3 seconds for the timeout, and 100 for the maximum number of evaluations. These values are used as stopping criteria in NLopt.
5.1 Nonlinear Global Optimization
Table 1 provides a summary of the experimental results. First, it shows that we can find minimum values close to the known global solutions. Second, it shows that enabling the local-optimization technique speeds up the solving process significantly for 20 of the 23 instances.
5.2 Synthesizing Lyapunov Functions for Dynamical Systems
6 Conclusion
We have described delta-decision procedures for solving exists-forall formulas in the first-order theory over the reals with computable real functions. These formulas can encode a wide range of hard practical problems, such as general constrained optimization and nonlinear control synthesis. We use a branch-and-prune framework and design special pruning operators for universally quantified constraints such that the procedures can be proved to be delta-complete, where suitable control of numerical errors is crucial. We demonstrated the effectiveness of the procedures on various global optimization and Lyapunov function synthesis problems.
References
 1. Gao, S., Avigad, J., Clarke, E.M.: Delta-decidability over the reals. In: LICS, pp. 305–314 (2012)
 2. Gao, S., Avigad, J., Clarke, E.M.: \(\delta \)-complete decision procedures for satisfiability over the reals. In: Gramlich, B., Miller, D., Sattler, U. (eds.) IJCAR 2012. LNCS (LNAI), vol. 7364, pp. 286–300. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-31365-3_23
 3. Kong, S., Gao, S., Chen, W., Clarke, E.: dReach: \(\delta \)-reachability analysis for hybrid systems. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 200–205. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46681-0_15
 4. Ratschan, S.: Applications of quantified constraint solving over the reals - bibliography. arXiv preprint arXiv:1205.5571 (2012)
 5. Solar-Lezama, A.: Program Synthesis by Sketching. University of California, Berkeley (2008)
 6. Collins, G.E.: Hauptvortrag: quantifier elimination for real closed fields by cylindrical algebraic decomposition. In: Automata Theory and Formal Languages, pp. 134–183 (1975)
 7. Brown, C.W., Davenport, J.H.: The complexity of quantifier elimination and cylindrical algebraic decomposition. In: ISSAC 2007 (2007)
 8. Barrett, C., et al.: CVC4. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 171–177. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_14
 9. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
 10. de Moura, L., Bjørner, N.: Efficient E-matching for SMT solvers. In: Pfenning, F. (ed.) CADE 2007. LNCS (LNAI), vol. 4603, pp. 183–198. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73595-3_13
 11. Bjørner, N., Phan, A.D., Fleckenstein, L.: \(\nu \)Z - an optimizing SMT solver. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 194–199. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46681-0_14
 12. Ge, Y., Barrett, C., Tinelli, C.: Solving quantified verification conditions using satisfiability modulo theories. In: Pfenning, F. (ed.) CADE 2007. LNCS (LNAI), vol. 4603, pp. 167–182. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73595-3_12
 13. Reynolds, A., Deters, M., Kuncak, V., Tinelli, C., Barrett, C.: Counterexample-guided quantifier instantiation for synthesis in SMT. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015. LNCS, vol. 9207, pp. 198–216. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21668-3_12
 14. Nieuwenhuis, R., Oliveras, A.: On SAT modulo theories and optimization problems. In: Biere, A., Gomes, C.P. (eds.) SAT 2006. LNCS, vol. 4121, pp. 156–169. Springer, Heidelberg (2006). https://doi.org/10.1007/11814948_18
 15. Cimatti, A., Franzén, A., Griggio, A., Sebastiani, R., Stenico, C.: Satisfiability modulo the theory of costs: foundations and applications. In: Esparza, J., Majumdar, R. (eds.) TACAS 2010. LNCS, vol. 6015, pp. 99–113. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12002-2_8
 16. Sebastiani, R., Tomasi, S.: Optimization in SMT with \({\cal{LA}}\)(Q) cost functions. In: Gramlich, B., Miller, D., Sattler, U. (eds.) IJCAR 2012. LNCS (LNAI), vol. 7364, pp. 484–498. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-31365-3_38
 17. Dutertre, B.: Solving exists/forall problems with Yices. In: Workshop on Satisfiability Modulo Theories (2015)
 18. Nightingale, P.: Consistency for quantified constraint satisfaction problems. In: van Beek, P. (ed.) CP 2005. LNCS, vol. 3709, pp. 792–796. Springer, Heidelberg (2005). https://doi.org/10.1007/11564751_66
 19. Weihrauch, K.: Computable Analysis: An Introduction. Springer, New York (2000). https://doi.org/10.1007/978-3-642-56999-9
 20. Benhamou, F., Granvilliers, L.: Continuous and interval constraints. In: Rossi, F., van Beek, P., Walsh, T. (eds.) Handbook of Constraint Programming. Elsevier (2006)
 21. Gao, S., Kong, S., Clarke, E.M.: dReal: an SMT solver for nonlinear theories over the reals. In: CADE, pp. 208–214 (2013)
 22. Trombettoni, G., Araya, I., Neveu, B., Chabert, G.: Inner regions and interval linearizations for global optimization. In: Burgard, W., Roth, D. (eds.) AAAI 2011, San Francisco, California, USA, 7–11 August 2011. AAAI Press (2011)
 23. Lougee-Heimer, R.: The common optimization interface for operations research: promoting open-source software in the operations research community. IBM J. Res. Dev. 47(1), 57–66 (2003)
 24. Johnson, S.G.: The NLopt nonlinear-optimization package (2011)
 25. Kraft, D.: Algorithm 733: TOMP - Fortran modules for optimal control calculations. ACM Trans. Math. Softw. 20(3), 262–281 (1994)
 26. Powell, M.: Direct search algorithms for optimization calculations. Acta Numerica 7, 287–336 (1998)
 27. Jamil, M., Yang, X.S.: A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optimisation 4(2), 150–194 (2013)
 28. Wikipedia contributors: Test functions for optimization. Wikipedia, The Free Encyclopedia (2017)
 29. Kapinski, J., Deshmukh, J.V., Sankaranarayanan, S., Arechiga, N.: Simulation-guided Lyapunov analysis for hybrid dynamical systems. In: HSCC 2014, Berlin, Germany, 15–17 April 2014, pp. 133–142 (2014)
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.