
XSat: A Fast Floating-Point Satisfiability Solver

  • Zhoulai Fu
  • Zhendong Su
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9780)

Abstract

The Satisfiability Modulo Theory (SMT) problem over floating-point arithmetic is a major hurdle in applying SMT techniques to real-world floating-point code. Solving floating-point constraints is challenging in part because floating-point semantics is difficult to specify or abstract. State-of-the-art SMT solvers still often run into difficulties when solving complex, non-linear floating-point constraints.

This paper proposes a new approach to SMT solving that does not need to directly reason about the floating-point semantics. Our insight is to establish the equivalence between floating-point satisfiability and a class of mathematical optimization (MO) problems known as unconstrained MO. Our approach (1) systematically reduces floating-point satisfiability to MO, and (2) solves the latter via the Monte Carlo Markov Chain (MCMC) method.

We have compared our implementation, XSat, with MathSat, Z3 and Coral, state-of-the-art solvers that support floating-point arithmetic. Evaluated on 34 representative benchmarks from the SMT-Competition 2015, XSat significantly outperforms these solvers. In particular, it produces satisfiability results that are 100 % consistent with those of MathSat and Z3, with an average speedup of more than 700X over both, while Coral produces inconsistent results on 16 of the benchmarks.

Keywords

Monte Carlo Markov Chain · Minimum Point · Mathematical Optimization · Symbolic Execution · Satisfiability Modulo Theory

1 Introduction

Floating-point constraint solving has received much recent attention to support the testing and verification of programs that involve floating-point computation. Existing decision procedures, or Satisfiability Modulo Theory (SMT) solvers, are usually based on the DPLL(T) framework [19, 33], which combines a Boolean satisfiability (SAT) solver for the propositional structure of constraints with a specialized theory solver. These decision procedures can cope with logical constraints over many theories, but because their floating-point theory components ultimately reduce to bit-level (SAT) reasoning, they can run into difficulties when dealing with complex, non-linear floating-point constraints.

This work proposes a new approach for solving floating-point satisfiability. Our approach does not need to directly reason about floating-point semantics. Instead, it transforms a floating-point constraint into a floating-point function that represents the models of the constraint as its minimum points. This “representing function” is similar to the fitness functions used in search-based testing in the sense that both reduce a search problem to a function minimization problem [30]. However, unlike search-based testing, which uses fitness functions as heuristics, our approach uses the representing function as an essential element in developing precise, systematic methods for solving floating-point satisfiability.

Representing Function. Let \(\pi \) be a floating-point constraint, and \(\mathsf {dom}(\pi ) \) be the value domain of its variables. Our insight is to derive from \(\pi \) a floating-point program \(\mathtt {R}\) that represents how far a value \(x\in \mathsf {dom}(\pi ) \) is from being a model of \(\pi \). As illustrated in Fig. 1, we can imagine \(\mathtt {R}\) as a distance from \(x\in \mathsf {dom}(\pi ) \) to the models of \(\pi \): it is non-negative everywhere, decreases as x gets closer to the set of \(\pi \)’s models, and vanishes when x goes inside that set (i.e., when x becomes a model of \(\pi \)). Thus, such a function \(\mathtt {R}\) allows us to view the SMT constraint \(\pi \) as the problem of minimizing the function \(\mathtt {R}\). We call \(\mathtt {R}\) a representing function.1
Fig. 1.

Illustration of the representing function \(\mathtt {R}\) for a floating-point constraint \(\pi \).

It is a common need to minimize/maximize scalar functions in science and engineering. The research field dedicated to the subject is known as mathematical optimization (MO) [17]. MO works by iteratively evaluating its objective function, i.e., the function that MO attempts to minimize. In other words, the representing function allows the transformation of an SMT problem to an MO problem, which enables floating-point constraint solving by only executing its representing function, without the need to directly reason about floating-point semantics—a key benefit of such an SMT-MO problem reduction.

Note, however, that an MO formulation of the SMT problem does not, by itself, provide a panacea for SMT solving, since many MO problems are themselves intractable. However, efficient algorithms have been successfully applied to difficult MO problems. A classic example is the traveling salesman problem: it is NP-hard, but has been handled effectively by simulated annealing [26], a stochastic MO technique. Another example is the Monte Carlo Markov Chain method (MCMC) [9], which has been successfully applied in testing and verification [12, 18, 38].

The insight of this work is that, if we carefully design the representing functions so that certain rules are respected, we can reduce floating-point constraint solving to a category of MO problems, known as unconstrained MO, that can be efficiently solved. Thus, our high-level approach is to: (1) systematically transform a floating-point constraint in conjunctive normal form (CNF) to its representing function, and (2) adapt MCMC to minimize the representing function, outputting a model of the constraint or reporting unsat.

We have compared our implementation, XSat, with Z3 [16], MathSat [14], and Coral [39], three solvers that can handle floating-point arithmetic. Our evaluation results on 34 representative benchmarks from the SMT-Competition 2015 show that XSat significantly outperforms these solvers in both correctness and efficiency. In particular, XSat produces satisfiability results that are 100 % consistent with those of MathSat and Z3, with an average speedup of more than 700X over both, while Coral produces inconsistent results on 16 of the 34 benchmarks.

Contributions. We introduce a new SMT solver for the floating-point satisfiability problem. Our main contributions follow:
  • We show, via the use of representing functions, how to systematically reduce floating-point constraint solving to a class of MO problems known as unconstrained MO;

  • We establish a theoretical guarantee for the equivalence between the original floating-point satisfiability problem and the class of MO problems.

  • We realize our approach in the XSat solver and show empirically that XSat significantly outperforms the state-of-the-art solvers.

The rest of the paper is organized as follows. Section 2 gives an overview of our approach, and Sect. 3 presents its theoretical underpinning. Section 4 presents the algorithm design of the XSat solver, while Sect. 5 describes its overall implementation and our evaluation of XSat. Finally, Sect. 6 surveys related work, and Sect. 7 concludes.

2 Approach Overview

This section presents a high-level overview of our approach. Its main goal is to illustrate, via examples, that (1) it is possible to reduce a floating-point satisfiability problem to a class of mathematical optimization (MO) problems, and (2) efficient solutions exist for solving those MO problems.

2.1 Preliminaries on Mathematical Optimization

A general Mathematical Optimization (MO) problem can be written as follows:
$$\begin{array}{ll} \text {minimize} & f(x) \\ \text {subject to} & x\in S \end{array}$$
(1)
where f is called the objective function, and S the search space [17].

MO techniques can be divided into two categories. One focuses on how functions are shaped in local regions and on where a local minimum can be found near given inputs. This local optimization is classic, involving techniques dating back to the 17th century (e.g., Newton’s method or gradient-based search). Local optimization not only provides the minimum value of a function within a neighborhood of the given input points, but also aids global optimization, another, more active body of research, which determines the minimum value of a function over the entire search space.

Let f be a function over a metric space with d as its distance. We call \(x^*\) a local minimum point if there exists a neighborhood of \(x^*\), namely \(\{x \mid d(x,x^*) < \delta \}\) for some \(\delta >0\), so that all x in the neighborhood satisfy \(f(x)\ge f(x^*)\). The value of \(f(x^*)\) is called the local minimum of the function f. If \(f(x^*)\le f(x)\) for all \(x\in S\), we call \(f(x^*)\) the global minimum of the function f, and \(x^*\) a global minimum point.

In this presentation, if we say minimum (resp. minimum point), we mean global minimum (resp. global minimum point). It should be clear that a function may have more than one minimum point but only one minimum.

2.2 From SMT to Unconstrained Mathematical Optimization

Suppose we want to solve the simple floating-point constraint
$$\begin{aligned} x\le 1.5 \end{aligned}$$
(2)
Here, we aim to illustrate the feasibility of reducing an SMT problem to an MO problem. In fact, each model of \(x\le 1.5\) is a global minimum point of the function
$$f_1(x)=\begin{cases} 0 & \text {if } x\le 1.5\\ (x-1.5)^2 & \text {otherwise}\end{cases}$$
(3)
and conversely, each global minimum point of \(f_1\) is also a model of \(x\le 1.5\), since \(f_1(x)\ge 0\) and \(f_1(x)=0\) iff \(x\le 1.5\) (see Sect. 3 for a formalization). In the MO literature, the problem of minimizing \(f_1\) is called unconstrained MO, meaning that its search space is the whole domain of the objective function. Unconstrained MO problems are generally regarded as easier to solve than constrained MO,2 since they can be solved efficiently if the objective function \(f_1\) is smooth to some degree [34]. Figure 2 shows the curve of \(f_1\) and a common local optimization method, which uses tangents of the curve to converge quickly to the minimum point. Smoothness makes it possible to deduce information about the function’s behavior at points in the neighborhood of a particular point x by using objective and constraint information at x.
Fig. 2.

(a) The curve of \(f_1\) (Eq. 3) and (b) illustration of a classic local optimization method for finding a minimum point of \(f_1\). The method uses tangents of the curve to quickly converge to a minimum point. (Color figure online)
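To make the reduction concrete, here is a minimal sketch of minimizing \(f_1\) with a local method. It uses scipy.optimize, the library our prototype's backend builds on (Sect. 5.1); the function names and the starting point are ours, chosen for illustration:

```python
from scipy.optimize import minimize

def f1(x):
    # Representing function of x <= 1.5 (Eq. 3): non-negative,
    # and zero exactly on the models of the constraint.
    x = float(x)
    return 0.0 if x <= 1.5 else (x - 1.5) ** 2

# f1 has no spurious local minima, so a purely local method suffices:
# from any starting point it converges to a global minimum point.
res = minimize(lambda v: f1(v[0]), x0=[100.0], method="Powell")
print(res.x[0], f1(res.x[0]))  # a point with x <= 1.5, and f1 == 0
```

Because \(f_1\) is non-negative and vanishes exactly on the models of \(x\le 1.5\), checking whether the minimization drives \(f_1\) to 0 decides satisfiability.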

2.3 Efficiently Solving MO Problems via MCMC

Suppose we slightly complicate constraint (2) by adding a non-linear conjunct:
$$\begin{aligned} x\le 1.5\wedge (x-1)^2=4. \end{aligned}$$
(4)
This SMT problem can still be reduced to an unconstrained MO problem, with objective function
$$\begin{aligned} f_2(x)=f_1(x) + ((x-1)^2-4)^2 \end{aligned}$$
(5)
where \(f_1\) is as in Eq. (3). The equivalence of the two problems follows from the fact that \(f_2(x)=0\) if and only if \(f_1(x)=0\) and \(((x-1)^2-4)^2=0\). Figure 3(a) shows the curve of \(f_2\), which has local minimum points at both \(x=-1\) and \(x=3\); only \(x=-1\) is the global minimum point. Locating the global minimum point of this function is more difficult, because local optimization methods such as the one illustrated in the previous example can be trapped at local minimum points, e.g., terminating and returning \(x=3\) in Fig. 3.

In this paper, we use a Monte Carlo Markov Chain (MCMC) method [9] as a general approach for unconstrained MO problems. MCMC is a random sampling technique for simulating a target distribution. Consider, for example, the target distribution of a fair coin toss, with probability 0.5 each for heads and tails. An MCMC sampling is a sequence of random variables \(x_1\),..., \(x_n\), such that the probability of \(x_n\) being “heads”, denoted by \(P_n\), converges to 0.5, i.e., \(\lim _{n\rightarrow \infty }P_n=0.5\). The fundamental fact about MCMC sampling can be summarized in the lemma below [9]. For simplicity, we only state the result for discrete-valued probabilities.

Lemma 1

Let x be a random variable and A an enumerable set of the possible values of x. Let f be a target distribution over A. Then, for an MCMC sampling sequence \(x_1,\ldots ,x_n,\ldots \) with probability mass function \(P(x_i=a)\) for each \(x_i\), we have:
$$\begin{aligned} P(x_n =a) \rightarrow f(a). \end{aligned}$$
(6)
In short, the MCMC sampling follows the target distribution asymptotically.

Why do we adopt MCMC? There are multiple advantages. First, the search space in our problem setting involves floating-point values; even in the one-dimensional case, a very small interval contains a large number of floating-point numbers, and MCMC is known to be effective on large search spaces. Because MCMC samples follow the target distribution asymptotically, the sampling process can be configured to have a greater chance of reaching minimum points than other points (for example, by sampling from a target distribution based on \(\lambda x.\exp (-f(x))\), where f is the function to minimize). Second, MCMC has many mature techniques and implementations that integrate well with classic local search techniques. These implementations have proven efficient for real-world problems with a large number of local minimum points, and can even handle functions beyond classic MO, e.g., discontinuous objective functions. Other MO techniques, e.g., genetic programming, may also suit our problem setting; we leave them for future investigation.

Figure 3(b) illustrates the iteration of MCMC sampling combined with a local optimization. As in the previous example, the local optimization can quickly converge (shown as steps \(p_0\rightarrow p_1\) and \(p_2\rightarrow p_3\) in the figure). The MCMC step, shown as the \(p_1\rightarrow p_2\) step, allows the search to escape from being trapped in local minimum points. This MCMC step is random, but follows a probability model which we will explain in Sect. 4.
Fig. 3.

(a) The curve of \(f_2\) (Eq. 5); (b) Illustration of how MCMC can be combined with local optimization for locating the global minimum point \(x=-1\). MCMC starts from \(p_0\), converges to local minimum \(p_1\), then performs a random move to \(p_2\) (called a Monte-Carlo step, see Sect. 4) and converges to \(p_3\), which is the global minimum point.
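As a concrete illustration, the following sketch minimizes \(f_2\) with scipy's basinhopping, an implementation of the MCMC-plus-local-search scheme of Fig. 3(b) and the backend our prototype uses (Sect. 5.1); the starting point and step parameters here are ours:

```python
from scipy.optimize import basinhopping

def f2(x):
    # Representing function of: x <= 1.5 /\ (x - 1)^2 == 4   (Eq. 5).
    x = float(x)
    f1 = 0.0 if x <= 1.5 else (x - 1.5) ** 2
    return f1 + ((x - 1) ** 2 - 4) ** 2

# Basinhopping alternates local minimization with Monte-Carlo moves,
# which lets the search escape the spurious local minimum near x = 3.
res = basinhopping(lambda v: f2(v[0]), x0=[3.0],
                   minimizer_kwargs={"method": "Powell"}, niter=100)
print(res.x[0])  # expected: close to -1, the global minimum point
```

Starting from \(x=3\), plain local search would stall near the spurious minimum; the Monte-Carlo moves let the search reach the global minimum point \(x=-1\).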

3 Technical Formulation

This section presents the theoretical underpinning of our approach. We write \(\mathbbm {F}\) for the set of floating-point numbers. Given a function f, we call x a zero of f if \(f(x)=0\).

Language. The language of interest is modeled as the set of quantifier-free floating-point constraints. Each constraint \(\pi \) is a conjunction or disjunction of arithmetic comparisons:
$$\pi \;{:}{:}{=}\; e \bowtie e \;\mid \; \pi \wedge \pi \;\mid \; \pi \vee \pi \qquad \qquad e \;{:}{:}{=}\; c \;\mid \; X \;\mid \; e \oplus e \;\mid \; {\texttt {foo}}(e,\ldots ,e)$$
where \(\bowtie \in \{\le ,<,\ge ,>,==,\ne \}\), \(\oplus \in \{+,-,*, /\}\), c is a floating-point numeral, X is a floating-point variable, and \({\texttt {foo}} \) is an interpreted floating-point function, which can be a library function (e.g., trigonometric or logarithmic) or a user-defined one. We denote the language by \( FP \).

Let \(\pi \in FP \) be a constraint with variables \(X_1,\cdots , X_N\). We write \(\mathsf {dom}(\pi ) \) for the value domain of its variables. Usually, \(\mathsf {dom}(\pi ) =\mathbbm {F}^N\). We say a vector of floating-point numbers \((x_1,\cdots ,x_N)\) is a model of \(\pi \), denoted by \((x_1,\cdots ,x_N)~\models \pi \), if \(\pi \) evaluates to true after substituting each \(X_i\) with the corresponding \(x_i\) for all \(i\in [1,N]\). In the following, we use the meta-variable x for a vector of floating-point numbers \((x_1,\cdots , x_N)\).

As mentioned in Sect. 1, our idea is to derive from \(\pi \) a floating-point program that represents how far a floating-point input, i.e., \((x_1,\cdots , x_N)\), is from being a model of \(\pi \). We specify this program as below:

Definition 1

Given a floating-point constraint \(\pi \), a floating-point program \(\mathtt {R}\) of type \(\mathsf {dom}(\pi ) \rightarrow \mathbbm {F}\) is called a representing function of \(\pi \) if the following properties hold:
  • R1. \(\mathtt {R}(x)\ge 0\) for all \(x\in \mathsf {dom}(\pi ) \),

  • R2. Every zero of \(\mathtt {R}\) is a model of \(\pi \): \(\forall x\in \mathsf {dom}(\pi ), \mathtt {R}(x)=0\implies x\models \pi \), and

  • R3. The zeros of \(\mathtt {R}\) include all models of \(\pi \): \(\forall x\in \mathsf {dom}(\pi ), x\models \pi \implies \mathtt {R}(x)=0\).

The concept of representing functions allows us to establish an equivalence between the floating-point satisfiability problem and an MO problem. This is shown in the theorem below:

Theorem 1

Let \(\pi \) be a floating-point constraint, and \(\mathtt {R}\) be its representing function. Let \(x^*\) be a global minimum point of \(\mathtt {R}\). Then we have
$$\begin{aligned} \pi ~ \text {is satisfiable}~ \Leftrightarrow \mathtt {R}(x^*)=0. \end{aligned}$$
(7)

Proof

Let \(x^*\) be an arbitrary global minimum point of \(\mathtt {R}\). If \(\pi \) is satisfiable with x being one of its models, then we have \(\mathtt {R}(x)=0\) by R3. By R1, x is also a global minimum point of \(\mathtt {R}\). Thus \(\mathtt {R}(x^*)=\mathtt {R}(x)=0\) since at most one global minimum exists. The proof for the “\(\Leftarrow \)” part follows directly from R2.

A simple procedure for solving floating-point constraints follows from Theorem 1. We refer to it as procedure P:
  • P1. Construct a representing function \(\mathtt {R}\) of \(\pi \) (Definition 1).

  • P2. Minimize \(\mathtt {R}\) with a mathematical optimization (MO) backend to obtain a global minimum point \(x^*\).

  • P3. If \(\mathtt {R}(x^*)=0\), report sat with \(x^*\) as a model of \(\pi \); otherwise, report unsat.

Analysis of Procedure P. One challenge with procedure P lies in step P2. In general, global optimization may not return a true global minimum point. To make this point clear, we use the notation \(\hat{x^*}\) for the global minimum point produced by the MO tool, and \(x^*\) for a true global minimum point. Then we have
$$\begin{aligned} \mathtt {R}(\hat{x^*})\ge \mathtt {R}(x^*). \end{aligned}$$
(8)
We consider two cases in analyzing procedure P. First, if procedure P reports sat, we have \(\mathtt {R}(\hat{x^*})=0\). Thus, \(\mathtt {R}(x^*)=0\) as well, by Eq. (8) and condition R1. Following Theorem 1, we conclude that \(\pi \) is necessarily satisfiable in this case. Second, if procedure P reports unsat, we have \(\mathtt {R}(\hat{x^*})>0\). In this case, it is still possible that \(\pi \) is satisfiable: step P2 may have produced an inaccurate global minimum point, i.e., \(\mathtt {R}(\hat{x^*})> 0\) yet \(\mathtt {R}(x^*)=0\). To summarize, the following lemma holds:

Lemma 2

Let \(\pi \) be a floating-point constraint of \( FP \). Procedure P has the following two properties: (1) Soundness: If procedure P reports sat, \(\pi \) is necessarily satisfiable, and (2) Incompleteness: Procedure P may incorrectly report unsat when \(\pi \) is actually satisfiable. This case happens if the MO tool at step P2 calculates a wrong global minimum point.

In the next section, we present XSat, a solver that realizes procedure P. As we will show in Sect. 5, by carefully designing the representing function, the incompleteness in theory can be largely mitigated in practice.
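A minimal sketch of procedure P in Python follows, assuming the representing function R has already been generated; the helper names, the number of restarts, and the sampling box for starting points are ours, not XSat's actual interface:

```python
import numpy as np
from scipy.optimize import basinhopping

def procedure_P(R, n_vars, n_starts=10):
    """Decide a constraint via its representing function R (Procedure P).

    Sound for sat answers (Lemma 2): if some candidate minimum point x
    satisfies R(x) == 0, then x is a genuine model by R2. An unsat
    answer may be wrong, since the MO step can miss the true global
    minimum point.
    """
    for _ in range(n_starts):
        x0 = np.random.uniform(-1e3, 1e3, size=n_vars)  # random start
        res = basinhopping(R, x0, minimizer_kwargs={"method": "Powell"})
        if R(res.x) == 0.0:
            return "sat", res.x     # model found (step P3)
    return "unsat", None            # possibly incomplete (Lemma 2)
```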

4 The XSat Solver

This section presents the algorithmic design of XSat, an SMT solver to handle quantifier-free floating-point constraints. XSat is an instance of Procedure P (Sect. 3).

Notation. Given a set A, we write |A| to denote its cardinality. We adopt C’s ternary operator “\(p\ ?\ a\ :\ b\)” to denote a code fragment that evaluates to a if p holds, or b otherwise. As in the previous section, we use \(\mathbbm {F}\) for the set of floating-point numbers, and \( FP \) for the language of quantifier-free floating-point constraints that we have defined.

Let \(\pi \) be a floating-point constraint of \( FP \) in the form of a conjunction. If we have a representing function for each of the conjuncts, we can construct the representing function of \(\pi \) as
$$\begin{aligned} \mathtt {R}_{\pi _1\wedge \pi _2} = \mathtt {R}_{\pi _1} + \mathtt {R}_{\pi _2}. \end{aligned}$$
(9)
Similarly, if \(\pi \) is in the form of a disjunction, we can use
$$\begin{aligned} \mathtt {R}_{\pi _1\vee \pi _2} = \mathtt {R}_{\pi _1} * \mathtt {R}_{\pi _2}. \end{aligned}$$
(10)
Above, both “+” and “*” denote the operations given by IEEE floating-point arithmetic. Clearly, both \(\mathtt {R}_{\pi _1\wedge \pi _2}\) and \(\mathtt {R}_{\pi _1\vee \pi _2}\) satisfy conditions R1–3, since both \(\mathtt {R}_{\pi _1}\) and \(\mathtt {R}_{\pi _2}\) do, and since for all \(a,b\ge 0\) we have \(a+b=0\Leftrightarrow a=0\wedge b=0\) and \(a*b=0\Leftrightarrow a=0 \vee b=0\).
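In code, the two composition rules are one-liners over Python callables (a sketch; the names are ours):

```python
def R_and(R1, R2):
    # Eq. (9): a sum of non-negative functions is 0 iff both are 0.
    return lambda x: R1(x) + R2(x)

def R_or(R1, R2):
    # Eq. (10): a product of non-negative functions is 0 iff one is 0.
    return lambda x: R1(x) * R2(x)
```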
To construct \(\mathtt {R}\) for arithmetic comparisons, we introduce a helper function \( \theta \). Its idea is similar to the representation distance implemented in Boost [2], which counts the number of floating-point numbers between two bit-pattern representations. Because the IEEE-754 standard lays out floating-point numbers so that, for values of the same sign, the bit pattern of the next larger representable value is an integer increment of the current one [21], we can view \( \theta (a,b)\) for \(a,b\in \mathbbm {F}\setminus \{\textit{NaN},\textit{Inf},-\textit{Inf}\}\) as
$$\begin{aligned} \theta (a,b)= |\{x\in \mathbbm {F}\mid \min (a,b)< x< \max (a,b)\}|. \end{aligned}$$
(11)
In general, for arbitrary \(a,b\in \mathbbm {F}\), \( \theta (a,b)\) always returns a non-negative integer; it vanishes if and only if a and b hold the same floating-point value. Then, we use
$$\begin{aligned} \mathtt {R}_{e_1\le e_2}\overset{\text {def}}{=}e_1\le e_2\ ?~0\ :\ \theta (e_1,e_2) \end{aligned}$$
(12)
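A possible implementation of \( \theta \) reinterprets the IEEE-754 bit pattern as an integer, in the style of Boost's float_distance. This sketch (ours, not XSat's actual code) computes the ULP distance, which for distinct a and b exceeds the strict count of Eq. (11) by one but has exactly the same zeros, which is all that conditions R1–3 require:

```python
import struct

def _ordered(a: float) -> int:
    # Map a double's bit pattern to an integer whose order agrees with
    # the floating-point order (assumes a is not NaN).
    i = struct.unpack("<q", struct.pack("<d", a))[0]
    return i if i >= 0 else -(2 ** 63) - i

def theta(a: float, b: float) -> int:
    # Non-negative; 0 iff a and b hold the same floating-point value.
    return abs(_ordered(a) - _ordered(b))

def R_le(e1: float, e2: float) -> int:
    # Eq. (12): the representing function of e1 <= e2.
    return 0 if e1 <= e2 else theta(e1, e2)
```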
The representing function for the other arithmetic comparisons can be derived using the lemma below. The lemma directly follows from Definition 1.

Lemma 3

Given \(\pi ,\pi ' \in FP \) such that \(\pi \Leftrightarrow \pi '\) (logical equivalence), any representing function of \(\pi \) is also a representing function of \(\pi '\).

Now, we can define \(\mathtt {R}_{x\ge y}\) as \(\mathtt {R}_{y\le x}\), and \(\mathtt {R}_{x==y}\) as \(\mathtt {R}_{x\ge y \wedge x\le y}\). For the strict inequalities, we write \(x^-\) for the largest floating-point number strictly smaller than x, and reduce \(\mathtt {R}_{x<y}\) to \(\mathtt {R}_{x\le y^-}\), \(\mathtt {R}_{x>y}\) to \(\mathtt {R}_{y<x}\), and \(\mathtt {R}_{x\ne y}\) to \(\mathtt {R}_{x<y\vee x>y}\). We summarize the representing function used in XSat in the theorem below:

Theorem 2

Let F be a conjunctive normal form of \( FP \):
$$\begin{aligned} F\overset{\textit{def}}{=}\bigwedge _{j\in J}\bigvee _{i\in I} e_{i,j}\bowtie _{i,j}e'_{i,j} \end{aligned}$$
(13)
where \(e_{i,j}\) and \(e'_{i,j}\) are quantities to be interpreted over floating-point numbers or expressions, and \(\bowtie _{i,j}\in \{\le , \ge , ==,<,>,\ne \}\). Then, the function below is a representing function of F:
$$\begin{aligned} \sum _{j\in J}\prod _{i\in I} d(\bowtie _{i,j}, e_{i,j},e'_{i,j}) \end{aligned}$$
(14)
where
$$\begin{aligned} d(==, x,y)&\overset{\textit{def}}{=} \theta (x,y)\end{aligned}$$
(15)
$$\begin{aligned} d(\le , x,y)&\overset{\textit{def}}{=}x\le y\ ?~0\ :\ \theta (x,y)\end{aligned}$$
(16)
$$\begin{aligned} d(\ge ,x,y)&\overset{\textit{def}}{=}x\ge y\ ?~0\ :\ \theta (x,y)\end{aligned}$$
(17)
$$\begin{aligned} d(<,x,y)&\overset{\textit{def}}{=}x < y\ ?~0\ :\ \theta (x,y)+1\end{aligned}$$
(18)
$$\begin{aligned} d(>, x,y)&\overset{\textit{def}}{=}x > y\ ?~0\ :\ \theta (x,y)+1\end{aligned}$$
(19)
$$\begin{aligned} d(\ne , x,y)&\overset{\textit{def}}{=}x\ne y\ ?~0\ :\ 1 \end{aligned}$$
(20)
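Assuming the \( \theta \) sketch given earlier, the atomic distances of Theorem 2 and the overall representing function (14) can be written directly; this is a sketch (names ours), with each clause given as a list of (op, e, e') triples:

```python
from math import prod
# assumes theta(a, b) as in the sketch following Eq. (12)

def d(op, x, y):
    # Eqs. (15)-(20): each distance is 0 iff the comparison holds.
    if op == "==": return theta(x, y)
    if op == "<=": return 0 if x <= y else theta(x, y)
    if op == ">=": return 0 if x >= y else theta(x, y)
    if op == "<":  return 0 if x < y else theta(x, y) + 1
    if op == ">":  return 0 if x > y else theta(x, y) + 1
    if op == "!=": return 0 if x != y else 1
    raise ValueError(f"unknown comparison: {op}")

def R_cnf(clauses, x):
    # Eq. (14): sum over conjuncts of products over disjuncts. Each
    # clause is a list of (op, e, e') triples, where e and e' evaluate
    # the two sides of a comparison at the candidate assignment x.
    return sum(prod(d(op, e(x), e2(x)) for (op, e, e2) in clause)
               for clause in clauses)
```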
Algorithm 1 shows the main steps of the XSat algorithm. (Lines 1–5): The algorithm follows the three steps in procedure P, except that in practice, more than one starting point is used to launch MCMC. (Such a technique is common in the MO literature, since most MO algorithms are sensitive to their starting points [34].) If none of these starting points leads to a minimum point \(x^*\) such that \(\mathtt {R}(x^*)=0\), unsat is reported. (Lines 7–15): The function GEN is a simple code generator that produces the representing function. It works by recursively walking through the logical and arithmetic expressions of the language \( FP \). (Lines 16–27): Each iteration of the loop can be regarded as an MCMC sampling over the space of the local minimum points [29]. In Algorithm 1, Line 17 enforces that the initial x is already a local minimum point. Each iteration (Lines 18–25) is composed of the two phases that are classic in the Metropolis-Hastings algorithm family of MCMC [13]. In Phase 1 (Lines 19–20), the algorithm proposes a new sample \(x^*\) from the current sample x, relying on a local minimization procedure so that only local minimum points are proposed. Phase 2 (Lines 21–25) decides whether \(x^*\) should be accepted. As an algorithm of the Metropolis-Hastings family, we use \(f(x^*)/f(x)\) as the acceptance ratio: if \(f(x^*)<f(x)\), the proposed \(x^*\) is accepted; otherwise, \(x^*\) may still be accepted, but only with probability \(\exp (f(x)-f(x^*))\).3
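Algorithm 1 itself is not reproduced here, but the MCMC loop it describes (Lines 16–27) can be sketched as follows, with LM standing for the local minimization procedure and \(T=1\) as in footnote 3; the step size and other names are ours:

```python
import math
import random
import numpy as np

def mcmc_minimize(f, x0, LM, n_iter=100, step=0.5):
    # Sketch of the loop described for Algorithm 1: an MCMC sampling
    # over the space of local minimum points of f.
    x = LM(f, np.asarray(x0, dtype=float))  # start from a local minimum
    best = x
    for _ in range(n_iter):
        # Phase 1: propose the local minimum reached from a random move.
        proposal = x + np.random.uniform(-step, step, size=x.shape)
        x_star = LM(f, proposal)
        # Phase 2: Metropolis-Hastings acceptance with temperature T = 1.
        if f(x_star) < f(x) or random.random() < math.exp(f(x) - f(x_star)):
            x = x_star
        if f(x) < f(best):
            best = x
    return best
```

A suitable LM would be, for instance, `lambda f, x0: minimize(f, x0, method="Powell").x` from scipy.optimize, matching the Powell local optimizer used in Sect. 5.1.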
Example. Consider a floating-point formula A(x) (21) that contains the part \(x==\mathtt {SIN}(x)\), where \(\mathtt {SIN}\) is an implementation of the sine function, say, from glibc 2.21 [4]. Deciding A(x) is challenging for traditional SAT/SMT solvers. In fact, the part \(x==\mathtt {SIN}(x)\) is unsatisfiable in the theory of reals (because \(x=\mathtt {SIN}(x)\Leftrightarrow x=0\) over the reals), but it can be satisfied in the floating-point semantics (\(\mathtt {SIN}(x) = x\) if \(|x|<2^{-26}\) in glibc’s implementation).
Following Theorem 2, we use a representing function (22) constructed from A(x); minimizing it, XSat finds two models of A(x).
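The floating-point behavior exploited by this example is easy to reproduce in Python; math.sin defers to the platform libm, so the exact threshold may differ from glibc's \(2^{-26}\), but the effect is the same for sufficiently tiny x:

```python
import math

x = 2.0 ** -30
# sin(x) = x - x**3/6 + ...; for x this small, the correction term is
# far below half an ULP of x, so the result rounds back to x exactly.
print(math.sin(x) == x)  # True with any reasonably accurate libm
```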

Discussion. The example above illustrates that XSat is execution-based: XSat executes the function (22) (so as to minimize it) rather than analyzing the semantics of the logic formula (21). While this feature allows XSat to handle floating-point formulas that are difficult for traditional solvers, it also implies that XSat may be affected by floating-point inaccuracy: Let \(\mathtt {R}\) be the representing function and \(x^{*}\) be its minimum point. Imagine that \(\mathtt {R}(x^*) > 0\) but calculating \(\mathtt {R}(x^{*})\) incorrectly gives 0 due to a truncation error. Then, XSat reports sat for an unsatisfiable formula. To overcome this issue, we test the calculated model against the original constraint to confirm satisfiability (Sect. 5.1). We have also designed XSat’s representing function using \( \theta \) so as to sense small perturbations when calculating \(\mathtt {R}\). In the literature on search-based algorithms [28, 30], fitness functions based on an absolute-value norm or Euclidean distance have been proposed. They are valid representing functions in the sense of R1–3, but may trigger floating-point inaccuracies.

5 Experiments

5.1 Implementation

As a proof-of-concept demonstration, we have implemented XSat as a research prototype. Our implementation is written in Python and C. It is composed of two building blocks.

(B1). The front-end uses Z3’s parse_smt2_file API [8] to parse an SMT2-Lib file into its syntax tree representation, which is then transformed into a representing function following Lines 7–15 and Line 1 of Algorithm 1. The transformed program is compiled by Clang with optimization level -O2 and invoked via Python’s C extension.
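A minimal sketch of this front-end step with z3py (the file name is hypothetical; recent z3py versions return the assertions as an AST vector):

```python
from z3 import parse_smt2_file

# Parse the SMT2 file into Z3 ASTs; XSat's generator GEN (Algorithm 1)
# then walks these expression trees to emit the representing function in C.
for assertion in parse_smt2_file("bench.smt2"):
    print(assertion.sexpr())
```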

(B2). The back-end uses an implementation of a variant of MCMC, known as Basinhopping, taken from the scipy.optimize library of Python [6]. This MCMC tool has multiple options, notably (1) the number of Monte-Carlo iterations and (2) the local optimization algorithm. These options appear in Algorithm 1 as the input parameters \(\mathtt {iter} \) and \({\mathtt {LM}}\), respectively. In the experiments, we set \(\mathtt {iter} = 100\) and \({\mathtt {LM}} =\) “Powell” (which refers to Powell’s algorithm [36]). To ensure that XSat does not return sat when the formula is in fact unsatisfiable, we use Z3’s front-end to check the model XSat calculates.

The XSat solver does not yet support all floating-point operations specified in the SMT-LIB 2 standard [37]. The floating-point operators currently supported by XSat mainly include the common arithmetic operations: fp.leq, fp.lt, fp.geq, fp.gt, fp.eq, fp.neg, fp.add, fp.mult, fp.sub and fp.div. Extending XSat with other operators, such as fp.min, fp.max, fp.abs and fp.sqrt, should be straightforward, since they can be directly translated into arithmetic expressions. XSat currently accepts only the rounding mode RNE (round to nearest). Other rounding modes can be easily supported in the front-end by setting appropriate floating-point flags in the generated C program; for example, the rounding mode RTZ (round toward zero) can be realized by switching the rounding mode (e.g., with C’s fesetround) in the representing function. The unsupported features listed above do not occur in the tested floating-point benchmarks (see below).

It is worth noting that, as mentioned in Sect. 4, XSat has the potential to handle floating-point constraints beyond the current SMT2-LIB’s specification, because interpreted functions, such as trigonometric functions or any user-defined functions, can be readily implemented by translating them to their corresponding C implementations. An illustrative example of dealing with the sine function is given in Sect. 4.

5.2 Experimental Setup

Tested Floating-Point Benchmarks. We have evaluated XSat on a set of more than 200 benchmark SMT2 formulas. These benchmarks were proposed by Griggio (a main contributor to MathSat)4 for SMT-COMP 2015 and are accessible online [1]. To present our experimental results, we divide Griggio’s benchmarks into three parts:
  1. <=10K in file size: 131 SMT2 files
  2. 11K–20K in file size: 34 SMT2 files
  3. >20K in file size: 49 SMT2 files

We have run XSat on all these benchmarks. This section presents our experiments on (2). We include our experimental results on (1) and (3) in Appendices A and B.

Compared Floating-Point Solvers. We have compared XSat with MathSat, Z3 and Coral, state-of-the-art solvers that are freely available online. MathSat and Z3 competed in the QF_FP (quantifier-free floating-point) track of the 2015 SMT Competition (SMT-COMP) [7]. The Coral solver was initially used in symbolic execution; it uses a search-based approach to solve path constraints [25]. Unlike MathSat or Z3, Coral does not directly support the SMT2 language, so we transformed the benchmarks from the SMT2 language into Coral’s input language [3].

For each solver, we use its default setting for running the benchmarks. All experiments were performed on a laptop with a 2.6 GHz Intel Core i7 and 16 GB RAM running MacOS 10.10.

Evaluation Objectives. There are two specific evaluation objectives:
  • Correctness testing: For each benchmark, we run all solvers and check the consistency of their satisfiability results. MathSat’s results are used as the reference because the selected benchmarks were originally provided by the MathSat developers.

  • Efficiency testing: For each benchmark, we run all solvers with 48 h as the timeout limit. The time is wall time measured by the standard Unix command “time”.

5.3 Quantitative Results

This subsection presents the empirical evaluation results with respect to the correctness and the efficiency of the solvers (Table 1).
Table 1.

Comparison of MathSat, Z3, Coral and XSat on the SMT-Competition 2015 benchmarks proposed by Griggio, of file sizes 11K-20K.

SMT2-LIB program | size (byte) | #var | MathSat | Z3 | Coral | XSat | MathSat (s) | Z3 (s) | Coral (s) | XSat (s)
(Columns 4–7: satisfiability; columns 8–11: time in seconds. A Coral entry of “wrong” marks a result inconsistent with MathSat’s reference answer.)

div2.c.30 | 11430 | 32 | sat | sat | wrong | sat | 131.73 | 14633.28 | 2.43 | 7.41
mult1.c.30 | 11478 | 33 | sat | sat | wrong | sat | 293.28 | 14.55 | 1.37 | 0.80
div3.c.30 | 11497 | 33 | sat | sat | wrong | sat | 139.30 | 212.68 | 1.37 | 0.76
div.c.30 | 11527 | 33 | sat | sat | wrong | sat | 90.75 | 140.09 | 1.37 | 0.75
mult2.c.30 | 11567 | 34 | sat | sat | wrong | sat | 358.77 | 12.87 | 1.39 | 0.77
test_v7_r7_vr10_c1_s24535 | 14928 | 7 | sat | sat | sat | sat | 35.27 | 85.56 | 0.78 | 0.77
test_v5_r10_vr5_c1_s13195 | 15013 | 5 | sat | sat | sat | sat | 160.30 | 260.32 | 0.54 | 0.76
div2.c.40 | 15060 | 42 | sat | sat | wrong | sat | 419.57 | 6011.65 | 3.38 | 11.90
mult1.c.40 | 15088 | 43 | sat | sat | wrong | sat | 726.95 | 31.88 | 1.57 | 0.83
test_v7_r7_vr1_c1_s24449 | 15090 | 7 | unsat | unsat | unsat | unsat | 359.42 | 669.88 | 1.34 | 3.93
div3.c.40 | 15117 | 43 | sat | sat | sat | sat | 301.53 | 226.78 | 0.92 | 0.80
div.c.40 | 15157 | 43 | sat | sat | wrong | sat | 290.41 | 375.42 | 1.57 | 0.79
mult2.c.40 | 15177 | 44 | sat | sat | wrong | sat | 1680.93 | 30.03 | 1.59 | 0.77
test_v7_r7_vr5_c1_s3582 | 15184 | 7 | sat | sat | sat | sat | 101.78 | 78.10 | 0.55 | 0.83
test_v7_r7_vr1_c1_s22845 | 15273 | 7 | sat | sat | sat | sat | 138.76 | 2619.23 | 0.72 | 0.78
test_v7_r7_vr5_c1_s19694 | 15275 | 7 | sat | sat | sat | sat | 705.91 | 20862.74 | 0.64 | 0.86
test_v7_r7_vr5_c1_s14675 | 15277 | 7 | sat | sat | sat | sat | 66.90 | 227.70 | 0.53 | 0.76
test_v7_r7_vr10_c1_s32506 | 15277 | 7 | sat | sat | sat | sat | 291.32 | 1401.88 | 0.74 | 0.96
test_v7_r7_vr10_c1_s10625 | 15277 | 7 | sat | sat | sat | sat | 2971.82 | 1335.51 | 0.53 | 0.76
test_v7_r7_vr1_c1_s4574 | 15279 | 7 | sat | sat | sat | sat | 90.80 | 2381.56 | 0.80 | 0.77
test_v5_r10_vr5_c1_s8690 | 15393 | 5 | unsat | unsat | unsat | unsat | 264.36 | 563.48 | 1.37 | 1.58
test_v5_r10_vr1_c1_s32538 | 15393 | 5 | unsat | unsat | unsat | unsat | 38.88 | 153.65 | 1.35 | 2.22
test_v5_r10_vr5_c1_s13679 | 15395 | 5 | sat | sat | wrong | sat | 256.88 | 1748.58 | 1.36 | 0.76
test_v5_r10_vr10_c1_s15708 | 15395 | 5 | unsat | unsat | unsat | unsat | 3586.89 | 9099.97 | 1.35 | 1.89
test_v5_r10_vr10_c1_s7608 | 15400 | 5 | unsat | unsat | unsat | unsat | 2098.50 | 4941.08 | 1.36 | 1.89
test_v5_r10_vr1_c1_s19145 | 15486 | 5 | sat | sat | sat | sat | 125.61 | 190.75 | 0.88 | 0.76
test_v5_r10_vr1_c1_s13516 | 15488 | 5 | sat | sat | sat | sat | 107.16 | 89.15 | 0.89 | 0.76
test_v5_r10_vr10_c1_s21502 | 15488 | 5 | unsat | unsat | unsat | unsat | 1810.06 | 4174.55 | 1.35 | 2.00
sin2.c.10 | 17520 | 37 | sat | >48h | crash | sat | 43438.34 | timeout | crash | 26.64
div2.c.50 | 18755 | 52 | sat | sat | wrong | sat | 972.07 | 1803.04 | 4.57 | 15.81
mult1.c.50 | 18757 | 53 | sat | sat | wrong | sat | 2742.47 | 61.49 | 1.71 | 1.32
div3.c.50 | 18798 | 53 | sat | sat | wrong | sat | 350.13 | 473.64 | 1.72 | 0.99
mult2.c.50 | 18848 | 54 | sat | sat | wrong | sat | 2890.08 | 106.22 | 1.71 | 0.96
div.c.50 | 18849 | 53 | sat | sat | wrong | sat | 464.64 | 554.38 | 1.70 | 0.97
SUMMARY | | | - | 100.0 % | 54.6 % | 100.0 % | 2014.75 | 2290.05 | 1.38 | 2.80

Correctness. We sort the 34 benchmark programs by size in Table 1 (Col. 1–2), show each benchmark’s number of variables (Col. 3), and report the satisfiability results (Col. 4–7). As mentioned above, MathSat’s satisfiability results (Col. 4) are used as the reference. Z3 provides consistent results except for the benchmark sin2.c.10, on which it times out after 48 h. Coral cannot solve 15 of the benchmarks; its wrong results are marked “wrong” in the table. For the benchmark sin2.c.10, Coral crashes due to an internal error (java.lang.NullPointerException).5 Col. 7 shows the results of XSat, which are 100 % consistent with MathSat’s. The last row of the table summarizes the correctness ratio for each solver: 100 % for Z3,6 54.6 % for Coral,7 and 100 % for XSat.

Efficiency. Table 1 also reports the time used by the solvers (the last four columns). Both Z3 and MathSat show large performance variances across the benchmarks. Some benchmarks take a very long time, such as sin2.c.10 for MathSat, which takes 43438.34 s (>12 h), or test_v7_r7_vr5_c1_s19694 for Z3, which takes 20862.74 s. On average, both MathSat and Z3 need more than 2,000 s (shown in the last row of the table).8 By contrast, Coral and XSat (the last two columns) perform significantly better, finishing most benchmarks within seconds. On average, Coral requires 1.38 s, which is less than XSat (2.80 s); note, however, that Coral obtains accurate satisfiability results on only 54.6 % of the benchmarks.

Appendices A and B list our experimental results of XSat versus Z3 and MathSat on the rest of Griggio’s benchmarks. Similar to Table 1, the results in Tables 2 and 3 show a substantial performance improvement of XSat over MathSat and Z3. Note that on five of the listed benchmarks, XSat reports unsat while MathSat and Z3 report sat. We have recognized such incompleteness in Lemma 2. Thus, although XSat achieves significantly better results than the other evaluated solvers, it is generally unable to prove unsat, while Z3 and MathSat can. XSat therefore does not compete with, but rather complements, these solvers.

6 Related Work

The study of the floating-point theory is relatively sparse compared to other theories. Eager approaches encode floating-point constraints as propositional formulas [15, 35], relying on a SAT solver as the backend; lazy approaches, on the other hand, use a propositional CDCL solver [24] to reason about the Boolean structure and ad hoc floating-point procedures for theory reasoning. The issues with these decision procedures are well known: eager approaches may produce large propositional encodings, a considerable burden for worst-case exponential SAT solvers, while lazy approaches may have difficulties with the nontrivial numerical (e.g., non-linear) operations frequent in real-world floating-point code. Although we have seen active development and enhancement of these solutions, such as mixed abstractions [11], theory-based heuristics [22], and natural domain SMT [23], state-of-the-art floating-point decision procedures still face performance challenges.

The idea of using numerical methods in program reasoning has been explored. As an example, the SMT-solver dReal [20] combines numerical search with logical techniques for solving problems that can be encoded as first-order logic formulas over the real numbers. There is also a body of work on symbolic and numerical methods [28, 31, 32] for test generation in scientific programs.

Perhaps the most closely related work to XSat is the Coral solver [10, 39]. Coral uses mostly heuristic fitness functions within symbolic execution [25] and has been integrated into Java Pathfinder [5]. However, to the best of our knowledge, it has not seen wide adoption. Compared with XSat, Coral does not provide a precise, systematic solution for using mathematical optimization to solve floating-point constraints.

7 Conclusion

We have introduced XSat, a floating-point satisfiability solver grounded in the concept of representing functions. Given a constraint \(\pi \) and a program \(\mathtt {R}\) such that R1–3 hold, the theoretical guarantee of Theorem 1 stipulates that deciding \(\pi \) can be equivalently solved by minimizing \(\mathtt {R}\) and checking whether \(\mathtt {R}(x^*) = 0\), where \(x^*\) is a global minimum point of \(\mathtt {R}\).

The key challenge of such an approach lies in minimizing the representing function \(\mathtt {R}\), an unconstrained mathematical optimization problem. While many MO problems are intractable, our insight is that carefully designed representing functions lead to MO problems that are efficiently solvable in practice. We have implemented the XSat solver to validate our theory empirically. XSat systematically transforms quantifier-free floating-point formulas into representing functions and minimizes them via MCMC methods. We have compared XSat with the state-of-the-art floating-point solvers MathSat, Z3 and Coral. Evaluated on benchmarks from the SMT-Competition 2015, XSat significantly outperforms these solvers.

Footnotes

  1. The term “representing function” appears to originate in Kleene’s book [27], where it is used to define recursive functions. A “representing function” is also called a “characteristic function” or “indicator function” in the literature.

  2. For example, it is common practice to transform a constrained MO problem by replacing its constraints with penalty terms in the objective function and to solve the problem as an unconstrained MO [34].

  3. In a general Metropolis-Hastings algorithm, in the case of \(f(x^*)>f(x)\), \(x^*\) is accepted with probability \(\exp (- \frac{f(x^*)-f(x)}{T})\), where T is the “annealing temperature” [26]. Our algorithm sets \(T=1\) for simplicity.

  4. Griggio initially used these benchmarks for comparing MathSat and Z3 [23].

  5. More precisely, our JVM reports errors at coral.util.visitors.adaptors.TypedVisitorAdaptor.visitSymBoolOperations (TypedVisitorAdaptor.java:94). We are unsure whether this is due to bugs in Coral or our misuse of it.

  6. The benchmark that Z3 times out on, sin2.c.10, is not included in calculating Z3’s correctness.

  7. The benchmark that Coral crashes on, sin2.c.10, is not included when calculating Coral’s correctness.

  8. The benchmark that Z3 times out on, sin2.c.10, is omitted in calculating Z3’s performance.

  9. The benchmarks on which Z3 or MathSat times out are not included when measuring their mean times (the last row of Table 2).

Notes

Acknowledgments

We thank the anonymous reviewers for their useful comments on earlier versions of this paper. Our special thanks go to Viktor Kuncak for his thoughtful feedback. This work was supported in part by NSF Grant No. 1349528. The information presented here does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

References

  1. Benchmarks of the QF_FP track in SMT-COMP (2015). http://www.cs.nyu.edu/~barrett/smtlib/QF_FP_Hierarchy.zip. Accessed 29 Jan 2016
  2. Boost C++ libraries. www.boost.org/. Accessed 27 Jan 2016
  3. Coral input language. http://pan.cin.ufpe.br/coral/InputLanguage.html. Accessed 24 Jan 2016
  4. The GNU C library (glibc). https://www.gnu.org/software/libc/. Accessed 28 Jan 2016
  5. The main page for Java Pathfinder. http://babelfish.arc.nasa.gov/trac/jpf. Accessed 29 Jan 2016
  6. The scipy.optimize package of Python. https://docs.scipy.org/doc/scipy/reference/optimize.html
  7. SMT-COMP (2015). http://smtcomp.sourceforge.net/2015/. Accessed 24 Jan 2016
  8. The Z3 theorem prover and its Python API. https://github.com/Z3Prover/z3
  9. Andrieu, C., de Freitas, N., Doucet, A., Jordan, M.I.: An introduction to MCMC for machine learning. Mach. Learn. 50, 5–43 (2003)
  10. Borges, M., d’Amorim, M., Anand, S., Bushnell, D., Păsăreanu, C.S.: Symbolic execution with interval solving and meta-heuristic search. In: ICST 2012, pp. 111–120. IEEE Computer Society (2012)
  11. Brillout, A., Kroening, D., Wahl, T.: Mixed abstractions for floating-point arithmetic. In: FMCAD, pp. 69–76 (2009)
  12. Chen, Y., Su, Z.: Guided differential testing of certificate validation in SSL/TLS implementations. In: ESEC/FSE 2015, pp. 793–804 (2015)
  13. Chib, S., Greenberg, E.: Understanding the Metropolis-Hastings algorithm. Am. Stat. 49(4), 327–335 (1995)
  14. Cimatti, A., Griggio, A., Schaafsma, B.J., Sebastiani, R.: The MathSAT5 SMT solver. In: TACAS 2013. LNCS, vol. 7795, pp. 93–107. Springer, Heidelberg (2013)
  15. Clarke, E., Kroening, D., Lerda, F.: A tool for checking ANSI-C programs. In: TACAS 2004. LNCS, vol. 2988, pp. 168–176. Springer, Heidelberg (2004)
  16. de Moura, L., Bjørner, N.S.: Z3: an efficient SMT solver. In: TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008)
  17. Espírito-Santo, I.A., Costa, L.A., Rocha, A.M.A.C., Azad, M.A.K., Fernandes, E.M.G.P.: On Challenging Techniques for Constrained Global Optimization. Springer, Heidelberg (2013)
  18. Fu, Z., Bai, Z., Su, Z.: Automated backward error analysis for numerical code. In: OOPSLA, pp. 639–654 (2015)
  19. Ganzinger, H., Hagen, G., Nieuwenhuis, R., Oliveras, A., Tinelli, C.: DPLL(T): fast decision procedures. In: CAV 2004. LNCS, vol. 3114, pp. 175–188. Springer, Heidelberg (2004)
  20. Gao, S., Kong, S., Clarke, E.M.: dReal: an SMT solver for nonlinear theories over the reals. In: CADE 2013. LNCS, vol. 7898, pp. 208–214. Springer, Heidelberg (2013)
  21. Goldberg, D.: What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv. 23(1), 5–48 (1991)
  22. Goldwasser, D., Strichman, O., Fine, S.: A theory-based decision heuristic for DPLL(T). In: FMCAD, pp. 1–8 (2008)
  23. Haller, L., Griggio, A., Brain, M., Kroening, D.: Deciding floating-point logic with systematic abstraction. In: FMCAD, pp. 131–140 (2012)
  24. Bayardo Jr., R.J., Schrag, R.: Using CSP look-back techniques to solve real-world SAT instances. In: AAAI 1997/IAAI 1997, pp. 203–208 (1997)
  25. King, J.C.: Symbolic execution and program testing. Commun. ACM 19(7), 385–394 (1976)
  26. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
  27. Kleene, S.C.: Introduction to Metamathematics. North-Holland, Amsterdam (1962)
  28. Lakhotia, K., Tillmann, N., Harman, M., de Halleux, J.: FloPSy: search-based floating point constraint solving for symbolic execution. In: ICTSS 2010. LNCS, vol. 6435, pp. 142–157. Springer, Heidelberg (2010)
  29. Li, Z., Scheraga, H.A.: Monte Carlo-minimization approach to the multiple-minima problem in protein folding. Proc. Natl. Acad. Sci. USA 84(19), 6611–6615 (1987)
  30. McMinn, P.: Search-based software test data generation: a survey. Softw. Test. Verif. Reliab. 14(2), 105–156 (2004)
  31. Meinke, K., Niu, F.: A learning-based approach to unit testing of numerical software. In: ICTSS 2010. LNCS, vol. 6435, pp. 221–235. Springer, Heidelberg (2010)
  32. Miller, W., Spooner, D.L.: Automatic generation of floating-point test data. IEEE Trans. Softw. Eng. 2(3), 223–226 (1976)
  33. Nieuwenhuis, R., Oliveras, A., Tinelli, C.: Solving SAT and SAT modulo theories: from an abstract Davis-Putnam-Logemann-Loveland procedure to DPLL(T). J. ACM 53(6), 937–977 (2006)
  34. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, Berlin (2006)
  35. Peleska, J., Vorobev, E., Lapschies, F.: Automated test case generation with SMT-solving and abstract interpretation. In: NFM 2011. LNCS, vol. 6617, pp. 298–312. Springer, Heidelberg (2011)
  36. Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes: The Art of Scientific Computing, 3rd edn. Cambridge University Press, New York (2007)
  37. Rümmer, P., Wahl, T.: An SMT-LIB theory of binary floating-point arithmetic. In: Informal Proceedings of the 8th International Workshop on Satisfiability Modulo Theories (SMT) at FLoC, Edinburgh (2010)
  38. Schkufza, E., Sharma, R., Aiken, A.: Stochastic optimization of floating-point programs with tunable precision. In: PLDI, pp. 53–64 (2014)
  39. Souza, M., Borges, M., d’Amorim, M., Păsăreanu, C.S.: CORAL: solving complex constraints for symbolic pathfinder. In: NFM 2011. LNCS, vol. 6617, pp. 359–374. Springer, Heidelberg (2011)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. University of California, Davis, USA
