Nonlinear Craig Interpolant Generation

Craig interpolant generation for the non-linear theory and its combination with other theories is still in its infancy, although interpolation-based techniques have become popular in the verification of programs and hybrid systems, where non-linear expressions are very common. In this paper, we first prove that a polynomial interpolant of the form h(x) > 0 exists for two mutually contradictory polynomial formulas φ(x, y) and ψ(x, z) of the form f_1 ≥ 0 ∧ · · · ∧ f_n ≥ 0, where the f_i are polynomials in x, y or x, z, and the quadratic module generated by the f_i is Archimedean. Then, we show that synthesizing such an interpolant can be reduced to solving a semi-definite programming (SDP) problem. In addition, we propose a verification approach to assure the validity of the synthesized interpolant and consequently avoid the unsoundness caused by numerical error in SDP solving. We further discuss how to generalize our approach to general semi-algebraic formulas. Finally, as an application, we demonstrate how to apply our approach to invariant generation in program verification.


Introduction
Interpolation-based techniques have become popular in recent years because of their inherently modular and local reasoning, which can scale up existing formal verification techniques such as theorem proving, model checking and abstract interpretation, whose scalability is otherwise a bottleneck. The study of interpolation was pioneered by Krajícek [20] and Pudlák [30] in connection with theorem proving, by McMillan [25] in connection with model checking, by Graf and Saïdi [14], Henzinger et al. [16] and McMillan [26] in connection with abstraction such as CEGAR, and by Wang et al. [17] in connection with machine-learning-based program verification.
Craig interpolant generation plays a central role in interpolation-based techniques and has therefore drawn increasing attention. In the literature, various efficient algorithms have been proposed for automatically synthesizing interpolants for decidable fragments of first-order logic, linear arithmetic, array logic, equality logic with uninterpreted functions (EUF), etc., and their combinations, together with their use in verification, e.g., [6,16,18,19,26,27,33,37] and the references therein. Additionally, how to compare the strength of different interpolants is investigated in [9]. However, interpolant generation for the non-linear theory and its combination with the aforementioned theories is still in its infancy, although nonlinear polynomial inequalities are quite common in safety-critical software and embedded systems [38,39].
In [7], Dai et al. made a first attempt, giving an algorithm for generating interpolants for conjunctions of mutually contradictory nonlinear polynomial inequalities based on the existence of a witness guaranteed by Stengle's Positivstellensatz [36], which is computable using semi-definite programming (SDP). Their algorithm is incomplete in general, but becomes complete if all variables are bounded (the Archimedean condition). A major limitation of their work is that the two mutually contradictory formulas φ and ψ must have the same set of variables. In [10], Gan et al. proposed an algorithm to generate interpolants for quadratic polynomial inequalities. It is based on the insight that, for analyzing the solution space of concave quadratic polynomial inequalities, it suffices to linearize them; this is justified by a generalization of Motzkin's transposition theorem to concave quadratic polynomial inequalities. Moreover, they also discussed how to generate interpolants for the combination of the theory of concave quadratic polynomial inequalities and EUF, based on the hierarchical calculus proposed in [34] and used in [33]. Obviously, concave quadratic polynomial inequalities form a very restrictive class of polynomial formulas, although most existing abstract domains fall within it, as argued in [10]. Meanwhile, in [13], Gao and Zufferey presented an approach to extract interpolants for non-linear formulas, possibly containing transcendental functions and differential equations, from proofs of unsatisfiability generated by a δ-decision procedure [12] based on interval constraint propagation (ICP) [1], by transforming proof traces from δ-complete decision procedures into interpolants that consist of Boolean combinations of linear constraints. Thus, their approach can only find an interpolant for two formulas whose conjunction is not δ-satisfiable. A similar idea was also reported in [21]. In [5], Chen et al. proposed an approach for synthesizing non-linear interpolants based on counterexample-guided machine learning, but it relies on quantifier elimination in order to guarantee completeness and convergence, which theoretically limits the efficiency of their approach. In [35], Srikanth et al. presented an approach called CAMPY for non-linear interpolant generation, achieved by abstracting non-linear formulas (possibly with non-polynomial expressions) to the theory of linear arithmetic with uninterpreted functions, i.e., EUFLIA, in order to prove or disprove whether a given program satisfies a given property that may contain nonlinear expressions.

Example 1. In order to compare the approach proposed in this paper with the aforementioned ones, consider the following two formulas φ and ψ. It can be checked that φ ∧ ψ |= ⊥.
Obviously, synthesizing interpolants for φ and ψ in this example is beyond the ability of the approaches reported in [7,10]. Using the method of [13] implemented in dReal3, "SAT" is returned with δ = 0.001, i.e., φ ∧ ψ is δ-satisfiable, and hence no interpolant can be synthesized by the approach of [12] with any precision greater than 0.001. In contrast, using our method, an interpolant h > 0 of degree 10 can be found, as shown in Fig. 1. Additionally, using the symbolic procedure REDUCE, it can be proved that h > 0 is indeed an interpolant of φ and ψ.

In this paper, we investigate this issue and consider how to synthesize an interpolant for two polynomial formulas φ(x, y) and ψ(x, z), where x ∈ R^r, y ∈ R^s, z ∈ R^t are variable vectors, r, s, t ∈ N, and f_1, . . ., f_m, g_1, . . ., g_n are polynomials such that the quadratic modules M_{x,y} generated by {f_1(x, y), . . ., f_m(x, y)} and M_{x,z} generated by {g_1(x, z), . . ., g_n(x, z)} are Archimedean. Here we allow local (uncommon) variables, which are not allowed in [7], and drop the constraint that the polynomials must be concave and quadratic, which is assumed in [10]. The Archimedean condition amounts to requiring that all variables are bounded, which is reasonable in program verification, as only bounded numbers can be represented in a computer in practice. We first prove that there exists a polynomial h(x) such that h(x) = 0 separates the state space of x defined by φ(x, y) from the one defined by ψ(x, z), and then propose an algorithm to compute such an h(x) based on SDP. Furthermore, we propose a verification approach to assure the validity of the synthesized interpolant and consequently avoid the unsoundness caused by numerical error in SDP solving. Finally, we also discuss how to extend our results to general semi-algebraic constraints.
Another contribution of this paper is that, as an application, we illustrate how to apply our approach to invariant generation in program verification, by revising the framework of Lin et al. [22] for invariant generation based on weakest precondition, strongest postcondition and interpolation, so as to enable the generation of nonlinear invariants.
The paper is organized as follows. Some preliminaries and the problem of interest are introduced in Sect. 2. Section 3 shows the existence of an interpolant for two mutually contradictory polynomial formulas only containing conjunction, and Sect. 4 presents SDP-based methods to compute it. In Sect. 5, we discuss how to avoid unsoundness caused by numerical error in SDP. Section 6 extends our approach to general polynomial formulas. Section 7 demonstrates how to apply our approach to invariant generation in program verification. We conclude this paper in Sect. 8.

Preliminaries
In this section, we first give a brief introduction to the notions used throughout this paper and then describe the problem of interest.

Quadratic Module
N, Q and R are the sets of natural numbers, rational numbers and real numbers, respectively. Q[x] and R[x] denote the polynomial rings over the rationals and the reals in the r ≥ 1 indeterminates x := (x_1, . . ., x_r). We use R[x]^2 := {p^2 | p ∈ R[x]} for the set of squares and ΣR[x]^2 for the set of sums of squares of polynomials in x. Vectors are denoted by boldface letters. ⊥ and ⊤ stand for false and true, respectively.

Definition 1 (Quadratic Module [24]).

The Archimedean condition plays a key role in the study of polynomial optimization.

Problem Description
Craig showed that, given two formulas φ and ψ in a first-order theory T such that φ |= ψ, there always exists an interpolant I over the common symbols of φ and ψ with φ |= I and I |= ψ. In the verification literature, this terminology has been abused following [26]: a reverse interpolant (coined by Kovács and Voronkov in [19]) I over the common symbols of φ and ψ is defined by φ |= I and I ∧ ψ |= ⊥. The interpolant synthesis problem of interest in this paper is given in Problem 1.

Existence of Interpolants
The basic idea and steps of proving the existence of interpolants are as follows. Because an interpolant of φ and ψ contains only the symbols common to φ and ψ, it is natural to consider the projections of the sets defined by φ and ψ onto x, i.e. P_x(φ(x, y)) = {x | ∃y. φ(x, y)} and P_x(ψ(x, z)) = {x | ∃z. ψ(x, z)}, which are obviously disjoint. We therefore prove that, if h(x) = 0 separates P_x(φ(x, y)) and P_x(ψ(x, z)), then h(x) solves Problem 1 (see Proposition 1). Thus, we only need to prove the existence of such an h(x), through the following steps. First, we prove that P_x(φ(x, y)) and P_x(ψ(x, z)) are compact semi-algebraic sets which are unions of finitely many basic closed semi-algebraic sets (see Lemma 1). Second, using Putinar's Positivstellensatz, we prove that, for two disjoint basic closed semi-algebraic sets S_1 and S_2 of the Archimedean form, there exists a polynomial h_1(x) such that h_1(x) = 0 separates S_1 and S_2 (see Lemma 2). This result is then extended to the case where S_2 is a finite union of basic closed semi-algebraic sets (see Lemma 3). Finally, by generalizing Lemma 3 to the case where both compact semi-algebraic sets are unions of finitely many basic closed semi-algebraic sets, and combining with Proposition 1, we prove the existence of an interpolant in Theorem 2 and Corollary 1.
Proof. According to Definition 3, it is enough to prove that φ(x, y) |= h(x) > 0 and h(x) > 0 ∧ ψ(x, z) |= ⊥, which follows directly from the assumption that h(x) = 0 separates P_x(φ(x, y)) and P_x(ψ(x, z)).

In order to synthesize such an h(x) in Proposition 1, we first dig deeper into the two sets P_x(φ(x, y)) and P_x(ψ(x, z)). As shown later in Lemma 1, these two sets are compact semi-algebraic sets that are finite unions of basic closed semi-algebraic sets. Before this lemma, we introduce the Finiteness theorem for closed semi-algebraic subsets of R^n, which will be used in the proof of Lemma 1, where a basic closed semi-algebraic subset of R^n is a set of the form {x ∈ R^n | p_1(x) ≥ 0, . . ., p_k(x) ≥ 0} for some polynomials p_1, . . ., p_k.

Theorem 1 (Finiteness Theorem, Theorem 2.7.2 in [3]). Let A ⊂ R n be a closed semi-algebraic set. Then A is a finite union of basic closed semi-algebraic sets.
Lemma 1. The set P_x(φ(x, y)) is a compact semi-algebraic set of the form ∪_{i=1}^{c} {x | α_{i,j}(x) ≥ 0, j = 1, . . ., J_i}, where α_{i,j} ∈ R[x] for i = 1, . . ., c and j = 1, . . ., J_i. The same claim applies to the set P_x(ψ(x, z)).
Proof. Let S = {(x, y) | φ(x, y)} and let π be the projection onto x, so that π(S) = P_x(φ(x, y)). Because S is compact and π is continuous, π(S), being the image of a compact set under a continuous map, is compact. Moreover, since S is a semi-algebraic set and the projection of a semi-algebraic set is again a semi-algebraic set by the Tarski-Seidenberg theorem [2], π(S) is a semi-algebraic set. Thus, π(S) is a compact semi-algebraic set.
Since π(S) is a compact, hence closed, semi-algebraic set, π(S) is a finite union of basic closed semi-algebraic sets by Theorem 1. Hence, there exist polynomials α_{i,j}(x) such that π(S) = ∪_{i=1}^{c} {x | ∧_{j=1}^{J_i} α_{i,j}(x) ≥ 0}. This concludes the proof.
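The projection step in this proof can be illustrated numerically on a hypothetical toy instance (the disk and the grid below are our own illustration, not taken from the paper): projecting the unit disk S = {(x, y) | x^2 + y^2 ≤ 1} onto the x-axis yields the compact interval [-1, 1], a basic closed semi-algebraic set.

```python
# Toy numerical illustration (hypothetical example): sample the unit disk
# S = {(x, y) : x^2 + y^2 <= 1} on a grid and project onto the x-axis.
# The image pi(S) is the compact interval [-1, 1].
grid = [i / 100 for i in range(-150, 151)]   # covers [-1.5, 1.5]
proj = sorted({x for x in grid for y in grid if x * x + y * y <= 1})
lo, hi = proj[0], proj[-1]                   # endpoints of the sampled image
```

On this grid the sampled projection has endpoints -1 and 1, matching the exact image π(S) = [-1, 1].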
Having established in Lemma 1 that P_x(φ(x, y)) and P_x(ψ(x, z)) are unions of finitely many basic closed semi-algebraic sets, we next prove the existence of h(x) ∈ R[x] satisfying (1), as formally stated in Theorem 2.

Theorem 2.
Suppose that φ(x, y) and ψ(x, z) are defined as in Problem 1. Then there exists a polynomial h(x) satisfying (1).
As pointed out by an anonymous reviewer, Theorem 2 can also be obtained from properties of the ring of Nash functions proved in [29]. In what follows, we give a simpler and more intuitive proof, which requires some preliminaries. The main tool in our proof is Putinar's Positivstellensatz, formulated in Theorem 3.

Theorem 3 (Putinar's Positivstellensatz [31]).
With Putinar's Positivstellensatz, we can conclude that there exists a polynomial whose zero level set separates two compact semi-algebraic sets of the Archimedean form, as claimed in Lemmas 2 and 3.
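A minimal numerical sketch of this separation property on a hypothetical pair of sets (the disks and the separator below are our own illustration, not from the paper): for the unit disks centred at (0, 0) and (3, 0), the linear polynomial h(x, y) = 1.5 - x has its zero level set strictly between them.

```python
import itertools

# Hypothetical toy instance of the separation claim: S1 and S2 are disjoint
# basic closed semi-algebraic sets (two unit disks), and h(x, y) = 1.5 - x
# satisfies h > 0 on S1 and h < 0 on S2, so {h = 0} separates them.
in_S1 = lambda x, y: 1 - x * x - y * y >= 0            # disk at (0, 0)
in_S2 = lambda x, y: 1 - (x - 3) ** 2 - y * y >= 0     # disk at (3, 0)
h = lambda x, y: 1.5 - x

grid = [i / 10 for i in range(-15, 46)]                # covers [-1.5, 4.5]
separated = all(h(x, y) > 0 for x, y in itertools.product(grid, grid)
                if in_S1(x, y)) and \
            all(h(x, y) < 0 for x, y in itertools.product(grid, grid)
                if in_S2(x, y))
```

On S1 every point has x ≤ 1, so h ≥ 0.5; on S2 every point has x ≥ 2, so h ≤ -0.5.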
implying that there exists a set of sum of squares polynomials u_1, u_2, . . ., with which it is easy to check that (2) holds. Lemma 3 generalizes the result of Lemma 2 to more general compact semi-algebraic sets of the Archimedean form, namely unions of finitely many basic semi-algebraic sets.
In order to prove Lemma 3, we first establish the following auxiliary lemma.
Now we give the proof of Lemma 3.

Proof (of Lemma 3).
Since S_0 is a semi-algebraic set of the Archimedean form, S_0 is compact, and thus each h_i(x) attains a minimum value and a maximum value on S_0, denoted by c_i and d_i respectively. Using these bounds we define h_0(x), and next prove that h_0(x) satisfies (3) in Lemma 3.
By (5), the first constraint in (3) holds, and the second constraint follows analogously. Thus, we obtain the conclusion that there exists a polynomial h_0(x) satisfying (3). In Lemma 3 we have proved that there exists a polynomial h(x) ∈ R[x] whose zero level set is a barrier between two semi-algebraic sets of the Archimedean form, one of which is a union of finitely many basic semi-algebraic sets. In the following we give a formal proof of Theorem 2, which is a generalization of Lemma 3.
Proof (of Theorem 2). According to Lemma 1, P_x(φ(x, y)) and P_x(ψ(x, z)) are compact sets, and there respectively exist sets of polynomials defining them as finite unions of basic closed semi-algebraic sets. Since P_x(φ(x, y)) and P_x(ψ(x, z)) are compact, there exists a positive bound enclosing both P_x(φ(x, y)) and P_x(ψ(x, z)). For each i = 1, . . ., a and each l = 1, . . ., b, the defining polynomials can be augmented with this bound so that the resulting sets P_1 and P_2 are of the Archimedean form, and it is easy to see that P_1 = P_x(φ(x, y)) and P_2 = P_x(ψ(x, z)).
Consequently, we immediately have the following conclusion.

Corollary 1. Let φ(x, y) and ψ(x, z) be defined as in Problem 1. There must exist a polynomial h(x) ∈ R[x] such that h(x) > 0 is an interpolant for φ and ψ.
Actually, since P_x(φ(x, y)) and P_x(ψ(x, z)) are both compact sets by Lemma 1, with h(x) > 0 on P_x(φ(x, y)) and h(x) < 0 on P_x(ψ(x, z)), we can obtain h'(x) by applying a small perturbation to the coefficients of h(x) such that h'(x) retains the sign properties of h(x). Hence, intuitively, there should exist an h(x) ∈ Q[x] such that h(x) > 0 is an interpolant for φ and ψ. Since P_x(φ(x, y)) and P_x(ψ(x, z)) are compact sets, with h(x) > 0 on P_x(φ(x, y)) and h(x) < 0 on P_x(ψ(x, z)), there exist η_1 > 0 and η_2 > 0 such that
So, the existence of h(x) ∈ Q[x] is guaranteed. Moreover, from the proof of Theorem 4, we know that a small perturbation of h(x) is permitted, which is a useful property for computing h(x) numerically. In the next subsection, we recast the problem of finding such an h(x) as a semi-definite programming problem.
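The perturbation argument can be made concrete on a small hypothetical separator (the polynomial and the sets below are our own illustration, not from the paper): rounding the real coefficients of h(x) = 1.5 - x to nearby rationals perturbs h by far less than the sign margins η_1, η_2, so the rounded polynomial still separates.

```python
from fractions import Fraction

# Hypothetical illustration: h(x) = 1.5 - x separates [-1, 1] (h >= 0.5)
# from [2, 4] (h <= -0.5). Rounding its coefficients to rationals with
# small denominators changes h by much less than these margins, so the
# rounded polynomial still separates the two sets.
coeffs = [1.5, -1.0]
rat = [Fraction(c).limit_denominator(10) for c in coeffs]

def h(cs, x):
    return cs[0] + cs[1] * x

S1 = [i / 10 for i in range(-10, 11)]        # samples of [-1, 1]
S2 = [2 + i / 10 for i in range(0, 21)]      # samples of [2, 4]
still_separates = all(h(rat, x) > 0 for x in S1) and \
                  all(h(rat, x) < 0 for x in S2)
```

Here the rounded coefficients are exactly 3/2 and -1, and the sign pattern is preserved on both sets.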

SOS Formulation
Similar to [7], in this section we discuss how to reduce the problem of finding h(x) satisfying (1) to a sum of squares programming problem.

Theorem 5. Let φ(x, y) and ψ(x, z) be defined as in Problem 1. Then there exist m + n + 2 SOS (sum of squares) polynomials u_i(x, y) (i = 1, . . ., m + 1), v_j(x, z) (j = 1, . . ., n + 1) and a polynomial h(x) such that

Theorem 6 (Soundness). Suppose that φ(x, y) and ψ(x, z) are defined as in Problem 1 and h(x) is a feasible solution to (9). Then h(x) solves Problem 1, i.e. h(x) > 0 is an interpolant for φ and ψ.
Moreover, we have the following completeness theorem, stating that if the degrees of the polynomials u_i, v_j, i = 1, . . ., m, j = 1, . . ., n, are large enough, then h(x) can definitely be synthesized via solving (9), with the candidate polynomials drawn as in (11) for some positive integer N, where R_k[·] stands for the family of polynomials of degree no more than k.

Theorem 7 (Completeness). For Problem 1, there must be polynomials
Proof. This is an immediate result of Theorem 5.

Example 2. Consider two contradictory formulas φ and ψ defined by
It is easy to observe that φ and ψ satisfy the conditions in Problem 1. Since there are local variables in φ and ψ and the degree of f 2 is 4, the interpolant generation methods in [7] and [10] are not applicable. We get a concrete SDP problem of the form (9) by setting the degree of the polynomial h(x, y, z) in (9) to be 2. Using the MATLAB package YALMIP [23] and Mosek [28], we obtain h(x, y, z) = − 416.7204 − 914.7840x + 472.6184y + 199.8985x 2 + 190.2252y 2 + 690.4208z 2 − 187.1592xy.
In order to solve (9) for h(x), we first need to fix a degree bound 2d, d ∈ N, on u_i, v_j and h. It is well known that any SOS polynomial u(x) ∈ ΣR[x]^2 of degree 2d can be represented as u(x) = E_d(x)^T C_u E_d(x) for some positive semi-definite matrix C_u, where E_d(x) is the column vector of all monomials in x of total degree not greater than d, and E_d(x)^T stands for the transposition of E_d(x). Equating the coefficients of each monomial of degree at most 2d on the two sides of (10), we obtain a system of linear equations of the form tr(A_{u,k} C_u) = b_{u,k}, where each A_{u,k} ∈ R^{D×D} with D = (r+d choose d) is a constant matrix, each b_{u,k} ∈ R is a constant, and tr(A) stands for the trace of matrix A. Thus, searching for u_i, v_j and h satisfying (9) can be reduced to the following SDP problem (12), where C_{h-1-Σu_i f_i} is the matrix corresponding to the polynomial h - 1 - Σ_{i=1}^{m} u_i f_i, which is a linear combination of C_{u_1}, . . ., C_{u_m} and C_h; similarly, C_{-h-1-Σv_j g_j} is the matrix corresponding to the polynomial -h - 1 - Σ_{j=1}^{n} v_j g_j, which is a linear combination of C_{v_1}, . . ., C_{v_n} and C_h; and diag(C_1, . . ., C_k) is the block-diagonal matrix with blocks C_1, . . ., C_k.
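The Gram-matrix representation u(x) = E_d(x)^T C E_d(x) can be checked concretely; the following sketch uses a small SOS polynomial of our own choosing (not taken from the paper) over the monomial vector E_1(x, y) = [x, y]^T.

```python
# Gram matrix of u(x, y) = (x + y)^2 + (x - 2y)^2 = 2x^2 - 2xy + 5y^2
# over the monomial vector E_1(x, y) = [x, y]^T.
C = [[2.0, -1.0],
     [-1.0, 5.0]]

# For this 2x2 matrix, positive definiteness (hence PSD) follows from
# Sylvester's criterion: positive leading principal minors.
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
is_psd = C[0][0] > 0 and det > 0

def u(x, y):
    return (x + y) ** 2 + (x - 2 * y) ** 2

def gram_form(x, y):
    # E^T C E with E = [x, y]: the off-diagonal entries sum to the xy term.
    return C[0][0] * x * x + (C[0][1] + C[1][0]) * x * y + C[1][1] * y * y

samples = [(1.0, 2.0), (-0.5, 3.0), (0.3, -0.7)]
match = all(abs(gram_form(x, y) - u(x, y)) < 1e-9 for x, y in samples)
```

Since C is positive semi-definite and the quadratic form reproduces u, this certifies that u is a sum of squares.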

Theorem 8 ([32], Theorem 3). C ⪰ 0 if there exists C' ∈ F^{D×D} such that the following conditions hold: 1. C'_{ij} = C_{ij}, for any i ≠ j; 2. C'_{ii} ≤ C_{ii} - α, for any i; and 3. the Cholesky algorithm implemented in floating-point arithmetic concludes that C' is positive semi-definite. Here F is a floating-point format, α = ((D+1)κ/(1-(2D+2)κ)) tr(C) + 4(D+1)(2(D+2) + max_i C_{ii})η, κ is the unit roundoff of F, and η is the underflow unit of F.

According to Remark 5 in [32], for the IEEE 754 binary64 format with rounding to nearest, κ = 2^{-53} (≈ 10^{-16}) and η = 2^{-1075} (≈ 10^{-323}). In this case, the order of magnitude of β is 10^{-10}, and (D+1)Dκ/(1-(2D+2)κ) + 4(D+1)η is of order 10^{-13}, much less than 1/2. Obviously, β becomes smaller as the binary format becomes longer. W.l.o.g., we suppose that the Cholesky algorithm succeeds on C, the solution of (12); this is reasonable because if an SDP solver returns a solution C, then C should be considered positive semi-definite in the sense of numerical computation.
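The test of Theorem 8 can be sketched as follows; this is a simplified reading of the theorem, with a textbook Cholesky factorization in place of a library routine: shift the diagonal of C down by α and run floating-point Cholesky, and success certifies C ⪰ 0.

```python
import math

def float_cholesky(M):
    # Standard floating-point Cholesky; returns False if a pivot is <= 0.
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = M[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if s <= 0.0:
            return False
        L[j][j] = math.sqrt(s)
        for i in range(j + 1, n):
            L[i][j] = (M[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return True

def verified_psd(C, kappa=2.0 ** -53, eta=2.0 ** -1075):
    # Sketch of the Theorem 8 test: build C' with diagonal C_ii - alpha and
    # the same off-diagonal entries; if Cholesky succeeds on C', then C is
    # positive semi-definite.
    D = len(C)
    tr = sum(C[i][i] for i in range(D))
    cmax = max(C[i][i] for i in range(D))
    alpha = (D + 1) * kappa / (1 - (2 * D + 2) * kappa) * tr \
            + 4 * (D + 1) * (2 * (D + 2) + cmax) * eta
    shifted = [[C[i][j] - (alpha if i == j else 0.0) for j in range(D)]
               for i in range(D)]
    return float_cholesky(shifted)
```

For example, a positive definite matrix such as [[2, -1], [-1, 5]] passes the test, while an indefinite matrix such as [[1, 2], [2, 1]] fails it.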
So, by Corollary 2, C + 2βI ⪰ 0 holds, where I is the identity matrix of the corresponding dimension. Let ε be the tolerance of the SDP solver for p ∈ {u_1, . . ., u_m, v_1, . . ., v_n, h}. Since |tr(A_{p,i} C_p) - b_{p,i}| is the error term for each monomial of p, ε can be considered as the error bound on the coefficients of the polynomials u_i, v_j and h. Hence, for any polynomial û_i (resp. v̂_j and ĥ) computed from (11) by replacing C_u with the corresponding Ĉ_u, there exists a corresponding remainder term R_{u_i} (resp. R_{v_j} and R_h) of degree not greater than 2d, whose coefficients are bounded by ε. Now, in order to avoid unsoundness of our approach caused by the numerical issues of SDP, we have to prove (13) and (14). Regarding (14), let R_{2d,x} be the polynomial in R[|x|] of total degree 2d all of whose coefficients are 1, e.g., R_{2,x,y} = 1 + |x| + |y| + |x^2| + |xy| + |y^2|. For any (x_0, y_0) ∈ S, considering the polynomials below at (x_0, y_0) ∈ S, by the first and third lines in (13), the error terms can be bounded, with M_{g_j} an upper bound of g_j on S'. A symmetric bound follows on the other side. So, the following proposition follows immediately.

Proposition 2.
There exist two positive constants γ_1 and γ_2 bounding the above error terms by γ_1 ε and γ_2 β, respectively. Since ε and β heavily rely on the numerical tolerance and the floating-point representation, it is easy to see that ε and β become small enough to ensure γ_1 ε < 1/2 and γ_2 β < 1/2, provided the numerical tolerance is small enough and the floating-point representation is long enough. This implies that the required conditions (13) and (14) hold. If so, any numerical result h > 0 returned by calling an SDP solver on (12) is guaranteed to be a real interpolant for φ and ψ, i.e., a correct solution to Problem 1.
Since the default error tolerance of the SDP solver Mosek is 10^{-8} and h is rounded to 4 decimal places, we have ε = 10^{-4}/2. In addition, as the absolute value of each element of C is less than 10^3 and the dimension D is less than 10^3, we obtain γ_1 ε ≤ 6557 · 10^{-4}/2 < 1/2 and γ_2 β ≤ 2320 · 10^{-6} < 1/2, which imply that the h(x, y, z) > 0 presented in Example 2 is indeed a real interpolant.
Remark 1. Alternatively, the result can be verified by the following symbolic computation procedure: first compute P_x(φ) and P_x(ψ) with a symbolic tool such as Redlog [8], a package that extends the computer algebra system REDUCE to a computer logic system; then verify x ∈ P_x(φ) ⇒ h(x) > 0 and x ∈ P_x(ψ) ⇒ h(x) < 0. For this example, P_{x,y,z}(φ) and P_{x,y,z}(ψ) obtained by Redlog are too complicated to be presented here. The symbolic computation verifies that h(x, y, z) in this example is indeed an interpolant, which confirms our conclusion. We can also solve the SDP in (9) using an SDP solver with infinite precision [15] and obtain an exact result, but this only works for problems of small size, because an SDP solver with infinite precision is essentially based on symbolic computation, as noted in [15].

Generalizing to General Polynomial Formulas
Problem 2. Let φ(x, y) and ψ(x, z) be two polynomial formulas defined as follows.

Proof. We just need to prove that Lemma 1 holds for Problem 2 as well.

Proof. By the Archimedean property, the proof is the same as that for Theorem 5.
Similarly, Problem 2 can be equivalently reformulated as the problem of searching for sum of squares polynomials satisfying (20). We get a concrete SDP problem of the form (20) by setting the degree of h(x, y) in (20) to be 2. Using the MATLAB package YALMIP and Mosek, we obtain h(x, y) = -2.3238 + 0.6957x^2 + 0.6957y^2 + 7.6524xy.
The result is plotted in Fig. 3, and can be verified either by numerical error analysis as in Example 2 or by a symbolic procedure like REDUCE as described in Remark 1.
Example 5 (Ultimate). Consider the following example taken from [5], which is a challenging benchmark for existing approaches to nonlinear interpolant generation. We first convert φ and ψ to disjunctive normal form, and then obtain a concrete SDP problem of the form (20). The result is plotted in Fig. 4, and can be verified either by numerical error analysis as in Example 2 or by a symbolic procedure like REDUCE as described in Remark 1.

Application to Invariant Generation
In this section, as an application, we sketch how to apply our approach to invariant generation in program verification; the details can be found in [11].
In [22], Lin et al. proposed a framework for invariant generation using weakest precondition, strongest postcondition and interpolation. It consists of two procedures: synthesizing invariants by forward interpolation, based on strongest postcondition and interpolant generation, and by backward interpolation, based on weakest precondition and interpolant generation. In [22], only linear invariants can be synthesized, as no powerful approaches were available for synthesizing nonlinear interpolants. Our results can strengthen their framework by enabling the generation of nonlinear invariants. For example, we can revise the procedure Squeezing Invariant-Forward in their framework and obtain Algorithm 1.
The major revisions are: first, we exploit our method to synthesize interpolants (line 4 in Algorithm 1); second, we add a conditional statement for A_{i+1} at lines 7-10 in Algorithm 1 in order to make A_{i+1} Archimedean.
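The revised forward procedure can be sketched as follows; the helper names (sp, interpolate, join, bound) and the interval instantiation are hypothetical stand-ins for the components of Algorithm 1, not the paper's actual pseudocode.

```python
import math

def squeeze_invariant_forward(A0, B, sp, interpolate, join, leq, bound,
                              max_iters=50):
    # Hypothetical sketch: iterate strongest postconditions from A0,
    # interpolate against the bad states B (line 4 of Algorithm 1), and
    # stop once the interpolant stabilises into an inductive invariant.
    A, I_prev = A0, None
    for _ in range(max_iters):
        I = interpolate(A, B)
        if I_prev is not None and leq(I, I_prev) and leq(I_prev, I):
            return I_prev                     # fixed point reached
        I_prev = I
        A = bound(join(A, sp(A)))             # keep A_{i+1} bounded (Archimedean)
    return None

# Toy instantiation on interval predicates (lo, hi): loop body x := x / 2,
# initial states x in [0, 1], bad states x >= 2.
sp = lambda a: (a[0] / 2, a[1] / 2)
join = lambda a, b: (min(a[0], b[0]), max(a[1], b[1]))
leq = lambda a, b: b[0] <= a[0] and a[1] <= b[1]           # interval inclusion
bound = lambda a: (max(a[0], -10.0), min(a[1], 10.0))      # Archimedean box
interpolate = lambda a, b: (-math.inf, (a[1] + b[0]) / 2)  # midpoint separator

inv = squeeze_invariant_forward((0.0, 1.0), (2.0, math.inf),
                                sp, interpolate, join, leq, bound)
```

On this toy instance the procedure converges to the invariant x < 1.5, which contains all reachable states and excludes the bad states x ≥ 2.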
The procedure Squeezing Invariant-Backward can be revised similarly.

Example 6. Consider the loop program given in Algorithm 2, adapted from [21], for controlling the acceleration of a car. Suppose we know that vc is in [0, 40] at the beginning of the loop; we would like to prove that vc < 49.61 holds after the loop. Since the loop guard is unknown, the loop may terminate after any number of iterations.

Algorithm 1. Revised Squeezing Invariant -Forward
We apply Algorithm 1 to compute an invariant ensuring that vc < 49.61 holds. Since vc is the velocity of the car, 0 ≤ vc < 49.61 is required to hold in order to maintain safety. Via Algorithm 1, it is first evident that A_0 : vc(40 - vc) ≥ 0 implies A_0 ∧ B |= ⊥. By applying our approach, we obtain an interpolant, and consequently conclude that I_2 is an inductive invariant which witnesses the correctness of the loop.

Conclusion
In this paper we have proposed a sound and complete method to synthesize Craig interpolants for mutually contradictory polynomial formulas φ(x, y) and ψ(x, z) of the form f_1 ≥ 0 ∧ · · · ∧ f_n ≥ 0, where the f_i are polynomials in x, y or x, z and the quadratic module generated by the f_i is Archimedean. The interpolant is generated by solving a semi-definite programming problem. Our method generalizes the method in [7], which deals with mutually contradictory formulas over the same set of variables, and the method in [10], which deals with mutually contradictory formulas built from concave quadratic polynomial inequalities. As an application, we have applied our approach to invariant generation in program verification.
As future work, we would like to consider interpolant synthesis for formulas with strict polynomial inequalities. It is also worth investigating how to synthesize interpolants for the combination of non-linear formulas with other theories, based on our approach and existing ones, as well as further applications to the verification of programs and hybrid systems.