1 Introduction

A posynomial is a function of the form

$$\begin{aligned} P (x) = \sum _{a\in A} c_a x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n} \end{aligned}$$

where the variable \(x=(x_1,\dots ,x_n)\) is a vector with real positive entries, A is a finite subset of vectors of \(\mathbb {R}^n\), and the \(c_a\) are positive real numbers. Here for any \(a\in \mathbb {R}^n\), we denote by \(a_i\) the i-th coordinate of a. The set A is called the support of P, also denoted by \(S_P\); its elements are called the exponents of the posynomial, and the \(c_a\) its coefficients.
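As a concrete illustration of this definition, the following minimal sketch (in Python with NumPy; the function name is ours) evaluates a posynomial given by its exponents and positive coefficients, using the identity \(x^a = \exp (\langle a, \log x\rangle )\), valid for \(x>0\).

```python
import numpy as np

def eval_posynomial(exponents, coeffs, x):
    """Evaluate P(x) = sum_a c_a * x_1^{a_1} ... x_n^{a_n} at a point x > 0.

    exponents: (m, n) array, one row per exponent a in the support A
    coeffs:    (m,) array of positive coefficients c_a
    x:         (n,) array with positive entries
    """
    x = np.asarray(x, dtype=float)
    # each monomial x^a is computed as exp(<a, log x>)
    monomials = np.exp(np.asarray(exponents, dtype=float) @ np.log(x))
    return float(np.asarray(coeffs, dtype=float) @ monomials)

# Example: P(x1, x2) = 2 * x1^0.5 * x2^(-1) + 3 * x1 * x2^2, evaluated at (4, 2) -> 50
print(eval_posynomial([[0.5, -1.0], [1.0, 2.0]], [2.0, 3.0], [4.0, 2.0]))
```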

Unlike polynomials, posynomials can have arbitrary exponents. They arise in convex optimization, especially in geometric and entropic programming [6] and in polynomial optimization [7]. They also arise in the theory of nonnegative tensors [8, 11], in risk sensitive control [3] and game theory [1].

A tropical posynomial is a function of the form

$$\begin{aligned} P^{\text {trop}}(x) = \max _{a\in A} \big ( c_a + \langle a , x \rangle \big ) \end{aligned}$$

where \(\langle a , x \rangle \) is the usual dot product of \(\mathbb {R}^n\), the \(c_a\) are now real coefficients, and \(x=(x_1,\dots ,x_n)\) can take its values in \(\mathbb {R}^n\). The terminology used comes from the tropical (or max-plus) semi-field, whose additive law is the maximum and the multiplicative law is the usual sum.
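Analogously to the classical case, a tropical posynomial is straightforward to evaluate; a minimal sketch (NumPy, the function name is ours):

```python
import numpy as np

def eval_trop_posynomial(exponents, coeffs, x):
    """Evaluate P_trop(x) = max_a ( c_a + <a, x> ), the max-plus analogue."""
    values = np.asarray(coeffs, dtype=float) + np.asarray(exponents, dtype=float) @ np.asarray(x, dtype=float)
    return float(np.max(values))
```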

In this paper, we are interested in solving (square) classical posynomial systems, i.e. systems of the form

$$\begin{aligned} P_{i}(x)&= 1 \quad \text {for all} \; i \in [n] \end{aligned}$$
(1)

with \(x\in (\mathbb {R}_{>0})^n\), and the \(P_i\) are classical posynomials. We will also study the tropical counterpart,

$$\begin{aligned} P^{\text {trop}}_{i}(x)&= 0 \quad \text {for all} \; i \in [n] \end{aligned}$$
(2)

with now \(x\in \mathbb {R}^n\), and the \(P_i^{\text {trop}}\) are tropical posynomials (hereafter we shall write \(P_i\) instead of \(P_i^{\text {trop}}\), for brevity). The optimality equations of Markov decision processes [13] are special cases of tropical posynomial systems. More general tropical posynomial systems arise in the performance analysis of timed discrete event systems, see [2].

Solving (square) posynomial systems is in general NP-hard (Sect. 2). However, we identify a tractable subclass. The tropical version can be solved exactly in polynomial time by reduction to a linear program (Sect. 3), whereas the classical version can be solved approximately by reduction to a geometric program (Sect. 4). Our approach is based on a notion of colorful interior of a collection of cones. A point is in the colorful interior if it is a positive linear combination of vectors of these cones, and if at least one vector of every cone is needed in such a linear combination. Our reductions are valid when the colorful interior of the cone generated by the supports of the posynomials is nonempty, and when a point in this interior is known. As special cases, we recover the linear programming formulation of Markov decision processes, and the geometric programming formulation of risk sensitive problems. Properties of the colorful interior and related open problems are discussed in Sect. 5.

2 Solving Posynomial Systems Is NP-hard

The following two results show that the feasibility problems for classical or tropical posynomial systems are NP-hard, even with integer exponents.

Proposition 1

Solving a square tropical posynomial system is NP-hard.

Proof

We reduce 3-SAT to the problem (2). Let us consider a Boolean formula in conjunctive normal form \(C_1\wedge \dots \wedge C_p\) made of p clauses, each one of them involving three of the n variables \(x_1,\dots ,x_n\) (\(p,n\in \mathbb {N}\)).

We introduce the following tropical posynomial system in the \(2n+2p\) variables \((x_1,\dots ,x_n,y_1,\dots ,y_n,z_1\dots ,z_p,s_1,\dots ,s_p)\), with the same number of equations:

This system can be constructed in polynomial time from the Boolean formula. The first 2n equations ensure that for all \(i\in [n]\), \(x_i\in \{0,1\}\) and that \(x_i\) and \(y_i\) have opposite logical values. The next p equations express that for all \(j\in [p]\), the variable \(z_j\) has the same Boolean value as the clause \(C_j\), with the notation \(x_i\in C_j\) (resp. \(\lnot x_i\in C_j\)) if the variable \(x_i\) occurs positively (resp. negatively) in the clause \(C_j\). The last equations ensure that \(z_j = 1\) for all \(j \in [p]\). The instance \(C_1 \wedge \dots \wedge C_p\) is satisfiable if and only if this system admits a solution.    \(\square \)

Theorem 2

Solving a square classical posynomial system is NP-hard.

Proof

We modify the previous construction to obtain a square posynomial system over \(\mathbb {R}_{> 0}^{2n+2p}\), along the lines of Maslov’s dequantization principle [12] or Viro’s method [14]:

From the first 2n equations, the variables \(x_i\) and \(y_i\) range over \(\{2,1/2\}\), the values 2 and 1/2 respectively encode the true and false Boolean values. The variable \(y_i = 1/x_i\) corresponds to the Boolean negation of \(x_i\). Since each clause has precisely three literals, using the p next equations, we deduce that the variable \(z_j\) takes one of the values \(\{1/2,3/4,1\}\) if the clause \(C_j\) is satisfied, and that it takes the value 1/4 otherwise. The last p equations impose that \(z_j\) can take any value in \((1/3,\infty )\). We deduce that the formula \(C_1 \wedge \dots \wedge C_p\) is satisfied if and only if the posynomial system that we have obtained in this way admits a solution in \(\mathbb {R}_{> 0}^{2n+2p}\).    \(\square \)

3 A Linear Programming Approach to Solve Tropical Posynomial Systems

Given tropical posynomials \(P_1, \dots , P_n\), we write the system (2) as \(P(x) = 0\), where \(P = (P_1, \dots , P_n)\). The support of this system, denoted \(\mathbf {S}\), is defined as the disjoint union \(\biguplus _{i \in [n]} S_{P_i}\) of the supports of the posynomials \(P_i\). By disjoint union, we mean the coproduct in the category of sets (these supports may have non-empty intersections, and they may even coincide).

Definition 1

We say that a vector y in the (convex) conic hull \({\mathrm {cone}}(\mathbf {S})\) is colorful if, for all \(\mu \in (\mathbb {R}_{\geqslant 0})^\mathbf {S}\),

$$ y=\sum _{a \in \mathbf {S}} \mu _a\, a \implies \forall i\in [n], \; \exists a\in S_{P_i} , \; \mu _a > 0 . $$

In other words, a vector \(y\in \mathbb {R}^n\) is colorful if it arises as a nonnegative combination of the exponents of P, and if every such combination makes use of at least one exponent of each of the tropical posynomials \(P_1,\dots ,P_n\).

In this way, if we think of \(S_{P_1},\dots ,S_{P_n}\) as colored sets, we need all the colors to decompose a colorful vector y over them. Moreover, by Carathéodory’s theorem, every vector in the conic hull \({\mathrm {cone}}(\mathbf {S})\) can be written as a positive linear combination of a linearly independent family of vectors of \(\mathbf {S}\). Hence, when y is a colorful vector, it is obtained as a positive linear combination of precisely one vector \(a_i\) in each color class \(S_{P_i}\), and the family \(a_1,\dots ,a_n\) must be a basis. (If not, Carathéodory’s theorem would imply that y is a positive linear combination of a proper subset of \(\{a_1, \dots , a_n\}\); such a subset would miss at least one color class, so y could not be a colorful vector.)
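Definition 1 can be tested algorithmically: y is colorful exactly when it lies in the cone generated by all the exponents, but outside the cone generated by the exponents of any \(n-1\) of the colors. A minimal sketch of this test (our naming, using SciPy's linprog for the LP feasibility subproblems):

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(vectors, y):
    """Feasibility LP: does y = sum_a mu_a * a admit a solution with mu >= 0?"""
    A_eq = np.column_stack(vectors)          # columns are the generators a
    res = linprog(c=np.zeros(len(vectors)),  # pure feasibility problem
                  A_eq=A_eq, b_eq=np.asarray(y, dtype=float))
    return res.status == 0                   # 0: solved, 2: infeasible

def is_colorful(supports, y):
    """supports: list of n lists of exponent vectors (one list per color).

    y is colorful iff it lies in the cone of all exponents but, for every
    color i, outside the cone generated by the exponents of the other colors.
    """
    all_vecs = [a for S in supports for a in S]
    if not in_cone(all_vecs, y):
        return False
    for i in range(len(supports)):
        others = [a for j, S in enumerate(supports) if j != i for a in S]
        if others and in_cone(others, y):    # cone of an empty family is {0}
            return False
    return True
```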

Given a vector y, we consider the following linear program:

$$\begin{aligned} \text {(LP(}y\text {))} \qquad \text {maximize} \;\; \langle y , x \rangle \quad \text {subject to} \;\; c_a + \langle a , x \rangle \leqslant 0 \;\; \text {for all} \; a \in \mathbf {S}, \qquad x \in \mathbb {R}^n . \end{aligned}$$

Remark that the feasibility set of this linear program consists of the vectors \(x \in \mathbb {R}^n\) satisfying \(P(x) \leqslant 0\). In other words, it can be thought of as a relaxation of the system \(P(x) = 0\). The following theorem shows that this relaxation provides a solution of \(P(x) = 0\) if y is a colorful vector.

Theorem 3

Assume that y is a colorful vector, and that the linear program (LP(y)) is feasible. Then, the linear program (LP(y)) has an optimal solution, and any optimal solution x satisfies \(P(x) = 0\).

Proof

Since the feasibility set of (LP(y)), \(\mathcal {F} = \{x \in \mathbb {R}^n :c_a + \langle a , x \rangle \leqslant 0 \;\, \text {for all} \; a \in \mathbf {S}\}\), is nonempty, we can consider its recession cone, which is given by \(\mathcal {C} = \{x \in \mathbb {R}^n :\langle a , x \rangle \leqslant 0 \;\, \text {for all} \; a \in \mathbf {S}\}\). As a colorful vector, y belongs to the polyhedral cone generated by the vectors \(a \in \mathbf {S}\), so \(\langle y , x \rangle \leqslant 0\) for all \(x \in \mathcal {C}\). By the Minkowski–Weyl theorem, \(\mathcal {F}\) is a Minkowski sum of the form \(\mathcal {P}+\mathcal {C}\) where \(\mathcal {P}\) is a polytope, i.e., every feasible point x can be written as \(x=x'+x''\) with \(x'\in \mathcal {P}\) and \(x''\in \mathcal {C}\). Since \(\langle y , x'' \rangle \leqslant 0\), the maximum of the objective function over the polyhedron \(\mathcal {F}\) is attained (by an element of \(\mathcal {P}\)).

Let \(x^{\star }\in \mathbb {R}^n\) be an optimal solution of (LP(y)). From the strong duality theorem, the dual linear program admits an optimal solution \((\mu ^{\star }_a)_{a\in \mathbf {S}} \in (\mathbb {R}_{\geqslant 0})^\mathbf {S}\) which satisfies \(y = \sum _{a\in \mathbf {S}}\mu ^{\star }_a\,a\) and \(\mu ^{\star }_a \, \big ( c_a + \langle a , x^{\star } \rangle \big ) = 0\) for all \(a \in \mathbf {S}\). Since y is a colorful vector, for all \(i\in [n]\), there is some \(a_i\in S_{P_i}\) such that \(\mu _{a_i}^{\star }>0\). We then get that, for all \(i \in [n]\), \(P_i(x^{\star }) \geqslant c_{a_i} + \langle a_i , x^{\star } \rangle = 0\), while feasibility of \(x^{\star }\) gives \(P_i(x^{\star }) \leqslant 0\). As a result, \(P(x^{\star })=0\).    \(\square \)

We next provide a geometric condition ensuring that the linear program (LP(y)) is feasible regardless of the coefficients \(c_a\). We say that the tropical posynomial function P has pointed exponents if its support is contained in an open halfspace, i.e. there exists \(z \in \mathbb {R}^n\) such that \(\langle a , z \rangle < 0\) for all \(a\in \mathbf {S}\). Our interest in pointed systems comes from the following property:

Proposition 4

The inequality problem \(P(x)\leqslant 0\) has a solution \(x\in \mathbb {R}^n\) regardless of the coefficients of P if and only if \(P\) has pointed exponents.

Proof

Suppose that for all values of \((c_a)_{a\in \mathbf {S}}\), there exists \(x\in \mathbb {R}^n\) such that \(P(x)\leqslant 0\). By choosing \(c_a\equiv 1\), there exists \(x_0\in \mathbb {R}^n\) that satisfies \(1 + \langle a , x_0 \rangle \leqslant 0\) for all \(a\in \mathbf {S}\). Hence, for all \(i\in [n]\), the exponents of \(P_i\) lie in the open halfspace \(\{a\in \mathbb {R}^n\;|\; \langle a,x_0\rangle < 0\}\).

Suppose now that \(P\) has pointed exponents. Then there is some \(z\in \mathbb {R}^n\) such that \(\langle a , z \rangle < 0\) for all \(a\in \mathbf {S}\). Choosing \(\lambda \in \mathbb {R}\) large enough so that \(c_a + \lambda \langle a , z \rangle \leqslant 0\) for all \(a\in \mathbf {S}\) (e.g. \(\lambda = \max _{a \in \mathbf {S}} c_a / (-\langle a , z \rangle )\)), we get \(P_i(\lambda z)\leqslant 0\) for all \(i\in [n]\).    \(\square \)

As a consequence of Theorem 3 and Proposition 4, if the tropical posynomial system \(P(x) = 0\) has pointed exponents and there exists a colorful vector, then the system admits a solution which can be found by linear programming.
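The following sketch puts Theorem 3 together with the discussion above (the helper name is ours, using scipy.optimize.linprog): given the supports, the coefficients and a colorful vector y, it solves the relaxation (LP(y)); by Theorem 3, any optimal point is then a root of the tropical system.

```python
import numpy as np
from scipy.optimize import linprog

def solve_tropical_system(supports, coeffs, y):
    """Solve max_a (c_a + <a, x>) = 0 for every color i via the LP (LP(y)):

        maximize <y, x>  subject to  c_a + <a, x> <= 0  for all exponents a.

    supports: list of n lists of exponent vectors; coeffs: matching lists of c_a;
    y: a colorful vector.  Returns an optimal x of (LP(y)), which is a tropical
    root of the system when y is colorful (Theorem 3).
    """
    A = np.array([a for S in supports for a in S], dtype=float)   # rows: exponents
    c = np.array([c_a for C in coeffs for c_a in C], dtype=float)
    res = linprog(c=-np.asarray(y, dtype=float),     # linprog minimizes, so negate y
                  A_ub=A, b_ub=-c,                   # <a, x> <= -c_a
                  bounds=[(None, None)] * A.shape[1])
    if res.status != 0:
        raise ValueError("LP(y) infeasible or unbounded: %s" % res.message)
    return res.x
```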

A remarkable special case consists of Markov decision processes. In this framework, the set \([n]\) represents the state space, and at each state \(i\in [n]\), a player has a finite set \(B_i\) of available actions included in the n-dimensional simplex \(\{p\in \mathbb {R}_{\geqslant 0}^n:\sum _{j=1}^{n} p_j \leqslant 1\}\). If \(p\in B_i\), \(p_j\) stands for the probability that the next state is j, given that the current state is i and action p is chosen by the player, so the difference \(1-\sum _{j=1}^{n}p_j\) is the death probability in state i when this action is picked. To each action p is attached a reward \(c_p\in \mathbb {R}\). Given an initial state \(i\in [n]\), one looks for the value \(v_i\in \mathbb {R}\), defined as the maximum over all strategies of the expected sum of rewards up to the death time; we refer the reader to [13] for background. The value vector \(v=(v_i)_{i\in [n]}\) is solution of the tropical posynomial problem

$$\begin{aligned} v_i = \max _{p \in B_i} \Big ( c_p + \sum _{j=1}^{n} p_j v_j \Big ) \quad \text {for all} \; i \in [n] . \end{aligned}$$

This reduces to the form (2) with \(P^{\text {trop}}_i(v) = \max _{p \in B_i} \big ( c_p + \langle p - e_i , v \rangle \big )\), where \(e_i\) denotes the i-th element of the canonical basis of \(\mathbb {R}^n\). We say that a Markov decision process is of discounted type if for every state \(i\in [n]\) there is at least one action \(p\in B_i\) such that \(\sum _{j=1}^{n}p_j<1\).

Proposition 5

If a Markov decision process is of discounted type, then any negative vector is colorful with respect to the associated posynomial system.

Thus, we recover the linear programming approach to Markov decision processes (see [13]), showing that the value is obtained by minimizing the function \(v\mapsto \sum _{i\in [n]}v_i\) subject to the constraints \(v_i \geqslant c_p + \sum _{j=1}^{n} p_j v_j\) for \(i\in [n]\) and \(p\in B_i\).
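As an illustration of this special case, here is a toy discounted Markov decision process fed to the sketch solve_tropical_system given above (the numerical data are arbitrary): the exponents are the vectors \(p - e_i\), and \(y = -(1,\dots ,1)\) is colorful by Proposition 5.

```python
import numpy as np

# A toy discounted MDP with two states; each action is a pair (p, reward),
# where p is the row of transition probabilities (summing to < 1 for at least
# one action per state).  The exponents of the tropical system are p - e_i.
B = [
    [(np.array([0.0, 0.9]), 1.0), (np.array([0.5, 0.4]), 2.0)],   # actions in state 0
    [(np.array([0.8, 0.1]), 0.5)],                                # actions in state 1
]
n = len(B)
supports = [[p - np.eye(n)[i] for (p, _) in B[i]] for i in range(n)]
coeffs   = [[r for (_, r) in B[i]] for i in range(n)]
y = -np.ones(n)                                  # any negative vector is colorful here
v = solve_tropical_system(supports, coeffs, y)   # value vector of the MDP
print(v)
```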

4 A Geometric Programming Approach to Posynomial Systems

We refer the reader to [6] for background on geometric programming.

Given a collection \(P = (P_1, \dots , P_n)\) of classical posynomials, we now deal with the system \(P_i(x) = 1\) for all \(i \in [n]\), which, for brevity, we denote by \(P(x) = 1\). We keep the notation of Sect. 3 for the supports of the posynomials. Moreover, the definitions of colorful vectors and pointed exponents, which only depend on these supports, still make sense in the setting of this section.

Lemma 6

If y is a colorful vector, the polyhedron \(\mathcal {P}\) defined by

$$\begin{aligned} \mathcal {P} = \big \{ x \in \mathbb {R}^n :\langle y , x \rangle \geqslant \mu \;\; \text {and} \;\; \log c_a + \langle a , x \rangle \leqslant 0 \;\, \text {for all} \; a \in \mathbf {S}\big \} \end{aligned}$$

is bounded (possibly empty), regardless of our choice of positive \((c_a)_{a\in \mathbf {S}}\) or \(\mu \in \mathbb {R}\).

Proof

If \(\mathcal {P}\) is nonempty, let \(\mathcal {C} = \{x \in \mathbb {R}^n :\langle y , x \rangle \geqslant 0 \;\; \text {and} \;\; \langle a , x \rangle \leqslant 0 \;\, \text {for all} \; a \in \mathbf {S}\}\) denote its recession cone, and let \(x \in \mathcal {C}\). Since y is a colorful vector, there exist \((\lambda _1,\dots ,\lambda _n)\in \mathbb {R}_{> 0}^n\) and a basis \((a_1,\dots ,a_n)\in \prod _{i\in [n]} S_{P_i}\) such that \(y=\sum _{i=1}^n \lambda _i a_i\). Thus, \(\langle y , x \rangle \geqslant 0\), and so \(\sum _{i=1}^n \lambda _i \langle a_i , x \rangle \geqslant 0\). As a consequence, since \(\lambda _i > 0\) and \(\langle a_i , x \rangle \leqslant 0\) for all \(i\in [n]\), we get \(\langle a_i , x \rangle = 0\) for all \(i\in [n]\). Since \((a_1,\dots ,a_n)\) is a basis, we get \(x = 0\). Thus, \(\mathcal {C} = \{0\}\), and \(\mathcal {P}\) is bounded by the Minkowski–Weyl theorem.    \(\square \)

Given \(X \in \mathbb {R}^n\), we denote by \(\exp X\) the vector with entries \(\exp X_i\), \(i \in [n]\).

Theorem 7

Let \(P(x) = 1\) be a posynomial system with pointed exponents, and y be a colorful vector. Then, the system has a solution \(x = \exp X^* \in (\mathbb {R}_{> 0})^n\), where \(X^*\) is an arbitrary optimal solution of the following geometric program:

$$\begin{aligned} \text {(G)} \qquad \text {maximize} \;\; \langle y , X \rangle \quad \text {subject to} \;\; g_i(X) \leqslant 0 \;\; \text {for all} \; i \in [n], \qquad X \in \mathbb {R}^n , \end{aligned}$$

where \(g_i(X) = \log \big ( \sum _{a \in S_{P_i}} c_a \exp (\langle a , X \rangle ) \big )\) for \(i \in [n]\).

Proof

For \(x\in \mathbb {R}^n_{>0}\), we define \(X=\log (x)\) (component-wise) so that \(P(x)=1\) is equivalent to \(g_i(X) = 0\) for all \(i \in [n]\). By Hölder’s inequality, the functions \((g_i)_{i\in [n]}\) are convex. We define \(h_i(X) = \max _{a \in S_{P_i}} \big ( \log c_a + \langle a , X \rangle \big )\) for \(i\in [n]\), and we observe that \(h_i(X)\leqslant g_i(X) \leqslant h_i(X)+\log (|S_{P_i}|)\).

Since the system \(P(x) = 1\) has pointed exponents, by Proposition 4, the polyhedron \(\{X \in \mathbb {R}^n :\forall i\in [n], \; h_i(X)+\log (|S_{P_i}|)\leqslant 0\}\) is nonempty. A fortiori, the feasible set of (G) is nonempty.

Let us now prove that the maximum of (G) is finite and attained, by proving that the \(\mu \)-superlevel set of the objective function (intersected with the feasible set) is compact for all \(\mu \in \mathbb {R}\). Closedness is direct, and we observe that for \(\mu \in \mathbb {R}\), this set is included in the polyhedron \(\{X \in \mathbb {R}^n :\langle y , X \rangle \geqslant \mu \;\; \text {and} \;\; h_i(X) \leqslant 0 \;\, \text {for all} \; i \in [n]\}\), which is bounded by Lemma 6. Hence, (G) admits an optimal solution \(X^{\star }\).

Furthermore, again by Proposition 4, there exists \(\overline{X}\) such that for all \(i\in [n]\), \(h_i(\overline{X})+\log (|S_{P_i}|)+1\leqslant 0\). Therefore, for all \(i\in [n]\), \(g_i(\overline{X})<0\), which means that (G) satisfies Slater’s condition. Problem (G) being convex, optimality of \(X^{\star }\) is characterized by the Karush–Kuhn–Tucker conditions (see [4]). Hence, there is a vector of nonnegative multipliers \(\lambda ^{\star }=(\lambda ^{\star }_1,\dots ,\lambda ^{\star }_n)\) such that \((X^{\star },\lambda ^{\star })\) is a stationary point of the Lagrangian of (G), and the complementarity slackness conditions hold, i.e. for all \(i\in [n]\), \(\lambda _i ^{\star }\,g_i(X^{\star })=0\). Defining \(\nu _i = \nabla g_i(X^{\star }) \in {\mathrm {conv}}(S_{P_i})\) for \(i\in [n]\), the stationarity conditions give

$$\begin{aligned} y = \sum _{i \in [n]} \lambda _i^{\star }\, \nu _i . \end{aligned}$$

Since every \(\nu _i\) is a convex combination of elements of \(S_{P_i}\) and y is colorful, for all \(i\in [n]\), \(\lambda _i^{\star }>0\). The complementarity slackness conditions then yield \(g_i(X^{\star })=0\) for all \(i\in [n]\). So \(x^{\star } = \exp X^{\star }\) satisfies \(P(x^{\star })=1\).    \(\square \)
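For completeness, a compact sketch of the program (G) using the CVXPY modeling library (assuming it is available; the function name is ours): the constraint functions \(g_i\) are expressed with log_sum_exp, and the returned vector \(\exp X^{\star }\) solves \(P(x)=1\) when the hypotheses of Theorem 7 hold.

```python
import numpy as np
import cvxpy as cp

def solve_posynomial_system(supports, coeffs, y):
    """Solve P_i(x) = 1 (i in [n]) via the convex program (G):

        maximize <y, X>  s.t.  log( sum_a c_a exp(<a, X>) ) <= 0  for each i,

    then return x = exp(X*).  Assumes pointed exponents and a colorful y.
    """
    n = len(supports)
    X = cp.Variable(n)
    constraints = []
    for S, C in zip(supports, coeffs):
        A = np.array(S, dtype=float)                 # rows: exponents of P_i
        logc = np.log(np.array(C, dtype=float))      # log of the positive coefficients
        constraints.append(cp.log_sum_exp(logc + A @ X) <= 0)   # g_i(X) <= 0
    prob = cp.Problem(cp.Maximize(np.asarray(y, dtype=float) @ X), constraints)
    prob.solve()
    return np.exp(X.value)
```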

5 Properties of the Colorful Interior of Convex Sets

Theorems 3 and 7 rely on the existence of a colorful vector. The purpose of this section is to study the properties of the set of such vectors. In fact, colorful vectors can be defined more generally from a family of n closed convex cones.

Definition 2

Let \(\mathcal {C}= (C_1,\dots ,C_n)\) be a collection of n closed convex cones of \(\mathbb {R}^n\). A vector \(y \in \mathbb {R}^n\) is said to be colorful if it belongs to the set

$$ {\mathrm {cone}}(C_1 \cup \dots \cup C_n) \setminus {\textstyle \bigcup \limits _{i\in [n]}}\mathrm {cone} \big ({\textstyle \bigcup \limits _{j\ne i}} C_{j}\big ) . $$

The latter set is referred to as the colorful interior of \(\mathcal {C}\).

Remark that Definition 1 can be recovered by taking \(C_i = {\mathrm {cone}}(S_{P_i})\) for all \(i \in [n]\). In what follows, we restrict to the case where the collection \(\mathcal {C}\) is pointed, i.e. \({\mathrm {cone}}(C_1 \cup \dots \cup C_n)\) is a pointed cone (in the non-pointed case, the colorful interior enjoys much less structure than the one proved in Theorem 10; in particular it may not even be connected). Suppose that \(\{x \in \mathbb {R}^n :\langle z , x \rangle > 0\}\) is an open halfspace containing the cones \((C_i)_{i\in [n]}\) (apart from the origin). Then, as a cone, the colorful interior of \(\mathcal {C}\) can be more simply studied from its cross-section with the hyperplane \(\{x \in \mathbb {R}^n :\langle z , x \rangle = 1\}\). The latter can be shown to coincide with the set

$$\begin{aligned} {\mathrm {conv}}(S_1 \cup \dots \cup S_n) \setminus {\textstyle \bigcup \limits _{i\in [n]}}\mathrm {conv} \big ({\textstyle \bigcup \limits _{j\ne i}} S_j\big ) \end{aligned}$$
(3)

where for \(i\in [n]\), \(S_i\) is the cross-section of the cone \(C_i\) by the same hyperplane. Given a collection \(\mathcal {S}= (S_1, \dots , S_n)\) of closed convex sets of \(\mathbb {R}^{n-1}\), we refer to the set (3) as the colorful interior of \(\mathcal {S}\). We start with a lemma justifying the terminology we have chosen:

Lemma 8

Let \(\mathcal {S}= (S_1, \dots , S_n)\) be a collection of n closed convex sets of \(\mathbb {R}^{n-1}\). Then the colorful interior of \(\mathcal {S}\) is an open set included in \({\mathrm {conv}}(S_1 \cup \dots \cup S_n)\).

The colorful interior has appeared in a work of Lawrence and Soltan [9], in the proof of the characterization of the intersection of convex transversals to a collection of sets. In more detail, Lemma 8 and [9, Lemma 6] imply:

Proposition 9

Let \(\mathcal {S}= (S_1, \dots , S_n)\) be a collection of n closed convex sets of \(\mathbb {R}^{n-1}\). Define \(\mathcal {D} = \big \{ {\mathrm {conv}}(\{x_1, \dots , x_n\}) :x_i \in S_i \;\, \text {for all} \; i \in [n] \big \}\), the set of colorful simplices, i.e. with one vertex in each colored set. Then the colorful interior of \(\mathcal {S}\) is equal to \(\bigcap _{\varDelta \in \mathcal {D}} {\mathrm {int}}\, \varDelta \).
Remark that Proposition 9 still holds if the colorful simplices \(\varDelta \in \mathcal {D}\) are replaced by the convex transversals to the sets \(S_1, \dots , S_n\).
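When the sets \(S_i\) are polytopes given by their vertices, membership of a point y in the set (3) reduces to \(n+1\) linear feasibility problems, exactly as in the conic case sketched earlier. A minimal SciPy sketch (names ours):

```python
import numpy as np
from scipy.optimize import linprog

def in_conv(points, y):
    """Feasibility LP: is y a convex combination of the given points?"""
    P = np.array(points, dtype=float)
    A_eq = np.vstack([P.T, np.ones(len(P))])        # sum lam_v v = y  and  sum lam_v = 1
    b_eq = np.append(np.asarray(y, dtype=float), 1.0)
    return linprog(np.zeros(len(P)), A_eq=A_eq, b_eq=b_eq).status == 0

def in_colorful_interior(vertex_sets, y):
    """Does y lie in the colorful interior (3) of the polytopes S_i = conv(V_i)?"""
    all_pts = [v for V in vertex_sets for v in V]
    if not in_conv(all_pts, y):
        return False
    return all(not in_conv([v for j, V in enumerate(vertex_sets) if j != i for v in V], y)
               for i in range(len(vertex_sets)))
```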

Given a hyperplane \(H = \{x \in \mathbb {R}^{n-1} :\langle w , x \rangle = \beta \}\), we shall denote below by \(H^{>}\) (resp.  \(H^{\leqslant }\)) the open (resp.  closed) halfspace \(\{x \in \mathbb {R}^{n-1} :\langle w , x \rangle > \beta \}\) (resp.  \(\{x \in \mathbb {R}^{n-1} :\langle w , x \rangle \leqslant \beta \}\)). As a corollary of [9, Th. 2], we get the following characterization of the colorful interior:

Theorem 10

Let \(\mathcal {S}= (S_1,\dots ,S_n)\) be a collection of n closed convex sets of \(\mathbb {R}^{n-1}\), and assume that its colorful interior is nonempty. Then, the colorful interior of \(\mathcal {S}\) is the interior of an \((n-1)\)-dimensional simplex.

Moreover, if the sets \((S_i)_{i\in [n]}\) are bounded, then there are n unique hyperplanes \((H_i)_{i\in [n]}\) such that for all \(i\in [n]\), \(S_i\subset H_i^>\), and for all \(j\ne i\), \(S_j\subset H_i^{\leqslant }\) and \(S_j \cap H_i\ne \varnothing \). In this case, the colorful interior of \(\mathcal {S}\) is equal to \(\bigcap _{i \in [n]} H_i^{>}\).

Geometrically, every \(H_i\) in Theorem 10 is a tangent hyperplane to the convex sets \((S_j)_{j \ne i}\) which separates them from the set \(S_i\). The existence (and uniqueness) of such tangent hyperplanes follows from the work of Cappell et al. [5]; see also the work of Lewis, Klee and von Hohenbalken [10] for a constructive proof. We depict in Fig. 1a three colored sets \(S_1, S_2\) and \(S_3\) in \(\mathbb {R}^2\) with nonempty colorful interior, illustrating that the latter is the interior of a simplex as claimed in Theorem 10.

Fig. 1.

(a) Three convex sets \(S_1\) (blue), \(S_2\) (green) and \(S_3\) (orange) in \(\mathbb {R}^2\) and their colorful interior (white). The sets \((\widehat{S}_i)_{1\leqslant i\leqslant 3}\) (resp. \((\overline{S}_i)_{1\leqslant i\leqslant 3}\)) are obtained by taking convex hulls of the \((S_i)_{1\leqslant i\leqslant 3}\) pairwise (resp. pairwise intersections of the \((\widehat{S}_i)_{1\leqslant i \leqslant 3}\)). Observe that the edges of the colorful interior are supported by tangent hyperplanes to two of the sets \(S_1,S_2,S_3\). (b) The colorful interior of \((S_1,S_2,S_3)\) is here empty, although these sets are separated (any three points, one in each of them, are in general position), contrary to the sets \((\overline{S}_1,\overline{S}_2,\overline{S}_3)\), whose intersection is seen in the center of the figure. (Color figure online)

Given a collection \(\mathcal {S}= (S_1, \dots , S_n)\) of n closed convex sets of \(\mathbb {R}^{n-1}\), we now discuss necessary and sufficient conditions for its colorful interior to be nonempty. To this purpose, we recall that the collection \(\mathcal {S}\) is separated if for any choice of \(k \leqslant n\) points \((x_1, \dots , x_k) \in S_{i_1}\times \dots \times S_{i_k}\) (where \(i_1, \dots , i_k\) are pairwise distinct), the points \(x_1,\dots ,x_k\) are in general position (i.e. they span a \((k-1)\)-dimensional affine space).

Proposition 11

Let \(S_1,\dots ,S_n\) be a collection of n compact convex sets of \(\mathbb {R}^{n-1}\), and let us define \(\widehat{S}_i = {\mathrm {conv}}\big ({\textstyle \bigcup _{j\ne i}} S_j\big )\) and \(\overline{S}_i = {\textstyle \bigcap _{j\ne i}} \widehat{S}_j\) for all \(i\in [n]\).

Then, the family \((\overline{S}_i)_{i\in [n]}\) is separated if and only if \(\bigcap _{i\in [n]} \widehat{S}_i = \varnothing \).

Proposition 12

Let \(S_1,\dots ,S_n\) be a collection of n compact convex sets of \(\mathbb {R}^{n-1}\). Let us define, for all \(i\in [n]\),

$$\begin{aligned} \overline{S}_i = {\textstyle \bigcap \limits _{j\ne i}}\, {\mathrm {conv}} \Big ({\textstyle \bigcup \limits _{k\ne j}} S_k\Big ) . \end{aligned}$$

Then, if the colorful interior of \((S_1, \dots , S_n)\) is nonempty, the family \((\overline{S}_i)_{i\in [n]}\) is separated.

Proposition 12 provides a necessary condition for the colorful interior of \((S_i)_{i\in [n]}\) to be nonempty. Since for all \(i\in [n]\), we have \(S_i\subset \overline{S}_i\), we also obtain that the separation of \((S_i)_{i\in [n]}\) is necessary as well for the colorful interior to be nonempty. However, Fig. 1b shows that this last condition is not sufficient. We conjecture that the necessary condition stated in Proposition 12 is sufficient:

Conjecture 13

Let \(S_1,\dots ,S_n\) be a collection of n compact convex sets of \(\mathbb {R}^{n-1}\). Then the colorful interior of \((S_1, \dots , S_n)\) is nonempty if and only if the family \((\overline{S}_i)_{i\in [n]}\) is separated.

We prove this conjecture in the case where \(n = 3\) (it is also straightforward to establish for \(n=2\)).

Proposition 14

Let \(\mathcal {S}= (S_1, S_2, S_3)\) be a collection of three convex compact sets of \(\mathbb {R}^2\). Then, the colorful interior of \(\mathcal {S}\) is nonempty if and only if \((\overline{S}_1, \overline{S}_2, \overline{S}_3)\) is separated.

Proof

Suppose that \((\overline{S}_1, \overline{S}_2, \overline{S}_3)\) is separated. We know from [10] that for all \(i\in \{1,2,3\}\), there are two hyperplanes (in this case, affine lines) tangent to the sets of the collection \((\overline{S}_j)_{j\ne i}\) and inducing opposite orientations on them. Such lines cannot meet \(\overline{S}_i\) by the separation property, so one of them, denoted \(H_i\), is such that \(\overline{S}_i \subset H_i^>\) and \(\overline{S}_j \subset H_i^{\leqslant }\) for \(j\ne i\). In particular, note that \({\mathrm {conv}}((S_j)_{j\ne i})\subset H_i^{\leqslant }\). For \(i,j\in \{1,2,3\}\) with \(j\ne i\), the hyperplane \(H_i\) is not only tangent to \(\overline{S}_j\) but also to \(S_j\): indeed, take a support point \(y^j_i\) of \(H_i\) in \(\overline{S}_j\); since \(y^j_i \in \overline{S}_j \subset \widehat{S}_i\), it arises as a convex combination \(y^j_i=\sum _{k\ne i}\lambda _k x_k\) with \(x_k\in S_k\). Since \(S_k\subset \overline{S}_k \subset H_i^{\leqslant }\) for \(k \ne i\) and \(y^j_i \in H_i\), we derive that for all \(k\ne i\), \(x_k\in H_i\) or \(\lambda _k=0\); the case \(\lambda _j = 0\) is ruled out by separation, since it would give \(y^j_i = x_k\) for the remaining index \(k \notin \{i,j\}\), so that \(\overline{S}_j\) and \(\overline{S}_k\) would share a point. Hence, let us denote by \(x^j_i\) a support point of the hyperplane \(H_i\) in \(S_j\). Note that, once again by the separation of \((\overline{S}_1, \overline{S}_2, \overline{S}_3)\), two support points of a tangent line in two different colors cannot be equal.

If u and v are two distinct vectors of \(\mathbb {R}^2\), we denote by \(u \wedge v\) the usual cross-product of the vectors \((u,1)\) and \((v,1)\) of \(\mathbb {R}^3\), thought of as homogeneous coordinates of points of \(\mathbb {P}^2\). As is customary, \(h_1 = x_1^2 \wedge x_1^3\) (resp. \(h_2 = x_2^1 \wedge x_2^3\) and \(h_3 = x_3^1 \wedge x_3^2\)) is a normal vector to \(H_1\) (resp. \(H_2\) and \(H_3\)), and \(\langle h_i , (x,1) \rangle = 0\) is an equation defining \(H_i\); we choose the sign of \(h_i\) so that \(H_i^{>} = \{x \in \mathbb {R}^2 :\langle h_i , (x,1) \rangle > 0\}\). Furthermore, the intersection of \(H_1\) and \(H_2\) is given by \(s_3 = h_1 \wedge h_2\), or, using the triple product formula, by

$$\begin{aligned} s_3 = \langle h_1 , (x_2^3,1) \rangle \, (x_2^1,1) - \langle h_1 , (x_2^1,1) \rangle \, (x_2^3,1) . \end{aligned}$$
(4)

Because \(x_2^1\in \overline{S}_1 \subset H_1^>\) and \(x_2^3\in \overline{S}_3 \subset H_1^{\leqslant }\), we have \(\langle h_1 , (x_2^1,1) \rangle > 0\) and \(\langle h_1 , (x_2^3,1) \rangle \leqslant 0\), so that the last coordinate of the right-hand side of (4) is nonzero. As a result of (4), \(s_3\) indeed exists (as an affine point, after normalizing the last coordinate) and arises as a convex combination of \(x_2^3\) and \(x_2^1\), so \(s_3\in {\mathrm {conv}}({S}_1\cup {S}_3)\). By writing \(s_3 = (x_1^2\wedge x_1^3)\wedge h_2\) as in (4), we show likewise that \(s_3\) is a convex combination of \(x_1^2\) and \(x_1^3\), thus \(s_3\in {\mathrm {conv}}({S}_2 \cup {S}_3)\). This finally entails that \(s_3\in \overline{S}_3\) and therefore \(s_3\in H_3^>\). It now suffices to define \(s_1 = h_2 \wedge h_3\) and \(s_2 = h_3 \wedge h_1\) in a similar way and consider \(y= (s_1+s_2+s_3)/3\). It is clear that \(y\in {\mathrm {conv}}(S_1\cup S_2\cup S_3)\), and for all \(i\in \{1,2,3\}\), \(y\in H_i^>\), in particular \(y\notin {\mathrm {conv}}((S_j)_{j\ne i})\). As a consequence, y is a colorful vector for \(S_1\), \(S_2\) and \(S_3\).    \(\square \)
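The cross-product computations underlying this proof are easy to reproduce numerically; a small sketch (the points are illustrative only, not tied to particular sets \(S_i\)):

```python
import numpy as np

def homog(p):
    """Lift a point of R^2 to homogeneous coordinates in P^2."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def line_through(p, q):
    """Homogeneous coordinates of the line through two distinct points of R^2."""
    return np.cross(homog(p), homog(q))

def meet(h1, h2):
    """Affine intersection point of two non-parallel lines in homogeneous
    coordinates: the cross product, dehomogenized by its last coordinate."""
    s = np.cross(h1, h2)
    return s[:2] / s[2]

# A tangent line H_1 through two support points, and H_2 likewise:
h1 = line_through([0.0, 2.0], [4.0, 0.0])
h2 = line_through([0.0, 0.0], [1.0, 1.0])
print(meet(h1, h2))          # intersection point of the two lines: [1.333..., 1.333...]
```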

To conclude, we point out that another interesting problem is the computational complexity of determining whether the colorful interior is empty or not, in the case where the sets \(S_i\) are polytopes. Remark that as a consequence of Proposition 11, if Conjecture 13 holds, then we can determine whether the colorful interior is empty in polynomial time using linear programming. Alternatively, the problem could be tackled by studying the complexity of separating a point from the colorful interior. This is tightly linked with the computation of the tangent hyperplanes of Theorem 10, for which the status of the complexity is not well understood.
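To make the last remark concrete: by Proposition 11, the sets \((\overline{S}_i)_{i\in [n]}\) fail to be separated exactly when the polytopes \(\widehat{S}_i\) have a common point, and for vertex-described polytopes this can be decided by a single LP feasibility problem. A sketch of this check (our naming, with SciPy); under Conjecture 13, the colorful interior is empty exactly when the function below returns True.

```python
import numpy as np
from scipy.optimize import linprog

def hatted_intersection_nonempty(vertex_sets):
    """Decide by one feasibility LP whether the polytopes
    hat(S)_i = conv(union of V_j, j != i) have a common point,
    where S_i = conv(V_i)  (cf. Proposition 11).

    vertex_sets: list of n lists of vertices (points of R^{n-1}).
    Variables: a candidate common point z (free) and, for each i,
    convex weights over the vertices of the other colors expressing z.
    """
    n = len(vertex_sets)
    d = len(vertex_sets[0][0])
    others = [np.array([v for j, V in enumerate(vertex_sets) if j != i for v in V],
                       dtype=float) for i in range(n)]
    sizes = [len(O) for O in others]
    num_vars = d + sum(sizes)                 # z first, then all weight blocks
    A_rows, b_eq = [], []
    offset = d
    for O, m in zip(others, sizes):
        block = np.zeros((d + 1, num_vars))
        block[:d, :d] = -np.eye(d)            # ... - z = 0
        block[:d, offset:offset + m] = O.T    # sum_v lambda_v * v ...
        block[d, offset:offset + m] = 1.0     # sum_v lambda_v = 1
        A_rows.append(block)
        b_eq.extend([0.0] * d + [1.0])
        offset += m
    bounds = [(None, None)] * d + [(0, None)] * sum(sizes)
    res = linprog(np.zeros(num_vars), A_eq=np.vstack(A_rows), b_eq=np.array(b_eq),
                  bounds=bounds)
    return res.status == 0
```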