Solving polyhedral d.c. optimization problems via concave minimization

The problem of minimizing the difference of two convex functions is called a polyhedral d.c. optimization problem if at least one of the two component functions is polyhedral. We characterize the existence of global optimal solutions of polyhedral d.c. optimization problems. This result is used to show that, whenever the existence of an optimal solution can be certified, polyhedral d.c. optimization problems can be solved by certain concave minimization algorithms. No further assumptions are necessary if the first component is polyhedral, and only mild assumptions on the first component are required if the second component is polyhedral. If both component functions are polyhedral, we obtain a primal and a dual existence test as well as a primal and a dual solution procedure. Numerical examples are discussed.


Introduction
D.c. programming and concave minimization are known to be closely related problems, see e.g. [16]. Theory and methods for concave minimization are surveyed, for instance, in [2]. An overview of the field of d.c. programming is given, for example, in [8,17].
Let g : R^n → R ∪ {∞} and h : R^n → R ∪ {∞} be convex functions, where one of the functions g or h is assumed to be polyhedral, i.e., the epigraph of the respective function is a convex polyhedron. We consider the polyhedral d.c. optimization problem

    min_{x ∈ R^n} g(x) − h(x).    (DC)

This problem will be transformed into a concave minimization problem under linear constraints. The general form of such a problem is to minimize a concave function f over a feasible set P, where P is an arbitrary polyhedron. For the reformulation of (DC) we choose the concave objective function

    f(x, r) := r − h(x).    (2)

The feasible set P is the epigraph of g. The concave minimization problem associated to (DC),

    min_{x,r} f(x, r)  s.t.  (x, r) ∈ epi g,    (ConcMin)

is equivalent to (DC) in the following sense:
• If (x0, r0) solves (ConcMin), then x0 solves (DC).
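To make the reformulation concrete, the following sketch (our illustration, not code from the paper) treats the univariate case where g and h are finite max-affine functions. Then epi g = Q + 0+ epi g with the vertices of Q located at the breakpoints of g, so minimizing f(x, r) = r − h(x) over epi g amounts to comparing the values g(x_k) − h(x_k) at the breakpoints, provided an optimal solution exists. The helpers `max_affine` and `breakpoints` are hypothetical names introduced only for this sketch.

```python
# Illustration (not from the paper): for univariate polyhedral g(x) = max_i (a_i*x + b_i),
# the vertices of epi g lie at the breakpoints of g.  If an optimal solution exists,
# minimizing f(x, r) = r - h(x) over epi g reduces to comparing the breakpoint values
# g(x_k) - h(x_k).

def max_affine(lines):
    """Return g(x) = max_i (a*x + b) for a list of (a, b) pairs."""
    return lambda x: max(a * x + b for a, b in lines)

def breakpoints(lines):
    """Breakpoints of the upper envelope, assuming every line is active
    once the lines are sorted by slope (an assumption of this sketch)."""
    lines = sorted(lines)
    return [(b1 - b2) / (a2 - a1) for (a1, b1), (a2, b2) in zip(lines, lines[1:])]

# Toy instance: g(x) = |x|, h(x) = 0.5*|x - 1|.
g_lines = [(1.0, 0.0), (-1.0, 0.0)]
h_lines = [(0.5, -0.5), (-0.5, 0.5)]
g, h = max_affine(g_lines), max_affine(h_lines)

# f(x, r) = r - h(x) evaluated at the vertices (x_k, g(x_k)) of epi g:
vertex_values = [g(x) - h(x) for x in breakpoints(g_lines)]
print(min(vertex_values))  # -0.5, the optimal value of this toy (DC)
```

A brute-force scan over a grid confirms that no feasible point does better than the breakpoint minimum on this instance.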
In the literature on concave minimization, many authors assume a compact feasible set in order to guarantee the existence of optimal solutions, see e.g. [1,3,12]. However, problem (ConcMin) always has a non-compact feasible set. In [4], algorithms for concave (even quasi-concave) minimization based on a modification of methods for vector linear programming (VLP) are studied. An implementation of these methods based on the VLP solver bensolve [10,11] is provided by the Octave/Matlab package bensolve tools [5,6]. This approach allows non-compact feasible sets but requires certain other assumptions.
While the reformulation of (DC) as (ConcMin) is straightforward, our research focuses on the assumptions which are required for the solution methods. It turns out that verifying or disproving the existence of optimal solutions of (DC) is the crucial task here. For the case where g is polyhedral, we prove that whenever an optimal solution of (DC) exists, it can be computed by solving the associated problem (ConcMin) using the methods of [4]. In the case where h is polyhedral, the same applies to the dual problem of (DC). Under mild assumptions, an optimal solution of (DC) can be obtained from an optimal solution of the dual problem.
This article is organized as follows. In Section 2 we characterize the existence of optimal solutions for polyhedral d.c. programs. Section 3 explains how a polyhedral d.c. program can be solved using a (quasi-)concave minimization solver like bensolve tools after the existence of optimal solutions has been verified. The last section presents two numerical examples for the case where both g and h are polyhedral. Both existence tests and solution methods are addressed.
We use the following notation. The domain of a function f : R^n → R ∪ {∞} is the set dom f := {x ∈ R^n | f(x) < ∞}; its epigraph is the set epi f := {(x, r) ∈ R^n × R | f(x) ≤ r}. A convex function f is called closed if epi f is a closed set. We write R^n_+ for the set of vectors with non-negative components. The recession cone 0+ C of a convex set C ⊆ R^n is the set of all y with C + {y} ⊆ C. The lineality space of C is the set lineal(C) := 0+ C ∩ (−0+ C).

Existence of global optimal solutions
In this section we discuss the existence of optimal solutions of problem (DC) for the following three cases: (1) g being polyhedral, (2) h being polyhedral, (3) both g and h being polyhedral.

The case of g being polyhedral
The following characterization of the existence of optimal solutions is the main result of this article.

Theorem 1. Problem (DC) with g being polyhedral has an optimal solution if and only if the following three properties hold:
(i) dom g ≠ ∅,
(ii) dom g ⊆ dom h,
(iii) 0+ epi g ⊆ 0+ (epi h ∩ (dom g × R)).

Proof. Let (DC) have an optimal solution x0. Then x0 ∈ dom g, i.e., (i) holds. For every x ∈ dom g we have g(x) − h(x) ≥ g(x0) − h(x0) > −∞ and hence x ∈ dom h, i.e., (ii) holds. Assume (iii) is violated, that is, 0+ epi g ⊄ 0+ (epi h ∩ (dom g × R)). Then we can choose (d, s) ∈ 0+ epi g \ 0+ (epi h ∩ (dom g × R)). By the definition of the recession cone, there exists some (x, t) ∈ epi h ∩ (dom g × R) such that (x, t) + α(d, s) ∉ epi h ∩ (dom g × R) for some α > 0. Without loss of generality we can set α = 1. Since t ≥ h(x) we get (x, h(x)) + (d, s) ∉ epi h ∩ (dom g × R). By definition of the epigraph, we find some r ∈ R such that (x, r) ∈ epi g. Then (x + d, r + s) ∈ epi g, in particular x + d ∈ dom g, and therefore (x + d, h(x) + s) ∉ epi h, i.e., h(x) + s < h(x + d). We consider problem (ConcMin), which is equivalent to (DC) as discussed above. For its objective function f defined in (2) we obtain

    f(x + d, r + s) = r + s − h(x + d) < r − h(x) = f(x, r).    (4)

For any n ∈ N, (x + nd, r + ns) is feasible for (ConcMin). But f(x + nd, r + ns) tends to −∞ by concavity of f and (4). This contradicts the assumption that (DC) has an optimal solution. Thus, (iii) is satisfied.
Assume now that (i), (ii) and (iii) hold. By (i), epi g is non-empty. Let (x, r) ∈ epi g and (d, s) ∈ 0+ epi g. Then (x, r) + α(d, s) ∈ epi g for all α > 0. By (ii), we have (x, h(x)) ∈ epi h ∩ (dom g × R), and (iii) yields (x, h(x)) + α(d, s) ∈ epi h ∩ (dom g × R) for all α > 0. From the definition of the epigraph, we obtain h(x) + αs ≥ h(x + αd). For the objective function f of problem (ConcMin), this implies

    f((x, r) + α(d, s)) = r + αs − h(x + αd) ≥ r − h(x) = f(x, r).    (5)

Since epi g is a polyhedron, it can be expressed by a polytope Q as epi g = Q + 0+ epi g. By (ii), the values of f are finite over Q. The polytope Q is the convex hull of its vertices. Concavity of f implies that f attains its minimum over Q at a vertex (x0, r0) of Q. Since (5) holds for every (x, r) ∈ Q and every (d, s) ∈ 0+ epi g, (x0, r0) is an optimal solution of (ConcMin) and hence x0 is an optimal solution of (DC).
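For univariate max-affine functions with dom g = dom h = R, the recession cone of an epigraph is determined by the extreme slopes: 0+ epi g = {(d, s) : s ≥ max_i a_i d}. The existence condition then reduces to comparing slope ranges, as the following sketch shows (our illustration under these assumptions, not the paper's test):

```python
# Sketch (not from the paper): for univariate g(x) = max_i (a_i*x + b_i) with
# dom g = R, we have 0+ epi g = {(d, s) : s >= max_i a_i*d}.  The inclusion
# 0+ epi g ⊆ 0+ epi h therefore holds iff the slope range of h is contained
# in the slope range of g.

def exists_optimum(g_slopes, h_slopes):
    """Existence test for min_x g(x) - h(x) with univariate max-affine g, h,
    both finite everywhere (an assumption of this sketch)."""
    return min(g_slopes) <= min(h_slopes) and max(g_slopes) >= max(h_slopes)

print(exists_optimum([-1, 1], [-0.5, 0.5]))  # True:  |x| - 0.5|x-1| is bounded below
print(exists_optimum([-0.5, 0.5], [-1, 1]))  # False: 0.5|x| - |x-1| is unbounded below
```

In the second instance the objective tends to −∞ along x → ∞, which matches the failed inclusion of recession cones.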
If g is not polyhedral, the conditions (i), (ii) and (iii) in Theorem 1 are still necessary for the existence of optimal solutions. This can be seen in the first part of the proof, where the assumption of g being polyhedral was not used. However, the conditions are no longer sufficient, not even if h is polyhedral, as the following simple example shows.
Then (DC) is unbounded and thus has no optimal solution. We have dom g = dom h = R_+ and 0+ epi g = 0+ epi h = R^2_+. Thus, the conditions (i), (ii) and (iii) of Theorem 1 are satisfied. An extension to the non-polyhedral case requires further assumptions, as discussed in the following remark.
Remark 3. The second part of the proof of Theorem 1 still works for non-polyhedral functions g if epi g is of the special form Q + 0 + epi g for some compact set Q and if h is assumed to be upper semicontinuous. Then, by the Weierstrass theorem, the minimum of the objective function f of (ConcMin) is attained in Q.
Under certain assumptions, condition (iii) in Theorem 1 can be simplified. We start with two propositions and formulate this result as a corollary of Theorem 1.
Proposition 4. Let A, B ⊆ R^n be non-empty convex sets with A ⊆ B and let B be closed. Then 0+ A ⊆ 0+ B.
Proof. This follows from [13, Theorem 8.3], which states that a vector d belongs to the recession cone of a non-empty closed convex set B whenever x + αd ∈ B for all α ≥ 0 and some x ∈ B. Let d ∈ 0+ A. By definition of the recession cone, we have x + αd ∈ A for all x ∈ A and all α ≥ 0. Since A ⊆ B, we get d ∈ 0+ B.
Proposition 5. Let g be polyhedral, let condition (ii) of Theorem 1 hold and let h be closed. Then condition (iii) of Theorem 1 is equivalent to
(iii') 0+ epi g ⊆ 0+ epi h.
Proof. Let (iii) be satisfied. Then 0+ epi g ⊆ 0+ (epi h ∩ (dom g × R)) ⊆ 0+ epi h, where the second inclusion follows from Proposition 4 since epi h is closed. Conversely, let (iii') hold and let (d, s) ∈ 0+ epi g and (x, t) ∈ epi h ∩ (dom g × R). By (iii'), (x, t) + α(d, s) ∈ epi h for all α ≥ 0. Since g is polyhedral and (d, s) ∈ 0+ epi g, we also have x + αd ∈ dom g. Hence (d, s) ∈ 0+ (epi h ∩ (dom g × R)), i.e., (iii) holds.

Corollary 6. Problem (DC) with g being polyhedral and h being closed has an optimal solution if and only if the following properties hold:
(i) dom g ≠ ∅,
(ii) dom g ⊆ dom h,
(iii') 0+ epi g ⊆ 0+ epi h.
Proof. If h is closed, we have epi h = cl epi h and hence 0+ epi h = 0+ cl epi h. Thus, the result follows from Proposition 5 and Theorem 1.
The following example shows that condition (iii') is not adequate if h is not assumed to be closed.
Both g and h are convex and g is polyhedral. Both functions coincide on dom g = R × [1, ∞), whence (DC) has optimal solutions of the form (0, y) for y ≥ 1. Computing the recession cones of the epigraphs, we see that (1, 0, 1) ∈ 0+ epi g \ 0+ epi h, i.e., (iii') is violated.

The case of h being polyhedral
We consider the Toland-Singer dual problem of (DC), see [14,15], that is,

    min_{y ∈ R^n} h*(y) − g*(y),    (DC*)

where g*(y) := sup_{x ∈ R^n} [yᵀx − g(x)] is the conjugate of g and likewise for h. The duality theory by Toland and Singer states that the optimal objective values of (DC) and (DC*) coincide under the assumption of h being closed.
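On a univariate toy instance, the conjugates of max-affine functions can be evaluated pointwise by scanning breakpoints, which permits a numerical sanity check of the coincidence of the optimal values of (DC) and (DC*). This is only an illustration under the stated assumptions; the helper functions are hypothetical and not part of the paper's implementation.

```python
# Numerical check of Toland-Singer duality for univariate max-affine g, h
# (our illustration).  For g(x) = max_i (a_i*x + b_i), the supremum in
# g*(y) = sup_x [y*x - g(x)] is attained at a breakpoint of g whenever finite.

def breakpoints(lines):
    lines = sorted(lines)  # assumes every line is active in the upper envelope
    return [(b1 - b2) / (a2 - a1) for (a1, b1), (a2, b2) in zip(lines, lines[1:])]

def conjugate(lines, y):
    """Value of the conjugate at y; inf outside [min slope, max slope]."""
    slopes = [a for a, _ in lines]
    if not min(slopes) <= y <= max(slopes):
        return float('inf')
    f = lambda x: max(a * x + b for a, b in lines)
    return max(y * x - f(x) for x in breakpoints(lines))

g_lines = [(1.0, 0.0), (-1.0, 0.0)]   # g(x) = |x|
h_lines = [(0.5, -0.5), (-0.5, 0.5)]  # h(x) = 0.5*|x - 1|
g = lambda x: max(a * x + b for a, b in g_lines)
h = lambda x: max(a * x + b for a, b in h_lines)

grid = [i / 100 - 100 for i in range(20001)]
primal = min(g(x) - h(x) for x in grid)                    # inf g - h
dual = min(conjugate(h_lines, y) - conjugate(g_lines, y)   # inf h* - g*
           for y in grid if conjugate(h_lines, y) < float('inf'))
print(primal, dual)  # both values agree (here: -0.5)
```

On this instance dom h* = [−0.5, 0.5] ⊆ dom g* = [−1, 1], so the dual difference is finite wherever h* is.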
Since g* is convex and h* is polyhedral convex, the existence result of Theorem 1 applies to problem (DC*). The following result provides the relation between optimal solutions of (DC) and (DC*). We denote by ∂f(x) := {y ∈ R^n | ∀z ∈ R^n : f(z) ≥ f(x) + yᵀ(z − x)} the subdifferential of a convex function f : R^n → R ∪ {∞} at x ∈ dom f. We set ∂f(x) := ∅ for x ∉ dom f.

Proposition 8.
(i) If x is an optimal solution of (DC), then each y ∈ ∂h(x) is an optimal solution of (DC*).
If, in addition, g and h are closed, a dual statement holds: (ii) If y is an optimal solution of (DC*), then each x ∈ ∂g * (y) is an optimal solution of (DC).

Remark 9.
As already mentioned in [9], the assumption of g and h being closed for statement (ii) of Proposition 8 is missing in [8,Proposition 4.7]. In [18,Proposition 3.20] the assumption of g being closed is required. Examples can be found in [9].
Theorem 11. Let g be closed and let (6) have an optimal solution for every y ∈ dom g*. Let h be polyhedral. Then, problem (DC) has an optimal solution if and only if the following properties hold:
(i*) dom h* ≠ ∅,
(ii*) dom h* ⊆ dom g*,
(iii*) 0+ epi h* ⊆ 0+ (epi g* ∩ (dom h* × R)).
Proof. Let (DC) have an optimal solution x. Since x ∈ dom h and h is polyhedral, there exists some y ∈ ∂h(x), see e.g. [13, Theorem 23.10]. Proposition 8 (i) states that y is an optimal solution of (DC*). Theorem 1 applied to (DC*) yields the conditions (i*), (ii*) and (iii*). Conversely, let the conditions (i*), (ii*) and (iii*) be satisfied. By Theorem 1, applied to (DC*), we obtain that (DC*) has an optimal solution y. By assumption, (6) has an optimal solution x, which belongs to ∂g*(y) by Proposition 10. Since g and h are closed, Proposition 8 (ii) yields that x is an optimal solution of (DC).

The case of both g and h being polyhedral
Combining the previous results we obtain the following statement.

Corollary 12. Let both g and h be polyhedral. Then the following statements are equivalent:
(a) (DC) has an optimal solution;
(b) (DC*) has an optimal solution;
(c) conditions (i), (ii) and (iii') of Corollary 6 hold;
(d) conditions (i*), (ii*) and (iii*) of Theorem 11 hold.
Proof. By Corollary 6, (a) is equivalent to (c). Since g and h are polyhedral, the assumptions of Theorem 11 are satisfied. Indeed, for y ∈ dom g*, problem (6) has a finite optimal value and hence the minimum is attained, as g is polyhedral. Theorem 11 yields that (b) is equivalent to (d). By Proposition 8 and the fact that the subdifferential of a polyhedral function is non-empty at every point of the domain of the function, we see that (a) is equivalent to (b).

Solution procedure
Let g be polyhedral in problem (DC). We are going to solve (DC) by the following procedure. First we check whether or not an optimal solution of (DC) exists by using Theorem 1. If so, we solve the associated problem (ConcMin) by the solution methods of [4]. By using Theorem 1 again, we verify the following assumptions, which are required for the algorithms presented in [4]: there exists a polyhedral convex pointed cone C ⊆ R^n × R such that
(B) 0+ P ⊆ C for the feasible set P of the concave minimization problem;
(M) f is monotone with respect to C, i.e., f(x, r) ≤ f(y, s) whenever (y, s) − (x, r) ∈ C.

Theorem 13. Let g be polyhedral, let h be closed and let (DC) have an optimal solution. Then conditions (B) and (M) hold for problem (ConcMin) with C = 0+ epi g.
Proof. The set C = 0+ epi g is a polyhedral convex cone. Obviously, (B) holds. It remains to show (M). Let (x, r), (y, s) ∈ R^n × R such that (y, s) − (x, r) ∈ C. By Corollary 6 (iii'), we have C ⊆ 0+ epi h and hence (x, h(x)) + (y − x, s − r) ∈ epi h. By definition of epi h, we obtain h(x) + s − r ≥ h(y) and hence r − h(x) ≤ s − h(y), which proves (M).
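The monotonicity property (M) can be spot-checked numerically. The sketch below is our illustration with a hypothetical toy pair g(x) = |x|, h(x) = 0.5|x − 1|, for which C = 0+ epi g = {(d, t) : t ≥ |d|} is contained in 0+ epi h; it samples random points and directions in C:

```python
import random

# Spot check (illustration with a hypothetical toy instance): property (M) for
# f(x, r) = r - h(x) with C = 0+ epi g, where g(x) = |x| and h(x) = 0.5*|x - 1|.
# Here C = {(d, t) : t >= |d|} and C ⊆ 0+ epi h, so f should satisfy
# f(x, r) <= f(x + d, r + t) whenever t >= |d|.

h = lambda x: 0.5 * abs(x - 1)
f = lambda x, r: r - h(x)

random.seed(0)
ok = True
for _ in range(10000):
    x, r = random.uniform(-10, 10), random.uniform(-10, 10)
    d = random.uniform(-5, 5)
    t = abs(d) + random.uniform(0, 5)   # (d, t) lies in C = 0+ epi g
    ok &= f(x, r) <= f(x + d, r + t) + 1e-12
print(ok)  # True
```

The check always passes here because h is 0.5-Lipschitz, so t − h(x + d) + h(x) ≥ |d| − 0.5|d| ≥ 0.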

Remark 14.
In the previous theorem, the assumption of h being closed can be omitted if the definition of f in (2) is replaced by f(x, r) := r − h|dom g(x), where h|dom g is the function that coincides with h on dom g and is ∞ elsewhere. The proof is similar, using Theorem 1 (iii) instead of Corollary 6 (iii').
The cone C in the previous theorem is not necessarily pointed, as required for the solution methods of [4], see above. However, pointedness can be achieved by a reformulation of problem (DC): Denote by L the lineality space of the convex function g, which is defined by L := {x ∈ R^n | ∃r ∈ R : (x, r) ∈ lineal(epi g)}.
Let L⊥ be the orthogonal complement of L. For some fixed x̄ ∈ dom g we define

    ḡ : R^n → R ∪ {∞},  ḡ(x) := g(x) if x ∈ x̄ + L⊥,  ḡ(x) := ∞ otherwise.

We denote by (DC̄) the polyhedral d.c. optimization problem (DC) where g is replaced by ḡ.
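If epi g is given in an H-representation {z ∈ R^{n+1} : Az ≤ b}, its lineality space is the null space of A, so L can be computed numerically, for instance via a singular value decomposition. A minimal sketch, assuming such a representation is available (our construction, not the paper's code):

```python
import numpy as np

# Sketch: if epi g = {z : A z <= b} is a polyhedron in R^{n+1}, its lineality
# space is the null space of A, and L is the projection of that space onto
# the x-coordinates.

def lineality(A, tol=1e-10):
    """Orthonormal basis (as columns) of the null space of A."""
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:].T

# g(x1, x2) = |x1|: epi g = {(x1, x2, r) : x1 - r <= 0, -x1 - r <= 0}
A = np.array([[1.0, 0.0, -1.0],
              [-1.0, 0.0, -1.0]])
N = lineality(A)     # here: the x2-axis direction in (x1, x2, r)-space
L = N[:-1, :]        # projection onto the x-variables
print(L.shape[1])    # dimension of L (here 1: g does not depend on x2)
```

Restricting g to x̄ + L⊥ then removes exactly the directions found here, which is what the definition of ḡ does.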
Proposition 15. Let (DC) with g being polyhedral have an optimal solution. Then the modified problem (DC̄) has an optimal solution and every optimal solution of (DC̄) is also an optimal solution of (DC).
Proof. We have domḡ ≠ ∅, domḡ ⊆ dom g and 0+ epiḡ ⊆ 0+ epi g. Hence Theorem 1 yields the first statement. Now let x0 be an optimal solution of the modified problem (DC̄). The point x0 is feasible for (DC). Assume there is some x̂ ∈ dom g such that g(x̂) − h(x̂) < g(x0) − h(x0). From Theorem 1 (iii) we conclude that lineal(epi g) ⊆ lineal(epi h|dom g), where h|dom g is the function that coincides with h on dom g and is ∞ elsewhere. From [13, Theorem 8.8] we conclude that for every l ∈ L there is some v ∈ R with g(x + λl) = g(x) + λv for all x ∈ R^n and all λ ∈ R. Likewise we get h(x + λl) = h(x) + λv for all x ∈ dom g and all λ ∈ R. Now decompose x̂ − x̄ = l + q with l ∈ L and q ∈ L⊥ and set x̃ := x̂ − l ∈ x̄ + L⊥. We obtain g(x̃) − h(x̃) = g(x̂) − h(x̂) < g(x0) − h(x0). Since x̃ and x0 belong to domḡ, we have g(x̃) = ḡ(x̃) and g(x0) = ḡ(x0). Together we have ḡ(x̃) − h(x̃) < ḡ(x0) − h(x0), which contradicts the assumption that x0 is optimal for (DC̄).
The following example shows that an optimal solution of (DC) is not necessarily an optimal solution of the modified problem (DC̄).
Summarizing the results, we solve (DC) with g being polyhedral by the following procedure:
(1) Check whether (DC) has an optimal solution using Theorem 1; if not, stop.
(2) If the cone C = 0+ epi g is not pointed, replace g by ḡ, see Proposition 15.
(3) Solve the associated problem (ConcMin) by the methods of [4]; an optimal solution (x0, r0) of (ConcMin) yields an optimal solution x0 of (DC).
In the case where h is polyhedral, we need to assume additionally that g is closed and (6) has an optimal solution for every y ∈ dom g * . Then we can check whether or not (DC) has an optimal solution by using Theorem 11. If so, by Theorem 1 we know that (DC*) has an optimal solution. An optimal solution y of (DC*) can be obtained by the same method (steps (2) and (3) of the above procedure) but applied to (DC*) rather than (DC) (replace g by h * and h by g * ). Finally, we solve (6), which provides an optimal solution of (DC).
If both g and h are polyhedral, (DC) can be solved by two different methods: We speak of the primal method when we use the method that requires g to be polyhedral. The term dual method refers to the method that requires h to be polyhedral. Furthermore, there are two different tests for the existence of an optimal solution of (DC). The test in Corollary 12 (c) is referred to as the primal existence test, whereas (d) in Corollary 12 is called the dual existence test.
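For the univariate max-affine toy setting used in the illustrations above, the whole primal procedure can be sketched in a few lines (our illustration; the paper's actual implementation is based on bensolve tools):

```python
# End-to-end sketch of the primal method for univariate max-affine g and h,
# both finite on all of R (assumptions of this sketch, not the general method).

def solve_dc(g_lines, h_lines):
    """Return (status, x) for min_x g(x) - h(x) with g, h max-affine on R."""
    ga, ha = [a for a, _ in g_lines], [a for a, _ in h_lines]
    # Step (1): existence test via extreme slopes (Theorem 1 / Corollary 6).
    if not (min(ga) <= min(ha) and max(ga) >= max(ha)):
        return 'no optimal solution', None
    # Steps (2)-(3): minimize f(x, r) = r - h(x) over the vertices of epi g.
    g = lambda x: max(a * x + b for a, b in g_lines)
    h = lambda x: max(a * x + b for a, b in h_lines)
    lines = sorted(g_lines)
    xs = [(b1 - b2) / (a2 - a1) for (a1, b1), (a2, b2) in zip(lines, lines[1:])]
    return 'optimal', min(xs, key=lambda x: g(x) - h(x))

print(solve_dc([(1, 0), (-1, 0)], [(0.5, -0.5), (-0.5, 0.5)]))  # ('optimal', 0.0)
print(solve_dc([(0.5, -0.5), (-0.5, 0.5)], [(1, 0), (-1, 0)]))  # ('no optimal solution', None)
```

The dual method would apply the same steps to the conjugate pair (h*, g*) and recover a primal solution afterwards.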

Numerical results
We implemented the results of this article in Matlab 9.6 by using bensolve tools, version 2.3, see [5,6]. The code and the test instances are available at http://tools.bensolve.org/dcsolve. By two (new) commands dcsolve and dcdsolve the user can run, respectively, the primal and dual method described in the previous section. The input arguments of both commands are two arbitrary polyhedral convex functions g and h in the usual format of Bensolve Tools. Both commands solve arbitrary polyhedral d.c. optimization problems (of small size) or certify that no optimal solution exists.
The following two numerical examples were run on a computer with an Intel® Core™ i5 CPU at 3.3 GHz.

Example 17. Let A ∈ R^{n×m} be a matrix and denote by a_i its columns. We define a polyhedral convex function f_A : R^n → R by

    f_A(x) := Σ_{i=1}^m ‖x − a_i‖₁,

where ‖y‖₁ = Σ_{j=1}^n |y_j| denotes the sum norm of a vector y. Given two matrices G ∈ R^{n×m_G} and H ∈ R^{n×m_H}, we consider the polyhedral d.c. optimization problem

    min_{x ∈ R^n} f_G(x) − f_H(x).    (11)

Problems of this type occur in locational analysis, see e.g. [9] and the references therein. In Figure 1, numerical results are depicted for matrices G and H with components g_ij = sin(i + j) and h_ij = cos(i + j) (just to make the results easily reproducible, in comparison to random numbers). The recession cone of epi f_A is just the recession cone of epi(m · ‖·‖₁) and dom f_A = R^n. Thus, by Corollary 6, a solution of (11) exists if and only if m_G ≥ m_H. Figure 1 (left) shows the run time of a numeric verification of this fact for some problem instances by checking the conditions of Corollary 6 (primal existence test) and Theorem 11 (dual existence test). Figure 1 (right) shows the run time of the primal and dual solution methods.
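Assuming f_A(x) = Σ_{i=1}^m ‖x − a_i‖₁ (which is consistent with the recession cone of epi f_A being that of epi(m · ‖·‖₁)), the existence criterion and the objective of (11) can be reproduced in a few lines; this is our illustration, not the paper's Matlab code:

```python
import numpy as np

# Illustration of Example 17 under the assumption f_A(x) = sum_i ||x - a_i||_1.
# The existence test then reduces to comparing the numbers of columns.

def f(A, x):
    """f_A(x) = sum of l1-distances from x to the columns of A."""
    return np.abs(x[:, None] - A).sum()

n, mG, mH = 3, 5, 4
i = np.arange(1, n + 1)[:, None]
G = np.sin(i + np.arange(1, mG + 1))   # g_ij = sin(i + j)
H = np.cos(i + np.arange(1, mH + 1))   # h_ij = cos(i + j)

# Existence (Corollary 6): 0+ epi f_G ⊆ 0+ epi f_H iff mG >= mH.
print(mG >= mH)                               # True: an optimal solution exists
print(f(G, np.zeros(n)) - f(H, np.zeros(n)))  # objective value at x = 0
```

Swapping the roles of G and H (m_G < m_H) makes the test fail, matching the unboundedness of (11) in that case.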
The following example from [7] was solved in [9] using the MOLP solver Bensolve [10]. We implemented the algorithms of [9] with Bensolve Tools and compare them with our algorithms. While the methods of [9] compute all vertices of epi g in the primal algorithm and all vertices of epi h* in the dual algorithm, we compute only part of these vertices by using the qcsolve command of Bensolve Tools. One can easily verify that the all-one vector provides an optimal solution. In Figure 2, the run time of the primal and dual solution methods proposed in this article is compared to the primal and dual method of [9]. We observe that both dual methods perform better than the primal ones; this is due to the simpler structure of h in comparison to g. Moreover, our methods perform better on instances with sufficiently many variables.

Figure 2: Numerical results for Example 18. Left: existence tests. Right: comparison of our algorithms (without existence test) with the methods from [9] (also without existence test).

Summary
We characterized the existence of global optimal solutions of polyhedral d.c. optimization problems in Theorem 1, Theorem 11 and Corollary 12, depending on whether the first, the second, or both components of the objective function are polyhedral. We provided a solution procedure based on existence tests and a reformulation of the polyhedral d.c. optimization problem into a quasi-concave minimization problem. Numerical experiments were run for the case where both components of the objective function are polyhedral.