1 Introduction

D.c. programming and concave minimization are known to be closely related problems, see e.g. [16]. Theory and methods for concave minimization are surveyed, for instance, in [2]. An overview of the field of d.c. programming is given, for example, in [8, 17].

Let \(g :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) and \(h :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be convex functions, where one of the functions g or h is assumed to be polyhedral, i.e., the epigraph of the respective function is a convex polyhedron. We consider the polyhedral d.c. optimization problem

$$\begin{aligned} \min _{x\in \mathrm{dom~}g} [g(x)-h(x)]. \end{aligned}$$
(DC)

This problem will be transformed into a concave minimization problem under linear constraints. The general form of such a problem is as follows. For a concave function \(f:\mathbb {R}^k \rightarrow \mathbb {R}\cup \{-\infty \}\) we consider

$$\begin{aligned} \min _y f(y) \quad \text { s.t. }\quad y \in P, \end{aligned}$$
(1)

where the feasible set P is an arbitrary polyhedron. For the reformulation of (DC) we choose the concave objective function

$$\begin{aligned} f(x,r) :=r-h(x). \end{aligned}$$
(2)

The feasible set P is the epigraph of g. The concave minimization problem associated with (DC)

$$\begin{aligned} \min _{x,r} f(x,r) \quad \text { s.t. } \quad (x,r) \in \mathrm{epi~}g \end{aligned}$$
(ConcMin)

is equivalent to (DC) in the following sense:

  • If \((x^0,r^0)\) solves (ConcMin) then \(x^0\) solves (DC).

  • If \(x^0\) solves (DC) then \((x^0,g(x^0))\) solves (ConcMin).
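This equivalence is easy to check numerically on a toy instance. The following sketch uses our own choice of data, not an example from the paper: \(g(x)=|x|\) (polyhedral) and \(h(x)=x/2\), so that (DC) is minimized at \(x=0\). Since \(f(x,r)=r-h(x)\) is increasing in r, the minimum of (ConcMin) over \(\mathrm{epi~}g\) is attained at \(r=g(x)\), and both problems pick the same point.

```python
import numpy as np

# Toy instance (our choice, not from the paper): g(x) = |x|, h(x) = x/2.
g = lambda x: abs(x)
h = lambda x: 0.5 * x
f = lambda x, r: r - h(x)          # concave objective of (ConcMin), eq. (2)

xs = np.linspace(-5, 5, 2001)

# Solve (DC) by grid search.
dc_vals = g(xs) - h(xs)
x_dc = xs[np.argmin(dc_vals)]

# Solve (ConcMin): for fixed x, f(x, r) is increasing in r, so the
# minimum over the epigraph of g is attained at r = g(x).
cm_vals = np.array([f(x, g(x)) for x in xs])
x_cm = xs[np.argmin(cm_vals)]

print(x_dc, x_cm)   # both grid searches return the same minimizer
```

On this instance both searches return (numerically) the point \(x=0\), illustrating the two bullet points above.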

In the literature on concave minimization, many authors assume a compact feasible set in order to guarantee the existence of optimal solutions, see e.g. [1, 3, 12]. However, problem (ConcMin) always has a non-compact feasible set. In [4], algorithms for concave (even quasi-concave) minimization based on a modification of methods for vector linear programming (VLP) are studied. An implementation of these methods based on the VLP solver bensolve [10, 11] is provided by the Octave/Matlab package bensolve tools [5, 6]. This approach allows non-compact feasible sets but requires certain other assumptions.

While the reformulation of (DC) as (ConcMin) is straightforward, our research focuses on the assumptions which are required for the solution methods. It turns out that verifying or disproving the existence of optimal solutions of (DC) is the crucial task here. For the case where g is polyhedral, we prove that whenever an optimal solution of (DC) exists, it can be computed by solving the associated problem (ConcMin) using the methods of [4]. In the case where h is polyhedral, the same applies to the dual problem of (DC). Under mild assumptions, an optimal solution of (DC) can be obtained from an optimal solution of the dual problem.

This article is organized as follows. In Sect. 2 we characterize the existence of optimal solutions for polyhedral d.c. programs. Section 3 explains how a polyhedral d.c. program can be solved using a (quasi-)concave minimization solver like bensolve tools after the existence of optimal solutions has been verified. The last section presents two numerical examples for the case where both g and h are polyhedral. Both existence tests and solution methods are addressed.

We use the following notation. The domain of a function \(f: \mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) is defined by \(\mathrm{dom~}f :=\{x \in \mathbb {R}^n \mid f(x) < \infty \}\) and the epigraph of f is the set \(\mathrm{epi~}f :=\{(x,r) \in \mathbb {R}^n\times \mathbb {R}\mid r \ge f(x)\}\). A convex function f is called closed if \(\mathrm{epi~}f\) is a closed set. We write \(\mathbb {R}^n_+\) for the set of vectors with non-negative components. The recession cone \(0^+C\) of a convex set \(C \subseteq \mathbb {R}^n\) is the set of all y with \(C + \{y\} \subseteq C\). The lineality space of C is the set \(\mathrm{lineal}(C):=0^+C\cap (-0^+C)\).

2 Existence of global optimal solutions

In this section we discuss the existence of optimal solutions of problem (DC) for the following three cases: (1) g being polyhedral, (2) h being polyhedral, (3) both g and h being polyhedral.

2.1 The case of g being polyhedral

The following characterization of the existence of optimal solutions is the main result of this article.

Theorem 1

Problem (DC) with g being polyhedral has an optimal solution if and only if the following three properties hold:

  1. (i)

    \(\mathrm{dom~}g \ne \emptyset \),

  2. (ii)

    \(\mathrm{dom~}g \subseteq \mathrm{dom~}h\),

  3. (iii)

    \(0^+\mathrm{epi~}g \subseteq 0^{+}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\).

Proof

Let (DC) have an optimal solution \(x^0\). Since \(x^0 \in \mathrm{dom~}g\), (i) is satisfied. Let \(x \in \mathrm{dom~}g\), then \(-h(x) \ge g(x^0)-h(x^0)-g(x) > -\infty \) and hence \(x \in \mathrm{dom~}h\), i.e., (ii) holds. Assume (iii) is violated, that is, \(0^+\mathrm{epi~}g \nsubseteq 0^+(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\). Then we can choose \((d,s) \in 0^+ \mathrm{epi~}g\setminus 0^{+}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\). By the definition of the recession cone, there exists some \((x,t) \in \mathrm{epi~}h\cap (\mathrm{dom~}g \times \mathbb R)\) such that \((x,t)+\alpha (d,s) \notin \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\) for some \(\alpha > 0\). Without loss of generality we can set \(\alpha = 1\). Since \(t \ge h(x)\) we get \((x,h(x))+ (d,s) \notin \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\). We find some \(r \in \mathbb {R}\) such that \((x,r) \in \mathrm{epi~}g\).

Case 1 If \((x,h(x))+(d,s) \notin \mathrm{dom~}g \times \mathbb {R}\), then \((x,r)+ (d,s) \notin \mathrm{dom~}g \times \mathbb {R}\) and hence \((d,s) \not \in 0^+\mathrm{epi~}g\), a contradiction.

Case 2 If \((x,h(x))+ (d,s) \notin \mathrm{epi~}h\), then

$$\begin{aligned} h(x)+s < h(x+d) \end{aligned}$$
(3)

by definition of the epigraph. We consider problem (ConcMin) which is equivalent to (DC) as discussed above. For its objective function f defined in (2) we obtain

$$\begin{aligned} f(x+d,r+s) < f(x,r). \end{aligned}$$
(4)

For any \(n \in \mathbb {N}\), \((x+nd,r+ns)\) is feasible for (ConcMin). But \(f(x+nd,r+ns)\) tends to \(-\infty \), by concavity of f and (4). This contradicts the assumption that (DC) has an optimal solution. Thus, (iii) is satisfied.

Assume now that (i), (ii) and (iii) hold. By (i), \(\mathrm{epi~}g\) is non-empty. Let \((x,r) \in \mathrm{epi~}g\) and \((d,s) \in 0^+ \mathrm{epi~}g\). Then \((x,r)+\alpha (d,s) \in \mathrm{epi~}g\) for all \(\alpha > 0\). By (ii), we have \((x,h(x)) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\). By (iii) we obtain \((d,s) \in 0^{+}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))\). Thus, \((x,h(x))+\alpha (d,s) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})\) for all \(\alpha > 0\). From the definition of the epigraph, we obtain \(h(x) + \alpha s \ge h(x+\alpha d)\). For the objective function f of problem (ConcMin), this implies

$$\begin{aligned} f(x+\alpha d, r+\alpha s) = r+\alpha s -h(x+\alpha d) \ge r-h(x) = f(x,r). \end{aligned}$$
(5)

Since \(\mathrm{epi~}g\) is a polyhedron, it can be expressed by a polytope Q as \(\mathrm{epi~}g = Q + 0^+ \mathrm{epi~}g\). By (ii), the values of f are finite over Q. The polytope Q is the convex hull of its vertices. Concavity of f implies that f attains its minimum at a vertex \((x^0,r^0)\) of Q. Since (5) holds for every \((x,r) \in Q\) and every \((d,s) \in 0^+ \mathrm{epi~}g\), \((x^0,r^0)\) is an optimal solution of (ConcMin) and hence \(x^0\) is an optimal solution of (DC). \(\square \)

If g is not polyhedral, the conditions (i), (ii) and (iii) in Theorem 1 are still necessary for the existence of optimal solutions. This can be seen in the first part of the proof, where the assumption of g being polyhedral was not used. However, the conditions are no longer sufficient, not even if h is polyhedral, as the following simple example shows.

Example 2

Let \(g,h:\mathbb {R}\rightarrow \mathbb {R}\cup \{\infty \}\) be defined as

$$\begin{aligned} g(x) :=\left\{ \begin{array}{cl} - \sqrt{x} &{} \text { if } x \ge 0\\ +\infty &{} \text { otherwise } \end{array}\right. \qquad \text {and}\qquad h(x) :=\left\{ \begin{array}{cl} 0 &{} \text { if } x \ge 0\\ +\infty &{} \text { otherwise } \end{array}\right. . \end{aligned}$$

Then (DC) is unbounded and thus has no optimal solution. We have \(\mathrm{dom~}g = \mathrm{dom~}h = \mathbb {R}_+\) and \(0^+ \mathrm{epi~}g = 0^+ \mathrm{epi~}h = \mathbb {R}^2_+\). Thus, the conditions (i), (ii) and (iii) of Theorem 1 are satisfied.
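The unboundedness in Example 2 is easy to observe numerically; a minimal sketch evaluating \(g(x)-h(x)=-\sqrt{x}\) along \(x = 1, 10, 100, \dots\):

```python
import math

# Example 2: g(x) = -sqrt(x) on R_+, h ≡ 0 on R_+.  Conditions (i)-(iii)
# of Theorem 1 hold, yet g(x) - h(x) = -sqrt(x) is unbounded below.
def objective(x):
    return -math.sqrt(x) - 0.0   # g(x) - h(x)

vals = [objective(10 ** k) for k in range(6)]
print(vals)   # strictly decreasing, with no lower bound
```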

An extension to the non-polyhedral case requires further assumptions as discussed in the following remark.

Remark 3

The second part of the proof of Theorem 1 still works for non-polyhedral functions g if \(\mathrm{epi~}g\) is of the special form \(Q + 0^+ \mathrm{epi~}g\) for some compact set Q and if h is assumed to be upper semicontinuous. Then, by the Weierstrass theorem, the minimum of the objective function f of (ConcMin) is attained in Q.

Under certain assumptions, condition (iii) in Theorem 1 can be simplified. We start with two propositions and formulate this result as a corollary of Theorem 1.

Proposition 4

Let \(A,B \subseteq \mathbb {R}^n\) be non-empty convex sets with \(A \subseteq B\) and let B be closed. Then \(0^+A \subseteq 0^+B\).

Proof

This follows from [13, Theorem 8.3], which states that for a non-empty closed convex set B, \(d \in 0^+B\) if and only if there is some \(x \in B\) satisfying \(x+\alpha d \in B\) for all \(\alpha \ge 0\). Let \(d \in 0^+A\). By definition of the recession cone, we have \(x+\alpha d \in A\) for all \(x \in A\). Since \(A \subseteq B\), we get \(d \in 0^+B\). \(\square \)

Proposition 5

Let \(0^+\mathrm{epi~}h=0^+\mathrm{cl\,}\mathrm{epi~}h\). Then condition (iii) in Theorem 1 is equivalent to

  1. (iii’)

    \(0^+ \mathrm{epi~}g \subseteq 0^+ \mathrm{epi~}h\).

Proof

Let (iii) be satisfied. Then

$$\begin{aligned} 0^+\mathrm{epi~}g \overset{\text {(iii)}}{\subseteq } 0^{+}(\underbrace{\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R})}_{\subseteq \mathrm{cl\,}\mathrm{epi~}h}) \overset{\text {Prop.4}}{\subseteq } 0^+\mathrm{cl\,}\mathrm{epi~}h = 0^+\mathrm{epi~}h. \end{aligned}$$

Let (iii’) be satisfied. Let \((d,s) \in 0^+\mathrm{epi~}g\) and \((x,r) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb R)\). By (iii’), we have \((x,r)+\alpha (d,s) \in \mathrm{epi~}h\) for all \(\alpha \ge 0\). Assuming that \((x,r)+\alpha (d,s) \notin \mathrm{dom~}g \times \mathbb R\), we get \((x,g(x))+\alpha (d,s) \notin \mathrm{dom~}g \times \mathbb R\), which contradicts the precondition \((d,s) \in 0^+\mathrm{epi~}g\). Consequently, \((x,r)+\alpha (d,s) \in \mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb R)\) for all \(\alpha \ge 0\). \(\square \)

Corollary 6

Problem (DC) with g being polyhedral and h being closed has an optimal solution if and only if the following properties hold:

  1. (i)

    \(\mathrm{dom~}g \ne \emptyset \),

  2. (ii)

    \(\mathrm{dom~}g \subseteq \mathrm{dom~}h\),

  3. (iii’)

    \(0^+\mathrm{epi~}g \subseteq 0^+ \mathrm{epi~}h\).

Proof

If h is closed, we have \(\mathrm{epi~}h = \mathrm{cl\,}\mathrm{epi~}h\) and hence \(0^+\mathrm{epi~}h = 0^+\mathrm{cl\,}\mathrm{epi~}h\). Thus, the result follows from Proposition 5 and Theorem 1. \(\square \)

The following example shows that condition (iii’) is not adequate if h is not assumed to be closed.

Example 7

Consider problem (DC) for the functions \(g, h:\mathbb {R}^2 \rightarrow \mathbb {R}\cup \{\infty \}\) with

$$\begin{aligned} g(x,y)={\left\{ \begin{array}{ll} |x| &{} \text {if }y \in [1,\infty ) \\ \infty &{} \text {otherwise}\end{array}\right. } \quad \text { and } \quad h(x,y)={\left\{ \begin{array}{ll} |x| &{} \text {if }y \in (0,\infty ) \\ 2|x| &{} \text {if }y=0 \\ \infty &{} \text {otherwise}\end{array}\right. }. \end{aligned}$$

Both g and h are convex and g is polyhedral. Both functions coincide on \(\mathrm{dom~}g=\mathbb {R}\times [1,\infty )\), whence (DC) has optimal solutions of the form (0, y) for \(y \ge 1\). The recession cones of the functions are

$$\begin{aligned} 0^+\mathrm{epi~}g=\mathrm{cone\,}\left\{ \left( \begin{array}{r} -1 \\ 0 \\ 1 \end{array}\right) ,\left( \begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right) ,\left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right) \right\} \end{aligned}$$

and

$$\begin{aligned} 0^+\mathrm{epi~}h=\mathrm{cone\,}\left\{ \left( \begin{array}{r} -1 \\ 0 \\ 2 \end{array}\right) ,\left( \begin{array}{c} 1 \\ 0 \\ 2 \end{array}\right) ,\left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right) \right\} . \end{aligned}$$

We see that \((1,0,1)^{T} \in 0^+\mathrm{epi~}g \setminus 0^+\mathrm{epi~}h\), i.e., (iii’) is violated.
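The violation of (iii') in Example 7 can be verified by a small computation. The three generators of \(0^+\mathrm{epi~}h\) are linearly independent, so \(d=(1,0,1)^T\) belongs to the cone if and only if its unique coefficient vector in this basis is componentwise nonnegative:

```python
import numpy as np

# Example 7: test whether d = (1,0,1) lies in
# 0+ epi h = cone{(-1,0,2), (1,0,2), (0,1,0)}.
G = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [ 2.0, 2.0, 0.0]])   # generators as columns
d = np.array([1.0, 0.0, 1.0])

# Generators are linearly independent, so the coefficient vector is unique.
coef = np.linalg.solve(G, d)
print(coef)   # first coefficient is negative, so d is not in the cone
```

The solver returns coefficients \((-1/4,\, 3/4,\, 0)\); the negative first entry confirms \((1,0,1)^T \notin 0^+\mathrm{epi~}h\).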

2.2 The case of h being polyhedral

We consider the Toland-Singer dual problem of (DC), see [14, 15], that is,

$$\begin{aligned} \min _{y \in \mathrm{dom~}h^{*}} [h^{*}(y)-g^{*}(y)], \end{aligned}$$
(DC\(^{*}\))

where \(g^{*}(y) :=\sup _{x \in \mathbb {R}^{n}} [{y}^Tx - g(x)]\) is the conjugate of g and likewise for h. The duality theory by Toland and Singer states that the optimal objective values of (DC) and (DC\(^{*}\)) coincide under the assumption of h being closed.

Since \(g^{*}\) is convex and \(h^{*}\) is polyhedral convex, the existence result of Theorem 1 applies to problem (DC\(^{*}\)). The following result provides the relation between optimal solutions of (DC) and (DC\(^{*}\)). We denote by \(\partial f(x):=\{y \in \mathbb {R}^n \mid \forall z \in \mathbb {R}^n: f(z) \ge f(x) + {y}^T(z-x)\}\) the subdifferential of a convex function \(f:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{ \infty \}\) at \(x \in \mathrm{dom~}f\). We set \(\partial f(x) :=\emptyset \) for \(x \not \in \mathrm{dom~}f\).

Proposition 8

(e.g. [8, Proposition 4.7] or [18, Proposition 3.20]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) and \(h:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be convex functions with non-empty domain. Then:

  1. (i)

    If x is an optimal solution of (DC), then each \(y \in \partial h(x)\) is an optimal solution of (DC\(^{*}\)).

If, in addition, g and h are closed, a dual statement holds:

  1. (ii)

    If y is an optimal solution of (DC\(^{*}\)), then each \(x \in \partial g^*(y)\) is an optimal solution of (DC).

Remark 9

As already mentioned in [9], the assumption of g and h being closed for statement (ii) of Proposition 8 is missing in [8, Proposition 4.7]. In [18, Proposition 3.20] the assumption of g being closed is required. Examples can be found in [9].

Proposition 10

(e.g. [13, Theorem 23.5]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be a proper closed convex function. Then \(x \in \partial g^{*}(y)\) if and only if x is an optimal solution of

$$\begin{aligned} \min \limits _{z \in \mathbb {R}^n}[g(z)-y^Tz]. \end{aligned}$$
(6)

Theorem 11

Let g be closed and let (6) have an optimal solution for every \(y \in \mathrm{dom~}g^*\). Let h be polyhedral. Then, problem (DC) has an optimal solution if and only if the following properties hold:

  1. (i*)

    \(\mathrm{dom~}h^{*} \ne \emptyset \),

  2. (ii*)

    \(\mathrm{dom~}h^{*} \subseteq \mathrm{dom~}g^{*}\),

  3. (iii*)

    \(0^+\mathrm{epi~}h^{*} \subseteq 0^+ \mathrm{epi~}g^{*}\).

Proof

Let (DC) have an optimal solution x. Since \(x \in \mathrm{dom~}h\) and h is polyhedral, there exists some \(y \in \partial h(x)\), see e.g. [13, Theorem 23.10]. Proposition 8 states that y is an optimal solution of (DC\(^{*}\)). Theorem 1 applied to (DC\(^{*}\)) yields the conditions (i*), (ii*) and (iii*).

Let the conditions (i*), (ii*) and (iii*) be satisfied. By Theorem 1 we obtain that (DC\(^{*}\)) has an optimal solution y. By assumption, (6) has an optimal solution x, which belongs to \(\partial g^*(y)\), by Proposition 10. Since g and h are closed, Proposition 8 yields that x is an optimal solution to (DC). \(\square \)

2.3 The case of both g and h being polyhedral

Combining the previous results we obtain the following statement.

Corollary 12

Let \(g :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) and \(h :\mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\) be polyhedral convex functions. Then, the following statements are equivalent:

  1. (a)

    Problem (DC) has an optimal solution.

  2. (b)

    Problem (DC\(^{*}\)) has an optimal solution.

  3. (c)

    The conditions (i), (ii), (iii’) in Corollary 6 are satisfied.

  4. (d)

    The conditions (i*), (ii*), (iii*) in Theorem 11 are satisfied.

Proof

By Corollary 6, (a) is equivalent to (c). Since g and h are polyhedral, the assumptions of Theorem 11 are satisfied. Indeed, for \(y \in \mathrm{dom~}g^*\), problem (6) has a finite optimal value and hence the minimum is attained since g is polyhedral. Theorem 11 yields that (b) is equivalent to (d). By Proposition 8 and the fact that the subdifferential of a polyhedral function is non-empty at every point of its domain, we see that (a) is equivalent to (b). \(\square \)

3 Solution procedure

Let g be polyhedral in problem (DC). We solve (DC) by the following procedure. First we check, using Theorem 1, whether or not an optimal solution of (DC) exists. If so, we solve the associated problem (ConcMin) by the solution methods of [4]. Using Theorem 1 again, we verify the following assumptions required by the algorithms presented in [4]:

There exists a polyhedral convex pointed cone \(C \subseteq \mathbb {R}^k\) such that

  1. (M)

    f is C-monotone (i.e. \(y-x \in C\) implies \(f(x)\le f(y)\)),

  2. (B)

    P is C-bounded (i.e. \(0^+ P \subseteq C\)).

If (M) and (B) are satisfied for some polyhedral convex pointed cone \(C \subseteq \mathbb {R}^k\), then (ConcMin) has an optimal solution ([4, Corollary 6]). Moreover, under these assumptions the methods in [4] compute optimal solutions of (ConcMin), see [4, Algorithm 2, Theorem 16] for the primal algorithm, [4, Algorithm 4, Theorem 22] for the dual algorithm, and [4, Section 6] for the extension to the case of the interior of C being empty.
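Condition (M) can be made concrete by a numeric spot-check on a toy instance of (ConcMin) (our choice, not from the paper): for \(g(x)=2|x|\) we have \(C = 0^+\mathrm{epi~}g = \mathrm{cone\,}\{(-1,2),(1,2)\}\), and with \(h(x)=|x|\) condition (iii') holds, so \(f(x,r)=r-h(x)\) should be C-monotone:

```python
import random

# Toy instance (our choice): g(x) = 2|x|, h(x) = |x|.
# C := 0+ epi g = cone{(-1,2), (1,2)}; (iii') holds since |d| <= s/2
# implies s >= |d|, so C is contained in 0+ epi h.
h = lambda x: abs(x)
f = lambda x, r: r - h(x)

random.seed(0)
for _ in range(1000):
    x, r = random.uniform(-5, 5), random.uniform(-5, 5)
    a, b = random.uniform(0, 3), random.uniform(0, 3)   # cone coefficients
    d, s = -a + b, 2 * a + 2 * b                        # (d, s) in C
    # C-monotonicity (M): moving in a direction of C cannot decrease f
    assert f(x, r) <= f(x + d, r + s) + 1e-12
print("C-monotonicity verified on 1000 random samples")
```

Indeed, \(f(x+d,r+s)-f(x,r) = s - (|x+d|-|x|) \ge s - |d| \ge 0\) for every \((d,s)\in C\), which the sampling confirms.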

Theorem 13

Let problem (DC) with g being polyhedral have an optimal solution and let h be closed. Then, for the associated problem (ConcMin), assumptions (M) and (B) are satisfied for \(P = \mathrm{epi~}g\) and the polyhedral convex cone \(C = 0^+\mathrm{epi~}g\).

Proof

The set \(C=0^+\mathrm{epi~}g\) is a polyhedral convex cone. Obviously, (B) holds. It remains to show (M). Let \((x,r),(y,s) \in \mathbb {R}^n\times \mathbb {R}\) such that

$$\begin{aligned} \begin{pmatrix} y-x\\ s-r \end{pmatrix}\in C = 0^+\mathrm{epi~}g \overset{\text {Cor. 6(iii')}}{\subseteq } 0^+\mathrm{epi~}h. \end{aligned}$$

If \(x \notin \mathrm{dom~}g\), then \(f(x,r) = -\infty \le f(y,s)\). We have

$$\begin{aligned} \underbrace{(x,h(x))}_{\in \mathrm{epi~}h} + \underbrace{(y-x,s-r)}_{\in 0^+\mathrm{epi~}h} = (y,h(x)+s-r) \in \mathrm{epi~}h. \end{aligned}$$

By definition of \(\mathrm{epi~}h\), we obtain \(h(x)+s-r \ge h(y)\) and hence \(r-h(x) \le s-h(y)\), which proves (M). \(\square \)

Remark 14

In the previous theorem, the assumption of h being closed can be omitted if the definition of f in (2) is replaced by

$$\begin{aligned} f(x,r):={\left\{ \begin{array}{ll} r-h(x) &{} \text { if } x\in \mathrm{dom~}g \\ -\infty &{} \text { otherwise} \end{array}\right. }. \end{aligned}$$
(7)

The proof is similar by using Theorem 1 (iii) instead of Corollary 6 (iii’).

The cone C in the previous theorem is not necessarily pointed, as required for the solution methods of [4], see above. However, pointedness can be achieved by a reformulation of problem (DC): Denote by L the lineality space of the convex function g, i.e., the projection of \(\mathrm{lineal}(\mathrm{epi~}g)\) onto \(\mathbb {R}^n\), and let \(L^{\bot }\) be the orthogonal complement of L. For some fixed \(\bar{x} \in \mathrm{dom~}g\) we define

$$\begin{aligned} \bar{g} (x) :={\left\{ \begin{array}{ll} g(x) &{} \text { if } x-\bar{x} \in L^\bot \\ \infty &{} \text { otherwise} \end{array}\right. }. \end{aligned}$$
(8)

We denote by (\(\mathrm{{\overline{DC}}}\)) the polyhedral d.c. optimization problem (DC) where g is replaced by \(\bar{g}\).

Proposition 15

Let (DC) with g being polyhedral have an optimal solution. Then \(\mathrm (\overline{DC})\) has an optimal solution and every optimal solution of \(\mathrm (\overline{DC})\) is also an optimal solution of (DC).

Proof

We have \(\mathrm{dom~}\bar{g} \ne \emptyset \), \(\mathrm{dom~}\bar{g} \subseteq \mathrm{dom~}g\) and \(0^+ \mathrm{epi~}\bar{g} \subseteq 0^+ \mathrm{epi~}g\). Theorem 1 yields the first statement. Now let \(x^0\) be an optimal solution of the modified problem \(\mathrm (\overline{DC})\). The point \(x^0\) is feasible for (DC). Assume there is some \(\tilde{x} \in \mathrm{dom~}g\) such that \(g(\tilde{x})-h(\tilde{x}) < g(x^0)-h(x^0) = \bar{g}(x^0)-h(x^0)\). Define

$$\begin{aligned} {\hat{x}} :=(\{\tilde{x}\} + L)\cap (\{\bar{x}\}+L^{\bot }) \in \mathrm{dom~}\bar{g}. \end{aligned}$$

We show that \(g(\tilde{x})-h( \tilde{x}) = \bar{g}(\hat{x})- h(\hat{x})\). Indeed, we have \({\hat{x}} - \tilde{x} \in L\), hence there is some \(r \in \mathbb {R}\) such that

$$\begin{aligned} \begin{pmatrix} {\hat{x}}-{\tilde{x}}\\ r \end{pmatrix} \in \mathrm{lineal}(\mathrm{epi~}g). \end{aligned}$$

From Theorem 1 (iii) we conclude that

$$\begin{aligned} \begin{pmatrix} {\hat{x}}-{\tilde{x}}\\ r \end{pmatrix} \in \mathrm{lineal}(\mathrm{epi~}h \cap (\mathrm{dom~}g \times \mathbb {R}))= \mathrm{lineal}(\mathrm{epi~}(h \vert _{\mathrm{dom~}g})), \end{aligned}$$

where \(h\vert _{\mathrm{dom~}g}\) is the function that coincides with h on \(\mathrm{dom~}g\) and is \(\infty \) elsewhere. From [13, Theorem 8.8] we conclude that

$$\begin{aligned} g(x + \lambda ({\hat{x}} - {\tilde{x}})) = g(x) + \lambda r \end{aligned}$$
(9)

for all \(x \in \mathbb {R}^n\) and all \(\lambda \in \mathbb {R}\). Likewise we get

$$\begin{aligned} h(\underbrace{x + \lambda (\hat{x} - \tilde{x})}_{\in \mathrm{dom~}g}) = h(x) + \lambda r \end{aligned}$$
(10)

for all \(x \in \mathrm{dom~}g\) and all \(\lambda \in \mathbb {R}\). We obtain

$$\begin{aligned} g(\tilde{x}) - h(\tilde{x}) {\mathop {=}\limits ^{(9),(10)}} g(\tilde{x} + \lambda (\hat{x} - \tilde{x})) - h(\tilde{x} + \lambda (\hat{x} - \tilde{x})) {\mathop {=}\limits ^{\lambda = 1}} g(\hat{x}) - h(\hat{x}). \end{aligned}$$

Since \(\hat{x} \in \mathrm{dom~}\bar{g}\), we have \(g(\hat{x})=\bar{g}(\hat{x})\). Together we have \(\bar{g}(\hat{x})- h(\hat{x}) < \bar{g}(x^0)-h(x^0)\) which contradicts the assumption that \(x^0\) is optimal for \(\mathrm (\overline{DC})\). \(\square \)

The following example shows that an optimal solution of (DC) is not necessarily an optimal solution of \(\mathrm (\overline{DC})\).

Example 16

Let \(g,h:\mathbb {R}\rightarrow \mathbb {R}\), \(g\equiv 0\) and \(h\equiv 0\). Then \(L=\mathbb {R}\) and \(L^\bot =\{0\}\). We have \(\bar{g}(0)=0\) and \(\bar{g}(x)=\infty \) if \(x\ne 0\). Thus 0 is the only optimal solution of \(\mathrm (\overline{DC})\), but every \(x\in \mathbb {R}\) is an optimal solution of (DC).

Summarizing the results, we solve (DC) with g being polyhedral by the following procedure:

  1. (1)

Check, using Theorem 1, whether (DC) has an optimal solution; if not, stop.

  2. (2)

    Determine \(\bar{x} \in \mathrm{dom~}g\) and \(L^\bot \) in order to define the function \(\bar{g}\) in (8).

  3. (3)

    Solve (ConcMin) with g replaced by \(\bar{g}\) using the methods of [4].
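For very small instances, step (3) can be mimicked without bensolve tools by brute-force vertex enumeration of \(\mathrm{epi~}g\). The following sketch is our own toy stand-in, not the method of [4]: we take \(g(x)=\max (-x,\,2x-3)\) and \(h(x)=x/2\), represent \(\mathrm{epi~}g\) by the constraints \(r \ge a_i x + b_i\), and minimize the concave objective f over the vertices, where the minimum is attained by the argument in the proof of Theorem 1:

```python
from itertools import combinations
import numpy as np

# Toy 1-D instance (our choice): g(x) = max(-x, 2x - 3), h(x) = x/2.
A = [(-1.0, 0.0), (2.0, -3.0)]          # affine pieces a_i x + b_i of g
h = lambda x: 0.5 * x
f = lambda x, r: r - h(x)               # objective of (ConcMin)

# Vertices of epi g: intersections of pairs of active constraints
# a_i x - r = -b_i that satisfy all remaining constraints.
verts = []
for (a1, b1), (a2, b2) in combinations(A, 2):
    M = np.array([[a1, -1.0], [a2, -1.0]])
    if abs(np.linalg.det(M)) < 1e-12:
        continue                        # parallel pieces, no intersection
    x, r = np.linalg.solve(M, np.array([-b1, -b2]))
    if all(r >= a * x + b - 1e-9 for a, b in A):
        verts.append((x, r))

best = min(verts, key=lambda v: f(*v))
print(best, f(*best))                   # vertex solving (ConcMin), hence (DC)
```

On this instance the only vertex is \((1,-1)\), giving the optimal value \(-3/2\) of (DC). This brute force scales exponentially; the point of the methods of [4] is to avoid such enumeration.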

In the case where h is polyhedral, we need to assume additionally that g is closed and (6) has an optimal solution for every \(y \in \mathrm{dom~}g^{*}\). Then we can check whether or not (DC) has an optimal solution by using Theorem 11. If so, by Theorem 1 we know that (DC\(^{*}\)) has an optimal solution. An optimal solution y of (DC\(^{*}\)) can be obtained by the same method (steps (2) and (3) of the above procedure) but applied to (DC\(^{*}\)) rather than (DC) (replace g by \(h^*\) and h by \(g^{*}\)). Finally, we solve (6), which provides an optimal solution of (DC).

If both g and h are polyhedral, (DC) can be solved by two different methods: We speak about the primal method in case we use the method where g is required to be polyhedral. The term dual method refers to the method where h is required to be polyhedral. Furthermore there are two different tests for existence of an optimal solution of (DC). The test in Corollary 12 (c) is referred to as primal existence test whereas (d) in Corollary 12 is called dual existence test.

4 Numerical results

We implemented the results of this article in Matlab 9.6 using bensolve tools, version 2.3, see [5, 6]. The code and the test instances are available at http://tools.bensolve.org/dcsolve. Two new commands, dcsolve and dcdsolve, allow the user to run, respectively, the primal and the dual method described in the previous section. The input arguments of both commands are two arbitrary polyhedral convex functions g and h in the usual format of bensolve tools. Both commands solve arbitrary polyhedral d.c. optimization problems (of small size) or certify that no optimal solution exists.

The following two numerical examples were run on a computer with an Intel® Core™ i5 CPU at 3.3 GHz.

Fig. 1

Numerical results for Example 17. The numbers of columns of G and H are \(m_G=20\) and \(m_H=15\). Left: run time of the test for existence of solutions in dependence on the number of variables. Right: run time of the primal and dual solution algorithms (without existence test) in dependence on the number of variables

Example 17

Let \(A \in \mathbb {R}^{n \times m}\) be a matrix and denote by \(a^i\) its columns. We define a polyhedral convex function \(f_A:\mathbb {R}^n \rightarrow \mathbb {R}\) by

$$\begin{aligned} f_A(x) :=\sum _{i=1}^m \Vert x-a^{i}\Vert _1, \end{aligned}$$

where \(\Vert y\Vert _1 = \sum _{j=1}^n |y_j|\) denotes the sum norm of a vector y. Given two matrices \(G \in \mathbb {R}^{n\times m_G}\) and \(H \in \mathbb {R}^{n \times m_H}\) we consider the polyhedral d.c. optimization problem

$$\begin{aligned} \min _{x\in \mathbb {R}^n} f_G(x)-f_H(x). \end{aligned}$$
(11)

Problems of this type occur in locational analysis, see e.g. [9] and the references therein. In Fig. 1 numerical results are depicted for matrices G and H with components \(g_{ij}=\sin (i+j)\) and \(h_{ij}=\cos (i+j)\) (chosen deterministically so that the results are easily reproducible, in contrast to random data). The recession cone of \(\mathrm{epi~}f_A\) is just the recession cone of \(\mathrm{epi~}(m \Vert \cdot \Vert _1)\) and \(\mathrm{dom~}f_A = \mathbb {R}^n\). Thus, by Corollary 6, a solution of (11) exists if and only if \(m_G \ge m_H\). Figure 1 (left) shows the run time of a numerical verification of this fact for some problem instances by checking the conditions of Corollary 6 (primal existence test) and Theorem 11 (dual existence test). Figure 1 (right) shows the run time of the primal and dual solution methods.
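The existence criterion \(m_G \ge m_H\) reflects that along any ray \(td\) the objective of (11) grows like \((m_G-m_H)\,t\,\Vert d\Vert _1\). A small sketch (with our own choice \(n=2\), \(m_G=3\), \(m_H=2\), smaller than the instances in Fig. 1) illustrates this growth:

```python
import numpy as np

# Example 17 data: f_A(x) = sum_j ||x - a^j||_1 with a^j the columns of A;
# here g_ij = sin(i+j), h_ij = cos(i+j) as in the paper, but with the
# small sizes n = 2, m_G = 3, m_H = 2 (our choice for illustration).
n, mG, mH = 2, 3, 2
G = np.array([[np.sin(i + j) for j in range(1, mG + 1)] for i in range(1, n + 1)])
H = np.array([[np.cos(i + j) for j in range(1, mH + 1)] for i in range(1, n + 1)])

f = lambda A, x: np.abs(x[:, None] - A).sum()   # sum of 1-norm distances
obj = lambda x: f(G, x) - f(H, x)               # objective of (11)

# Along the ray t*d the objective grows like (mG - mH) * t * ||d||_1.
d = np.array([1.0, 1.0])
vals = [obj(t * d) for t in (1, 10, 100, 1000)]
print(vals)
```

Since \(m_G > m_H\) here, the values increase roughly linearly in t, i.e., the objective is bounded below, in line with Corollary 6.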

The following example from [7] was solved in [9] using the VLP solver bensolve [10]. We implemented the algorithms of [9] with bensolve tools and compare them with our algorithms. While the methods of [9] compute all vertices of \(\mathrm{epi~}g\) in the primal algorithm and all vertices of \(\mathrm{epi~}h^*\) in the dual algorithm, we compute only a part of these vertices by using the qcsolve command of bensolve tools. We observe better performance for the larger instances.

Fig. 2

Numerical results for Example 18. Left: existence tests. Right: comparison of our algorithms with the methods from [9] (both without existence test). Both dual methods perform better than the primal ones, which is due to the simpler structure of h in comparison to g. Our methods perform better on instances with sufficiently many variables

Example 18

Consider the polyhedral d.c. optimization problem (DC) with

$$\begin{aligned} g(x) = |x_1-1|+200 \sum _{i=2}^n \max \left\{ 0,|x_{i-1}|-x_i\right\} \end{aligned}$$

and

$$\begin{aligned} h(x) = 100 \sum _{i=2}^n \left( |x_{i-1}|-x_i\right) . \end{aligned}$$

One can easily verify that the all-one vector provides an optimal solution. In Fig. 2, the run time of the primal and dual solution method proposed in this article is compared to the primal and dual method of [9].
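The optimality claim can be checked directly: with \(t_i := |x_{i-1}|-x_i\), each term satisfies \(200\max \{0,t_i\}-100\,t_i = 100\,|t_i|\), so \(g(x)-h(x) = |x_1-1| + 100\sum _{i=2}^n \bigl ||x_{i-1}|-x_i\bigr |\ge 0\), with equality exactly at the all-one vector. A numeric sketch:

```python
import random

# Example 18: g(x) - h(x) = |x_1 - 1| + 100 * sum_i | |x_{i-1}| - x_i |,
# which is nonnegative and vanishes only at the all-one vector.
def g(x):
    return abs(x[0] - 1) + 200 * sum(max(0.0, abs(x[i - 1]) - x[i])
                                     for i in range(1, len(x)))

def h(x):
    return 100 * sum(abs(x[i - 1]) - x[i] for i in range(1, len(x)))

n = 5
ones = [1.0] * n
print(g(ones) - h(ones))           # 0.0 at the optimal all-one vector

random.seed(1)
worst = min(g(x := [random.uniform(-2, 2) for _ in range(n)]) - h(x)
            for _ in range(1000))
print(worst)                       # stays nonnegative on random points
```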

5 Summary

We characterized the existence of global optimal solutions of polyhedral d.c. optimization problems in Theorem 1, Theorem 11 and Corollary 12, depending on whether the first, the second, or both components of the objective function are polyhedral. We provided a solution procedure based on existence tests and a reformulation of the polyhedral d.c. optimization problem as a quasi-concave minimization problem. Numerical experiments were run for the case where both components of the objective function are polyhedral.