1 Introduction

Tropical linear algebra (also called max-algebra or path algebra) is an analogue of linear algebra, where addition is replaced by maximization and multiplication by conventional addition.

The interest in tropical linear algebra was originally motivated by the possibility of dealing with a class of nonlinear problems in pure and applied mathematics, operational research, science and engineering as if they were linear, due to the fact that the underlying structure is a commutative and idempotent semifield. Besides the main advantage of using linear rather than nonlinear techniques, tropical linear algebra enables us to efficiently describe and deal with complex sets (see Butkovič [1]), reveal combinatorial aspects of problems (see Butkovič [2]) and view a class of problems in a new, unconventional way. The first pioneering papers by Cuninghame-Green [3] and Vorobyov [4] appeared in the 1960s, followed by substantial contributions in the 1970s and 1980s (see Cuninghame-Green [5], Gondran and Minoux [6] and Zimmermann [7]). Since 1995, we have seen a remarkable expansion of this research field following a number of findings and applications in areas as diverse as control theory and optimization (see Baccelli et al. [8]), phylogenetics (see Speyer and Sturmfels [9]), modeling of cellular protein production (see Brackley et al. [10]) and railway scheduling (see Heidergott et al. [11]). A number of research monographs have been published, for instance, by Baccelli et al. [8], Butkovič [12] and Heidergott et al. [11].

Tropical linear algebra covers a range of linear-algebraic problems in the max-linear setting, such as systems of linear equations and inequalities, linear independence and rank, bases and dimension, polynomials, characteristic polynomials, matrix equations, matrix orbits and periodicity of matrix powers (see Baccelli et al. [8], Butkovič [12], Cuninghame-Green [5] and Heidergott et al. [11]). Among the most intensively studied questions were the eigenproblem and z-matrix equations. These have been solved by numerically stable low-order polynomial algorithms (see Butkovič et al. [13], Cuninghame-Green [5], Gondran and Minoux [6], Gaubert [14] and Bapat et al. [15]). The same is true of the subeigenproblem, which is strongly linked to the eigenproblem. In contrast, attention has only recently been paid to the supereigenproblem, which is trivial for small eigenvalues; in general, the description of the whole solution set seems to be much more difficult than for the eigenproblem (see Butkovič [16]). At the same time, tropical linear and integer linear programs have also been studied, for instance, by Zimmermann [7], Butkovič [12], Butkovič and Aminu [17], Gaubert et al. [18] and Butkovič and MacCaig [19]. While one-sided tropical linear systems of equations and inequalities are solvable in low-order polynomial time, no polynomial method seems to exist for solving their two-sided counterparts in general. Consequently, linear programs with one-sided constraints are easily solvable, while a polynomial solution method for linear or integer linear programs with two-sided constraints remains an open question.

The aim of this paper is to give a simple compact proof of the strong duality theorem for tropical linear programs with one-sided constraints (the weak duality is trivial; its proof follows the lines of the conventional weak duality and is given here purely for completeness) and then use this result to present efficient methods for solving

  (a) a special type of tropical linear programs with two-sided constraints, and

  (b) tropical dual integer programs with one-sided constraints.

Note that the weak and strong duality in tropical linear programming has been investigated in the past, in more general settings, by Hoffman [20] and Zimmermann [21]. The strong duality theorem has been presented by Superville [22] with a rather complicated proof. The present paper uses residuation to provide a simple and compact proof.

2 Definitions, Notation and Preliminary Results

As usual in tropical linear algebra, the conventional operators \(\left( +,.\right) \) are replaced by the pair \(\left( \oplus ,\otimes \right) \) where

$$\begin{aligned} a\oplus b=\max (a,b) \end{aligned}$$

and

$$\begin{aligned} a\otimes b=a+b \end{aligned}$$

for \(a,b\in \overline{\mathbb {R}}:=\mathbb {R}\cup \{-\infty \}.\) This pair is extended to matrices and vectors as in conventional linear algebra. That is, if \(A=(a_{ij}),~B=(b_{ij})\) and \(C=(c_{ij})\) are matrices of compatible sizes with entries from \(\overline{\mathbb {R}}\), we write \(C=A\oplus B\) if \(c_{ij}=a_{ij}\oplus b_{ij}\) for all i and j, and \(C=A\otimes B\) if

$$\begin{aligned} c_{ij}=\bigoplus \limits _{k}a_{ik}\otimes b_{kj}=\max _{k}(a_{ik}+b_{kj}) \end{aligned}$$

for all i and j. If \(\alpha \in \overline{\mathbb {R}}\), then \(\alpha \otimes A=\left( \alpha \otimes a_{ij}\right) \). For simplicity, we will use the convention of not writing the symbol \(\otimes .\) Thus, in what follows the symbol \(\otimes \) will be omitted (except when necessary for clarity), and unless explicitly stated otherwise, all multiplications indicated are in max-algebra.
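As an illustration of these definitions (not part of the original development), the extended operations \(\oplus\) and \(\otimes\) can be sketched in a few lines of Python; the helper names `mp_add` and `mp_mul` are ours:

```python
import numpy as np

# Sketch of the max-plus matrix operations; -inf plays the role of epsilon.
def mp_add(A, B):
    """C = A 'oplus' B: entrywise maximum."""
    return np.maximum(A, B)

def mp_mul(A, B):
    """C = A 'otimes' B: c_ij = max_k (a_ik + b_kj)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return (A[:, :, None] + B[None, :, :]).max(axis=1)

A = np.array([[1.0, 3.0], [0.0, 2.0]])
x = np.array([[2.0], [1.0]])
print(mp_mul(A, x))  # [[4.], [3.]] since max(1+2, 3+1)=4 and max(0+2, 2+1)=3
```

The broadcast trick forms all sums \(a_{ik}+b_{kj}\) at once and then maximizes over the middle axis, mirroring the displayed formula term by term.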

Consider the following motivational example (see Cuninghame-Green [5]). Products \(P_{1},\ldots ,P_{m}\) are prepared using n machines (or processors), every machine contributing to the completion of each product by producing a partial product. It is assumed that each machine can work for all products simultaneously and that all these actions on a machine start as soon as the machine starts to work. Let \(a_{ij}\) be the duration of the work of the jth machine needed to complete the partial product for \(P_{i}\) \((i=1,\ldots ,m;j=1,\ldots ,n).\) If this interaction is not required for some i and j, then \(a_{ij}\) is set to \(-\infty .\) The matrix \(A=\left( a_{ij}\right) \) is called the production matrix. Let us denote by \(x_{j}\) the starting time of the jth machine \((j=1,\ldots ,n)\). Then, all partial products for \(P_{i}\) \((i=1,\ldots ,m)\) will be ready at time

$$\begin{aligned} \max (x_{1}+a_{i1},\ldots ,x_{n}+a_{in}). \end{aligned}$$

Hence, if \(b_{1},\ldots ,b_{m}\) are given completion times, then the starting times have to satisfy the system of equations:

$$\begin{aligned} \max (x_{1}+a_{i1},\ldots ,x_{n}+a_{in})=b_{i}\text { for all }i=1,\ldots ,m. \end{aligned}$$

Using max-algebra, this system can be written in a compact form as a system of linear equations:

$$\begin{aligned} Ax=b. \end{aligned}$$
(1)

A system of the form (1) is called a one-sided system of max-linear equations (or briefly a one-sided max-linear system or just a max-linear system). Such systems are easily solvable (see Cuninghame-Green [3], Butkovič [12] and Zimmermann [7]; see also the end of this section).

In applications, it may be required that the starting times are as small as possible (to reflect the fact that the producer has free capacity and wishes to start production as soon as possible). This means to minimize the value of

$$\begin{aligned} f(x)=\max (x_{1},\ldots ,x_{n}) \end{aligned}$$
(2)

with respect to (1). In general, we may need to minimize (or possibly to maximize) a max-linear function, that is,

$$\begin{aligned} f(x)=f^{T}x=\max (f_{1}+x_{1},\ldots ,f_{n}+x_{n}), \end{aligned}$$
(3)

where \(f=\left( f_{1},\ldots ,f_{n}\right) ^{T}.\) So in (2) we have \(f=\left( 0,\ldots ,0\right) ^{T}.\) Thus, the max-linear programs are of the form

$$\begin{aligned} f^{T}x\longrightarrow \min \text { or }\max \text {, s.t. }Ax=b. \end{aligned}$$

Sometimes the vector b is given to require the earliest or latest rather than exact completion times. In such cases, the constraints of the optimization problem are \(Ax\ge b\) or \(Ax\le b.\) In some cases, two processes with the same starting times and production matrices, say, A and B, have to be coordinated so that the products of the second are completed not before those of the first, and possibly also not before time restrictions given by a vector \(d=\left( d_{1},\ldots ,d_{m}\right) ^{T}.\) Then, the constraints have the form

$$\begin{aligned} Ax\oplus d\le Bx. \end{aligned}$$
(4)

In full generality, max-linear programs are of the form

$$\begin{aligned} f^{T}x\longrightarrow \min \text { or }\max \text {, s.t. }Ax\oplus c=Bx\oplus d. \end{aligned}$$

These were first studied by Butkovič and Aminu [17], with pseudopolynomial methods later presented by Akian et al. [23] and Gaubert et al. [18]. No polynomial solution method seems to exist in general, not even for feasibility checking, although it is known that this problem is in \(NP\cap co\)-NP; see Bezem et al. [24].

Throughout the paper, we denote \(-\infty \) by \(\varepsilon \) (the neutral element with respect to \(\oplus \)), and for convenience, we also denote by the same symbol any vector or matrix all of whose entries are \(-\infty .\) A matrix or vector with all entries equal to 0 will also be denoted by 0. If \(a\in \mathbb {R}\), then the symbol \(a^{-1}\) stands for \(-a.\) Matrices and vectors all of whose entries are real numbers are called finite. We assume everywhere that \(n,m\ge 1\) are integers and denote \(N=\left\{ 1,\ldots ,n\right\} ,M=\left\{ 1,\ldots ,m\right\} \).

It is easily proved that if \(A,B,C\) and D are matrices of compatible sizes (including vectors considered as \(m\times 1\) matrices), then the usual laws of associativity and distributivity hold and isotonicity is satisfied:

$$\begin{aligned} A\ge B\Longrightarrow AC\ge BC\text { and }DA\ge DB. \end{aligned}$$
(5)

A square matrix is called diagonal if all its diagonal entries are real numbers and off-diagonal entries are \(\varepsilon .\) More precisely, if \(x=\left( x_{1},\ldots ,x_{n}\right) ^{T}\in \mathbb {R}^{n}\), then \(diag\left( x_{1},\ldots ,x_{n}\right) \) or just \(diag\left( x\right) \) is the \(n\times n\) diagonal matrix

$$\begin{aligned} \left( \begin{array} [c]{cccc} x_{1} &{} \varepsilon &{} ... &{} \varepsilon \\ \varepsilon &{} x_{2} &{} ... &{} \varepsilon \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \varepsilon &{} \varepsilon &{} ... &{} x_{n} \end{array} \right) . \end{aligned}$$

The matrix \(diag\left( 0\right) \) is called the unit matrix and denoted I. Obviously, \(AI=IA=A\) whenever A and I are of compatible sizes. A matrix obtained from a diagonal matrix [unit matrix] by permuting the rows and/or columns is called a generalized permutation matrix [permutation matrix]. It is known that in tropical linear algebra generalized permutation matrices are the only type of invertible matrices (see Cuninghame-Green [5] and Butkovič [12]). Clearly,

$$\begin{aligned} \left( diag\left( x_{1},\ldots ,x_{n}\right) \right) ^{-1}=diag\left( x_{1}^{-1},\ldots ,x_{n}^{-1}\right) . \end{aligned}$$

If A is a square matrix, then the iterated product \(AA\ldots A\), in which the symbol A appears k times, will be denoted by \(A^{k}\). By definition, \(A^{0}=I\).

Given \(A\in \overline{\mathbb {R}}^{n\times n}\), it is usual (see Cuninghame-Green [5], Baccelli et al. [8], Heidergott et al. [11] and Butkovič [12]) in max-algebra to define the infinite series

$$\begin{aligned} A^{*}=I\oplus A^{+}=I\oplus A\oplus A^{2}\oplus A^{3}\oplus \cdots . \end{aligned}$$
(6)

The matrix \(A^{*}\) is called the strong transitive closure of A,  or the Kleene Star.

It follows from the definitions that every entry of the matrix sequence

$$\begin{aligned} \left\{ A\oplus A^{2}\oplus \cdots \oplus A^{k}\right\} _{k=1}^{\infty } \end{aligned}$$

is a non-decreasing sequence in \(\overline{\mathbb {R}}\), and therefore, either it is convergent to a real number (when bounded), or its limit is \(+\infty \) (when unbounded). If \(\lambda (A)\le 0\), then

$$\begin{aligned} A^{*}=I\oplus A\oplus A^{2}\oplus \cdots \oplus A^{k-1} \end{aligned}$$

for every \(k\ge n\), and \(A^{*}\) can be found using the Floyd–Warshall algorithm in \(O\left( n^{3}\right) \) time (see Butkovič [12]).
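The Floyd–Warshall computation of \(A^{*}\) under the assumption \(\lambda (A)\le 0\) can be sketched as follows; this is our own minimal Python illustration (helper name `kleene_star` is ours), not a reference implementation:

```python
import numpy as np

def kleene_star(A):
    """A* = I (+) A (+) A^2 (+) ... via a max-plus Floyd-Warshall pass.
    Assumes lambda(A) <= 0 so that the series stabilizes; O(n^3) time."""
    S = np.asarray(A, float).copy()
    n = S.shape[0]
    for k in range(n):
        # relax all pairs through intermediate node k (longest paths in D_A)
        S = np.maximum(S, S[:, [k]] + S[[k], :])
    idx = np.arange(n)
    S[idx, idx] = np.maximum(S[idx, idx], 0.0)   # join with the unit matrix I
    return S

A = np.array([[-1.0, 0.0], [-2.0, -1.0]])        # lambda(A) = -1 <= 0
print(kleene_star(A))                            # [[ 0.  0.]  [-2.  0.]]
```

The inner update is the max-plus analogue of the shortest-path relaxation: since all cycle means are nonpositive, the longest-path values are finite and the pass terminates with \(A^{*}\).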

The matrix \(\lambda ^{-1}A\) for \(\lambda \in \mathbb {R}\) will be denoted by \(A_{\lambda }\) and \((A_{\lambda })^{*}\) will be shortly written as \(A_{\lambda }^{*}.\)

Given \(A=(a_{ij})\in \overline{\mathbb {R}}^{n\times n}\), the symbol \(D_{A}\) will denote the weighted digraph \(\left( N,E,w\right) \) (called associated with A) where \(E=\left\{ \left( i,j\right) ;a_{ij}>\varepsilon \right\} \) and \(w\left( i,j\right) =a_{ij}\) for all \((i,j)\in E.\) The symbol \(\lambda (A)\) will stand for the maximum cycle mean of A, that is:

$$\begin{aligned} \lambda (A)=\max _{\sigma }\mu (\sigma ,A), \end{aligned}$$
(7)

where the maximization is taken over all elementary cycles in \(D_{A}\) and

$$\begin{aligned} \mu (\sigma ,A)=\frac{w(\sigma ,A)}{l\left( \sigma \right) } \end{aligned}$$
(8)

denotes the mean of a cycle \(\sigma \). With the convention \(\max \emptyset =\varepsilon \), the value \(\lambda \left( A\right) \) always exists since the number of elementary cycles is finite. An \(O\left( n^{3}\right) \) algorithm for finding \(\lambda \left( A\right) \) was first presented by Karp [25] (see also Butkovič [12]). Observe that \(\lambda \left( A\right) =\varepsilon \) if and only if \(D_{A}\) is acyclic.
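For intuition, \(\lambda (A)\) can also be computed naively from the max-plus powers, since \((A^{k})_{ii}\) is the greatest weight of a closed walk of length k through node i. The following \(O(n^{4})\) sketch (function name ours) is for illustration only; Karp's \(O(n^{3})\) algorithm is the practical choice:

```python
import numpy as np

def max_cycle_mean(A):
    """lambda(A) = max over k = 1..n of max_i (A^k)_ii / k, via max-plus powers.
    Returns -inf exactly when the digraph D_A is acyclic."""
    A = np.asarray(A, float)
    n = A.shape[0]
    P = A.copy()                 # P holds the max-plus power A^k
    best = -np.inf
    for k in range(1, n + 1):
        best = max(best, np.max(np.diag(P)) / k)
        P = (P[:, :, None] + A[None, :, :]).max(axis=1)   # P := P (x) A
    return best

A = np.array([[-np.inf, 3.0], [1.0, -np.inf]])
print(max_cycle_mean(A))   # 2.0, from the cycle 1 -> 2 -> 1 of mean (3+1)/2
```

Restricting k to \(1,\ldots ,n\) suffices because the maximum cycle mean is always attained by an elementary cycle, whose length is at most n.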

The tropical eigenvalue–eigenvector problem (briefly eigenproblem) is the following:

Given \(A\in \overline{\mathbb {R}}^{n\times n}\), find all \(\lambda \in \overline{\mathbb {R}}\) (eigenvalues) and \(x\in \overline{\mathbb {R}}^{n},x\ne \varepsilon \) (eigenvectors) such that

$$\begin{aligned} Ax=\lambda x. \end{aligned}$$

This problem has been studied since the work of Cuninghame-Green. An \(n\times n\) matrix has up to n eigenvalues with \(\lambda \left( A\right) \) always being the largest eigenvalue (called principal). This finding was first presented by Cuninghame-Green [5] and Gondran and Minoux [6] (see also Vorobyov [4]). The full spectrum was first described by Gaubert [14] and Bapat et al. [15]. The spectrum and bases of all eigenspaces can be found in \(O(n^{3})\) time (see Butkovič et al. [26] and Butkovič [12]).

The aim of this paper is to study integer solutions to tropical linear programs, and therefore, we summarize here only the results on finite solutions and for finite A and b. The following has been proved by Cuninghame-Green [5] and Gondran and Minoux [6].

Theorem 2.1

If \(A\in \mathbb {R}^{n\times n}\), then \(\lambda \left( A\right) \) is the unique eigenvalue of A and all eigenvectors of A are finite.

A matrix \(A\in \overline{\mathbb {R}}^{n\times n}\) is called irreducible if \(D_{A}\) is strongly connected. Note that the statement of Theorem 2.1 remains valid if the premise \(A\in \mathbb {R}^{n\times n}\) is replaced by \(A\in \overline{\mathbb {R}}^{n\times n}\) with A irreducible. For details, see Butkovič [12].

If \(A\in \mathbb {R}^{n\times n}\) and \(\lambda \in \mathbb {R}\), then a vector \(x\in \mathbb {R}^{n},x\ne \varepsilon \) satisfying

$$\begin{aligned} Ax\le \lambda x \end{aligned}$$
(9)

is called a subeigenvector of A with associated subeigenvalue \(\lambda \). We denote \(V_{*}(A,\lambda )=\left\{ x\in \mathbb {R}^{n}:Ax\le \lambda x\right\} \). The following has been proved by Butkovič and Schneider [27].

Theorem 2.2

Let \(A\in \mathbb { R}^{n\times n}\). Then, \(V_{*}\left( A,\lambda \right) \ne \emptyset \) if and only if \(\lambda \ge \lambda \left( A\right) \) and \(V_{*}\left( A,\lambda \right) =\left\{ A_{\lambda }^{*}u:u\in \mathbb {R}^{n}\right\} \).
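Theorem 2.2 is easy to check numerically: for \(\lambda \ge \lambda (A)\), every vector \(x=A_{\lambda }^{*}u\) satisfies \(Ax\le \lambda x\). A small sketch (the helper name and the sample data are ours):

```python
import numpy as np

def kleene_star(B):
    """Max-plus Kleene star via Floyd-Warshall; assumes lambda(B) <= 0."""
    S = np.asarray(B, float).copy()
    n = S.shape[0]
    for k in range(n):
        S = np.maximum(S, S[:, [k]] + S[[k], :])
    idx = np.arange(n)
    S[idx, idx] = np.maximum(S[idx, idx], 0.0)
    return S

A = np.array([[0.0, 2.0], [1.0, 0.0]])     # lambda(A) = 1.5
lam = 1.5
S = kleene_star(A - lam)                   # A_lambda* = (lam^{-1} A)*
u = np.zeros(2)
x = (S + u[None, :]).max(axis=1)           # x = A_lambda* (x) u
print((A + x[None, :]).max(axis=1) <= lam + x)   # [ True  True ]: Ax <= lam x
```

Here \(\lambda (A_{\lambda })=\lambda (A)-\lambda \le 0\), so the star converges; varying u over \(\mathbb {R}^{n}\) sweeps out the whole of \(V_{*}(A,\lambda )\) as the theorem states.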

Similarly as in Cuninghame-Green [5] and Butkovič [12], we define min-algebra over \(\mathbb {R}\) by

$$\begin{aligned} a\oplus ^{\prime }b=\min (a,b) \end{aligned}$$

and

$$\begin{aligned} a\otimes ^{\prime }b=a\otimes b \end{aligned}$$

for all a and b. We extend the pair of operations \(\left( \oplus ^{\prime },\otimes ^{\prime }\right) \) to matrices and vectors in the same way as in max-algebra.

We also define the conjugate \(A^{\#}=-A^{T}.\) It is easily seen that

$$\begin{aligned}&\left( A^{\#}\right) ^{\#}=A, \end{aligned}$$
(10)
$$\begin{aligned}&\left( A\otimes B\right) ^{\#}=B^{\#}\otimes ^{\prime }A^{\#} \end{aligned}$$
(11)

and

$$\begin{aligned} \left( A\otimes ^{\prime }B\right) ^{\#}=B^{\#}\otimes A^{\#} \end{aligned}$$
(12)

whenever A and B are compatible. Further, for any \(u\in \mathbb {R}^{n}\) we have

$$\begin{aligned} u^{\#}\otimes u=0 \end{aligned}$$
(13)

and

$$\begin{aligned} u\otimes u^{\#}\ge I. \end{aligned}$$
(14)

We will usually not write the operator \(\otimes ^{\prime }\); for matrices, the convention applies that if no operator appears, then the product is in min-algebra whenever it follows the symbol \(\#\), and in max-algebra otherwise. In this way, a residuated pair of operations (a special case of the Galois connection) has been defined, namely

$$\begin{aligned} Ax\le y\Longleftrightarrow x\le A^{\#}y \end{aligned}$$
(15)

for all \(x\in \mathbb {R}^{n}\) and \(y\in \mathbb {R}^{m}.\) Hence \(Ax\le y\) implies \(A(A^{\#}y)\le y\). It follows immediately that a one-sided system \(Ax=b\) has a solution if and only if \(A\left( A^{\#}b\right) =b\) and that the system \(Ax\le b\) always has an infinite number of solutions, with \(A^{\#}b\) being the greatest solution.
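These residuation facts translate directly into a solvability test for \(Ax=b\); here is a small Python sketch (the function names `mp_apply` and `conj_apply` are ours):

```python
import numpy as np

def mp_apply(A, x):
    """A (x) x in max-algebra: (Ax)_i = max_j (a_ij + x_j)."""
    return (np.asarray(A, float) + np.asarray(x, float)[None, :]).max(axis=1)

def conj_apply(A, y):
    """A# (x)' y in min-algebra: (A# y)_j = min_i (y_i - a_ij)."""
    return (np.asarray(y, float)[:, None] - np.asarray(A, float)).min(axis=0)

A = np.array([[1.0, 3.0], [0.0, 2.0]])
b = np.array([4.0, 3.0])
xhat = conj_apply(A, b)                      # greatest solution of Ax <= b
print(xhat)                                  # [3. 1.]
print(bool(np.all(mp_apply(A, xhat) == b)))  # True: Ax = b is solvable
```

If the recomputed left-hand side \(A(A^{\#}b)\) fell strictly below b in some component, the system \(Ax=b\) would have no solution at all.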

3 Duality for Tropical Linear Programs

Let \(A=\left( a_{ij}\right) \in \mathbb {R}^{m\times n},b=\left( b_{1},\ldots ,b_{m}\right) ^{T}\in \mathbb {R}^{m},c=\left( c_{1},\ldots ,c_{n} \right) ^{T}\in \mathbb {R}^{n}\) and consider the following primal-dual pair of tropical linear programs:

$$\begin{aligned} \max f\left( x\right) =c^{T}x,\text { s.t. }Ax\le b,x\in \mathbb {R}^{n} \end{aligned}$$
(P)

and

$$\begin{aligned} \min \varphi \left( \pi \right) =\pi ^{T}b,\text { s.t. }\pi ^{T}A\ge c^{T} ,\pi \in \mathbb {R}^{m}. \end{aligned}$$
(D)

Let us denote \(S_{P}=\left\{ x\in \mathbb {R}^{n}:Ax\le b\right\} \) and \(S_{D}=\left\{ \pi \in \mathbb {R}^{m}:\pi ^{T}A\ge c^{T}\right\} .\) The sets of optimal solutions will be denoted \(S_{P}^{opt}\) and \(S_{D}^{opt},\) respectively. The optimal objective function values will be denoted \(f^{\max } \) and \(\varphi ^{\min }.\)

Theorem 3.1

(Weak Duality Theorem) The inequality \(c^{T}x\le \pi ^{T}b\) holds for any \(x\in S_{P}\) and \(\pi \in S_{D}.\)

Proof

By isotonicity and associativity, we have

$$\begin{aligned} c^{T}x\le \left( \pi ^{T}A\right) x=\pi ^{T}\left( Ax\right) \le \pi ^{T}b. \end{aligned}$$

\(\square \)

The following was proved by Superville [22]. We aim to provide here a substantially simpler proof.

Theorem 3.2

(Strong Duality Theorem) Optimal solutions to both (P) and (D) exist and

$$\begin{aligned} \max _{x\in S_{P}}c^{T}x=\min _{\pi \in S_{D}}\pi ^{T}b. \end{aligned}$$

We first prove a lemma:

Lemma 3.1

\(A^{\#}b\in S_{P}^{opt}\) and hence \(f^{\max }=c^{T}\left( A^{\#}b\right) \).

Proof

By residuation (15), we have

$$\begin{aligned} x\in S_{P}\Longleftrightarrow x\le A^{\#}b \end{aligned}$$

and the rest follows by isotonicity of \(\otimes .\)\(\square \)

Proof

(Of strong duality.) Let us denote \(t=c^{T}\left( A^{\#}b\right) \) and set \( \pi ^{T}=tb^{\#}.\) It remains to prove that \(\pi \) is dual feasible and \( \pi ^{T}b=t.\) Using associativity and isotonicity (5) and also (12) and (14), we have

$$\begin{aligned} \pi ^{T}A&=\left( t\otimes b^{\#}\right) \otimes A \\&=t\otimes \left( b^{\#}\otimes A\right) \\&=c^{T}\otimes \left( A^{\#}\otimes ^{\prime }b\right) \otimes \left( A^{\#}\otimes ^{\prime }b\right) ^{\#} \\&\ge c^{T}\otimes I \\&=c^{T}. \end{aligned}$$

On the other hand (using (13))

$$\begin{aligned} \pi ^{T}b=\left( t\otimes b^{\#}\right) \otimes b=t\otimes \left( b^{\#}\otimes b\right) =t\otimes 0=t, \end{aligned}$$

which completes the proof. \(\square \)

It follows that there is no duality gap for the pair of programs (P) and (D). Their optimal solutions are \(\overline{x}=A^{\#}b\) (no matter what c is) and \(\overline{\pi }^{T}=c^{T}\left( A^{\#}b\right) b^{\#},\) respectively. Their common objective function value is \(c^{T}\left( A^{\#}b\right) \). If A, b and c are integer, then there is also no duality gap for the corresponding pair of integer programs, since then both \(\overline{x}\) and \(\overline{\pi }\) are integer. However, this is not the case when some of A, b, c are non-integer, and the question of a duality gap arises. This question will be discussed in the next section.
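To make the construction concrete, the optimal pair \(\overline{x}=A^{\#}b\), \(\overline{\pi }^{T}=c^{T}(A^{\#}b)b^{\#}\) can be checked numerically; the instance below is made up by us purely for illustration:

```python
import numpy as np

# Made-up instance (m = 3, n = 2) to check the duality construction.
A = np.array([[1.0, 3.0], [0.0, 2.0], [2.0, 1.0]])
b = np.array([4.0, 3.0, 5.0])
c = np.array([0.0, 1.0])

xbar = (b[:, None] - A).min(axis=0)     # primal optimum A# b
t = np.max(c + xbar)                    # f_max = c^T (A# b)
pibar = t - b                           # dual optimum pi^T = t (x) b#

assert np.all((pibar[:, None] + A).max(axis=0) >= c)   # pi^T A >= c^T
print(t, np.max(pibar + b))             # 3.0 3.0 -- no duality gap
```

The assertion is exactly dual feasibility \(\pi ^{T}A\ge c^{T}\), and the two printed values coincide, as the strong duality theorem guarantees.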

4 Dual Integer Programs

Let \(A=\left( a_{ij}\right) \in \mathbb {R}^{m\times n},b=\left( b_{1},\ldots ,b_{m}\right) ^{T}\in \mathbb {R}^{m},c=\left( c_{1},\ldots ,c_{n} \right) ^{T}\in \mathbb {R}^{n}\) and consider the following pair of tropical integer programs:

$$\begin{aligned} \max f\left( x\right) =c^{T}x,\text { s.t. }Ax\le b,x\in \mathbb {Z}^{n} \end{aligned}$$
(PI)

and

$$\begin{aligned} \min \varphi \left( \pi \right) =\pi ^{T}b\text {, s.t. }\pi ^{T}A\ge c^{T} ,\pi \in \mathbb {Z}^{m}. \end{aligned}$$
(DI)

Let us denote \(S_{PI}=\left\{ x\in \mathbb {Z}^{n}:Ax\le b\right\} \) and \(S_{DI}=\left\{ \pi \in \mathbb {Z}^{m}:\pi ^{T}A\ge c^{T}\right\} \). The sets of optimal solutions will be denoted \(S_{PI}^{opt}\) and \(S_{DI}^{opt},\) respectively. The optimal objective function values will be denoted \(f_{I}^{\max }\) and \(\varphi _{I}^{\min }.\)

It follows immediately from residuation that \(S_{PI}=\left\{ x\in \mathbb {Z}^{n}:x\le \left\lfloor A^{\#}b\right\rfloor \right\} \). Hence by isotonicity, \(\left\lfloor A^{\#}b\right\rfloor \in S_{PI}^{opt}\) and \(f_{I}^{\max }=c^{T}\left\lfloor A^{\#}b\right\rfloor \).

Solving (DI) is less straightforward but can be done directly when \(b\in \mathbb {Z}^{m}.\) This will be shown below, but first we answer a principal question for general (real) A, b and c.

Proposition 4.1

\(S_{DI}^{opt}\ne \emptyset .\)

Proof

Since \(S_{DI}\) is a closed, non-empty set and \(\varphi \) is continuous and bounded below due to the weak duality theorem, we only need to prove that \(S_{DI}\) can be restricted to a bounded set without affecting \(\varphi _{I}^{\min }.\) Let \(U=\varphi \left( \pi _{0}\right) \) for an arbitrarily chosen \(\pi _{0}\in S_{DI}\), and let L be any lower bound following from the weak and strong duality theorems. Then, disregarding \(\pi \in S_{DI}\) with \(\pi _{i}>U-b_{i}\) for at least one i will have no effect on \(\varphi _{I}^{\min }\); similarly for disregarding those \(\pi \) (if any) with \(\pi _{i}<L-b_{i}\) for at least one i. Hence for solving (DI), we may assume without loss of generality that

$$\begin{aligned} B^{-1}L\le \pi \le B^{-1}U, \end{aligned}$$

where \(B=diag\left( b\right) \). The feasible set in (DI) is now restricted to a compact set and the statement follows. \(\square \)

We will transform (DI) to an equivalent “normalized” tropical integer program. Let us denote \(B=diag\left( b\right) ,\)\(C=diag\left( c\right) \) and \(\sigma ^{T}=\pi ^{T}B.\) Hence \(\pi ^{T}=\sigma ^{T}B^{-1},\pi ^{T}b=\sigma ^{T}0\) and the inequality in (DI) is equivalent to

$$\begin{aligned} \sigma ^{T}B^{-1}AC^{-1}\ge 0 \end{aligned}$$
(16)

or, component-wise:

$$\begin{aligned} \max _{i}\left( \sigma _{i}-b_{i}+a_{ij}-c_{j}\right) \ge 0\text { for all }j. \end{aligned}$$

Let us denote the matrix \(B^{-1}AC^{-1}\) by D. The new tropical integer program is

$$\begin{aligned} \min \varphi ^{\prime }\left( \sigma \right) =\sigma ^{T}0,\text { s.t. } \sigma ^{T}D\ge 0^{T},\sigma \in \mathbb {Z}^{m}. \end{aligned}$$
(DI')

Proposition 4.2

If \(b\in \mathbb {Z}^{m}\), then (DI) and (DI’) are equivalent and a one-to-one correspondence between feasible solutions of these problems is given by \( \pi ^{T}=\sigma ^{T}B^{-1}.\)

Proof

If \(b\in \mathbb {Z}^{m}\), then \(\pi \) is integer if and only if \(\sigma \) is integer; the rest follows from the previous discussion. \(\square \)

Proposition 4.3

If

$$\begin{aligned} t=\min _{\sigma \in S_{DI^{\prime }}}\varphi ^{\prime }\left( \sigma \right) , \end{aligned}$$

then \(\sigma _{0}=\left( t,\ldots ,t\right) ^{T}\) is an optimal solution to (DI’).

Proof

Let \(t=\min _{\sigma \in S_{DI^{\prime }}}\varphi ^{\prime }\left( \sigma \right) =\varphi ^{\prime }\left( \widetilde{\sigma }\right) \) for some \( \widetilde{\sigma }\in S_{DI^{\prime }}.\) Then, \(t=\widetilde{\sigma } ^{T}0=\max \left( \widetilde{\sigma }_{1},\ldots ,\widetilde{\sigma }_{m}\right) \in \mathbb {Z}\) and so we have \(\sigma _{0}\ge \widetilde{\sigma },\) hence \( \sigma _{0}^{T}D\ge \widetilde{\sigma }^{T}D\ge 0^{T},\sigma _{0}\in \mathbb {Z} ^{m}\) and \(\varphi ^{\prime }\left( \sigma _{0}\right) =t.\) The statement follows. \(\square \)

By Proposition 4.3, we may restrict our attention to constant vectors when searching for optimal solutions of (DI’). If \(\sigma ^{T}=\left( s,\ldots ,s\right) \), then (16) reads

$$\begin{aligned} \max _{i}\left( s-b_{i}+a_{ij}-c_{j}\right) \ge 0\text { for all }j, \end{aligned}$$

equivalently,

$$\begin{aligned} s\ge \min _{i}\left( b_{i}-a_{ij}+c_{j}\right) \text { for all }j, \end{aligned}$$

or,

$$\begin{aligned} s\ge \max _{j}\min _{i}\left( b_{i}-a_{ij}+c_{j}\right) . \end{aligned}$$

This can also be written as

$$\begin{aligned} s&\ge \max _{j}\left( c_{j}+\min _{i}\left( b_{i}-a_{ij}\right) \right) \\&=\max _{j}\left( c_{j}+\min _{i}\left( a_{ji}^{\#}+b_{i}\right) \right) \\&=c^{T}\left( A^{\#}b\right) . \end{aligned}$$

Since t is the minimal possible integer value of s,  we have

$$\begin{aligned} t=\left\lceil c^{T}\left( A^{\#}b\right) \right\rceil . \end{aligned}$$

We have proved:

Proposition 4.4

If \(b\in \mathbb {Z}^{m},\) then \(\min _{\sigma \in S_{DI^{\prime }}}\varphi ^{\prime }\left( \sigma \right) =\left\lceil c^{T}\left( A^{\#}b\right) \right\rceil ,\) and \(\sigma =\left( t,\ldots ,t\right) ^{T}\) is an optimal solution of (DI’), where \(t=\left\lceil c^{T}\left( A^{\#}b\right) \right\rceil \).

We also conclude that \(\pi ^{T}=t0^{T}B^{-1}=tb^{\#}\) is an optimal solution of (DI) with \(\varphi _{I}^{\min }=\pi ^{T}b=t\otimes b^{\#}\otimes b=t0=t.\) Therefore, the duality gap for the pair (PI)–(DI) when \(b\in \mathbb {Z}^{m}\) is the interval

$$\begin{aligned} \left( c^{T}\left\lfloor A^{\#}b\right\rfloor ,\left\lceil c^{T}\left( A^{\#}b\right) \right\rceil \right) . \end{aligned}$$
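When b is integer, both integer optima, and hence the gap interval, are immediate to compute; a short sketch on hypothetical data of our own choosing:

```python
import numpy as np

A = np.array([[1.25, 3.0], [0.0, 2.75]])   # non-integer entries
b = np.array([4.0, 3.0])                   # b integer, as assumed here
c = np.array([0.0, 1.0])

Ab = (b[:, None] - A).min(axis=0)          # A# b
f_int = np.max(c + np.floor(Ab))           # (PI) optimum: c^T floor(A# b)
phi_int = np.ceil(np.max(c + Ab))          # (DI) optimum: ceil(c^T (A# b))
print(f_int, phi_int)                      # 2.0 3.0 -- gap interval (2, 3)
```

For this instance \(A^{\#}b=(2.75,0.25)^{T}\), so the primal integer optimum is 2 while the dual integer optimum is 3, exhibiting a nonempty gap.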

A solution method for dual integer programs without the assumption \(b\in \mathbb {Z}^{m}\) is presented in Sect. 6.

5 Using Duality for Solving a Two-Sided Linear Program

We will now use the results of Sect. 3 to directly solve a special tropical linear program with two-sided constraints.

Consider the two-sided tropical linear program

$$\begin{aligned} \min g\left( y\right) =c^{T}y,\text { s.t. }Ay\oplus d\le y,y\in \mathbb {R}^{n} \end{aligned}$$
(TSLP)

where \(A\in \mathbb {R}^{n\times n}\) and \(d\in \mathbb {R}^{n}.\) The inequality \(Ay\oplus d\le y\) is equivalent to the following system of inequalities:

$$\begin{aligned} Ay\le y,d\le y. \end{aligned}$$

The first of these inequalities, \(Ay\le y\), describes the set of (finite) subeigenvectors of A (see Sect. 2) corresponding to \(\lambda =0,\) that is, \(V_{*}(A,0).\) By Theorem 2.2, this set is non-empty if and only if \(\lambda \left( A\right) \le 0.\) Therefore, we suppose now that this condition is satisfied. By homogeneity of \(Ay\le y\), we may assume that a sufficiently large subeigenvector exists, in particular one that also satisfies \(y\ge d.\) This implies that the feasible set of (TSLP) is non-empty.

Let us denote

$$\begin{aligned} S=\left\{ y\in \mathbb {R}^{n}:Ay\oplus d\le y\right\} . \end{aligned}$$

By Theorem 2.2, we have

$$\begin{aligned} S=\left\{ y=A^{*}u:u\in \mathbb {R}^{n},A^{*}u\ge d\right\} , \end{aligned}$$

where \(A^{*}\) is the Kleene star defined by (6).

Hence, \(g\left( y\right) =c^{T}A^{*}u\) and (TSLP) now reads (in the form of a dual problem):

$$\begin{aligned} \min h\left( u\right) =u^{T}\left( A^{*^{T}}c\right) ,\text { s.t. }u^{T}A^{*^{T}}\ge d^{T},u\in \mathbb {R}^{n}. \end{aligned}$$
(TSLP')

In order to use the results of Sect. 3, we substitute as follows: \(\pi \rightarrow u,\)\(\varphi \rightarrow h,\)\(b\rightarrow A^{*^{T}}c,\)\(A\rightarrow A^{*^{T}},\)\(c\rightarrow d.\) This yields the vector

$$\begin{aligned} \overline{y}=A^{*}\overline{u}, \end{aligned}$$

as an optimal solution of (TSLP) where

$$\begin{aligned} \overline{u}=d^{T}\otimes \left( -A^{*}\otimes ^{\prime }\left( A^{*^{T} }\otimes c\right) \right) \otimes \left( c^{\#}\otimes ^{\prime }\left( -A^{*}\right) \right) . \end{aligned}$$

The optimal objective function value is

$$\begin{aligned} g^{\min }=d^{T}\otimes \left( -A^{*}\otimes ^{\prime }\left( A^{*^{T} }\otimes c\right) \right) . \end{aligned}$$

Computationally, the most demanding part here is the calculation of \(A^{*},\) which takes \(O\left( n^{3}\right) \) time. We conclude that (TSLP) can be solved directly in \(O\left( n^{3}\right) \) time.
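The whole computation can be strung together in a few lines; the following is our own sketch under the assumption \(\lambda (A)\le 0\), with the hypothetical helper `tslp_min` returning an optimal y and the optimal value:

```python
import numpy as np

def tslp_min(A, c, d):
    """min c^T y s.t. Ay (+) d <= y, assuming lambda(A) <= 0.
    Follows the substitution into the dual program described in the text."""
    A = np.asarray(A, float)
    n = A.shape[0]
    S = A.copy()                                # Kleene star A* (Floyd-Warshall)
    for k in range(n):
        S = np.maximum(S, S[:, [k]] + S[[k], :])
    idx = np.arange(n)
    S[idx, idx] = np.maximum(S[idx, idx], 0.0)
    w = (S.T + c[None, :]).max(axis=1)          # A*^T (x) c
    u = (-S + w[None, :]).min(axis=1)           # (-A*) (x)' (A*^T (x) c)
    g_min = np.max(d + u)                       # g_min = d^T (x) u
    ubar = g_min - w                            # optimal u for (TSLP')
    ybar = (S + ubar[None, :]).max(axis=1)      # ybar = A* (x) ubar
    return ybar, g_min

A = np.array([[-1.0, -2.0], [-3.0, -1.0]])      # lambda(A) = -1 <= 0
c = np.array([1.0, 0.0])
d = np.array([0.0, 0.0])
ybar, g = tslp_min(A, c, d)
print(ybar, g)                                  # [0. 1.] 1.0
```

One can verify directly that the returned \(\overline{y}\) satisfies \(A\overline{y}\oplus d\le \overline{y}\) and attains the value \(g^{\min }\).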

We finish this section with a remark on a tropical linear program obtained from (TSLP) by replacing the inequalities with equations:

$$\begin{aligned} \min g\left( y\right) =c^{T}y,\text { s.t. }Ay\oplus d=y,y\in \mathbb {R}^{n}. \end{aligned}$$
(TSLP2)

It is known (see Butkovič et al. [13]) that if we denote

$$\begin{aligned} S=\left\{ y:Ay\oplus d=y\right\} , \end{aligned}$$

then assuming \(\lambda \left( A\right) \le 0\) again we have

$$\begin{aligned} S=\left\{ y=v\oplus A^{*}d:Av=v\right\} \end{aligned}$$

and thus

$$\begin{aligned} \min _{y\in S}g\left( y\right) =\min _{v\in \mathbb {R}^{n}}\left\{ c^{T}v\oplus c^{T}A^{*}d:Av=v\right\} =c^{T}A^{*}d \end{aligned}$$

since for sufficiently small v (which can be assumed by homogeneity of \(Av=v\)) we have \(c^{T}v\le c^{T}A^{*}d.\)

Note that if \(\lambda \left( A\right) <0\), then \(S=\left\{ A^{*}d\right\} \) and so (TSLP2) has a non-trivial set of feasible solutions if and only if \(\lambda \left( A\right) =0.\)

Remark 5.1

If B is an invertible matrix (that is, a generalized permutation matrix), then the system

$$\begin{aligned} Ay\oplus d\le By \end{aligned}$$

easily transforms to the one in (TSLP), and similarly for equations. Thus, (TSLP) and (TSLP2) actually cover a slightly wider class of optimization problems.

6 An Algorithm for General Integer Dual Programs

The explicit solution of (DI) in Sect. 4 depends on the assumption \(b\in \mathbb {Z}^{m}.\) If b is non-integer, then Proposition 4.3 is not available, and it is not clear whether a direct solution method can be produced. Therefore, we now present an algorithm for solving (DI) without any assumption on b other than \(b\in \mathbb {R}^{m}.\) Note that the existence of a lower bound of the objective function follows immediately from the weak duality theorem.

First we repeat the transformation to a “normalized” tropical linear program (DI’) as in Sect. 4. With the same notation, the new tropical program (but without integrality) is

$$\begin{aligned} \min \varphi ^{\prime }\left( \sigma \right) =\sigma ^{T}0,\text { s.t. } \sigma ^{T}D\ge 0^{T}. \end{aligned}$$
(DI2')

Now we cannot assume \(\sigma \in \mathbb {Z}^{m}\), since the inverse transformation \(\pi ^{T}=\sigma ^{T}B^{-1}\) would not produce an integer vector in general. It is also not sufficient to require \(\sigma \in \mathbb {R}^{m}\) in order to obtain \(\pi \in \mathbb {Z}^{m}.\) However, since \(\sigma _{i}=\pi _{i}+b_{i}\) with \(\pi _{i}\in \mathbb {Z}\) for every i, the vector \(\sigma \) must satisfy

$$\begin{aligned} {\text {fr}}\left( \sigma _{i}\right) ={\text {fr}}\left( b_{i}\right) \text { for all }i, \end{aligned}$$

where \({\text {fr}}\left( x\right) \) for \(x\in \mathbb {R}\) stands for the fractional part of x,  that is

$$\begin{aligned} {\text {fr}}\left( x\right) =x-\left\lfloor x\right\rfloor . \end{aligned}$$

In order to meet this requirement, we introduce (for a given \(b\in \mathbb {R}^{m}\)) real functions \(\left\lceil .\right\rceil ^{\left( i\right) }\) (\(i=1,\ldots ,m\)) as follows: If \(x\in \mathbb {R}\), then \(\left\lceil x\right\rceil ^{\left( i\right) }\) is the least real number u such that \(x\le u\) and \({\text {fr}}\left( u\right) ={\text {fr}}\left( b_{i}\right) \).

Condition \(\sigma ^{T}D\ge 0^{T}\) can be stated as follows:

$$\begin{aligned} \left( \forall j\right) \left( \exists i\right) \left( \sigma _{i} -b_{i}+a_{ij}-c_{j}\ge 0\right) \end{aligned}$$

or, equivalently,

$$\begin{aligned} \left( \forall j\right) \left( \exists i\right) \left( \sigma _{i}\ge b_{i}-a_{ij}+c_{j}\right) \end{aligned}$$

and, taking into account the desired integrality of \(\pi \):

$$\begin{aligned} \left( \forall j\right) \left( \exists i\right) \left( \sigma _{i} \ge \left\lceil -d_{ij}\right\rceil ^{\left( i\right) }\right) , \end{aligned}$$
(17)

where \(D=\left( d_{ij}\right) \). Because the objective function is being minimized, every component \(\sigma _{i}\) of an optimal solution \(\sigma \) may be assumed to be equal to \(\left\lceil -d_{ij} \right\rceil ^{\left( i\right) }\) for at least one j, or to \(\left\lceil L\right\rceil ^{\left( i\right) }\), where L is any lower bound of \(\varphi ^{\prime }\left( \sigma \right) \) in (DI2’). Conversely, any \(\sigma \) satisfying (17), in which for every i equality is attained for at least one j or \(\sigma _{i}=\left\lceil L\right\rceil ^{\left( i\right) }\), produces an integer solution \(\pi \) of (DI) via the transformation \(\pi ^{T}=\sigma ^{T}B^{-1}\).
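The equivalence between the max-algebraic condition \(\sigma ^{T}D\ge 0^{T}\) and the elementwise form above can be made concrete on a small numerical example (the data below are ours, chosen only for illustration):

```python
import numpy as np

# illustrative data (not from the paper)
sigma = np.array([1.0, 0.5])
D = np.array([[-1.0, 0.2],
              [-0.3, -2.0]])

# max-algebraic product: (sigma^T D)_j = max_i (sigma_i + d_ij)
prod = (sigma[:, None] + D).max(axis=0)
feasible = bool((prod >= 0).all())

# (for all j)(exists i)(sigma_i >= -d_ij)
elementwise = all(any(sigma[i] >= -D[i, j] for i in range(2))
                  for j in range(2))
# the two conditions agree; here both are True
```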

Let us denote for \(\sigma \in \mathbb {R}^{m}\) and \(i=1,\ldots ,m:\)

$$\begin{aligned} N_{i}\left( \sigma \right) =\left\{ j\in N:\sigma _{i}\ge \left\lceil -d_{ij}\right\rceil ^{\left( i\right) }\right\} . \end{aligned}$$

We can summarize our discussion as follows:

Proposition 6.1

The vector \(\pi ^{T}=\sigma ^{T}B^{-1}\) is a feasible solution of (DI) only if

$$\begin{aligned} \bigcup \nolimits _{i=1,\ldots ,m}N_{i}\left( \sigma \right) =N. \end{aligned}$$

There is an optimal solution \(\pi \) such that the vector \(\sigma ^{T}=\pi ^{T}B\) also satisfies

$$\begin{aligned} \left( \forall i\right) \left( \exists j\right) \left( \sigma _{i}=\left\lceil -d_{ij}\right\rceil ^{\left( i\right) }\text { or } \left\lceil L\right\rceil ^{\left( i\right) }\right) . \end{aligned}$$

Proposition 6.1 enables us to compile the following algorithm for finding an optimal solution of (DI) for general (real) entries A, b and c. Here we denote

$$\begin{aligned} M_{ij}=\left\lceil -d_{ij}\right\rceil ^{\left( i\right) }\text { for every }i\text { and }j \end{aligned}$$
(18)

and

$$\begin{aligned} M_{i,n+1}=\left\lceil L\right\rceil ^{\left( i\right) }\text { for every }i. \end{aligned}$$
(19)

ALGORITHM

Input: \(A\in \mathbb {R}^{m\times n},b\in \mathbb {R}^{m},c\in \mathbb {R}^{n}.\)

Output: An optimal solution to (DI).

  1.

    \(D:=\left( diag\left( b\right) \right) ^{-1}A\left( diag\left( c\right) \right) ^{-1}\) and \(M_{ij}\) as defined in (18) and (19).

  2.

    \(\sigma _{i}:=\max _{j=1,\ldots ,n+1}M_{ij}\) for \(i=1,\ldots ,m\).

  3.

    \(K:=\left\{ i\in M:\varphi ^{\prime }\left( \sigma \right) =\sigma _{i}\ne \left\lceil L\right\rceil ^{\left( i\right) }\right\} \).

  4.
    1. (a)

      For all \(i\in K\) set \(\sigma _{i}^{\prime }:=\max _{j}\left\{ M_{ij}:M_{ij}<\sigma _{i}\right\} \).

    2. (b)

      For all \(i\notin K\) set \(\sigma _{i}^{\prime }:=\sigma _{i}\).

  5.

    If \(\bigcup \nolimits _{i=1,\ldots ,m}N_{i}\left( \sigma ^{\prime }\right) \ne N\) or \(K=\emptyset \), then stop (\(\pi ^{T}=\sigma ^{T}B^{-1}\) is an optimal solution to (DI)).

  6.

    \(\sigma :=\sigma ^{\prime }\).

  7.

    Go to 2.

The number of iterations of this algorithm does not exceed mn, since each of the m variables \(\sigma _{i}\) can decrease at most n times. The number of operations in each iteration is \(O\left( m\right) \) in steps 2, 3 and 5, and \(O\left( mn\right) \) in step 4. Hence, the algorithm is \(O\left( m^{2} n^{2}\right) \). This includes a possible pre-ordering of each of the sets \(\left\{ M_{ij}:j=1,\ldots ,n+1\right\} ,i=1,\ldots ,m,\) which takes \(O\left( mn\log n\right) \) time but can be done once before the start of the main loop.
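The steps above can be sketched in Python as follows. This is a transcription under our reading of the algorithm, not the authors' implementation: the names `solve_DI`, `fr` and `ceil_i` and the floating-point tolerance handling are ours, and L must be supplied as a sufficiently small lower bound of \(\varphi'\). The matrix D is built entrywise as \(d_{ij}=a_{ij}-b_{i}-c_{j}\), so that \(-d_{ij}\) matches the right-hand side of (17).

```python
import math
import numpy as np

def fr(x):
    """Fractional part: fr(x) = x - floor(x)."""
    return x - math.floor(x)

def ceil_i(x, b_i):
    """Least u >= x with fr(u) = fr(b_i)."""
    u = math.floor(x) + fr(b_i)
    return u if u >= x else u + 1.0

def solve_DI(A, b, c, L, tol=1e-9):
    """Sketch of the algorithm for (DI) with real b; returns (pi, objective value)."""
    m, n = A.shape
    D = A - b[:, None] - c[None, :]              # d_ij = a_ij - b_i - c_j
    M = np.empty((m, n + 1))                     # M_ij as in (18) and (19)
    for i in range(m):
        M[i, :n] = [ceil_i(-D[i, j], b[i]) for j in range(n)]
        M[i, n] = ceil_i(L, b[i])
    sigma = M.max(axis=1)                        # step 2
    while True:
        phi = sigma.max()                        # phi'(sigma) = max_i sigma_i
        K = [i for i in range(m)                 # step 3
             if abs(sigma[i] - phi) < tol and abs(sigma[i] - M[i, n]) > tol]
        sigma2 = sigma.copy()
        for i in K:                              # step 4(a); the set is nonempty for i in K
            sigma2[i] = M[i][M[i] < sigma[i] - tol].max()
        # step 5: the sets N_i(sigma') must jointly cover all columns
        covered = {j for i in range(m) for j in range(n)
                   if sigma2[i] >= M[i, j] - tol}
        if len(covered) < n or not K:
            pi = sigma - b                       # pi^T = sigma^T B^{-1}
            return pi, (pi + b).max()            # max-algebraic pi^T b
        sigma = sigma2                           # step 6; go to step 2
```

For instance, on the one-constraint instance \(A=(0)\), \(b=(0.5)\), \(c=(0)\) with \(L=0\), the sketch returns \(\pi =(0)\) with objective value 0.5.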

Note that by replacing b with \(\left\lfloor b\right\rfloor \) we obtain a version of (DI) that can be solved directly (Sect. 4), and the difference between \(\varphi _{I}^{\min }\) and \(\varphi ^{\min }\) is at most 1, since \(\left\lfloor \pi ^{T}b\right\rfloor =\pi ^{T}\left\lfloor b\right\rfloor \) when \(\pi \) is integer. Hence, a good estimate of the optimal objective function value can be obtained by directly solving (DI) with b replaced by \(\left\lfloor b\right\rfloor \).
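The identity \(\left\lfloor \pi ^{T}b\right\rfloor =\pi ^{T}\left\lfloor b\right\rfloor \) for integer \(\pi \), with both products read max-algebraically, can be checked on a small example (the data below are ours):

```python
import math

pi = [2, -1]                  # integer vector
b = [0.3, 1.8]

# pi^T b in max-algebra: max_i (pi_i + b_i), then take the floor
lhs = math.floor(max(p + x for p, x in zip(pi, b)))
# pi^T floor(b): max_i (pi_i + floor(b_i))
rhs = max(p + math.floor(x) for p, x in zip(pi, b))
# lhs == rhs == 2
```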

7 Conclusions

We have presented a simple proof of the known strong duality theorem of tropical linear programming with one-sided constraints. This result, together with known results on subeigenvectors, enables us to solve a special tropical linear program with two-sided constraints in low-order polynomial time.

We have then studied the duality gap in tropical integer linear programming. A direct solution is available for the primal problem. An algorithm of quadratic complexity has been presented for the dual problem. A direct solution of the dual problem is available provided that all coefficients of the objective function are integer. This solution readily provides a good estimate of the optimal objective function value for general dual integer programs.