1 Introduction

In the max-algebraic setting, we take maximization as our addition operation and addition as our multiplication operation, and we work with the set of extended reals, that is, the real numbers extended by \(-\infty \). Max-algebra (also called tropical linear algebra) is a rapidly evolving area of idempotent mathematics, linear algebra, and applied discrete mathematics. Its creation was motivated by the need to solve a class of non-linear problems in mathematics, operational research, science, and engineering [14].

The question of finding integer solutions to max-linear systems of equations was first addressed in [5]. Equations in max algebra are useful to model, for example, scheduling problems; therefore, finding integer solutions is applicable to real world examples.

The two-sided system (TSS) in max-algebra is a matrix equation whose solution can be used to describe, for example, the starting times of a synchronized system of machines. The study of its solutions is also of interest since it is known that TSSs in max-algebra are equivalent to mean payoff games [6, 7]. Mean payoff games are a well-known problem in NP \(\cap \) co-NP and the existence of a polynomial algorithm for finding a solution remains open. Combinatorial simplex algorithms for solving mean payoff games were discussed in [8].

The problem of finding solutions to two-sided max-linear systems has been studied previously, and one solution approach is to use the Alternating Method [9, 10]. If \(A\) and \(B\) are integer matrices, then the solution found by this method is integer; however, this cannot be guaranteed if \(A\) and \(B\) are real. The Alternating Method can, however, be adapted [11] in order to find integer solutions to TSSs. These methods find solutions in pseudopolynomial time if the input matrices are finite.

Note that various other methods for solving TSSs are known [12–14], but none of them has been proved polynomial and there is no obvious way of adapting them to integrality constraints. In [11], a generic class of matrices was defined for which it can be determined, in strongly polynomial time, whether an integer solution to a TSS exists, and one can be found if it does. The current paper extends the use of this generic case to max-linear optimization problems (MLOP) with constraints in the form of a TSS.

The MLOP is a problem seeking to maximize, or minimize, the value of a max-linear function subject to a two-sided constraint. Note that in other literature this is also known as a max-linear programming problem. Without the integrality constraint, solution methods for the MLOP are known: for example, in [9, 15], a bisection method is applied to obtain an algorithm that finds an approximate solution to the MLOP. Solutions using simplex methods were described in [8]. Also, a Newton-type algorithm has been designed [16] to solve a more general, max-linear fractional optimization problem by a reduction to a sequence of mean payoff games. For integer solutions, a pseudopolynomial algorithm was described in [11]. In this paper, we describe a strongly polynomial solution method for a generic case. It remains open to find a polynomial algorithm to solve a general MLOP with two-sided constraints.

2 Defining the Problem

In max-algebra, for \(a,b\in \mathbb {\overline{R}}=\mathbb {R}\cup \{-\infty \}\), we define \(a\oplus b:=\max (a,b)\), \(a\otimes b:=a+b\) and extend the pair \((\oplus , \otimes )\) to matrices and vectors in the same way as in linear algebra, that is (assuming compatibility of sizes),

$$\begin{aligned}(A\oplus B)_{ij}&:=a_{ij}\oplus b_{ij},\\ (A\otimes B)_{ij}&:=\bigoplus _k a_{ik}\otimes b_{kj}\text { and}\\(\alpha \otimes A)_{ij}&:=\alpha \otimes a_{ij}. \end{aligned}$$

Except for computational complexity arguments, all multiplications in this paper are in max-algebra and, where appropriate, we will omit the \(\otimes \) symbol. Note that \(\alpha ^{-1}\) stands for \(-\alpha \).
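
To make these conventions concrete, here is a minimal Python sketch of \(\oplus \) and \(\otimes \) on numpy arrays, with \(-\infty \) playing the role of \(\varepsilon \); the helper names are ours, not standard.

```python
import numpy as np

EPS = -np.inf  # the max-algebraic "zero" (epsilon)

def oplus(A, B):
    """Entrywise max-plus addition: (A (+) B)_ij = max(a_ij, b_ij)."""
    return np.maximum(A, B)

def otimes(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (a_ik + b_kj)."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    assert A.shape[1] == B.shape[0], "incompatible sizes"
    C = np.full((A.shape[0], B.shape[1]), EPS)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = np.max(A[i, :] + B[:, j])  # -inf + finite = -inf, as required
    return C

A = np.array([[0.0, EPS], [2.0, 1.0]])
x = np.array([[3.0], [5.0]])
print(otimes(A, x))  # [[3.] [6.]]: max(0+3, eps+5) = 3, max(2+3, 1+5) = 6
```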

We will use \(\varepsilon \) to denote \(-\infty \) as well as any vector or matrix whose every entry is \(-\infty \). Note that \(\varepsilon \) is the max-algebraic additive identity, and \(0\) is the max-algebraic multiplicative identity. A vector/matrix whose every entry belongs to \(\mathbb {R}\) is called finite. A vector whose \(j^{th}\) component is zero and every other component is \(\varepsilon \) will be called a max-algebraic unit vector, and denoted \(e_j\). We use \(\mathbf{0}\) to denote the all zero vector of appropriate size. An \(n\times n\) matrix in max-algebra is called diagonal, and denoted by \(diag(d_1,\ldots ,d_n)=diag(d)\), if and only if its diagonal entries are \(d_1,\ldots ,d_n\in \mathbb {R}\) and its off-diagonal entries are \(\varepsilon \) (that is, \(-\infty \)). The max-algebraic identity matrix of appropriate size is \(I:=diag(0,\ldots ,0)\).

For \(a\in \mathbb {R}\), the fractional part of \(a\) is \(fr(a):=a-\lfloor a\rfloor \), where \(\lfloor \cdot \rfloor \) denotes the lower integer part. We extend these definitions to include \(\varepsilon =-\infty \) by defining

$$\begin{aligned} \lfloor \varepsilon \rfloor :=\varepsilon \text {, }\lceil \varepsilon \rceil :=\varepsilon \text { and } fr(\varepsilon ):=\varepsilon . \end{aligned}$$

For a matrix \(A\in \mathbb {\overline{R}}^{m\times n}\), we use \(\lfloor A\rfloor \) (\(\lceil A\rceil \)) to denote the matrix with \((i, j)\) entry equal to \(\lfloor a_{ij}\rfloor \) (\(\lceil a_{ij}\rceil \)) and similarly for vectors.

In this paper, a vector \(x\in \mathbb {\overline{R}}^n\) is understood to be a column vector. Its transpose is denoted by \(x^T\in \mathbb {\overline{R}}^{1\times n}\). Similarly, for a matrix \(A\in \mathbb {\overline{R}}^{m\times n}\), its transpose is \(A^T\in \mathbb {\overline{R}}^{n\times m}.\)

A two-sided max-linear system is of the form

$$\begin{aligned} Ax\oplus c=Bx\oplus d \end{aligned}$$

where \(A, B\in \mathbb {\overline{R}}^{m\times n}\) and \(c, d\in \mathbb {\overline{R}}^m\). If \(c=d=\varepsilon \), then we say that the system is homogeneous, otherwise it is called nonhomogeneous. Nonhomogeneous systems can be transformed to homogeneous systems [9]. If \(B\in \mathbb {\overline{R}}^{m\times k}\), a system of the form

$$\begin{aligned} Ax=By \end{aligned}$$

is called a system with separated variables.

If \(f\in \mathbb {\overline{R}}^n\), then the function \(f(x)=f^T\otimes x\) is called a max-linear function. MLOPs seek to minimize, or maximize, a max-linear function subject to constraints given by max-linear equations described by TSS. Throughout this paper, the input of an MLOP will always be finite matrices and vectors.

The integer max-linear optimization problem (IMLOP) is given by

$$\begin{aligned}&f^T\otimes x \rightarrow \text{ min } \text{ or } \text{ max } \\&\text{ s.t. } Ax\oplus c=B x \oplus d, x\in \mathbb {Z}^n \end{aligned}$$

where \(A,B\in \mathbb {R}^{m\times n}, c,d\in \mathbb {R}^m, f\in \mathbb {R}^n\). We will use IMLOP\(^{\min }\) to mean the problem minimizing \(f^Tx\) and IMLOP\(^{\max }\) to mean the problem maximizing \(f^Tx\).

One example of an application of the TSS and the IMLOP is the multiprocessor interactive system (MPIS) [1, 9], which can be described as follows.

Products \(P_{1},\ldots ,P_{m}\) are made up of a number of components which are prepared using \(n\) processors. Each processor contributes to the final product \(P_i\) by producing one of its components. We assume that processors work on a component for every product simultaneously and that work begins on all products as soon as the processor is switched on.

Let \(a_{ij}\) be the time taken for the \(j^{th}\) processor to complete its component for \(P_{i}\) \((i=1,\ldots ,m;j=1,\ldots ,n).\) Denote the starting time of the \(j^{th}\) processor by \(x_{j}\) \((j=1,\ldots ,n)\). Then, for each product \(P_i\), all components will be completed at time \(\max (x_{1}+a_{i1},\ldots ,x_{n}+a_{in})\).

Further, \(k\) other processors prepare components for products \(Q_{1},\ldots ,Q_{m}\) with duration and starting times denoted by \(b_{ij}\) and \(y_{j}\), respectively. The synchronization problem is to find starting times of all \(n+k\) processors so that each pair \((P_{i},Q_{i})\) \((i=1,\ldots ,m)\) is completed at the same time. This task is equivalent to solving the system of equations

$$\begin{aligned} \max (x_{1}+a_{i1},\ldots ,x_{n}+a_{in})=\max (y_{1}+b_{i1},\ldots ,y_{k}+b_{ik}) \ (i=1,\ldots ,m). \end{aligned}$$

Additionally, we can introduce deadlines \(c_{i}\) and \(d_{i}\), writing the equations as

$$\begin{aligned} \max (x_{1}+a_{i1},\ldots ,x_{n}+a_{in},c_{i})=\max (y_{1}+b_{i1},\ldots ,y_{k}+b_{ik},d_{i}) \ (i=1,\ldots ,m), \end{aligned}$$

or equivalently, \(Ax\oplus c=By\oplus d\). For \(c_i=d_i\), this indicates that the synchronization of \(P_i\) and \(Q_i\) is only required after the deadline \(d_i\). The case \(c_i<d_i\) [\(c_i>d_i\)] is similar, but additionally models the requirement that \(P_i\) [\(Q_i\)] is not completed before time \(d_i\) [\(c_i\)].
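
As a toy numerical check of this model (all data below are invented for illustration), the two sides of \(Ax\oplus c=By\oplus d\) can be evaluated directly in Python:

```python
import numpy as np

A = np.array([[3.0, 1.0],   # a_ij: duration of processor j's component for P_i
              [2.0, 4.0]])
B = np.array([[5.0],        # b_it: durations for the single processor serving Q_i
              [1.0]])
c = np.array([2.0, 3.0])    # deadlines c_i
d = np.array([4.0, 6.0])    # deadlines d_i

x = np.array([1.0, 2.0])    # starting times of the n = 2 "P" processors
y = np.array([-1.0])        # starting time of the k = 1 "Q" processor

lhs = np.maximum((A + x).max(axis=1), c)   # completion times of P_1, P_2
rhs = np.maximum((B + y).max(axis=1), d)   # completion times of Q_1, Q_2
print(lhs, rhs)                            # [4. 6.] [4. 6.]: each pair synchronized
```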

When solving the MPIS, it may be required that the starting times are restricted to discrete values, in which case we would want to look for integer solutions to the TSS.

In applications, it may also be required that the starting times of the MPIS are optimized with respect to a given criterion. As an example, suppose that all processors in an MPIS should begin as soon [late] as possible, that is, the latest starting time of a processor is as small [big] as possible. In this case, we would set \(f=\mathbf{0}\) and seek to minimize [maximize] \(f^Tx=\max (x_{1},\ldots ,x_{n}).\)

With this extra requirement, we obtain the MLOP,

$$\begin{aligned}&f^T\otimes x \rightarrow \text{ min } \text{ or } \text{ max } \\&\text{ s.t. } Ax\oplus c=B x \oplus d. \end{aligned}$$

It is important to note that throughout this paper, an integer solution is a finite solution, \(x\in \mathbb {Z}^n\), and so does not contain \(\varepsilon \) components. For the problems described above, it would also be valid to ask when there is a solution with entries from \(\mathbb {Z}\cup \{\varepsilon \},\) but we do not deal with this task here.

3 Preliminary Results

We will use the following standard notation and terminology based on [1, 9]. For positive integers \(m, n, k\), we denote \(M=\{1,\ldots ,m\}\), \(N=\{1,\ldots ,n\}\), and \(K=\{1,\ldots ,k\}\). If \(A=(a_{ij})\in \mathbb {\overline{R}}^{n\times n}\), then \(\lambda (A)\) denotes the maximum cycle mean, that is,

$$\begin{aligned} \lambda (A):=\max \bigg \{\frac{a_{i_1 i_2}+\ldots +a_{i_t i_1}}{t} : i_1,\ldots ,i_t\in N, t=1,\ldots ,n\bigg \}. \end{aligned}$$

The maximum cycle mean can be calculated in \(\mathcal {O}(n^3)\) time [17], see also [9]. If \(\lambda (A)=0\), then we say that \(A\) is definite. For a definite matrix, we define

$$\begin{aligned} A^*:=I\oplus A\oplus A^2\oplus \ldots \oplus A^{n-1}, \end{aligned}$$

where \(I\) is the max-algebraic identity matrix. Using the Floyd–Warshall algorithm, see, e.g., [9], \(A^*\) can be calculated in \(\mathcal {O}(n^3)\) time.
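
For concreteness, the following Python sketch implements both \(\mathcal {O}(n^3)\) primitives: a Karp-style recursion for \(\lambda (A)\) (a virtual source is folded in, so strong connectivity is not assumed) and a Floyd–Warshall sweep for \(A^*\); \(\varepsilon \) is represented by \(-\infty \) and the function names are ours.

```python
import numpy as np

def max_cycle_mean(A):
    """Maximum cycle mean lambda(A) of an (n, n) matrix; -inf means "no arc"."""
    n = A.shape[0]
    D = np.full((n + 1, n), -np.inf)
    D[0] = 0.0                              # length-0 walks from a virtual source
    for k in range(1, n + 1):
        for v in range(n):
            D[k, v] = np.max(D[k - 1] + A[:, v])
    best = -np.inf                          # stays -inf if the digraph is acyclic
    for v in range(n):
        if D[n, v] == -np.inf:
            continue
        vals = [(D[n, v] - D[k, v]) / (n - k)
                for k in range(n) if D[k, v] > -np.inf]
        best = max(best, min(vals))
    return best

def kleene_star(A):
    """A* = I (+) A (+) ... (+) A^(n-1), assuming lambda(A) <= 0."""
    n = A.shape[0]
    S = A.copy()
    for k in range(n):                      # Floyd-Warshall: heaviest-path weights
        for i in range(n):
            for j in range(n):
                S[i, j] = max(S[i, j], S[i, k] + S[k, j])
    for i in range(n):
        S[i, i] = max(S[i, i], 0.0)         # (+) the max-plus identity I
    return S

A = np.array([[-1.0, -3.0], [0.0, -2.0]])
print(max_cycle_mean(A))   # -1.0: cycle means are -1, -2 and (-3 + 0)/2 = -1.5
print(kleene_star(A))      # [[ 0. -3.] [ 0.  0.]]
```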

If \(a, b\in \mathbb {\overline{\overline{R}}}=\mathbb {\overline{R}}\cup \{+\infty \}\), then we define \(a\oplus ' b:=\min (a, b)\). Moreover, \(a\otimes ' b:=a+b\) exactly when at least one of \(a, b\) is finite, otherwise

$$\begin{aligned} (-\infty )\otimes '(+\infty ):=+\infty \text { and }(+\infty )\otimes '(-\infty ):=+\infty . \end{aligned}$$

This differs from max-multiplication where

$$\begin{aligned} (-\infty )\otimes (+\infty ):=-\infty \text { and }(+\infty )\otimes (-\infty ):=-\infty . \end{aligned}$$

For \(A\in \mathbb {\overline{\overline{R}}}^{m\times n}\), we define \(A_j\) to be the \(j^{th}\) column of \(A\). Further

$$\begin{aligned} A^{\#}:=-A^T\in \mathbb {\overline{\overline{R}}}^{n\times m}\text { and }A^{(-1)}:=-A\in \mathbb {\overline{\overline{R}}}^{m\times n}. \end{aligned}$$

Similarly, for \(\gamma \in \mathbb {\overline{\overline{R}}}^n\), we denote \(\gamma ^{(-1)}=-\gamma \in \mathbb {\overline{\overline{R}}}^n\). For a scalar \(\alpha \), there is no difference between \(\alpha ^{-1}\) and \(\alpha ^{(-1)}\).

Given a solution \(x\) to \(Ax=b\), we say that a position \((i,j)\) is active with respect to \(x\) if and only if \(a_{ij}+x_j=b_i\), it is called inactive otherwise. It will be useful in this paper to talk about the entries of the matrix corresponding to active positions and therefore we say that an element/entry \(a_{ij}\) of \(A\) is active if and only if the position \((i,j)\) is active. In the same way, we call a column \(A_j\) active exactly when it contains an active entry. We also say that a component \(x_j\) of \(x\) is active in the equation \(Ax=Bx\) if and only if there exists \(i\) such that either \(a_{ij}+x_j=(Bx)_i\) or \((Ax)_i=b_{ij}+x_j\). Lastly, \(x_j\) is active in \(f^Tx\) if and only if \(f_jx_j=f^Tx\).

Next, we give an overview of some basic properties.

Proposition 3.1

[1, 9] If \(A\in \mathbb {\overline{R}}^{m\times n}\) and \(x, y\in \mathbb {\overline{R}}^{n}\), then

$$\begin{aligned} x\le y \Rightarrow A\otimes x\le A\otimes y \text { and }A\otimes ' x\le A\otimes ' y. \end{aligned}$$

Corollary 3.1

[9] If \(f\in \mathbb {\overline{R}}^n\) and \(x, y\in \mathbb {\overline{R}}^n\), then

$$\begin{aligned} x\le y\Rightarrow f^Tx\le f^Ty. \end{aligned}$$

Lemma 3.1

[9] Let \(A, B\in \mathbb {\overline{R}}^{m\times n}\), \(c, d\in \mathbb {\overline{R}}^m\). Then there exists \(x\in \mathbb {R}^n\) satisfying \(Ax\oplus c=Bx\oplus d\) if and only if there exists \(z\in \mathbb {R}^{n+1}\) satisfying \((A|c)z=(B|d)z\).

Theorem 3.1

[11] In IMLOP\(^{\min }\), with finite input, \(f^{\min }=-\infty \) if and only if \(c=d\).

Theorem 3.2

[11] In IMLOP\(^{\max }\), with finite input, \(f^{\max }=+\infty \) if and only if there exists an integer solution to \(Ax=Bx\).

If \(A\in \mathbb {\overline{R}}^{m\times n}\) and \(b\in \mathbb {R}^m\), then, for all \(j\in N\), define

$$\begin{aligned} M_j(A, b):=\{t \in M:a_{tj} b_t^{-1}=\max _i a_{ij} b_i^{-1}\}. \end{aligned}$$

Proposition 3.2

[5] Let \(A\in \mathbb {\overline{R}}^{m\times n}\), \(b\in \mathbb {R}^m\) and

$$\begin{aligned} \bar{x}:=A^{\#}\otimes ' b. \end{aligned}$$
  1. (a)

    An integer solution to \(A x\le b\) always exists. All integer solutions can be described as the integer vectors \(x\) satisfying \(x\le \bar{x}\).

  2. (b)

    If, moreover, \(A\) is doubly \(\mathbb {R}\)-astic, then an integer solution to \(A x=b\) exists if and only if

    $$\begin{aligned} \bigcup _{j:\bar{x}_j\in \mathbb {Z}} M_j(A,b)=M. \end{aligned}$$

    If an integer solution exists, then all integer solutions can be described as the integer vectors \(x\) satisfying \(x\le \bar{x}\) with

    $$\begin{aligned} \bigcup _{j: x_j=\bar{x}_j} M_j(A,b)=M. \end{aligned}$$
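
As an illustration of Proposition 3.2, the sketch below computes \(\bar{x}=A^{\#}\otimes ' b\) and tests the covering condition of part (b) with \(x=\lfloor \bar{x}\rfloor \); note that \(t\in M_j(A,b)\) exactly when \(a_{tj}+\bar{x}_j=b_t\). Exact float comparisons are used purely for simplicity, and the helper names are ours.

```python
import numpy as np

def principal_solution(A, b):
    """x_bar_j = min_i (b_i - a_ij); b finite, eps = -inf entries skipped."""
    m, n = A.shape
    xbar = np.empty(n)
    for j in range(n):
        terms = [b[i] - A[i, j] for i in range(m) if A[i, j] > -np.inf]
        xbar[j] = min(terms) if terms else np.inf   # empty column: no constraint
    return xbar

def integer_solves_eq(A, b):
    """Prop. 3.2(b): does x = floor(x_bar) cover every row of Ax = b?"""
    m, n = A.shape
    xbar = principal_solution(A, b)
    x = np.floor(xbar)
    covered = set()
    for j in range(n):
        if x[j] == xbar[j]:                 # only columns with integer x_bar_j count
            covered |= {i for i in range(m) if A[i, j] + x[j] == b[i]}
    return covered == set(range(m)), x

A = np.array([[0.0, 1.0], [2.0, -np.inf]])
b = np.array([3.0, 5.0])
print(integer_solves_eq(A, b))   # (True, [3., 2.]): A (x) (3,2)^T = (3,5)^T = b
```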

A vector \(x\in \mathbb {\overline{R}}^n\) [\(x\in \mathbb {Z}^n\)] satisfying \(Ax\le \lambda x\), \(x\ne \varepsilon \), is called an [integer] subeigenvector of \(A\) with respect to subeigenvalue \(\lambda \). Since integer vectors are finite, we deal only with finite subeigenvectors here. The set of all finite [integer] subeigenvectors with respect to subeigenvalue \(\lambda \) is denoted

$$\begin{aligned} V^*(A, \lambda )&:=\{x\in \mathbb {R}^n: Ax\le \lambda x\} \\ [IV^*(A, \lambda )&:=\{x\in \mathbb {Z}^n: Ax\le \lambda x\}]. \end{aligned}$$

Existence of [integer] subeigenvectors can be determined, and the whole set can be described, in polynomial time using the following result.

Theorem 3.3

[5, 9] Let \(A\in \mathbb {\overline{R}}^{n\times n}\), \(\lambda \in \mathbb {R}\).

  1. (i)

    \(V^*(A, \lambda )\ne \emptyset \) if and only if

    $$\begin{aligned} \lambda (\lambda ^{-1}A)\le 0. \end{aligned}$$
  2. (ii)

    If \(V^*(A, \lambda )\ne \emptyset \), then

    $$\begin{aligned} V^*(A, \lambda )=\{(\lambda ^{-1} A)^* u: u\in \mathbb {R}^n\}. \end{aligned}$$
  3. (iii)

    \(IV^*(A, \lambda )\ne \emptyset \) if and only if

    $$\begin{aligned} \lambda (\lceil \lambda ^{-1} A\rceil )\le 0. \end{aligned}$$
  4. (iv)

    If \(IV^*(A, \lambda )\ne \emptyset \), then

    $$\begin{aligned} IV^*(A, \lambda )=\{ \lceil \lambda ^{-1} A\rceil ^* z: z\in \mathbb {Z}^n\}. \end{aligned}$$

We will need the following immediate corollary.

Corollary 3.2

If \(A\) is integer and \(\lambda (A)\le 0\), then

$$\begin{aligned} IV^*(A, 0)=\{A^*z: z\in \mathbb {Z}^n\}. \end{aligned}$$
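
The next sketch turns Theorem 3.3(iii)–(iv) and Corollary 3.2 into code, reusing max_cycle_mean and kleene_star from the sketch earlier in this section; the choice \(z=\mathbf{0}\) below is arbitrary, since any integer \(z\) yields an integer subeigenvector.

```python
import numpy as np

def integer_subeigenvector(A, lam):
    """Return some x in IV*(A, lam), or None if the set is empty."""
    C = np.ceil(A - lam)        # ceil(lam^{-1} (x) A); note np.ceil(-inf) = -inf
    if max_cycle_mean(C) > 0:   # Theorem 3.3(iii): IV*(A, lam) is empty
        return None
    S = kleene_star(C)          # Corollary 3.2: IV*(C, 0) = {S (x) z : z integer}
    return S.max(axis=1)        # S (x) 0, i.e. z = 0: the vector of row maxima

A = np.array([[-0.5, 1.2], [-2.0, 0.3]])
print(integer_subeigenvector(A, 1.0))   # [1. 0.]; A (x) x = (0.5, 0.3) <= 1 + x
```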

For any TSS, we can deduce a simple criterion for when no integer solution exists. This idea is key in proving the main results of the paper.

Proposition 3.3

[11] Let \(A\in \mathbb {\overline{R}}^{m\times n}, B\in \mathbb {\overline{R}}^{m\times k}\). If

$$\begin{aligned} (\exists i\in M)(\forall j\in N, t\in K) \ a_{ij}, b_{it}\in \mathbb {R}\Rightarrow fr(a_{ij})\ne fr(b_{it}), \end{aligned}$$

then neither \(A x=B y\) nor (if \(n=k\)) \(A x=B x\) has an integer solution.

Observe that, if either matrix has an \(\varepsilon \) row, row \(i\) say, then the existence of an integer solution would imply that the other matrix also has its \(i^{th}\) row equal to \(\varepsilon \). In this case, the \(i^{th}\) row of the equation \(Ax=Bx\) can be removed without affecting the existence of integer solutions.

By Proposition 3.3, we can assume, without loss of generality, that in every row there exists a pair of indices \(j, t\) for which the finite entries \(a_{ij}, b_{it}\) satisfy

$$\begin{aligned} fr(a_{ij})=fr(b_{it}). \end{aligned}$$

We will restrict our attention to matrices \(A\) and \(B\) that have exactly one such pair of indices \(j,t\) per row. (Note that, if we randomly generated real matrices \(A\) and \(B\), then \((A, B)\) would be likely to have very few such pairs, so this assumption is not too restrictive, provided that we are working with real-valued, and not integer-valued, matrices; of course, for integer matrices, the existing methods [9] for finding real solutions to the systems discussed will find integer solutions, and hence the interesting case to consider is indeed when the input matrices are not integer.) Given a pair of matrices satisfying this assumption on the fractional parts of entries, we define, for each row \(i\in M\), the pair \((r(i), r'(i))\) to be the indices such that

$$\begin{aligned} fr(a_{i, r(i)})=fr(b_{i, r'(i)}). \end{aligned}$$

Without loss of generality, we may assume that the entries \((a_{i,r(i)}, b_{i,r'(i)})\) are integer and that no other entries in row \(i\) of either matrix are integer (this is since we may subtract a constant from each row of the system without affecting its set of solutions).

We summarize this in the following definition.

Definition 3.1

Let \(A\in \mathbb {\overline{R}}^{m\times n}, B\in \mathbb {\overline{R}}^{m\times k}\). We say that \((A, B)\) satisfies Property OneFP if, for each \(i\in M\), there is exactly one pair \((r(i), r'(i))\) such that

$$\begin{aligned} a_{ir(i)}, b_{ir'(i)}\in \mathbb {Z},\text { and} \end{aligned}$$

for all \(i\in M\), if \(j\ne r(i)\) and \(t\ne r'(i)\), then

$$\begin{aligned} a_{ij}, b_{it}>\varepsilon \Rightarrow fr(a_{ij})\ne fr(b_{it}). \end{aligned}$$

Remark 3.1

Note that this definition allows for multiple \(\varepsilon \) entries in each row, for example, the pair \((I, I)\) satisfies Property OneFP with \(r(i)=i=r'(i)\) for all \(i\).
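
Property OneFP can be tested by scanning each row for pairs of finite entries with equal fractional parts. The Python sketch below (our own helper; exact float equality stands in for a proper tolerance) returns the 0-based pairs \((r(i), r'(i))\), or None if some row has no pair or more than one:

```python
import numpy as np

def fr(a):
    return a - np.floor(a)          # fractional part of a finite entry

def one_fp_pairs(A, B):
    """0-based pairs (r(i), r'(i)) if (A, B) satisfies Property OneFP, else None."""
    pairs = []
    for i in range(A.shape[0]):
        matches = [(j, t)
                   for j in range(A.shape[1]) if A[i, j] > -np.inf
                   for t in range(B.shape[1]) if B[i, t] > -np.inf
                   if fr(A[i, j]) == fr(B[i, t])]
        if len(matches) != 1:       # 0 pairs: no integer solution; >1: not generic
            return None
        pairs.append(matches[0])
    return pairs

A = np.array([[3.0, 0.5, -1.7], [-3.7, -1.9, -1.0]])
B = np.array([[1.4, 1.1, 1.0], [0.8, 1.0, -1.3]])
print(one_fp_pairs(A, B))           # [(0, 2), (2, 1)], i.e. r = (1, 3), r' = (3, 2)
```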

Throughout this paper, we restrict our attention to pairs of matrices satisfying Property OneFP.

Recall, from Proposition 3.3, that a necessary condition for an integer solution to exist is that there is at least one pair of entries sharing the same fractional part in each row. As mentioned above, if we randomly generated two real matrices \(A\) and \(B\), then we would expect there to be very few pairs of entries, \((a_{ir(i)}, b_{ir'(i)})\), which share the same fractional part. So, when given a random two-sided solvable system, the most likely outcome is that there is at most one such pair of entries in each row. While this discussion is not mathematically rigorous, it does allow us to conclude that \((A, B)\) having exactly one such pair per row represents a generic case for solvable systems.

Proposition 3.4

[11] Let \(A\in \mathbb {\overline{R}}^{m\times n}, B\in \mathbb {\overline{R}}^{m\times k}\) satisfy Property OneFP. Then, the entries \(a_{i, r(i)}\) [\(b_{i, r'(i)}\)] are the only possible active entries in the matrix \(A\) [\(B\)] with respect to any integer vector \(x\) [\(y\)] satisfying \(Ax=By\).

Note that general systems can be converted into systems with separated variables by Proposition 3.5 below and that this conversion will preserve Property OneFP. So Proposition 3.4 holds accordingly for general systems.

Proposition 3.5

[11] Let \(A, B\in \mathbb {\overline{R}}^{m\times n}\). The problem of finding \(x\in \mathbb {Z}^n\) such that \(Ax=Bx\) is equivalent to finding \(x\in \mathbb {Z}^n, y\in \mathbb {Z}^n\) such that

$$\begin{aligned} \begin{pmatrix} A\\ I\end{pmatrix}x=\begin{pmatrix}B\\ I\end{pmatrix}y. \end{aligned}$$

Hence we restrict our attention to the case of separated variables.

All integer solutions to TSS satisfying Property OneFP can be described by the following.

Theorem 3.4

[11] Let \(A\in \mathbb {\overline{R}}^{m\times n}\), \(B\in \mathbb {\overline{R}}^{m\times k}\) satisfy Property OneFP. For all \(i,j\in M\), let

$$\begin{aligned} l_{ij}:=a^{-1}_{i,r(i)}\lceil a_{j,r(i)}\rceil \oplus b^{-1}_{i,r'(i)}\lceil b_{j,r'(i)}\rceil \end{aligned}$$

and \(L:=(l_{ij})\). Then, an integer solution to \(A x=B y\) exists if and only if \(\lambda (L)\le 0\). If this is the case, then \(Ax=B y=\gamma ^{(-1)}\) where \(\gamma \in IV^*(L, 0)\).

Corollary 3.3

[11] For \(A\in \mathbb {\overline{R}}^{m\times n}, B\in \mathbb {\overline{R}}^{m\times k}\) satisfying Property OneFP, it is possible to decide whether an integer solution to \(Ax=By\) exists in

$$\begin{aligned} \mathcal {O}(m^3+n+k) \end{aligned}$$

time.
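
A sketch of Theorem 3.4 and Corollary 3.3 in Python, reusing one_fp_pairs and max_cycle_mean from the earlier sketches (0-based indices; the name build_L is ours):

```python
import numpy as np

def build_L(A, B, pairs):
    """l_ij = max(ceil(a_{j,r(i)}) - a_{i,r(i)}, ceil(b_{j,r'(i)}) - b_{i,r'(i)})."""
    m = A.shape[0]
    L = np.full((m, m), -np.inf)
    for i, (r, rp) in enumerate(pairs):
        for j in range(m):
            L[i, j] = max(np.ceil(A[j, r]) - A[i, r],       # a_{i,r(i)} is finite
                          np.ceil(B[j, rp]) - B[i, rp])     # by Property OneFP
    return L

A = np.array([[3.0, 0.5, -1.7], [-3.7, -1.9, -1.0]])
B = np.array([[1.4, 1.1, 1.0], [0.8, 1.0, -1.3]])
L = build_L(A, B, one_fp_pairs(A, B))
print(L)                        # [[ 0. -2.] [ 1.  0.]]
print(max_cycle_mean(L) <= 0)   # True: an integer solution to Ax = By exists
```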

Remark 3.2

  1. (i)

    The \(i^{th}\) row of \(L\), as defined in Theorem 3.4, is equal to \(H(i)^T\) where

    $$\begin{aligned} H(i):=(a_{i,r(i)})^{-1}\lceil A_{r(i)}\rceil \oplus (b_{i,r'(i)})^{-1}\lceil B_{r'(i)}\rceil . \end{aligned}$$
  2. (ii)

    Knowing \(Ax=\gamma ^{(-1)}=By\) for any \(\gamma \in IV^*(L, 0)\), we can easily find \(x\) and \(y\) using Proposition 3.2.

  3. (iii)

    It follows from the definition that \(\lceil \varepsilon \rceil \varepsilon ^{-1}=(-\infty )(+\infty )=\varepsilon \).

4 Strongly Polynomial Method to Solve IMLOP for Systems with Property OneFP

In [11], a strongly polynomial algorithm for finding integer solutions to a TSS satisfying Property OneFP was described. The aim of this paper is to develop strongly polynomial methods for solving \(IMLOP^{\min }\) and \(IMLOP^{\max }\) under the assumption that Property OneFP holds. Recall that IMLOP has the form,

$$\begin{aligned}&f^T\otimes x \rightarrow \text{ min } \text{ or } \text{ max } \nonumber \\&\text{ s.t. } Ax\oplus c=B x \oplus d, x\in \mathbb {Z}^n \end{aligned}$$
(4.1)

where \(A,B\in \mathbb {R}^{m\times n}, c,d\in \mathbb {R}^m, f\in \mathbb {R}^n\). We can write the constraints of the IMLOP as

$$\begin{aligned} \begin{pmatrix}A|c\end{pmatrix}\begin{pmatrix} x\\ 0\end{pmatrix}&=\begin{pmatrix}B|d\end{pmatrix}\begin{pmatrix} x\\ 0\end{pmatrix}, x\in \mathbb {Z}^n. \end{aligned}$$
(4.2)

4.1 Consequences of Property OneFP

Let \(z=(x^T, 0)^T\in \mathbb {Z}^{n+1}\). By Proposition 3.5, the constraint (4.2) is equivalent to the condition that there exists \(y\in \mathbb {Z}^{n+1}\) such that \((z, y)\) is an integer solution to \(A'z=B'y\) where

$$\begin{aligned} A':=\begin{pmatrix} A | c\\ I\end{pmatrix}\in \mathbb {\overline{R}}^{(m+n+1)\times (n+1)}, B':=\begin{pmatrix}B | d\\ I\end{pmatrix}\in \mathbb {\overline{R}}^{(m+n+1)\times (n+1)}. \end{aligned}$$

This is since, if \((z, y)\) is an integer solution to \(A'z=B'y\), then the identity blocks force \(z=y\), and so \((z^{-1}_{n+1}z, z^{-1}_{n+1}y)\) is also an integer solution, where \(z^{-1}_{n+1}z=(x^T, 0)^T\) and \(z^{-1}_{n+1}y=y^{-1}_{n+1}y=(x^T, 0)^T\).

Proposition 4.1

Let \(A, B\in \mathbb {R}^{m\times n}, c,d\in \mathbb {R}^m\). If there exists a row in which the matrices \((A|c)\) and \((B|d)\) do not have entries with the same fractional part, then the feasible set of the IMLOP is empty.

Proof

It follows from Proposition 3.3. \(\square \)

For the rest of the paper, we will assume that the pair \(((A|c), (B|d))\) satisfies Property OneFP, and hence so does \((A', B')\). Note that an example is provided at the end of this paper to clarify many of the concepts that will be introduced in what follows.

Corollary 4.1

Let \(A', B'\) be as defined above. Let

$$\begin{aligned} L:=(l_{ij})\in \mathbb {\overline{Z}}^{(m+n+1)\times (m+n+1)} \end{aligned}$$

where, for all \(i,j\in \{1,\ldots ,m+n+1\}\),

$$\begin{aligned} l_{ij}:= (a'_{i,r(i)})^{-1}\lceil a'_{j,r(i)}\rceil \oplus (b'_{i,r'(i)})^{-1}\lceil b'_{j,r'(i)}\rceil . \end{aligned}$$

Then, a feasible solution to IMLOP exists if and only if \(\lambda (L)\le 0\). If this is the case, then

$$\begin{aligned} A'z=B'z \end{aligned}$$

where \(z_j=\gamma _{m+j}^{-1}\) for any \(\gamma \in IV^*(L, 0)\) and \(j\in \{1,\ldots ,n+1\}\).

Proof

Existence follows from Theorem 3.4.

Assume that \(\lambda (L)\le 0\), hence for all \(\gamma \in IV^*(L, 0)\),

$$\begin{aligned} \begin{pmatrix}A|c\\ I\end{pmatrix} z=\gamma ^{(-1)}=\begin{pmatrix} B|d\\ I\end{pmatrix}y. \end{aligned}$$

Let \(\mu \in \mathbb {Z}^{n+1}\) be defined by \(\mu _j=\gamma _{m+j}, j=1,\ldots ,n+1\), and note that since \(\gamma \) is finite so is \(\mu \). Then,

$$\begin{aligned} Iz=\mu ^{(-1)}=Iy, \end{aligned}$$

so \(z=y=\mu ^{(-1)}\), and hence \(A'z=B'z\) with \(z_j=\gamma ^{-1}_{m+j}\) for \(j\in \{1,\ldots ,n+1\}\).

\(\square \)

Remark 4.1

  1. (i)

    For \(A', B'\) as defined above, \(L\) can be calculated in \(\mathcal {O}((m+n)^2)\) time, \(\lambda (L)\) in \(\mathcal {O}((m+n)^3)\) time, and \(L^*\) in \(\mathcal {O}((m+n)^3)\) time.

  2. (ii)

    Clearly, \(l_{ii}=0\) for all \(i\in \{1,\ldots ,m+n+1\}\), and so \(\lambda (L)\ge 0\). Hence, an integer solution to the TSS exists if and only if \(\lambda (L)=0\).

This matrix \(L\), constructed from \(A'\) and \(B'\), will play a key role in the solution of the IMLOP. To construct the \(i^{th}\) row of \(L\), we only consider columns \(A'_{r(i)}\) and \(B'_{r'(i)}\). From Remark 3.2, the \(i^{th}\) row is equal to \(H(i)^T\) for

$$\begin{aligned} H(i)=(a'_{i,r(i)})^{-1} \begin{pmatrix}\lceil A''_{r(i)}\rceil \\ I_{r(i)}\end{pmatrix}\oplus (b'_{i,r'(i)})^{-1}\begin{pmatrix}\lceil B''_{r'(i)}\rceil \\ I_{r'(i)}\end{pmatrix}, \end{aligned}$$
(4.3)

where \(A'':=(A|c)\) and \(B'':=(B|d)\). Observe that,

$$\begin{aligned} H(i)_t>\varepsilon \text { for all } i\in \{1,\ldots ,m+n+1\}, t\in \{1,\ldots ,m\} \end{aligned}$$

since \(A\) and \(B\) are finite. Further, when \(i\in \{m+1,\ldots ,m+n+1\}\), \(i=m+j\) say, then \(r(i)=j=r'(i)\) and \(I_{i,r(i)}=0=I_{i, r'(i)}\). Hence,

$$\begin{aligned} H(i)=\begin{pmatrix}\lceil A''_j\rceil \\ I_j\end{pmatrix}\oplus \begin{pmatrix}\lceil B''_j\rceil \\ I_j\end{pmatrix}=\begin{pmatrix}\lceil A''_j\rceil \oplus \lceil B''_j\rceil \\ I_j\end{pmatrix}. \end{aligned}$$

Therefore, the matrix \(L\in \mathbb {\overline{Z}}^{(m+n+1)\times (m+n+1)}\) has the form

$$\begin{aligned} \begin{pmatrix} P&{}\quad Q\\ R&{}\quad I \end{pmatrix} \end{aligned}$$

where \(P\in \mathbb {Z}^{m\times m}\), \(Q\in \mathbb {\overline{Z}}^{m\times (n+1)}\), \(R\in \mathbb {Z}^{(n+1)\times m}\), \(I\in \mathbb {\overline{Z}}^{(n+1)\times (n+1)}\).

Moreover, each row of \(Q\) has either one or two finite entries: for a fixed \(i\in \{1,\ldots ,m\}\), the entries \(l_{ij}, j\in \{m+1,\ldots ,m+n+1\}\), are obtained by calculating

$$\begin{aligned} \max (\lceil a'_{j,r(i)}\rceil - a'_{i,r(i)}, \ \lceil b'_{j,r'(i)}\rceil - b'_{i,r'(i)}), \end{aligned}$$

where

$$\begin{aligned} a'_{j,r(i)}, j\in \{m+1,\ldots ,m+n+1\} \end{aligned}$$

form a max-algebraic unit vector, as do

$$\begin{aligned} b'_{j,r'(i)}, j\in \{m+1,\ldots ,m+n+1\}. \end{aligned}$$

Thus, at least one will be finite and, if \(r(i)\ne r'(i)\), there will be exactly two.

From Corollary 4.1, we have

$$\begin{aligned} \begin{pmatrix} x\\ 0 \end{pmatrix} =z=\mu ^{(-1)} \end{aligned}$$

where \(\mu \) is the vector of the last \(n+1\) entries of some \(\gamma \in IV^*(L, 0)\). By Corollary 3.2, \(\gamma =L^*w\) for some integer vector \(w\). Let \(V=(v_{ij})\) be the matrix formed of the last \(n+1\) rows of \(L^*\), so that \(\mu =V\otimes w\) for \(w\in \mathbb {Z}^{m+n+1}\), equivalently

$$\begin{aligned} \begin{pmatrix} x\\ 0 \end{pmatrix} =z=V^{(-1)}\otimes ' w^{(-1)}. \end{aligned}$$
(4.4)

Now, (4.4) can be split into two equations, one for the vector \(x\) and one for the scalar \(0\). Further, we would like the second equation to be of the form \(\min _k \nu _k=0\), for a suitably rescaled vector \(\nu \), for ease of calculations later. This leads to the following definition.

Definition 4.1

Let \(V^{(0)}\) be the matrix formed from \(V^{(-1)}\) by max-multiplying each finite column \(j\) by \(v_{m+n+1, j}\), and then removing the final row (at least one finite column exists by Property OneFP). Let \(U\in \mathbb {\overline{R}}^{1\times (m+n+1)}\) be the row that was removed.

Note that \(U\) contains only \(0\) or \(+\infty \) entries.
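
In code, Definition 4.1 amounts to taking the last \(n+1\) rows of \(L^*\) (computed, say, with the kleene_star sketch of Sect. 3), negating, shifting each finite column so that its last entry becomes \(0\), and splitting off that last row; a hedged Python sketch (helper name ours):

```python
import numpy as np

def v0_and_U(Lstar, n):
    """Return (V^(0), U) from L*; non-finite columns keep their +inf entries."""
    V = Lstar[-(n + 1):, :]              # last n+1 rows of L*
    W = -V                               # V^(-1): eps entries become +inf
    for j in range(W.shape[1]):
        if np.all(np.isfinite(W[:, j])):
            W[:, j] += V[-1, j]          # max-multiply column j by v_{m+n+1, j}
    return W[:-1, :], W[-1, :]           # V^(0) and the removed row U
```

Applied to the \(L^*\) of the example in Sect. 4.5 (with \(n=4\)), this reproduces the matrices shown there: column \(6\) of \(V^{(0)}\) consists of \(+\infty \) entries and \(U=(0,0,0,0,0,+\infty ,0)\).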

Proposition 4.2

Let \(A, B, c, d, V^{(0)}\), and \(U\) be as defined in (4.1) and Definition 4.1. Then, \(x\in \mathbb {Z}^n\) is a feasible solution to IMLOP if and only if it satisfies

$$\begin{aligned}&x=V^{(0)}\otimes ' \nu \\\text {where }&0=U\otimes '\nu \text { for some }\nu \in \mathbb {Z}^{m+n+1}. \end{aligned}$$

Proof

By Corollary 4.1, \(x\) is feasible if and only if \((x^T, 0)^T=\mu ^{(-1)}\) where \(\mu \) is the vector containing the last \(n+1\) components of some \(\gamma \in IV^*(L, 0)\). By the above discussion, this means that

$$\begin{aligned} \begin{pmatrix} x\\ 0 \end{pmatrix} = V^{(-1)}\otimes ' w^{(-1)}= \begin{pmatrix} V^{(0)} \\ U \end{pmatrix}\otimes ' \nu . \end{aligned}$$

\(\square \)

We will first consider, in Subsect. 4.2, solutions to \(IMLOP\) when \(L^*\), and hence also \(V^{(0)}\) and \(U\), is finite. In Subsects. 4.4.1 and 4.4.2 we deal with the case when \(L^*\) is not finite.

Before this we summarize key definitions and assumptions that will be used throughout the remainder of the paper, for easy reference later.

Assumption 4.1

We assume that the following are satisfied.

  1. (i)

    \(A,B\in \mathbb {R}^{m\times n}, c,d\in \mathbb {R}^m\).

  2. (ii)

    \(A'':=(A|c), B'':=(B|d)\) and

    $$\begin{aligned} A':= \begin{pmatrix} A | c\\ I \end{pmatrix}, B':= \begin{pmatrix} B | d\\ I \end{pmatrix}. \end{aligned}$$
  3. (iii)

    The pair \((A'', B'')\) satisfies Property OneFP (and therefore also \((A', B')\)).

  4. (iv)

    \(L\) is constructed from \(A', B'\) according to Corollary 4.1.

  5. (v)

    Without loss of generality, \(\lambda (L)=0\).

  6. (vi)

\(V\) is the matrix containing the last \(n+1\) rows of \(L^*\).

4.2 Finding the Optimal Solution to IMLOP when \(L^*\) is Finite

Theorem 4.1

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(V^{(0)}\) be as in Definition 4.1. If \(L^*\) is finite, then the optimal objective value \(f^{\min }\) is attained for

$$\begin{aligned} x^{opt}=V^{(0)}\otimes ' \mathbf{0}. \end{aligned}$$

Proof

By Proposition 4.2, we know that any feasible \(x\) satisfies \(x=V^{(0)}\otimes ' \nu \) where, by the finiteness of \(L^*\) (and also \(V^{(0)}\)), we have \(U^T=\mathbf{0}\) and hence

$$\begin{aligned} \nu _1\oplus '\ldots \oplus ' \nu _{m+n+1}=0. \end{aligned}$$

Therefore, \(x\ge V^{(0)}\otimes '\mathbf{0}\) for any feasible \(x\) and further \(V^{(0)}\otimes ' \mathbf{0}\) is feasible. The statement now follows from the isotonicity of \(f^Tx\), see Corollary 3.1. \(\square \)

Theorem 4.2

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(V^{(0)}\) be as in Definition 4.1. If \(L^*\) is finite, then the optimal objective value \(f^{\max }\) is equal to

$$\begin{aligned} f^T\otimes V^{(0)}\otimes \mathbf{0}. \end{aligned}$$

Further, let \(y:=V^{(0)}\otimes \mathbf{0}\) and \(j\) be an index such that \(f^{\max }=f_jy_j\). If \(i\) is such that \(y_j=V^{(0)}_{ji}\), then an optimal solution is \(x^{opt}=V_i^{(0)}\).

Proof

By Proposition 4.2, we know that any feasible \(x\) satisfies \(x=V^{(0)}\otimes ' \nu \) where, by the finiteness of \(L^*\) (and also \(V^{(0)}\)), we have \(U^T=\mathbf{0}\) and hence

$$\begin{aligned} \nu _1\oplus '\ldots \oplus ' \nu _{m+n+1}=0. \end{aligned}$$

If \(\nu _j=0\), then \(x\le V^{(0)}_j\) and therefore all feasible \(x\) satisfy \(x\le y=V^{(0)} \otimes \mathbf{0}\). Note that \(y\) may not be feasible.

By isotonicity, \(f^T y\ge f^T x\) for any feasible \(x\). We claim that there exists a feasible solution \(x\) for which they are equal. Suppose that \(f^T y=f_j y_j\). Let \(i\) be an index such that \(v^{(0)}_{ji}=y_j.\) By setting \(\nu _i=0\) and all other components to large enough integers, we get a feasible solution \(\bar{x}\) such that \(\bar{x}_j=y_j\). In fact, \(\bar{x}=V^{(0)}_i\). Hence,

$$\begin{aligned} f_j \bar{x}_j=f_j y_j=f^T y\ge f^T \bar{x}\ge f_j \bar{x}_j, \end{aligned}$$

which implies \(f^T y=f^T \bar{x}\) as required. \(\square \)

It follows from Theorems 4.1 and 4.2 that, if \(\lambda (L)\le 0\) and \(L^*\) is finite, then an optimal solution to IMLOP\(^{\min }\) and IMLOP\(^{\max }\) always exists.
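
Computationally, both theorems reduce to row minima and row maxima of \(V^{(0)}\). A small Python sketch follows (helper names ours); the demo data mirror the matrix \(\hat{V}^{(0)}\) of the example in Sect. 4.5:

```python
import numpy as np

def solve_min(V0):
    """Theorem 4.1: x_opt = V^(0) (x)' 0, the vector of row minima."""
    return V0.min(axis=1)

def solve_max(V0, f):
    """Theorem 4.2: f_max = f^T (x) V^(0) (x) 0; an optimizer is a column of V^(0)."""
    y = V0.max(axis=1)                   # upper bound y = V^(0) (x) 0 on feasible x
    j = int(np.argmax(f + y))            # index with f_max = f_j + y_j
    i = int(np.argmax(V0[j] == y[j]))    # first column attaining y_j
    return (f + y).max(), V0[:, i]

V0 = np.array([[-3.0, -2.0], [-2.0, -2.0], [-1.0, 0.0]])
f = np.array([0.0, -1.0, 1.0])
print(solve_min(V0))                     # [-3. -2. -1.]
print(solve_max(V0, f))                  # (1.0, array([-2., -2.,  0.]))
```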

4.3 Criterion for Finiteness of \(L^*\)

Theorems 4.1 and 4.2 provide explicit solutions to IMLOP, which can be found in \(\mathcal {O}((m+n)^3)\) time by Remark 4.1, in the case when \(L^*\) is finite. We now consider criteria for \(L^*\) to be non-finite, and show how we can adapt the problem in this case so that IMLOP can be solved using the above methods in general.

Proposition 4.3

Let \(A, B, c, d\) satisfy Assumption 4.1.

Let \(e_j\in \mathbb {\overline{R}}^{m+n+1}\) be the \(j^{th}\) max-algebraic unit vector. The following are equivalent:

  1. (i)

    \(L^*\) contains an \(\varepsilon \) entry.

  2. (ii)

    There exists \(j\in \{1,\ldots ,n+1\}\) such that \(L^*_{m+j}=e_{m+j}\) .

  3. (iii)

    There exists \(j\in \{1,\ldots ,n+1\}\) such that \(L_{m+j}=e_{m+j}\).

  4. (iv)

There exists \(j\in \{1,\ldots ,n+1\}\) such that neither \(A''_j\) nor \(B''_j\) contains an integer entry.

Further, the index \(j\) satisfies the condition in (ii) if and only if \(j\) satisfies the condition in (iii) if and only if \(j\) satisfies the condition in (iv).

Proof

Recall that \(L\) has the form

$$\begin{aligned} \begin{pmatrix} P&{}\quad Q\\ R&{}\quad I\end{pmatrix} \end{aligned}$$

where \(P\in \mathbb {Z}^{m\times m}\), \(Q\in \mathbb {\overline{Z}}^{m\times (n+1)}\), \(R\in \mathbb {Z}^{(n+1)\times m}\), \(I\in \mathbb {\overline{Z}}^{(n+1)\times (n+1)}\).

(ii)\(\Rightarrow \)(i): Obvious.

\(\lnot \)(iii)\(\Rightarrow \lnot \)(i): Assume that, for all \(j\in \{1,\ldots ,n+1\}\), \(L_{m+j}\ne e_{m+j}\). We know that the first \(m\) columns of \(L\) are finite and, by assumption, every column of \(Q\) contains a finite entry. This means that \(L^2\) will be finite and thus so will \(L^*\).

(ii)\(\Leftrightarrow \)(iii): We show \(L_{m+j}=e_{m+j}\) if and only if \(L^2_{m+j}=e_{m+j}\). Fix \(j\) such that \(L_{m+j}=e_{m+j}\). Then clearly, \(L^2_{m+j}=e_{m+j}\) and hence (iii)\(\Rightarrow \)(ii). Although (ii)\(\Rightarrow \) (iii) follows from above, we need to also prove that the same index \(j\) satisfies both statements. To do this, we suppose that \(L^2_{m+j}=e_{m+j}\). Then, for all \(i\in \{1,\ldots ,m\}\) with \(i\ne j\), we have

$$\begin{aligned} \begin{pmatrix} l_{i,1}&\quad \dots&\quad l_{i,m}\end{pmatrix}\otimes \begin{pmatrix}l_{1,m+j}\\ \vdots \\ l_{m,m+j}\end{pmatrix}\oplus \begin{pmatrix} l_{i, m+1}&\quad \dots&\quad l_{i, m+n+1}\end{pmatrix}\otimes I_j=\varepsilon \end{aligned}$$

where \(l_{i,1},\ldots ,l_{i,m}\in \mathbb {R}\). Thus,

$$\begin{aligned} l_{1, m+j}=\ldots =l_{m, m+j}=\varepsilon \end{aligned}$$

and hence \(L_{m+j}=e_{m+j}\).

(iii) \(\Leftrightarrow \) (iv): By the structure of \(L\), (iii) holds if and only if \(Q\) contains an \(\varepsilon \) column. Fix \(j\in \{1,\ldots ,n+1\}\). Now, for any \(i\in M\),

$$\begin{aligned}&\ q_{ij}=\varepsilon \\ \Leftrightarrow&\ l_{i,m+j}=\varepsilon \\ \Leftrightarrow&\ a'_{m+j,r(i)}=\varepsilon =b'_{m+j, r'(i)} \\ \Leftrightarrow&\ r(i)\ne j \text { and } r'(i)\ne j \\ \Leftrightarrow&\ a''_{ij}, b''_{ij}\notin \mathbb {Z}. \end{aligned}$$

Therefore, \(Q\) contains an \(\varepsilon \) column if and only if there exists \(j\) such that neither \((A|c)_j\) nor \((B|d)_j\) contains an integer entry. \(\square \)

Observe that, for each \(j\in \{1,\ldots ,n+1\}\), either \(L^*_{m+j}=e_{m+j}\) or \(L^*_{m+j}\) is finite. Further \(L^*_t\) is finite for all \(t\in M\) since \(P\) and \(R\) are finite.

Corollary 4.2

Let \(A, B, c, d\) satisfy Assumption 4.1. \(L^*\) is finite if and only if, for all \(j\in \{1,\ldots ,n+1\}\), either \((A|c)_j\) or \((B|d)_j\) contains an integer entry.
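
A direct Python check of this criterion (assuming finite \(A, B, c, d\), as in Assumption 4.1; helper names ours), which also reports the offending column indices of \(A\) and \(B\); these form the set \(J\) of the next subsection:

```python
import numpy as np

def has_integer(col):
    return bool(np.any(col == np.floor(col)))   # assumes finite entries

def finiteness_check(A, B, c, d):
    """Corollary 4.2: L* is finite iff every column test below passes."""
    n = A.shape[1]
    J = [j for j in range(n)
         if not (has_integer(A[:, j]) or has_integer(B[:, j]))]
    last_ok = has_integer(c) or has_integer(d)   # the (n+1)st column (c, d)
    return not J and last_ok, J

A = np.array([[3.0, 0.5, -1.7, -2.5], [-3.7, -1.9, -2.1, -3.7]])
B = np.array([[1.4, 1.1, 1.0, -1.3], [0.8, 1.0, -1.3, -2.2]])
c = np.array([-0.3, -1.0]); d = np.array([-0.2, -2.4])
print(finiteness_check(A, B, c, d))   # (False, [3]): column 4 is "bad", L* not finite
```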

4.4 IMLOP when \(L^*\) is Non-Finite

Theorems 4.1 and 4.2 solve IMLOP when \(L^*\) is finite. In this case, \(U^T=\mathbf{0}\) and we took advantage of the fact that \(\nu _i\ge 0\) held for every component of \(\nu \). However, if \(L^*_{m+j}=e_{m+j}\) for some \(j\in N\), then \(U_{m+j}=+\infty \) and so \(\nu _{m+j}\) will be unbounded. This suggests that feasible solutions \(x=V^{(0)}\otimes '\nu \) are not bounded from below and raises the question of whether \(f^{\min }=\varepsilon \) in these cases. We define the set \(J\) to be

$$\begin{aligned} J:=\{j\in N: \hbox { neither } A_j \hbox { nor } B_j \hbox { contains an integer entry}\}. \end{aligned}$$

Clearly, this definition of \(J\) is independent of whether or not \(c\) and \(d\) contain integer entries; this is necessary because, by the discussion above, only the values \(\nu _{m+j}\) with \(j\in N\) may be unbounded (note that \(U_{m+n+1}=0\) regardless of whether or not \(L^*\) is finite). In the following sections, we will use \(J\) to identify “bad” or inactive columns of \(A\) and \(B\), which can be removed from the system. First, we consider the case \(J=\emptyset \), in which all \(\nu _i\) are bounded even though \(L^*\) may not be finite.

Observe that \(J=\emptyset \) if and only if \(U^T=\mathbf{0}\). Further, it can be verified that the results in Theorems 4.1 and 4.2 hold when the assumption that \(L^*\) is finite is replaced by the assumption that \(U^T=\mathbf{0}\); in fact, the same proofs apply without any alterations. The case \(J=\emptyset \) is therefore solved as follows.

Proposition 4.4

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(V^{(0)}\) be as defined in Definition 4.1. Suppose \(J=\emptyset \).

  1. (1)

    For IMLOP\(^{\min }\), the optimal objective value \(f^{\min }\) is attained for

    $$\begin{aligned} x^{opt}=V^{(0)}\otimes ' \mathbf{0}. \end{aligned}$$
  2. (2)

    For IMLOP\(^{\max }\), the optimal objective value \(f^{\max }\) is equal to

    $$\begin{aligned} f^T\otimes V^{(0)}\otimes \mathbf{0}. \end{aligned}$$

    Further, let \(y:=V^{(0)}\otimes \mathbf{0}\) and \(j\) be an index such that \(f^{\max }=f_jy_j\). If \(i\) is such that \(y_j=V^{(0)}_{ji}\), then an optimal solution is \(x^{opt}=V_i^{(0)}\).

It remains to show how to find solutions to \(IMLOP^{\min }\) and \(IMLOP^{\max }\) in the case when \(U^T\ne \mathbf{0}\), i.e., when \(L^*\) is not finite and \(J\ne \emptyset \). We do this in the following subsections.

4.4.1 IMLOP\(^{\min }\) when \(L^*\) is Non-Finite

If \(J\ne \emptyset \), then we aim to remove the “bad” columns \(A_j, B_j, j\in J\), from our problem and use Theorem 4.1 to solve it. The next result allows us to do this when \(J\subset N\). It will turn out that, in this case, under Assumption 4.1, an optimal solution always exists; this will be shown in the proof of Proposition 4.7 below. The case \(J=N\) will be dealt with in Proposition 4.8.

Proposition 4.5

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(f\in \mathbb {R}^n\).

Suppose \(\emptyset \ne J\subset N\). If an optimal solution \(x\) exists, then \(f^{\min }=f_j x_j\) for some \(j\in N-J\).

Proof

Suppose \(x\) is a feasible solution of \(IMLOP^{\min }\) such that \(f^Tx=f^{\min }\), but \(f^{\min }\ne f_l x_l\) for any \(l\in N-J\). Let

$$\begin{aligned} \bar{J}:=\{t\in J: f^{\min }=f_t x_t\}. \end{aligned}$$

Observe that, for all \(t\in \bar{J}\), neither \(A_t\) nor \(B_t\) contains an integer entry and so, by Proposition 3.4, \(x_t\) is not active in the equation \(Ax\oplus c=Bx\oplus d\). Thus, the vector \(x'\) with components

$$\begin{aligned} x'_j={\left\{ \begin{array}{ll}x_j&{}\text {if } j\notin \bar{J}\\ x_j\alpha ^{-1}&{}\text {otherwise}\end{array}\right. } \end{aligned}$$

for some integer \(\alpha >0\) is also feasible but \(f^Tx'<f^Tx\), a contradiction. \(\square \)

Hence, we can simply remove all columns \(j\in J\) from our system and solve this reduced system using previous methods. Formally, let \(g\) be obtained from \(f\) by removing entries with indices in \(J\). Let \(A^-, B^-\) be obtained from \(A\) and \(B\) by removing columns with indices in \(J\), so \(A^-, B^-\in \mathbb {\overline{R}}^{m\times n'}\) where \(n'=n-|J|\). By IMLOP\(_1\) and IMLOP\(_2\), we mean the IMLOPs

$$\begin{aligned} (IMLOP_1)&\min \ f^T\otimes x=f(x) \nonumber \\&s.t. \ Ax\oplus c=Bx\oplus d, x\in \mathbb {Z}^n \end{aligned}$$
(4.5)

and

$$\begin{aligned} (IMLOP_2)&\min \ g^T\otimes y =g(y) \nonumber \\&s.t.\ A^-y\oplus c=B^-y\oplus d, y\in \mathbb {Z}^{n'} \end{aligned}$$
(4.6)

where, by assumption, the pair \(((A|c), (B|d))\) satisfies Property OneFP, and therefore so does \(((A^-|c), (B^-|d))\).

To differentiate between solutions to IMLOP\(_1\) and IMLOP\(_2\), the matrices \(L\), \(L^*\), \(V^{(0)}\), \(U\) will refer to those obtained from \(A, B, c, d\). When they are calculated using \(A^-, B^-, c, d\), we will call them \(\hat{L}\), \(\hat{L}^*\), \(\hat{V}^{(0)}\), \(\hat{U}\).

In order to prove that an optimal solution always exists, we recall the following results which tell us that, for any IMLOP, the problem is either unbounded, infeasible or has an optimal solution. Let

$$\begin{aligned}&IS=\{x\in \mathbb {Z}^n: A x\oplus c=Bx\oplus d\},\\&S^{\min }=\{x\in IS: f(x)\le f(z)\ \forall z\in IS \}\text { and} \\&S^{\max }=\{x\in IS: f(x)\ge f(z)\ \forall z\in IS \}. \end{aligned}$$

From Theorems 3.1 and 3.2,

$$\begin{aligned} f^{\min }=-\infty \Leftrightarrow c=d \text{ and } f^{\max }=+\infty \Leftrightarrow (\exists x\in \mathbb {Z}^n) Ax=Bx. \end{aligned}$$

Proposition 4.6

[11] Let \(A, B, c, d, f\) be as defined in (4.1). If \(IS\ne \emptyset \), then \(f^{\min }>-\infty \Rightarrow S^{\min }\ne \emptyset \) and \(f^{\max }<+\infty \Rightarrow S^{\max }\ne \emptyset \).

Proposition 4.7

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(f\in \mathbb {R}^n\). Let \(A^-\), \(B^-\), \(g\) be as defined in (4.6). Suppose \(\emptyset \ne J\subset N\). Then \(f^{\min }=g^{\min }\), an optimal solution \(x^{opt}\) of IMLOP\(_1\) can be obtained from an optimal solution \(y^{opt}\) of IMLOP\(_2\) by inserting suitable “small enough” integer components, and IMLOP\(_2\) can be solved by Theorem 4.1.

Proof

First, observe that an optimal solution to IMLOP\(_2\) always exists since \(\hat{U}^T=\mathbf{0}\), so all components of \(\nu \) are bounded below. This implies that feasible solutions to IMLOP\(_2\), and therefore also IMLOP\(_1\), exist. So, by Proposition 4.6, IMLOP\(_1\) either has an optimal solution or \(f^{\min }=\varepsilon \). If \(f^{\min }=\varepsilon \), then, by Theorem 3.1, \(c=d\) which, under Property OneFP, means that \(c,d\in \mathbb {Z}^m\) and there are no integer entries in \(A\) or \(B\). This is impossible since \(J\ne N\).

Suppose \(x^{opt}\) is an optimal solution to IMLOP\(_1\) and let \(y'\) be obtained from \(x^{opt}\) by removing elements with indices in \(J\). Using Property OneFP, we know that the components \(x^{opt}_j, j\in J\), are inactive in \(Ax\oplus c=Bx\oplus d\). Further, from Proposition 4.5, we can also assume that \(x^{opt}_j, j\in J\), are inactive in \(f^{\min }\) (we may decrease their values if necessary without affecting feasibility or the objective value). Hence,

$$\begin{aligned} f^{\min }=f^Tx^{opt}=g^Ty' \end{aligned}$$

and

$$\begin{aligned} A^-y'\oplus c=Ax^{opt}\oplus c=Bx^{opt}\oplus d=B^-y'\oplus d. \end{aligned}$$

So \(y'\) is feasible for IMLOP\(_2\). If \(y'\) is not optimal, then \(g^{\min }=g^Ty''<f^{\min }\) for some \(y''\) feasible in IMLOP\(_2\). But letting \(x'=(x'_j)\) where, for \(j\notin J\), \(x'_j\) corresponds to the appropriate component of \(y''\) and \(x'_j, j\in J\), are set to small enough integers, we obtain a feasible solution to IMLOP\(_1\) satisfying \(f^Tx'=g^{\min }<f^{\min }\), a contradiction. Therefore, \(y'=y^{opt}\). A similar argument holds for the other direction.

We now show how to solve IMLOP\(_2\). By Proposition 4.2, feasible solutions to IMLOP\(_2\) satisfy

$$\begin{aligned}&y=\hat{V}^{(0)}\otimes ' \nu ,\\&0=\hat{U}\otimes '\nu , \nu \in \mathbb {Z}^{m+n'+1}. \end{aligned}$$

Case 1: There exists an integer entry in either \(c\) or \(d\).

Observe that IMLOP\(_2\) can be solved immediately by Theorem 4.1 since \(\hat{L}^*\) is finite.

Case 2: Neither \(c\) nor \(d\) contains an integer entry.

Now \(\hat{L}^*\) is not finite. However, \(\hat{U}\) is finite and

$$\begin{aligned} \hat{V}^{(0)}_{m+n'+1}= \begin{pmatrix}+\infty \\ \vdots \\ +\infty \end{pmatrix}. \end{aligned}$$

All other columns of \(\hat{V}^{(0)}\) are finite. The single \(+\infty \) column contains no finite entries and will never be active in determining the value of a feasible solution. Hence, any feasible solution \(y\) still satisfies \(y\ge \hat{V}^{(0)}\otimes ' \mathbf{0}\) and \(y^{opt}=\hat{V}^{(0)}\otimes ' \mathbf{0}\) as in the proof of Theorem 4.1. \(\square \)

Corollary 4.3

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(f\in \mathbb {R}^n\). Let \(A^-, B^-, g\), and \(\hat{V}^{(0)}\) be as defined in (4.6). If \(\emptyset \ne J\ne N\), the optimal objective value \(f^{\min }\) of IMLOP\(_1\) is equal to \(g^T y^{opt}\) for

$$\begin{aligned} y^{opt}=\hat{V}^{(0)}\otimes ' \mathbf{0}. \end{aligned}$$

The final case for IMLOP\(^{\min }\) is when \(J=N\).

Proposition 4.8

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(f\in \mathbb {R}^n\). Suppose \(J=N\). If \(c=d\), then \(f^{\min }=-\infty \). If, instead, \(c\ne d\), then \(IMLOP^{\min }\) is infeasible.

Proof

Follows from Theorem 3.1 and the fact that entries in columns with indices in \(J\) are never active. \(\square \)

4.4.2 IMLOP\(^{\max }\) when \(L^*\) is Non-Finite

We will now discuss \(IMLOP^{\max }\) when \(J\ne \emptyset \). The case when neither \(c\) nor \(d\) contains an integer is trivial and will be described in Proposition 4.10. We first assume that either \(c\) or \(d\) contains an integer entry. Here, we cannot make the same assumptions about active entries in the objective function as in the minimization case, as demonstrated by the following example.

Example 4.1

Suppose we want to maximize \((0,1)^Tx\) subject to

$$\begin{aligned} \begin{pmatrix}0&{}\quad -1.5\\ -0.5&{}\quad -1.5 \end{pmatrix} x\oplus \begin{pmatrix} -0.5\\ 0 \end{pmatrix}= \begin{pmatrix} 0&{}\quad -1.6\\ -0.6&{}\quad -1.6 \end{pmatrix} x\oplus \begin{pmatrix} -0.6\\ 0 \end{pmatrix}. \end{aligned}$$

Note that \(J=\{2\}\). It can be seen that the largest integer vector \(x\) which satisfies this equality is \((0,1)\).

Therefore, \(f^{\max }=2\); the only component active in \(f^T x\) is \(x_2\), and \(2\in J\).

Instead, we give an upper bound \(y\) on \(x\) for which \(f^{\max }=f^Ty\), and we can find a feasible \(x'\) where \(f^Tx'\) attains this maximum value. For all \(j\in J\), we have \(U_{m+j}=+\infty \) and also \(V^{(0)}_{m+j}\) non-finite since \(L^*_{m+j}=e_{m+j}\). We will therefore adapt the matrix \(V^{(0)}\) to reflect this.

Definition 4.2

Let \(\bar{V}\) be obtained from \(V^{(0)}\) by removing all columns \(m+j\) with \(j\in J\).

Proposition 4.9

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(f\in \mathbb {R}^n\). Let \(\bar{V}\) be as defined in Definition 4.2. Suppose either \(c\) or \(d\) contains an integer and \(\emptyset \ne J\subseteq N\). Then, the optimal objective value \(f^{\max }\) is equal to \(f^T y\) for

$$\begin{aligned} y=\bar{V}\otimes \mathbf{0}. \end{aligned}$$

Further, let \(j\) be an index such that \(f^{\max }=f_j y_j\) and \(i\) satisfy \(y_j=\bar{V}_{ji}\). Then, an optimal solution is \(x^{opt}=\bar{V}_i.\)

Proof

From Proposition 4.2, any feasible \(x\) satisfies

$$\begin{aligned} x&=V^{(0)}\otimes '\nu \\ 0&=\min _{i\in T} \nu _i, \nu \in \mathbb {Z}^{m+n+1} \end{aligned}$$

where

$$\begin{aligned} T=\{1,\ldots ,m+n+1\}-\{m+j: j\in J\}. \end{aligned}$$

Note that \(T\) is the set of indices \(t\) for which \(U_t=0\) and \(|T|=m+n+1-|J|\).

Consider an arbitrary feasible solution \(x'=V^{(0)}\otimes '\nu '\). Let \(\mu '\) be the subvector of \(\nu '\) with indices from \(T\). Then,

$$\begin{aligned} x'=V^{(0)}\otimes '\nu '\le \bar{V}\otimes ' \mu '\le \bar{V}\otimes \mathbf{0}=y \end{aligned}$$

since \(\min _i \mu '_i=0.\) Therefore, \(f^Tx'\le f^Ty\).

We claim that there exists a feasible \(x\) such that \(f^Tx=f^Ty\), and hence it is an optimal solution with \(f^{\max }=f^Ty\). Indeed, let \(j\in N\) be any index such that \(f^Ty=f_j y_j.\) Let \(i\in T\) be an index such that \(v_{ji}^{(0)}=y_j\). Then, by setting \(\nu _i=0\) and \(\nu _j, j\ne i\), to large enough integers, we obtain a feasible solution \(\bar{x}=V_i^{(0)}\) which satisfies \(f^T\bar{x}=f^Ty\). \(\square \)

Proposition 4.10

Let \(A, B, c, d\) satisfy Assumption 4.1 and \(f\in \mathbb {R}^n\). Suppose neither \(c\) nor \(d\) contains an integer entry. If there exists \(x\in \mathbb {Z}^n\) such that \(Ax=Bx\), then \(f^{\max }=+\infty \). If no such \(x\) exists, then \(IMLOP^{\max }\) is infeasible.

Proof

Follows from Theorem 3.2 and the fact that \(c\ne d\) since they do not have any entries with the same fractional part. \(\square \)

We conclude by noting that all methods for solving the IMLOP under Property OneFP described in this paper are strongly polynomial.

Corollary 4.4

Given input \(A, B, c, d\) satisfying Assumption 4.1 and \(f\in \mathbb {R}^n\), both IMLOP\(^{\min }\) and IMLOP\(^{\max }\) can be solved in \(\mathcal {O}((m+n)^3)\) time.

Proof

From \(A, B, c, d\), we can calculate \(V^{(0)}, \bar{V}\), and \(U\) in \(\mathcal {O}((m+n+1)^3)\) time by Remark 4.1. Then \(V^{(0)}\otimes '\mathbf{0}, V^{(0)}\otimes \mathbf{0}\) or \(\bar{V}\otimes \mathbf{0}\) can be calculated in \(\mathcal {O}(n(m+n+1))\) time. From this we can calculate \(f^{\min }\) or \(f^{\max }\) in \(\mathcal {O}(n)\) time. Finally, for IMLOP\(^{\max }\), we can find an optimal solution in \(\mathcal {O}(m+n+1)\) time.

In the cases described in Proposition 4.10, we can perform the necessary checks in \(\mathcal {O}((m+n)^3)\) time. \(\square \)

4.5 An Example

Suppose we want to find \(f^{\min }\) and \(f^{\max }\) subject to the constraints \(x\in \mathbb {Z}^4\) and

$$\begin{aligned} \begin{pmatrix} 3&{}0.5&{}-1.7&{}-2.5\\ -3.7&{}-1.9&{}-2.1&{}-3.7 \end{pmatrix} x\oplus \begin{pmatrix} -0.3\\ -1 \end{pmatrix} \!=\! \begin{pmatrix} 1.4&{}1.1&{}1&{}-1.3\\ 0.8&{}1&{}-1.3&{}-2.2 \end{pmatrix} x\oplus \begin{pmatrix} -0.2\\ -2.4 \end{pmatrix}. \end{aligned}$$

Note that \(J=\{4\}\) and

$$\begin{aligned} A^-= \begin{pmatrix} 3&{}\quad 0.5&{}\quad -1.7\\ -3.7&{}\quad -1.9&{}\quad -2.1 \end{pmatrix} \text { and }B^-= \begin{pmatrix} 1.4&{}\quad 1.1&{}\quad 1\\ 0.8&{}\quad 1&{}\quad -1.3 \end{pmatrix}. \end{aligned}$$

We first construct \(A'\) and \(B'\), these are

$$\begin{aligned} \begin{pmatrix} 3&{}\quad 0.5&{}\quad -1.7&{}\quad -2.5&{}\quad -0.3\\ -3.7&{}\quad -1.9&{}\quad -2.1&{}\quad -3.7&{}\quad -1\\ 0&{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon \\ \varepsilon &{}\quad 0&{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon \\ \varepsilon &{}\quad \varepsilon &{}\quad 0&{}\quad \varepsilon &{}\quad \varepsilon \\ \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad 0&{}\quad \varepsilon \\ \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad 0 \end{pmatrix} \text { and } \begin{pmatrix} 1.4&{}\quad 1.1&{}\quad 1&{}\quad -1.3&{}\quad -0.2\\ 0.8&{}\quad 1&{}\quad -1.3&{}\quad -2.2&{}\quad -2.4\\ 0&{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon \\ \varepsilon &{}\quad 0&{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon \\ \varepsilon &{}\quad \varepsilon &{}\quad 0&{}\quad \varepsilon &{}\quad \varepsilon \\ \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad 0&{}\quad \varepsilon \\ \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad \varepsilon &{}\quad 0 \end{pmatrix}. \end{aligned}$$

Then,

$$\begin{aligned} L\!=\! \begin{pmatrix} 0&{}-2&{}-3&{}\varepsilon &{}-1&{}\varepsilon &{}\varepsilon \\ 1&{}0&{}\varepsilon &{}-1&{}\varepsilon &{}\varepsilon &{}1\\ 3&{}1&{}0&{}\varepsilon &{}\varepsilon &{}\varepsilon &{}\varepsilon \\ 2&{}1&{}\varepsilon &{}0&{}\varepsilon &{}\varepsilon &{}\varepsilon \\ 1&{}-1&{}\varepsilon &{}\varepsilon &{}0&{}\varepsilon &{}\varepsilon \\ -1&{}-2&{}\varepsilon &{}\varepsilon &{}\varepsilon &{}0&{}\varepsilon \\ 0&{}-1&{}\varepsilon &{}\varepsilon &{}\varepsilon &{}\varepsilon &{}0 \end{pmatrix} \text { and }L^*= \begin{pmatrix} 0&{}-2&{}-3&{}-3&{}-1&{}\varepsilon &{}-1\\ 1&{}0&{}-2&{}-1&{}0&{}\varepsilon &{}1\\ 3&{}1&{}0&{}0&{}2&{}\varepsilon &{}2\\ 2&{}1&{}-1&{}0&{}1&{}\varepsilon &{}2\\ 1&{}-1&{}-2&{}-2&{}0&{}\varepsilon &{}0\\ -1&{}-2&{}-4&{}-3&{}-2&{}0&{}-1\\ 0&{}-1&{}-3&{}-2&{}-1&{}\varepsilon &{}0 \end{pmatrix}. \end{aligned}$$

Note that \(\lambda (L)=0\) and hence feasible solutions exist; further, \(L^*_{2+4}=e_{2+4}\), as expected from Proposition 4.3. Now, using Definitions 4.1 and 4.2,

$$\begin{aligned} \bar{V}= \begin{pmatrix} -3&{}-2&{}-3&{}-2&{}-3&{}-2\\ -2&{}-2&{}-2&{}-2&{}-2&{}-2\\ -1&{}0&{}-1&{}0&{}-1&{}0\\ 1&{}1&{}1&{}1&{}1&{}1 \end{pmatrix} \text { and } \hat{V}^{(0)}= \begin{pmatrix} -3&{}-2&{}-3&{}-2&{}-3&{}-2\\ -2&{}-2&{}-2&{}-2&{}-2&{}-2\\ -1&{}0&{}-1&{}0&{}-1&{}0 \end{pmatrix} \end{aligned}$$

(recall that \(\hat{V}^{(0)}\) is calculated from \(A^-, B^-\) as defined in (4.6)).

Suppose \(f^T=(0, -1, 1, 0)\). We first look for \(f^{\min }\). By Corollary 4.3, we have that

$$\begin{aligned} g^{\min }=(0,-1,1)\otimes (\hat{V}^{(0)}\otimes ' \mathbf{0})=(0,-1,1)\otimes (-3,-2,-1)^T=0. \end{aligned}$$

Hence, \(f^{\min }=0\) and \(x^{opt}=(-3, -2, -1, x_4)^T\) for any small enough \(x_4\).

Now we look for \(f^{\max }\). By Proposition 4.9, we have that

$$\begin{aligned} f^{\max }=f^T\otimes y=(0, -1, 1,0)\otimes (-2, -2, 0,1)^T=1. \end{aligned}$$

Following the proof of this proposition, we see that the optimum is attained either for \(j=3\) or \(j=4\). For \(j=3\), this relates to columns \(2, 4\), or \(6\) of \(\bar{V}\) and hence an optimal solution can be obtained by setting either \(\nu _2, \nu _4\), or \(\nu _6\) to \(0\). This yields \(x^{opt}=(-2, -2, 0, x_4)^T\) for any small enough \(x_4\). If we instead choose \(j=4\), then we conclude that any column of \(\bar{V}\) yields an optimal solution.
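
These optima can be double-checked numerically. In the Python sketch below (conventions as in the earlier sketches), \(x_4=-10\) stands in for “any small enough” value:

```python
import numpy as np

A = np.array([[3.0, 0.5, -1.7, -2.5], [-3.7, -1.9, -2.1, -3.7]])
B = np.array([[1.4, 1.1, 1.0, -1.3], [0.8, 1.0, -1.3, -2.2]])
c = np.array([-0.3, -1.0]); d = np.array([-0.2, -2.4])
f = np.array([0.0, -1.0, 1.0, 0.0])

def check(x):
    """Feasibility of Ax (+) c = Bx (+) d, and the objective value f^T (x) x."""
    lhs = np.maximum((A + x).max(axis=1), c)
    rhs = np.maximum((B + x).max(axis=1), d)
    return bool(np.allclose(lhs, rhs)), (f + x).max()

print(check(np.array([-3.0, -2.0, -1.0, -10.0])))   # (True, 0.0) -> attains f_min
print(check(np.array([-2.0, -2.0,  0.0, -10.0])))   # (True, 1.0) -> attains f_max
```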

Finally, observe that \(\hat{V}^{(0)}\) can be obtained from \(\bar{V}\) by removing rows with indices in \(J\). This is because \(A^-\) and \(B^-\) differ from \(A\) and \(B\) only in the columns with indices from \(J\), so \(\hat{L}\) is the principal submatrix of \(L\) obtained by deleting the rows and columns \(m+j\), \(j\in J\), and \(\hat{L}^*\) is the corresponding submatrix of \(L^*\).

5 Conclusions

In this paper, we presented a strongly polynomial method to determine whether an integer optimal solution exists to a max-linear optimization problem when the input matrices satisfy Property OneFP. We gave a necessary and sufficient condition for the existence of an integer feasible solution and, further, showed that, whenever this condition holds, an integer optimal solution always exists. We described how to find an optimal solution in strongly polynomial time. Our solution methods can be used to describe many possible integer optimal solutions to the system. It remains open to determine necessary and sufficient conditions for the existence of an integer solution to a TSS/IMLOP when Property OneFP does not hold. This is one direction for possible future work, as is the construction of a polynomial time algorithm to find integer solutions to the general TSS, or a proof that no such algorithm exists.

We restricted our attention to finding integer solutions without \(-\infty \), the zero element of the max-algebraic semiring, as this is more applicable to real-world examples. However, it would be interesting to study the set of integer solutions that do allow \(-\infty \) entries; it is expected that the generic case described in this paper will also allow integer solutions with \(-\infty \) to be found in strongly polynomial time.

At the time of writing, for TSSs which do not satisfy the generic property, it is unknown whether an integer solution can be found in polynomial time. If we remove the integrality requirement, then it is known that finding a solution to a max-algebraic TSS is equivalent to finding a solution to a mean payoff game [6]. Mean payoff games are a well-known class of problems in NP \(\cap \) co-NP; it is expected that a polynomial solution method will be found in the future.