A Strongly Polynomial Method for Solving Integer Max-Linear Optimization Problems in a Generic Case

We study the existence of integer solutions to max-linear optimization problems. Specifically, we show that, in a generic case, the integer max-linear optimization problem can be solved in strongly polynomial time. This extends results from our previous papers where polynomial methods for this generic case were given.


Introduction
In the max-algebraic setting, we take maximization as our addition operation, addition as our multiplication operation, and work with the set of extended reals: the real numbers extended by −∞. Max-algebra (also called tropical linear algebra) is a rapidly evolving area of idempotent mathematics, linear algebra, and applied discrete mathematics. Its creation was motivated by the need to solve a class of non-linear problems in mathematics, operational research, science, and engineering [1][2][3][4].
The question of finding integer solutions to max-linear systems of equations was first addressed in [5]. Equations in max-algebra are useful to model, for example, scheduling problems; therefore, finding integer solutions is applicable to real-world examples.
The two-sided system (TSS) in max-algebra is a matrix equation whose solution can be used to describe, for example, the starting times of a synchronized system of machines. The study of its solutions is also of interest since it is known that TSSs in max-algebra are equivalent to mean payoff games [6,7]. Mean payoff games are a well-known class of problems in NP ∩ co-NP, and the existence of a polynomial algorithm for finding a solution remains open. Combinatorial simplex algorithms for solving mean payoff games were discussed in [8].
The problem of finding solutions to two-sided max-linear systems has been studied previously, and one solution approach is the Alternating Method [9,10]. If A and B are integer matrices, then the solution found by this method is integer; however, this cannot be guaranteed if A and B are real. The Alternating Method can, however, be adapted [11] in order to find integer solutions to TSSs. These methods find solutions in pseudopolynomial time if the input matrices are finite.
Note that various other methods for solving TSSs are known [12][13][14], but none of them has been proved polynomial, and there is no obvious way of adapting them to integrality constraints. In [11], a generic class of matrices was defined for which it can be determined, in strongly polynomial time, whether an integer solution to a TSS exists, and one can be found if it does. The current paper extends the use of this generic case to max-linear optimization problems (MLOP) with constraints in the form of a TSS.
The MLOP is a problem seeking to maximize, or minimize, the value of a max-linear function subject to a two-sided constraint. Note that in other literature this is also known as a max-linear programming problem. Without the integrality constraint, solution methods for the MLOP are known; for example, in [9,15] a bisection method is applied to obtain an algorithm that finds an approximate solution to the MLOP. Solutions using simplex methods were described in [8]. Also, a Newton-type algorithm has been designed [16] to solve a more general max-linear fractional optimization problem by a reduction to a sequence of mean payoff games. For integer solutions, a pseudopolynomial algorithm was described in [11]. In this paper, we describe a strongly polynomial solution method in a generic case. It remains open to find a polynomial algorithm to solve a general MLOP with two-sided constraints.

Defining the Problem
In max-algebra, for a, b ∈ R̄ = R ∪ {−∞}, we define a ⊕ b := max(a, b) and a ⊗ b := a + b, and extend the pair (⊕, ⊗) to matrices and vectors in the same way as in linear algebra, that is (assuming compatibility of sizes),

(A ⊕ B)_ij := a_ij ⊕ b_ij, (A ⊗ B)_ij := ⊕_k a_ik ⊗ b_kj = max_k (a_ik + b_kj), and (α ⊗ A)_ij := α ⊗ a_ij.
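These operations can be made concrete with a short sketch. The following Python snippet is our illustration, not part of the paper; the function names are ours, and float("-inf") plays the role of ε.

```python
NEG_INF = float("-inf")  # plays the role of epsilon = -infinity

def oplus(a, b):
    # Max-algebraic addition: a (+) b = max(a, b)
    return max(a, b)

def otimes(a, b):
    # Max-algebraic multiplication: a (x) b = a + b (-inf is absorbing)
    return a + b

def mat_mul(A, B):
    # Max-plus matrix product: (A (x) B)_ij = max_k (a_ik + b_kj)
    m, p, n = len(A), len(B), len(B[0])
    return [[max(otimes(A[i][k], B[k][j]) for k in range(p))
             for j in range(n)] for i in range(m)]
```

For example, mat_mul([[1, 2], [0, 3]], [[0], [1]]) evaluates max(1+0, 2+1) and max(0+0, 3+1), giving [[3], [4]].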
Except for computational complexity arguments, all multiplications in this paper are in max-algebra and, where appropriate, we will omit the ⊗ symbol. Note that α^−1 stands for −α.
We will use ε to denote −∞ as well as any vector or matrix whose every entry is −∞. Note that ε is the max-algebraic additive identity, and 0 is the max-algebraic multiplicative identity. A vector/matrix whose every entry belongs to R is called finite. A vector whose jth component is zero and every other component is ε will be called a max-algebraic unit vector, and denoted e_j. We use 0 to denote the all-zero vector of appropriate size. An n × n matrix in max-algebra is called diagonal, and denoted by diag(d_1, …, d_n) = diag(d), if and only if its diagonal entries are d_1, …, d_n ∈ R and its off-diagonal entries are ε (that is, −∞). The max-algebraic identity matrix of appropriate size is I := diag(0, …, 0).
For a ∈ R, the fractional part of a is fr(a) := a − ⌊a⌋, where ⌊•⌋ denotes the lower integer part. These definitions extend to ε = −∞ in the natural way. For a matrix A ∈ R̄^{m×n}, we use ⌊A⌋ (fr(A)) to denote the matrix with (i, j) entry equal to ⌊a_ij⌋ (fr(a_ij)), and similarly for vectors.
In this paper, a vector x ∈ R̄^n is understood to be a column vector. Its transpose is denoted by x^T ∈ R̄^{1×n}; similarly, for a matrix A ∈ R̄^{m×n}, its transpose is A^T ∈ R̄^{n×m}. A two-sided max-linear system is of the form Ax ⊕ c = Bx ⊕ d. If c = d = ε, then we say that the system is homogeneous; otherwise it is called nonhomogeneous. Nonhomogeneous systems can be transformed to homogeneous systems [9]. If B ∈ R̄^{m×k}, a system of the form Ax = By is called a system with separated variables.
MLOPs seek to minimize, or maximize, a max-linear function subject to constraints given by max-linear equations described by a TSS. Throughout this paper, the input of an MLOP will always be finite matrices and vectors.
The integer max-linear optimization problem (IMLOP) is: minimize or maximize f^T ⊗ x subject to Ax ⊕ c = Bx ⊕ d, x ∈ Z^n. We will use IMLOP_min to mean the problem minimizing f^T x and IMLOP_max to mean the problem maximizing f^T x.
One example of an application of the TSS and the IMLOP is the multiprocessor interactive system (MPIS) [1,9], which can be described as follows.
Products P_1, …, P_m are made up of a number of components which are prepared using n processors. Each processor contributes to the final product P_i by producing one of its components. We assume that processors work on a component for every product simultaneously and that work begins on all products as soon as the processor is switched on.
Let a_ij be the time taken for the jth processor to complete its component for P_i (i = 1, …, m; j = 1, …, n). Denote the starting time of the jth processor by x_j (j = 1, …, n). Then, for each product P_i, all components will be completed at time max(x_1 + a_i1, …, x_n + a_in).
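In other words, the vector of completion times is the max-plus product A ⊗ x. A minimal sketch (ours, for illustration only):

```python
def completion_times(A, x):
    # Completion time of product P_i is max_j (x_j + a_ij),
    # i.e., the i-th entry of the max-plus product A (x) x.
    return [max(xj + aij for xj, aij in zip(x, row)) for row in A]
```

For instance, with durations A = [[2, 3], [5, 1]] and starting times x = [0, 1], the two products are completed at times [4, 5].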
Further, k other processors prepare components for products Q_1, …, Q_m, with durations and starting times denoted by b_ij and y_j, respectively. The synchronization problem is to find starting times of all n + k processors so that each pair (P_i, Q_i) (i = 1, …, m) is completed at the same time. This task is equivalent to solving the system of equations max(x_1 + a_i1, …, x_n + a_in) = max(y_1 + b_i1, …, y_k + b_ik) (i = 1, …, m), that is, Ax = By. Additionally, we can introduce deadlines c_i and d_i, writing the equations as max(x_1 + a_i1, …, x_n + a_in, c_i) = max(y_1 + b_i1, …, y_k + b_ik, d_i), or equivalently, Ax ⊕ c = By ⊕ d. For c_i = d_i, this indicates that the synchronization of P_i and Q_i is only required after the deadline d_i. When solving the MPIS, it may be required that the starting times are restricted to discrete values, in which case we would want to look for integer solutions to the TSS.
In applications, it may also be required that the starting times of the MPIS are optimized with respect to a given criterion. As an example, suppose that all processors in an MPIS should begin as soon [late] as possible, that is, the latest starting time of a processor is as small [big] as possible. In this case, we would set f = 0 and seek to minimize [maximize] f^T ⊗ x = max(x_1, …, x_n). With this extra requirement, we obtain the MLOP. It is important to note that, throughout this paper, an integer solution is a finite solution, x ∈ Z^n, and so does not contain ε components. For the problems described above, it would also be valid to ask when there is a solution with entries from Z ∪ {ε}, but we do not deal with this task here.

Preliminary Results
We will use the following standard notation and terminology based on [1,9]. For positive integers m, n, k, we denote M = {1, …, m}, N = {1, …, n}, and K = {1, …, k}. If A = (a_ij) ∈ R̄^{n×n}, then λ(A) denotes the maximum cycle mean of A, that is, the greatest average weight of a cycle in the weighted digraph associated with A. The maximum cycle mean can be calculated in O(n^3) time [17], see also [9]. If λ(A) = 0, then we say that A is definite. For a definite matrix, we define A* := I ⊕ A ⊕ A^2 ⊕ … ⊕ A^{n−1}, where I is the max-algebraic identity matrix. Using the Floyd–Warshall algorithm, see, e.g., [9], A* can be computed in O(n^3) time. For A ∈ R̄^{m×n}, we define A_j to be the jth column of A. Similarly, for γ ∈ R^n, we denote γ^(−1) = −γ ∈ R^n. For a scalar α, there is no difference between α^−1 and α^(−1).
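Both quantities can be computed within the stated bounds. The sketch below is our illustration (the paper does not prescribe an implementation): λ(A) via Karp's algorithm and A* via a max-plus Floyd–Warshall, each in O(n³).

```python
NEG = float("-inf")

def max_cycle_mean(A):
    # Karp's algorithm: lambda(A) = max over cycles of (cycle weight)/(length).
    # D[k][v] = maximum weight of a walk of length k ending at v (any start).
    n = len(A)
    D = [[0.0] * n] + [[NEG] * n for _ in range(n)]
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max(D[k - 1][u] + A[u][v] for u in range(n))
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                             for k in range(n) if D[k][v] != NEG))
    return best  # NEG if the digraph of A contains no cycle

def kleene_star(A):
    # A* = I (+) A (+) A^2 (+) ... via max-plus Floyd-Warshall;
    # well defined when lambda(A) <= 0
    n = len(A)
    S = [row[:] for row in A]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                S[i][j] = max(S[i][j], S[i][k] + S[k][j])
    for i in range(n):
        S[i][i] = max(S[i][i], 0.0)  # the identity I contributes 0 diagonally
    return S
```

For example, the matrix [[ε, 1], [3, ε]] has the single cycle 1 → 2 → 1 of weight 4 and length 2, so its maximum cycle mean is 2.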
Given a solution x to Ax = b, we say that a position (i, j) is active with respect to x if and only if a_ij + x_j = b_i; it is called inactive otherwise. It will be useful in this paper to talk about the entries of the matrix corresponding to active positions, and therefore we say that an element/entry a_ij of A is active if and only if the position (i, j) is active. In the same way, we call a column A_j active exactly when it contains an active entry. We also say that a component x_j of x is active in the equation Ax = Bx if and only if there exists i such that either a_ij + x_j = (Bx)_i or b_ij + x_j = (Ax)_i. Lastly, x_j is active in f^T x if and only if f_j ⊗ x_j = f^T ⊗ x.
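The definition of active positions translates directly into code; the following sketch (ours, with hypothetical function name and 0-based indices) lists them for a system Ax = b.

```python
def active_positions(A, x, b):
    # (i, j) is active with respect to x iff a_ij + x_j = b_i
    return [(i, j)
            for i, row in enumerate(A)
            for j, aij in enumerate(row)
            if aij + x[j] == b[i]]
```

For A = [[1, -5], [2, 0]], x = (0, 1) and b = (1, 2), only the first column is active: positions (0, 0) and (1, 0) in 0-based indexing.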
Next, we give an overview of some basic properties.
Theorem 3.1 [11] In IMLOP_min, with finite input, f_min = −∞ if and only if c = d.

Theorem 3.2 [11] In IMLOP_max, with finite input, f_max = +∞ if and only if there exists an integer solution to Ax = Bx.
(a) An integer solution to Ax ≤ b always exists. All integer solutions can be described as the integer vectors x satisfying x ≤ ⌊x̄⌋, where x̄ denotes the principal solution of Ax ≤ b. (b) If, moreover, A is doubly R-astic, then an integer solution to Ax = b exists if and only if ⌊x̄⌋ is itself a solution. If an integer solution exists, then all integer solutions can be described as the integer vectors x satisfying x ≤ ⌊x̄⌋ that attain equality in every row. A vector x satisfying A ⊗ x ≤ λ ⊗ x is called a subeigenvector of A with respect to subeigenvalue λ. Since integer vectors are finite, we deal only with finite subeigenvectors here. The set of all finite [integer] subeigenvectors with respect to subeigenvalue λ is denoted V*(A, λ) [IV*(A, λ)]. Existence of [integer] subeigenvectors can be determined, and the whole set can be described, in polynomial time using the following result.
We will need the following immediate corollary.

Corollary 3.2 If A is integer and λ(A) ≤ 0, then IV*(A, 0) = {A* ⊗ w : w ∈ Z^n}.
For any TSS, we can deduce a simple criterion for when no integer solution exists.This idea is key in proving the main results of the paper.
Observe that, if either matrix has an ε row, row i say, then the existence of an integer solution would imply that the other matrix also has its ith row equal to ε. In this case, the ith row of the equation Ax = Bx can be removed without affecting the existence of integer solutions.
By Proposition 3.3, we can assume, without loss of generality, that in every row there exists a pair of indices j, t for which the finite entries a_ij and b_it have the same fractional part. We will restrict our attention to matrices A and B that have exactly one such pair of indices j, t per row. (Note that, if we randomly generated real matrices A and B, it is likely that (A, B) will have very few such pairs, and so this assumption is not too restrictive, provided that we are working with real-valued, and not integer-valued, matrices; of course, for integer matrices, the existing methods [9] for finding real solutions to the systems discussed will find integer solutions, and hence the interesting case to consider is indeed when the input matrices are not integer.) Given a pair of matrices with such an assumption on the fractional parts of entries, we define, for all rows i ∈ M, the pair (r(i), r̄(i)) to be the indices such that fr(a_{i,r(i)}) = fr(b_{i,r̄(i)}). Without loss of generality, we may assume that the entries (a_{i,r(i)}, b_{i,r̄(i)}) are integer and that no other entries in the equation for either matrix are integer (this is since we may subtract a constant from each row of the system without affecting the answer to the question).
We summarize this in the following definition.
We say that (A, B) satisfies Property OneFP if, for each i ∈ M, there is exactly one pair (r(i), r̄(i)) such that fr(a_{i,r(i)}) = fr(b_{i,r̄(i)}).

Remark 3.1 Note that this definition allows for multiple ε entries in each row; for example, the pair (I, I) satisfies Property OneFP with r(i) = i = r̄(i) for all i.
Throughout this paper, we restrict our attention to pairs of matrices satisfying Property OneFP.
Recall, from Proposition 3.3, that a necessary condition for an integer solution to exist is that there is at least one pair of entries sharing the same fractional part in each row. As mentioned above, if we randomly generated two real matrices A and B, then we would expect there to be very few pairs of entries (a_{i,r(i)}, b_{i,r̄(i)}) which share the same fractional part. So, when given a random two-sided solvable system, the most likely outcome is that there is at most one such pair of entries in each row. While this discussion is not mathematically rigorous, it does allow us to conclude that (A, B) having exactly one such pair per row represents a generic case for solvable systems.

Proposition 3.4 [11] Let A ∈ R^{m×n}, B ∈ R^{m×k} satisfy Property OneFP. Then, the entries a_{i,r(i)} [b_{i,r̄(i)}] are the only possible active entries in the matrix A [B] with respect to any integer vector x [y] satisfying Ax = By.
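Property OneFP can be checked directly from the definition. Below is a minimal sketch (ours; exact floating-point comparison of fractional parts is used for simplicity, so real data would call for a tolerance):

```python
import math

NEG = float("-inf")

def frac(a):
    # Fractional part fr(a) = a - floor(a), for finite a
    return a - math.floor(a)

def one_fp_pairs(A, B):
    # For each row i, collect all pairs (j, t) with fr(a_ij) = fr(b_it)
    pairs = []
    for arow, brow in zip(A, B):
        pairs.append([(j, t)
                      for j, a in enumerate(arow) if a != NEG
                      for t, b in enumerate(brow)
                      if b != NEG and frac(a) == frac(b)])
    return pairs

def satisfies_one_fp(A, B):
    # (A, B) satisfies Property OneFP iff each row has exactly one such pair
    return all(len(p) == 1 for p in one_fp_pairs(A, B))
```

For example, the 1 × 2 pair A = (0.5, 1.25), B = (2.5, 0.75) has the single matching pair (r(1), r̄(1)) = (1, 1), so Property OneFP holds; replacing B by (2.5, 1.5) creates a second matching pair and the property fails.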
Note that general systems can be converted into systems with separated variables by Proposition 3.5 below, and this conversion preserves Property OneFP; so Proposition 3.4 holds accordingly for general systems.

Proposition 3.5 [11] A vector x ∈ Z^n is a solution to Ax = Bx if and only if (x, x) is a solution to the system with separated variables obtained by appending the rows Ix = Iy to Ax = By.

Hence we restrict our attention to the case of separated variables. All integer solutions to a TSS satisfying Property OneFP can be described by the following.
Theorem 3.4 [11] Let A ∈ R^{m×n}, B ∈ R^{m×k} satisfy Property OneFP. For all i, j ∈ M, let l_ij := ⌈a_{j,r(i)} − a_{i,r(i)}⌉ ⊕ ⌈b_{j,r̄(i)} − b_{i,r̄(i)}⌉ and L := (l_ij). Then, an integer solution to Ax = By exists if and only if λ(L) ≤ 0.
Corollary 3.3 [11] For A ∈ R^{m×n}, B ∈ R^{m×k} satisfying Property OneFP, it is possible to decide whether an integer solution to Ax = By exists in strongly polynomial time.

Remark 3.2 (i) The ith row of L, as defined in Theorem 3.4, is equal to H(i)^T, where H(i) := ⌈A_{r(i)} − a_{i,r(i)}⌉ ⊕ ⌈B_{r̄(i)} − b_{i,r̄(i)}⌉. (ii) Knowing Ax = γ^(−1) = By for any γ ∈ IV*(L, 0), we can easily find x and y using Proposition 3.2.

Strongly Polynomial Method to Solve IMLOP for Systems with Property OneFP
In [11], a polynomial algorithm for finding integer solutions to an IMLOP satisfying Property OneFP was described. The aim of this paper is to develop strongly polynomial methods for solving IMLOP_min and IMLOP_max under the assumption that Property OneFP holds. Recall that the IMLOP has the form

f^T ⊗ x → min (or max) subject to Ax ⊕ c = Bx ⊕ d, x ∈ Z^n, (4.1)

where A, B ∈ R^{m×n}, c, d ∈ R^m, and f ∈ R^n. We can write the constraints of the IMLOP as

Ax ⊕ c = Bx ⊕ d, x ∈ Z^n. (4.2)

Consequences of Property OneFP
Let z = (x^T, 0)^T ∈ Z^{n+1}. By Proposition 3.5, the constraint (4.2) is equivalent to the condition that there exists y ∈ Z^{n+1} such that (z, y) is an integer solution to A″z = B″y, where A″ is the matrix (A|c) with the identity matrix I appended below, and B″ is (B|d) with I appended below. This is since, if (z, y) is an integer solution to A″z = B″y, then so is (α ⊗ z, α ⊗ y) for any α ∈ Z, and hence the last component of z can be normalized to 0.

If there exists a row in which the matrices (A|c) and (B|d) do not have entries with the same fractional part, then the feasible set of IMLOP_min is empty.
Proof It follows from Proposition 3.3.
For the rest of the paper, we will assume that the pair ((A|c), (B|d)) satisfies Property OneFP, and hence so does (A″, B″). Note that an example is provided at the end of this paper to clarify many of the concepts that will be introduced in what follows.
This matrix L, constructed from A″ and B″, will play a key role in the solution of the IMLOP. To construct the ith row of L, we only consider the columns A″_{r(i)} and B″_{r̄(i)}. From Remark 3.2, the ith row is equal to H(i)^T for H(i) := ⌈A″_{r(i)} − a″_{i,r(i)}⌉ ⊕ ⌈B″_{r̄(i)} − b″_{i,r̄(i)}⌉, where A′ := (A|c) and B′ := (B|d). Observe that P and R below are finite, since A and B are finite. Further, when i ∈ {m + 1, …, m + n + 1}, i = m + j say, then r(i) = j = r̄(i) and I_{i,r(i)} = 0 = I_{i,r̄(i)}. Therefore, the matrix L ∈ Z^{(m+n+1)×(m+n+1)} has the block form L = (P Q; R I), with P ∈ Z^{m×m} and R ∈ Z^{(n+1)×m} finite. Moreover, each row of Q has either one or two finite entries: for a fixed i ∈ {1, …, m}, the entries l_ij, j ∈ {m + 1, …, m + n + 1}, are obtained by calculating max(⌈a″_{j,r(i)} − a″_{i,r(i)}⌉, ⌈b″_{j,r̄(i)} − b″_{i,r̄(i)}⌉), where the entries a″_{j,r(i)}, j ∈ {m + 1, …, m + n + 1}, form a max-algebraic unit vector, as do the entries b″_{j,r̄(i)}, j ∈ {m + 1, …, m + n + 1}. Thus, at least one of these entries will be finite and, if r(i) = r̄(i), there will be exactly two.
From Corollary 4.1, we have that x is feasible if and only if (x^T, 0)^T = μ^(−1), where μ is the vector of the last n + 1 entries of some γ ∈ IV*(L, 0). By Corollary 3.2, γ = L* ⊗ w for some integer vector w. Let V = (v_ij) be the matrix formed of the last n + 1 rows of L*, so that μ = V ⊗ w for w ∈ Z^{m+n+1}; equivalently,

(x^T, 0)^T = (V ⊗ w)^(−1). (4.4)

Now, (4.4) can be split into two equations, one for the vector x and one for the scalar 0. Further, we would like the second equation to be of the form min_k w_k = 0 for ease of calculations later. This leads to the following definition.

Definition 4.1 Let V^(0) be the matrix formed from V^(−1) by max-multiplying each finite column j by v_{m+n+1,j}, and then removing the final row (at least one finite column exists by Property OneFP). Let U ∈ R̄^{1×(m+n+1)} be the row that was removed.
Note that U contains only 0 or +∞ entries.

Proof (of Proposition 4.2) By Corollary 4.1, x is feasible if and only if (x^T, 0)^T = μ^(−1), where μ is the vector containing the last n + 1 components of some γ ∈ IV*(L, 0). By the above discussion, this means that x = V^(0) ⊗ ν and 0 = U ⊗ ν for some ν ∈ Z^{m+n+1}.

We will first consider, in Subsect. 4.2, solutions to IMLOP when L*, and hence also V^(0) and U, is finite. In Subsects. 4.4.1 and 4.4.2 we deal with the case when L* is not finite.
Before this, we summarize key definitions and assumptions that will be used throughout the remainder of the paper, for easy reference later (Assumption 4.1).

If L* is finite, then the optimal objective value f_min is attained for x_opt = V^(0) ⊗ 0 (Theorem 4.1).

Proof By Proposition 4.2, we know that any feasible x satisfies x = V^(0) ⊗ ν where, by the finiteness of L* (and also V^(0)), we have U^T = 0 and hence min_k ν_k = 0. Therefore, x ≥ V^(0) ⊗ 0 for any feasible x and, further, V^(0) ⊗ 0 is feasible. The statement now follows from the isotonicity of f^T x, see Corollary 3.1.

If L* is finite, then the optimal objective value f_max is equal to f^T ⊗ (V^(0) ⊗ 0) (Theorem 4.2). Further, let y := V^(0) ⊗ 0 and j be an index such that f_max = f_j ⊗ y_j. If i is such that y_j = v^(0)_ji, then an optimal solution is x_opt = V^(0)_i.
Proof By Proposition 4.2, we know that any feasible x satisfies x = V^(0) ⊗ ν where, by the finiteness of L* (and also V^(0)), we have U^T = 0 and hence min_k ν_k = 0. Therefore, all feasible x satisfy x ≤ y = V^(0) ⊗ 0. Note that y may not be feasible.
By isotonicity, f^T y ≥ f^T x for any feasible x. We claim that there exists a feasible solution x for which they are equal. Suppose that f^T y = f_j ⊗ y_j. Let i be an index such that v^(0)_ji = y_j. By setting ν_i = 0 and all other components to large enough integers, we get a feasible solution x such that x_j = y_j; in fact, x = V^(0)_i. Hence, f_j ⊗ x_j = f_j ⊗ y_j = f^T y ≥ f^T x ≥ f_j ⊗ x_j, which implies f^T y = f^T x, as required.
It follows from Theorems 4.1 and 4.2 that, if λ(L) ≤ 0 and L * is finite, then an optimal solution to IMLOP min and IMLOP max always exists.

Criterion for Finiteness of L*
Theorems 4.1 and 4.2 provide explicit solutions to IMLOP, which can be found in O((m + n)^3) time by Remark 4.1, in the case when L* is finite. We now consider criteria for L* to be non-finite, and show how we can adapt the problem in this case so that IMLOP can be solved using the above methods in general. Let e_j ∈ R̄^{m+n+1} be the jth max-algebraic unit vector. The following are equivalent: (i) L* contains an ε entry; (ii) L_{m+j} = e_{m+j} for some j ∈ {1, …, n + 1}; (iii) L²_{m+j} = e_{m+j} for some j ∈ {1, …, n + 1}; (iv) there exists j ∈ {1, …, n + 1} such that neither (A|c)_j nor (B|d)_j contains an integer entry.

Further, the index j satisfies the condition in (ii) if and only if j satisfies the condition in (iii) if and only if j satisfies the condition in (iv).
Proof Recall that L has the block form L = (P Q; R I). (ii) ⇒ (i): Obvious. ¬(iii) ⇒ ¬(i): Assume that, for all j, L²_{m+j} ≠ e_{m+j}. We know that the first m columns of L are finite and, by assumption, every column of Q contains a finite entry. This means that L² will be finite, and thus so will L*.
(ii) ⇔ (iii): We show L_{m+j} = e_{m+j} if and only if L²_{m+j} = e_{m+j}. Fix j such that L_{m+j} = e_{m+j}. Then clearly L²_{m+j} = L ⊗ L_{m+j} = e_{m+j}, and hence (ii) ⇒ (iii). Although the converse implication follows from above, we need to also prove that the same index j satisfies both statements. To do this, we suppose that L²_{m+j} = e_{m+j}. Then, for all i ∈ {1, …, m}, we have (L²)_{i,m+j} = ⊕_t l_{i,t} ⊗ l_{t,m+j} = ε, where l_{i,1}, …, l_{i,m} ∈ R. Thus, l_{1,m+j} = … = l_{m,m+j} = ε and hence L_{m+j} = e_{m+j}.
(iii) ⇔ (iv): By the structure of L, (iii) holds if and only if Q contains an ε column. Fix j ∈ {1, …, n + 1}. Now, for any i ∈ M, the entry l_{i,m+j} is finite if and only if r(i) = j or r̄(i) = j, that is, if and only if (A|c)_j or (B|d)_j contains an integer entry in row i. Therefore, Q_j = ε if and only if neither (A|c)_j nor (B|d)_j contains an integer entry.

IMLOP when L* is Non-Finite

Theorems 4.1 and 4.2 solve IMLOP when L* is finite. In this case, U^T = 0 and we took advantage of the fact that ν_i ≥ 0 held for every component of ν. However, if L*_{m+j} = e_{m+j} for some j ∈ N, then U_j = +∞ and so ν_j will be unbounded. This suggests that feasible solutions x = V^(0) ⊗ ν are not bounded from below, and introduces the question of whether f_min = ε in these cases. We define the set J to be

J := { j ∈ N : neither A_j nor B_j contains an integer entry }.
Clearly, this definition of J is independent of whether or not c and d contain integer entries; this is necessary because, by the discussion above, only the values ν_j with j ∈ N may be unbounded (note that U_{m+n+1} = 0 regardless of whether or not L* is finite). In the following sections, we will use it to identify "bad", or inactive, columns of A and B, which can be removed from the system. First, we consider the case J = ∅, under which all ν_i are bounded even though L* may not be finite.
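The set J depends only on which columns of A and B contain an integer entry, so it can be computed by direct inspection. A sketch (ours; a small tolerance stands in for exact integrality of floating-point data, and indices are 0-based, whereas the paper indexes N = {1, …, n}):

```python
def compute_J(A, B, tol=1e-9):
    # J = { j : neither column A_j nor column B_j contains an integer entry }
    n = len(A[0])

    def col_has_int(M, j):
        # True iff some entry of column j of M is (numerically) an integer
        return any(abs(row[j] - round(row[j])) < tol for row in M)

    return {j for j in range(n)
            if not col_has_int(A, j) and not col_has_int(B, j)}
```

For A = [[0.5, 2.0], [1.5, 3.25]] and B = [[0.25, 1.75], [0.75, 2.5]], only the first column of each matrix is free of integer entries, so J = {0}.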
Observe that J = ∅ if and only if U^T = 0. Further, it can be verified that the results in Theorems 4.1 and 4.2 hold when the assumption that L* is finite is replaced by the assumption that U^T = 0; in fact, the same proofs apply without any alterations. The case J = ∅ is therefore solved as follows. (1) For IMLOP_min, the optimal objective value f_min is attained for x_opt = V^(0) ⊗ 0. (2) For IMLOP_max, the optimal objective value f_max is equal to f^T ⊗ (V^(0) ⊗ 0). Further, let y := V^(0) ⊗ 0 and j be an index such that f_max = f_j ⊗ y_j. If i is such that y_j = v^(0)_ji, then an optimal solution is x_opt = V^(0)_i.
It remains to show how to find solutions to IMLOP_min and IMLOP_max in the case when U^T ≠ 0, i.e., when L* is not finite and J ≠ ∅. We do this in the following subsections.

IMLOP min when L * is Non-Finite
If J ≠ ∅, then we aim to remove the "bad" columns A_j, B_j, j ∈ J, from our problem and use Theorem 4.1 to solve it. The next result allows us to do this when J ⊂ N. It will turn out that, in this case, under Assumption 4.1, an optimal solution always exists; this will be shown in the proof of Proposition 4.7 below. The case J = N will be dealt with in Proposition 4.8.

Proposition 4.5 Let A, B, c, d satisfy Assumption 4.1 and f ∈ R^n. Suppose ∅ ≠ J ⊂ N. If an optimal solution x exists, then f_min = f_j ⊗ x_j for some j ∈ N − J.
Proof Suppose x is a feasible solution of IMLOP_min such that f^T x = f_min, but f_min ≠ f_l ⊗ x_l for every l ∈ N − J. Observe that, for all t ∈ J, neither A_t nor B_t contains an integer entry and so, by Proposition 3.4, x_t is not active in the equation Ax ⊕ c = Bx ⊕ d. Thus, the vector x′ with components x′_t := x_t − α for t ∈ J and x′_t := x_t otherwise, for some integer α > 0, is also feasible; but, since every component active in f^T x has its index in J, we get f^T x′ < f^T x, a contradiction.
Hence, we can simply remove all columns j ∈ J from our system and solve this reduced system using previous methods. Formally, let g be obtained from f by removing the entries with indices in J. Let A⁻, B⁻ be obtained from A and B by removing the columns with indices in J, so A⁻, B⁻ ∈ R^{m×n′} where n′ = n − |J|. By IMLOP_1, we mean the original problem, minimizing f^T x subject to Ax ⊕ c = Bx ⊕ d, x ∈ Z^n; by IMLOP_2, we mean the reduced problem, minimizing g^T y subject to A⁻y ⊕ c = B⁻y ⊕ d, y ∈ Z^{n′}, (4.6) where, by assumption, the pair ((A|c), (B|d)) satisfies Property OneFP, and therefore so does ((A⁻|c), (B⁻|d)).
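Forming A⁻, B⁻ and g is a plain deletion of columns (respectively entries); for illustration (our sketch, with 0-based indices):

```python
def remove_columns(M, J):
    # Build M^- by deleting the columns whose (0-based) indices lie in J
    return [[entry for j, entry in enumerate(row) if j not in J]
            for row in M]

def remove_entries(v, J):
    # Build g from f by deleting the entries with indices in J
    return [entry for j, entry in enumerate(v) if j not in J]
```

For example, deleting column 1 (0-based) of [[1, 2, 3], [4, 5, 6]] leaves [[1, 3], [4, 6]].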
To differentiate between solutions to IMLOP_1 and IMLOP_2, the matrices L, L*, V^(0), U will refer to those obtained from A, B, c, d. When they are calculated using A⁻, B⁻, c, d, we will call them L̂, L̂*, V̂^(0), Û.
In order to prove that an optimal solution always exists, we recall the following results, which tell us that, for any IMLOP, the problem is either unbounded, infeasible, or has an optimal solution. Let IS denote the set of integer feasible solutions, and let S_min [S_max] denote the set of optimal solutions to IMLOP_min [IMLOP_max]. From Theorems 3.1 and 3.2, f_min = −∞ if and only if c = d, and f_max = +∞ if and only if Ax = Bx has an integer solution.

Proposition 4.6 [11] Let A, B, c, d, f be as defined in (4.1). If IS ≠ ∅, then f_min > −∞ ⇒ S_min ≠ ∅ and f_max < +∞ ⇒ S_max ≠ ∅.

Proposition 4.7 Let A, B, c, d satisfy Assumption 4.1 and f ∈ R^n. Let A⁻, B⁻, g be as defined in (4.6). Suppose ∅ ≠ J ⊂ N. Then f_min = g_min, x_opt can be obtained from its subvector y_opt by inserting suitable "small enough" integer components, and IMLOP_2 can be solved by Theorem 4.1.
Proof First, observe that an optimal solution to IMLOP_2 always exists since Û^T = 0, so all components of ν are bounded below. This implies that feasible solutions to IMLOP_2, and therefore also IMLOP_1, exist. So, by Proposition 4.6, IMLOP_1 either has an optimal solution or f_min = ε. If f_min = ε, then, by Theorem 3.1, c = d which, under Property OneFP, means that c, d ∈ Z^m and there are no integer entries in A or B. This is impossible since J ≠ N.

Suppose x_opt is an optimal solution to IMLOP_1 and let y be obtained from x_opt by removing the elements with indices in J. Using Property OneFP, we know that the components x_j^opt, j ∈ J, are inactive in Ax ⊕ c = Bx ⊕ d. Further, from Proposition 4.5, we can assume also that x_j^opt, j ∈ J, are inactive in f^T x_opt (we can decrease their value if necessary without changing the solution). Hence, g^T y = f^T x_opt = f_min, so y is feasible for IMLOP_2. If y is not optimal, then g_min = g^T y′ < f_min for some y′ feasible in IMLOP_2. But, letting x′ = (x′_j) where, for j ∉ J, x′_j corresponds to the appropriate component of y′, and x′_j, j ∈ J, are set to small enough integers, we obtain a feasible solution to IMLOP_1 satisfying f^T x′ = g_min < f_min, a contradiction. Therefore, y = y_opt. A similar argument holds for the other direction.

We now show how to solve IMLOP_2. By Proposition 4.2, feasible solutions to IMLOP_2 satisfy y = V̂^(0) ⊗ ν with 0 = Û ⊗ ν. Case 1: There exists an integer entry in either c or d. Observe that IMLOP_2 can be solved immediately by Theorem 4.1 since L̂* is finite. Case 2: Neither c nor d contains an integer entry. Now L̂* is not finite. However, Û^T = 0 and V̂^(0) has exactly one non-finite column; all other columns of V̂^(0) are finite. The single +∞ column contains no finite entries and will never be active in determining the value of a feasible solution. Hence, any feasible solution y still satisfies y ≥ V̂^(0) ⊗ 0 and y_opt = V̂^(0) ⊗ 0, as in the proof of Theorem 4.1.

Corollary 4.3
Let A, B, c, d satisfy Assumption 4.1 and f ∈ R^n. Let A⁻, B⁻, g, and V̂^(0) be as defined in (4.6). If ∅ ≠ J ⊂ N, the optimal objective value f_min of IMLOP_1 is equal to g^T ⊗ y_opt for y_opt = V̂^(0) ⊗ 0.
The final case for IMLOP_min is when J = N; this case is settled by Proposition 4.8.

Proof (of Proposition 4.8) Follows from Theorem 3.1 and the fact that entries in columns with indices in J are never active.

IMLOP max when L * is Non-Finite
We will now discuss IMLOP_max when J ≠ ∅. The case when neither c nor d contains an integer entry is trivial and will be described in Proposition 4.10. We first assume that either c or d contains an integer entry. Here, we cannot make the same assumptions about active entries in the objective function as in the minimization case, as demonstrated by the following example.
Note that J = {2}. It can be seen that the largest integer vector x which satisfies this equality is (0, 1)^T. Therefore, f_max = 2; the only active entry with respect to f^T x is x_2, and 2 ∈ J.
Instead, we give an upper bound y on x for which f_max = f^T y, and we can find a feasible x where f^T x attains this maximum value. For all j ∈ J, we have U_j = +∞ and also V^(0)_j non-finite since L*_{m+j} = e_{m+j}. We will therefore adapt the matrix V^(0) to reflect this.

Definition 4.2 Let Ṽ be obtained from V^(0) by removing all columns j ∈ J.

Proposition 4.9 Let A, B, c, d satisfy Assumption 4.1 and f ∈ R^n. Let Ṽ be as defined in Definition 4.2. Suppose either c or d contains an integer and ∅ ≠ J ⊆ N. Then, the optimal objective value f_max is equal to f^T ⊗ y for y = Ṽ ⊗ 0.
Further, let j be an index such that f_max = f_j ⊗ y_j, and let i satisfy y_j = ṽ_ji. Then, an optimal solution is x_opt = Ṽ_i.
Proof From Proposition 4.2, any feasible x satisfies x = V^(0) ⊗ ν with 0 = U ⊗ ν. Let T denote the set of indices of the columns of V^(0) that are kept in Ṽ. Consider an arbitrary feasible solution x′ = V^(0) ⊗ ν′ and let μ be the subvector of ν′ with indices from T. Then, since the removed columns of V^(0) are never active, x′ is determined by μ alone and x′ ≤ Ṽ ⊗ 0 = y. We claim that there exists a feasible x such that f^T x = f^T y, and hence it is an optimal solution with f_max = f^T y. Indeed, let j ∈ N be any index such that f^T y = f_j ⊗ y_j. Let i ∈ T be an index such that v^(0)_ji = y_j. Then, by setting ν_i = 0 and ν_t, t ≠ i, to large enough integers, we obtain a feasible solution x = V^(0)_i which satisfies f^T x = f^T y.

We conclude by noting that all methods for solving the IMLOP under Property OneFP described in this paper are strongly polynomial (Corollary 4.4).

Proof (of Corollary 4.4) From A, B, c, d, we can calculate V^(0), Ṽ, and U in O((m + n + 1)^3) time by Remark 4.1. Then V^(0) ⊗ 0, V̂^(0) ⊗ 0, or Ṽ ⊗ 0 can be calculated in O(n(m + n + 1)) time. From this, we can calculate f_min or f_max in O(n) time. Finally, for IMLOP_max, we can find an optimal solution in O(m + n + 1) time.
In the cases described in Proposition 4.10, we can perform the necessary checks in O((m + n) 3 ) time.

An Example
Suppose we want to find f_min and f_max subject to the constraints x ∈ Z^4 and

( 3    0.5   −1.7   −2.5 ;  −3.7   −1.9   −2.1   −3.7 ) x ⊕ c = Bx ⊕ d.
Following the proof of this proposition, we see that the optimum is attained either for i = 3 or i = 4. For i = 3, this relates to columns 2, 4, or 6 of Ṽ, and hence the optimal solution can be obtained by setting either ν_2, ν_4, or ν_6 to 0. This yields x_opt = (−2, −2, 0, x_4)^T for any small enough x_4. If we instead choose i = 4, then we conclude that any column of Ṽ admits an optimal solution. Finally, observe that V̂^(0) can be obtained from Ṽ by removing the rows with indices in J. This is since A⁻ and B⁻ differ from A and B only in the columns with indices from J, meaning that L̂ = L[N − J] and L̂* = L[N − J]*.

Conclusions
In this paper, we presented a strongly polynomial method to determine whether an integer optimal solution exists to a max-linear optimization problem when the input matrices satisfy Property OneFP. We gave a necessary condition for the existence of an integer feasible solution and, further, showed that, under this condition, an integer optimal solution always exists. We described how to find an optimal solution in strongly polynomial time. Our solution methods can be used to describe many possible integer optimal solutions to the system. It remains open to determine necessary and sufficient conditions for the existence of an integer solution to a TSS/IMLOP when Property OneFP does not hold. This is one direction for possible future work, as is the construction of a polynomial time algorithm to find integer solutions to the TSS, or a proof that no such algorithm exists.
We restricted our attention to finding integer solutions without −∞, the zero element of the max-algebraic semiring, as this is more applicable to real-world examples. However, it would be interesting to study the set of integer solutions that do allow −∞ entries; it is expected that the generic case described in this paper will also allow integer solutions with −∞ entries to be found in strongly polynomial time.
At the time of writing, for TSSs which do not satisfy the generic property, it is unknown whether an integer solution can be found in polynomial time. If we remove the integrality requirement, then it is known that finding a solution to a max-algebraic TSS is equivalent to finding a solution to a mean payoff game [6]. Mean payoff games are a well-known class of problems in NP ∩ co-NP; it is expected that a polynomial solution method will be found in the future.

Proposition 4.2

Let A, B, c, d, V^(0), and U be as defined in (4.1) and Definition 4.1. Then, x ∈ Z^n is a feasible solution to IMLOP if and only if it satisfies x = V^(0) ⊗ ν, where 0 = U ⊗ ν, for some ν ∈ Z^{m+n+1}.

Assumption 4.1

We assume that the following are satisfied. (i) A, B ∈ R^{m×n}, c, d ∈ R^m. (ii) A′ := (A|c), B′ := (B|d), and A″, B″ are obtained from A′, B′ by appending the identity matrix I below. (iii) The pair (A″, B″) satisfies Property OneFP (and therefore also (A′, B′)). (iv) L is constructed from A″, B″ according to Corollary 4.1. (v) Without loss of generality, λ(L) = 0. (vi) V is the matrix containing the last n + 1 rows of L*.

Finding the Optimal Solution to IMLOP when L* is Finite

Theorem 4.1 Let A, B, c, d satisfy Assumption 4.1 and V^(0) be as in Definition 4.1.

Theorem 4.2

Let A, B, c, d satisfy Assumption 4.1 and V^(0) be as in Definition 4.1.

Proposition 4.8

Let A, B, c, d satisfy Assumption 4.1 and f ∈ R^n. Suppose J = N. If c = d, then f_min = −∞. If, instead, c ≠ d, then IMLOP_min is infeasible.

Proposition 4.10

Let A, B, c, d satisfy Assumption 4.1 and f ∈ R^n. Suppose neither c nor d contains an integer entry. If there exists x ∈ Z^n such that Ax = Bx, then f_max = +∞. If no such x exists, then IMLOP_max is infeasible.

Proof Follows from Theorem 3.2 and the fact that c ≠ d, since they do not have any entries with the same fractional part.

Corollary 4.4

Given input A, B, c, d satisfying Assumption 4.1 and f ∈ R^n, both IMLOP_min and IMLOP_max can be solved in O((m + n)^3) time.
Note that Q contains an ε column if and only if, for some j, neither (A|c)_j nor (B|d)_j contains an integer entry. Observe that, for each j ∈ {1, …, n + 1}, either L*_{m+j} = e_{m+j} or L*_{m+j} is finite. Further, L*_t is finite for all t ∈ M since P and R are finite.

Corollary 4.2 Let A, B, c, d satisfy Assumption 4.1. L* is finite if and only if, for all j ∈ {1, …, n + 1}, either (A|c)_j or (B|d)_j contains an integer entry.