Abstract
Let \(A \in \mathbb {Z}^{m \times n}\) be an integral matrix and a, b, \(c \in \mathbb {Z}\) satisfy a ≥ b ≥ c ≥ 0. The question is to recognize whether A is {a,b,c}-modular, i.e., whether the set of n × n subdeterminants of A in absolute value is {a,b,c}. We will succeed in solving this problem in polynomial time unless A possesses a duplicative relation, that is, A has nonzero n × n subdeterminants k_{1} and k_{2} satisfying 2 ⋅k_{1} = k_{2}. This is an extension of the well-known recognition algorithm for totally unimodular matrices. As a consequence of our analysis, we present a polynomial time algorithm to solve integer programs in standard form over {a,b,c}-modular constraint matrices for any constants a, b and c.
Introduction
A matrix is called totally unimodular (TU) if all of its subdeterminants are equal to 0, 1 or − 1. Within the past 60 years the community has established a deep and beautiful theory about TU matrices. A landmark result in the understanding of such matrices is Seymour’s decomposition theorem [19]. It shows that TU matrices arise from network matrices and two special matrices using row, column, transposition, pivoting, and so-called k-sum operations. As a consequence of this theorem it is possible to recognize in polynomial time whether a given matrix is TU [18, 22]. An implementation of the algorithm in [22] by Walter and Truemper [25] returns a certificate if A is not TU: For an input matrix with entries in {0,± 1}, the algorithm finds a submatrix \(\tilde {A}\) which is minimal in the sense that \(|\det (\tilde {A})| = 2\) and every proper submatrix of \(\tilde {A}\) is TU. We refer to Schrijver [18] for a textbook exposition of Seymour’s decomposition theorem, a recognition algorithm arising therefrom and further material on TU matrices.
There is a well-established relationship between totally unimodular and unimodular matrices, i.e., matrices whose n × n subdeterminants are equal to 0, 1 or − 1. In analogy to this we define for \(A \in \mathbb {Z}^{m \times n}\) and m ≥ n,
$$ D(A) := \left\{ |\det(A_{I,\cdot})| \colon I \subseteq [m],\ |I| = n \right\}, $$
the set of all n × n subdeterminants of A in absolute value, where A_{I,⋅} is the submatrix formed by selecting all rows with indices in I. It follows straightforwardly from the recognition algorithm for TU matrices that one can efficiently decide whether \(D(A)\subseteq \{1,0\}\). A technique in [2, Section 3] allows us to recognize in polynomial time whether \(D(A) \subseteq \{2, 0\}\). If all n × n subdeterminants of A are nonzero, the results in [1] can be applied to calculate D(A) given that \(\max \{ k\colon k\in D(A)\}\) is constant (cf. Lemma 1). Nonetheless, with the exception of these results we are not aware of other instances for which it is known how to determine D(A) in polynomial time.
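For small instances, the definition above can be checked by direct enumeration. The following sketch (the function names `det` and `D` are ours, not from the paper) computes D(A) by brute force; it is exponential in general and only meant to make the definition concrete:

```python
from itertools import combinations

def det(M):
    # integer determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def D(A):
    # set of absolute values of all n x n subdeterminants of an m x n matrix A
    n = len(A[0])
    return {abs(det([A[i] for i in I])) for I in combinations(range(len(A)), n)}

# a small {2,1,0}-modular example
A = [[1, 0], [0, 1], [1, 1], [2, 0]]
print(sorted(D(A)))  # [0, 1, 2]
```

Note that this matrix already exhibits a duplicative relation (2 ⋅ 1 = 2), the case excluded by the main recognition result.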
The main motivation for the study of matrices with bounded subdeterminants comes from integer optimization problems (IPs). It is a well-known fact that IPs of the form \({\max } \{c^{\mathrm {T}}x \colon Ax \leq b,~x \in \mathbb {Z}^{n}\}\) for \(A \in \mathbb {Z}^{m \times n}\) of full column rank, \(b \in \mathbb {Z}^{m}\) and \(c \in \mathbb {Z}^{n}\) can be solved efficiently if \(D(A) \subseteq \{1,0\}\), i.e., if A is unimodular. This naturally leads to the question whether these problems remain efficiently solvable when the assumptions on D(A) are further relaxed. Quite recently in [2] it was shown that when \(D(A) \subseteq \{2,1,0\}\) and rank(A) = n, integer optimization problems can be solved in strongly polynomial time. Recent results have also led to an understanding of IPs when A is nondegenerate, i.e., if 0∉D(A). The foundation to study the nondegenerate case was laid by Veselov and Chirkov [23]. They describe a polynomial time algorithm to solve IPs if \(D(A) \subseteq \{2,1\}\). In [1] the authors showed that IPs over nondegenerate constraint matrices are solvable in polynomial time if the largest n × n subdeterminant of the constraint matrix is bounded by a constant.
The role of bounded subdeterminants in complexity questions and in the structure of IPs and LPs has also been studied in [8, 10, 12, 17], as well as in the context of combinatorial problems in [5, 6, 16]. The sizes of subdeterminants also play an important role when it comes to the investigation of the diameter of polyhedra, see [7] and [3].
Our Results
A matrix \(A \in \mathbb {Z}^{m \times n}\), m ≥ n, is called {a,b,c}-modular if D(A) = {a,b,c}, where a ≥ b ≥ c ≥ 0.^{Footnote 1} The paper presents three main results. First, we prove the following structural result for a subclass of {a,b,0}-modular matrices.
Theorem 1 (Decomposition Property)
Let a ≥ b > 0, \(\gcd (\{a,b\}) = 1\) and assume that (a,b)≠(2,1). Using row permutations, multiplications of rows by − 1 and elementary column operations, any {a,b,0}-modular matrix \(A \in \mathbb {Z}^{m \times n}\) can be brought into a block structure of the form
$$ \left[\begin{array}{ccc} L & 0 & {}^{0}\!/_{a} \\ 0 & R & {}^{0}\!/_{b} \end{array}\right] \qquad (1) $$
in time polynomial in n, m and \(\log \|A\|_{\infty }\), where \(L\in \mathbb {Z}^{m_{1} \times n_{1}}\) and \(R\in \mathbb {Z}^{m_{2} \times n_{2}}\) are TU, n_{1} + n_{2} = n − 1, m_{1} + m_{2} = m. In the representation above the rightmost column has entries in {0,a} and {0,b}, respectively. The matrix \(\left [\begin {array}{cc} L & 0 \\ 0 & R \end {array}\right ]\) contains the (n − 1)-dimensional unit matrix as a submatrix.
The first n − 1 columns of (1) are TU since they form a 1-sum of two TU matrices (see [18, Chapter 19.4]). This structural property lies at the core of the following recognition algorithm. We say that a matrix A possesses a duplicative relation if it has nonzero n × n subdeterminants k_{1} and k_{2} satisfying 2 ⋅k_{1} = k_{2}.
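As an illustration of this block structure (our own toy instance, not taken from the paper), the matrix below is {3,1,0}-modular and already in the form of Theorem 1 with a = 3, b = 1, L = (1,1)^T and R = (1,1,1)^T; a brute-force check confirms both the modularity and the total unimodularity of the first n − 1 columns:

```python
from itertools import combinations

def det(M):
    # integer determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# block form (1) with a = 3, b = 1: L = [1, 1]^T, R = [1, 1, 1]^T
A = [[1, 0, 3],
     [1, 0, 0],
     [0, 1, 1],
     [0, 1, 0],
     [0, 1, 0]]

# all 3 x 3 subdeterminants in absolute value
dets = {abs(det([A[i] for i in I])) for I in combinations(range(5), 3)}
assert dets == {0, 1, 3}  # A is {3,1,0}-modular

# the first n-1 columns form a 1-sum of TU matrices: every square
# subdeterminant of A[:, :2] lies in {0, 1, -1}
first = [row[:2] for row in A]
for size in (1, 2):
    for I in combinations(range(5), size):
        for J in combinations(range(2), size):
            assert abs(det([[first[i][j] for j in J] for i in I])) <= 1
```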
Theorem 2 (Recognition Algorithm)
There exists an algorithm that solves the following recognition problem in time polynomial in n, m and \(\log \|A\|_{\infty }\): Either, calculate D(A), or give a certificate that |D(A)|≥ 4, or return a duplicative relation.
For instance, Theorem 2 cannot be applied to check whether a matrix is {4,2,0}-modular, but it can be applied to check whether a matrix is {3,1,0}- or {6,4,0}-modular. More specifically, Theorem 2 recognizes {a,b,c}-modular matrices unless (a,b,c) = (2 ⋅ k,k,0), \(k \in \mathbb {Z}_{\geq 1}\). In particular, this paper does not give a contribution as to whether so-called bimodular matrices (the case k = 1) can be recognized efficiently. This is because Theorem 1 excludes the case (a,b) = (2,1).
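Whether a candidate set of subdeterminant values admits a duplicative relation is a one-line check (the helper name is ours):

```python
def has_duplicative_relation(values):
    # a set of subdeterminant values has a duplicative relation if it
    # contains nonzero k1, k2 with 2 * k1 = k2
    nonzero = {v for v in values if v != 0}
    return any(2 * k in nonzero for k in nonzero)

print(has_duplicative_relation({4, 2, 0}))  # True  -> not covered by Theorem 2
print(has_duplicative_relation({3, 1, 0}))  # False -> covered
print(has_duplicative_relation({6, 4, 0}))  # False -> covered
```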
The decomposition property established in Theorem 1 is a major ingredient for the following optimization algorithm to solve standard form IPs over {a,b,c}-modular constraint matrices for any constant a ≥ b ≥ c ≥ 0.
Theorem 3 (Optimization Algorithm)
Consider a standard form integer program of the form
$$ \max \left\{c^{\mathrm{T}} x \colon Bx = b,\ x \geq 0,\ x \in \mathbb{Z}^{n}\right\} \qquad (2) $$
for \(b \in \mathbb {Z}^{m}\), \(c \in \mathbb {Z}^{n}\) and \(B \in \mathbb {Z}^{m \times n}\) of full row rank, where D(B^{T}) is constant, i.e., \(\max \{ k\colon k\in D(B^{\mathrm {T}})\}\) is constant.^{Footnote 2} Then, in time polynomial in n, m and the encoding size of the input data, one can solve (2) or output that |D(B^{T})|≥ 4.
Notably, in Theorem 3, the assumption that D(B^{T}) is constant can be dropped if B is degenerate, i.e., if 0 ∈ D(B^{T}).
Three Examples of {a,b,c}-Modular Matrices

Generalized network flow. Let G = (V,E) be a directed graph whose vertices can be partitioned as V = S ∪{v}∪ T such that no arc runs between S and T, and no arc runs from v to S. Consider a generalized network flow problem in G, where s ∈ S and t ∈ T, with capacities \(u \colon E \rightarrow \mathbb {Z}_{> 0}\) and gains
$$ \gamma(e) := \begin{cases} \frac{a}{b} & \text{ if } e \text{ runs from } S \text{ to}\ v,\\ 1 & \text{otherwise}, \end{cases} $$where a, \(b \in \mathbb {Z}_{> 0}\). Then, the natural formulation of finding a maximal s-t flow in G with respect to u and γ is an integer optimization problem whose constraint matrix A satisfies \(D(A^{\mathrm {T}}) \subseteq \{a,b,0\}\) after multiplying the constraint corresponding to v by b.
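The following minimal sketch (our own toy digraph; the capacity constraints are omitted for brevity) builds the conservation rows for S = {s,u}, the special vertex v and T = {t} with arcs s→u, u→v, s→v, v→t, multiplies the row of v by b, and verifies D(A^T) ⊆ {a,b,0} for a = 3, b = 2:

```python
from itertools import combinations

def det2(M):
    # determinant of a 2 x 2 integer matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

a, b = 3, 2
# columns: arcs (s->u, u->v, s->v, v->t); rows: conservation at u and v,
# the row of v already multiplied by b to clear the gain a/b
A = [[1, -1, 0, 0],   # u: flow in from s equals flow out to v
     [0, a, a, -b]]   # v: a*(in from u) + a*(in from S) = b*(out to t)

AT = [list(col) for col in zip(*A)]  # transpose, a 4 x 2 matrix with m >= n
dets = {abs(det2([AT[i], AT[j]])) for i, j in combinations(range(4), 2)}
print(sorted(dets))  # [0, 2, 3] -- a subset of {a, b, 0}
```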

Perfect d-matching. Let G = (V,E) be an undirected graph with edge capacities \(u \colon E \rightarrow \mathbb {Z}_{> 0} \cup \{\infty \}\), weights \(c \colon E \rightarrow \mathbb {R}\) and numbers \(d \colon V \rightarrow \mathbb {Z}_{> 0}\). Then, the perfect d-matching problem is the problem of finding a function \(f \colon E \rightarrow \mathbb {Z}_{\geq 0}\) of maximal cost \({\sum }_{e \in E} f(e) \cdot c(e)\) which satisfies f(e) ≤ u(e) for all e ∈ E and \({\sum }_{e \in \delta (v)} f(e) = d(v)\) for all v ∈ V, see [14, 15].
Consider the following variation of this problem: Let G_{1} = (V_{1},E_{1}) and G_{2} = (V_{2},E_{2}) be two bipartite graphs with edge capacities \(u \colon E_{1} \cup E_{2} \rightarrow \mathbb {Z}_{> 0} \cup \{\infty \}\), weights \(c \colon E_{1} \cup E_{2} \rightarrow \mathbb {R}\) and numbers \(d \colon V_{1} \cup V_{2} \rightarrow \mathbb {Z}_{> 0}\). Let a, \(b \in \mathbb {Z}_{> 0}\) and \(g \in \mathbb {Z}\). Fix edges e_{1} ∈ E_{1} and e_{2} ∈ E_{2}. Then, the natural formulation of finding an optimal, perfect d-matching f in (V_{1} ∪ V_{2},E_{1} ∪ E_{2}) w.r.t. u, c and d and under the additional constraint that
$$ \pm a \cdot f(e_{1}) \pm b \cdot f(e_{2}) = g, $$is an integer optimization problem whose constraint matrix A satisfies \(D(A^{\mathrm {T}}) \subseteq \{a,b,0\}\). The signs of a and b can be different.

Edge-weighted vertex cover. Given an undirected graph G = (V,E) and weights \(w \colon E \rightarrow \mathbb {Z}_{\geq 0}\), an edge-weighted vertex cover is a function \(f \colon V \rightarrow \mathbb {Z}_{\geq 0}\) such that for each e = {u,v}∈ E, f(u) + f(v) ≥ w(e).
Consider the following variation of this problem: Let G_{1} = (V_{1},E_{1}) and G_{2} = (V_{2},E_{2}) be two bipartite graphs with weights \(w \colon E_{1} \cup E_{2} \rightarrow \mathbb {Z}_{\geq 0}\) and costs \(c \colon V_{1} \cup V_{2} \rightarrow \mathbb {R}_{\geq 0}\). Let a, \(b \in \mathbb {Z}_{> 0}\). Then, the natural formulation of finding a minimal edge-weighted vertex cover f in G_{1} ∪ G_{2} w.r.t. c, where the constraints are replaced by
$$ \begin{array}{@{}rcl@{}} f(u) + f(v) \pm a \cdot z &\geq &w(e), \quad \forall e = \{u,v\} \in E_{1},\\ f(u) + f(v) \pm b \cdot z &\geq & w(e), \quad \forall e = \{u,v\} \in E_{2}, \end{array} $$is an integer optimization problem whose constraint matrix is {a,b}- or {a,b,0}-modular.^{Footnote 3} Note in particular that the signs of the two constraints can be different.
Notation and Preliminaries
For \(k \in \mathbb {Z}_{\geq 1}\), [k] := {1,…,k}. \(\mathcal I_{n}\) is the n-dimensional unit matrix, where we leave out the subscript if the dimension is clear from the context. For a matrix \(A \in \mathbb {Z}^{m \times n}\), we denote by A_{i,⋅} the ith row of A. For a subset I of [m], A_{I,⋅} is the submatrix formed by selecting all rows with indices in I, in increasing order. An analogous notation is used for the columns of A. For \(k \in \mathbb {Z}\), we write I_{k} := {i ∈ [m]: A_{i,n} = k}, the indices of the rows whose nth entry is equal to k. Set \(\|A\|_{\infty }:=\max _{i \in [m], j \in [n]} |A_{i,j}|\). For simplicity, we assume throughout the document that the input matrix A to any recognition algorithm satisfies m ≥ n and rank(A) = n as rank(A) < n implies that D(A) = {0}. Left-out entries in figures and illustrations are equal to zero.
D(A) is preserved under elementary column operations, permutations of rows and multiplications of rows by − 1. By a series of elementary column operations on A, any nonsingular n × n submatrix B of A can be transformed to its Hermite normal form (HNF), in which B becomes a lower-triangular, nonnegative submatrix with the property that each of its rows has a unique maximum entry on its main diagonal [18]. This can be done in time polynomial in n, m and \(\log \|A\|_{\infty }\) [21].
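A minimal column-operation HNF for nonsingular square integer matrices can be sketched as follows (our own illustrative implementation; it runs Euclid's algorithm along each row using elementary column operations only, so |det| and hence D are preserved, but it makes no claim to the polynomial bit-size control of [21]):

```python
def hnf_columns(B):
    # Hermite normal form via elementary column operations: the result is
    # lower triangular with positive diagonal, and in each row the entries
    # left of the diagonal lie between 0 and the diagonal entry minus one
    n = len(B)
    H = [row[:] for row in B]

    def add_col(j, k, q):  # column j += q * column k
        for r in range(n):
            H[r][j] += q * H[r][k]

    for i in range(n):
        while any(H[i][j] != 0 for j in range(i + 1, n)):
            # move the nonzero entry of smallest absolute value to the diagonal
            j = min((j for j in range(i, n) if H[i][j] != 0),
                    key=lambda j: abs(H[i][j]))
            for r in range(n):
                H[r][i], H[r][j] = H[r][j], H[r][i]
            for j in range(i + 1, n):
                add_col(j, i, -(H[i][j] // H[i][i]))
        if H[i][i] < 0:
            for r in range(n):
                H[r][i] = -H[r][i]
        for j in range(i):  # reduce entries left of the diagonal
            add_col(j, i, -(H[i][j] // H[i][i]))
    return H

print(hnf_columns([[2, 4], [1, 3]]))  # [[2, 0], [0, 1]]
```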
At various occasions we will make use of the following simple adaptation of the HNF described in [1, Section 3]. Begin by choosing a nonsingular n × n submatrix B of A: Either the choice of B will be clear from the context or otherwise, choose B to be any nonsingular n × n submatrix of A. Permute the rows of A such that A_{[n],⋅} = B. Apply elementary column operations to A such that B is in HNF. After additional row and column permutations and multiplications of rows by − 1, A attains a form which we refer to as (3): the top n × n block is lower triangular with diagonal entries 1,…,1,δ_{1},…,δ_{l},A_{n,n}, where A_{⋅,n} ≥ 0, \(\det (B) = ({\prod }_{i=1}^{l} \delta _{i}) \cdot A_{n,n}\), all remaining entries of the rows with diagonal entry larger than one (marked by ∗) are numbers between 0 and the corresponding entry on the main diagonal minus one, and δ_{i} ≥ 2 for all i ∈ [l]. In particular, the rows of A_{[n],⋅} whose entries on the main diagonal are strictly larger than one are at positions n − l,…,n.
We note that it is not difficult to efficiently recognize nondegenerate matrices given a constant upper bound on D(A). This will allow us to exclude the nondegenerate case in all subsequent algorithms. We wish to emphasize that the results in [1] can be applied to solve this task given that \(\max \limits \{ k\colon k\in D(A) \}\) is constant.
Lemma 1
Given a constant \(d\in \mathbb {Z}\), there exists an algorithm that solves the following recognition problem in time polynomial in n, m and \(\log \|A\|_{\infty }\): Either, calculate D(A), or give a certificate that |D(A)|≥ d + 1, or that 0 ∈ D(A).
Proof
We first prove that for n ≥ 2, any nondegenerate matrix \(A \in \mathbb {Z}^{m \times n}\) with |D(A)|≤ d has at most (n − 1) + d ⋅ (2 ⋅ d + 1) rows. Given such a matrix A, transform it to (3). We introduce the following family of 2 × 2 subdeterminants: For any i,j ≥ n − 1, i≠j, set
$$ \theta_{i,j} := \det \left[\begin{array}{cc} A_{i,n-1} & A_{i,n} \\ A_{j,n-1} & A_{j,n} \end{array}\right]. $$
Any such subdeterminant can be extended to an n × n subdeterminant Θ_{i,j} of A by appending rows 1,…,n − 2 and columns 1,…,n − 2 to \(\theta _{i,j}\). Applying Laplace expansion yields \({{\varTheta }}_{i,j} = ({\prod }_{k=1}^{l-1} \delta _{k}) \cdot \theta _{i,j}\).
Starting with row n, partition the rows of A into submatrices (bins) A[1], …, A[s] such that the rows with the same entry in the nth column are in the same bin. Thus, s is the number of different entries in the nth column, starting from the nth row. Denote by I[i] the indices of the rows in A[i], i.e., A_{I[i],⋅} = A[i]. In what follows, we first derive a bound on s and subsequently bound the number of rows in each bin.
Regarding the former, as A_{n− 1,n} = 0, it holds that \(\theta _{n-1,j} = \delta _{l} \cdot A_{j,n}\) for any j ≥ n. This implies that A_{j,n}≠ 0 for any j ≥ n as otherwise, Θ_{n− 1,j} = 0, contradicting the nondegeneracy of A. As we have assumed that A_{⋅,n} ≥ 0, it follows that A_{⋅,n} > 0. Therefore, \(\theta _{n-1,j} > 0\) and Θ_{n− 1,j} > 0 for any j ≥ n. Varying j among different bins induces pairwise different and positive subdeterminants Θ_{n− 1,j}, i.e., s ≤ d.
Next, we derive a bound on the number of rows of each bin. Choose an arbitrary bin k ∈ [s]. Let i be the smallest element of I[k] and let j_{1}, j_{2} ∈ I[k] ∖{i}, j_{1}≠j_{2}. As \(A_{i,n} = A_{j_{1},n} = A_{j_{2},n}\), it holds that
$$ \begin{array}{lrcl} \text{(i)} & \theta_{j_{1},j_{2}} &=& A_{i,n} \cdot (A_{j_{1},n-1} - A_{j_{2},n-1}),\\ \text{(ii)} & \theta_{i,j_{1}} &=& A_{i,n} \cdot (A_{i,n-1} - A_{j_{1},n-1}),\\ \text{(iii)} & \theta_{i,j_{2}} &=& A_{i,n} \cdot (A_{i,n-1} - A_{j_{2},n-1}). \end{array} $$
From (i), it follows that \(A_{j_{1},n-1} \neq A_{j_{2},n-1}\) as otherwise, \(\theta _{j_{1},j_{2}} = 0\). Regarding (ii) and (iii), note that \(A_{j_{1},n-1} \neq A_{j_{2},n-1} \Leftrightarrow \theta _{i,j_{1}} \neq \theta _{i,j_{2}}\). This is equivalent to \({{\varTheta }}_{i,j_{1}} \neq {{\varTheta }}_{i,j_{2}}\). In other words, every element of I[k] ∖{i} yields a different n × n subdeterminant. Since those n × n subdeterminants are not necessarily different in absolute value, |I[k] ∖{i}|≤ 2 ⋅ d and |I[k]|≤ 2 ⋅ d + 1. Combining this bound with our bound on the number of bins, we obtain m ≤ (n − 1) + d ⋅ (2 ⋅ d + 1).
From the result above we straightforwardly deduce the following simple recognition algorithm: Let A be the input matrix. If n = 1, perform an exhaustive search. If m ≤ (n − 1) + d ⋅ (2 ⋅ d + 1), enumerate all \(\binom{(n-1)+d \cdot (2 \cdot d+1)}{n}\) n × n subdeterminants. Otherwise, we find at least d + 1 elements of D(A) or an n × n subdeterminant equal to zero as described above. □
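The resulting procedure can be sketched as follows (an illustrative implementation under the assumption that d is constant; the early-stop branch mirrors the certificate case of the lemma):

```python
from itertools import combinations

def det(M):
    # integer determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def recognize(A, d):
    # Lemma 1 sketch: if m <= (n-1) + d*(2d+1), enumerating all n x n
    # subdeterminants is polynomial for constant d; otherwise the proof
    # guarantees that scanning subdeterminants hits a zero or d+1 distinct
    # absolute values, so we may stop early with a certificate
    m, n = len(A), len(A[0])
    found = set()
    for I in combinations(range(m), n):
        found.add(abs(det([A[i] for i in I])))
        if m > (n - 1) + d * (2 * d + 1) and (0 in found or len(found) > d):
            return found  # certificate: 0 in D(A) or |D(A)| >= d+1
    return found  # the full set D(A)

A = [[1, 0], [0, 1], [1, 1], [2, 0]]
print(sorted(recognize(A, 2)))  # [0, 1, 2]
```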
Proof of Theorem 1
Transform A to (3), thus A_{⋅,n} ≥ 0. As a first step we show that we may assume that A_{n,n} > 1 without loss of generality. For this purpose, assume that A_{n,n} = 1, i.e., that l = 0. This implies that \(A_{[n],\cdot } = \mathcal I_{n}\). Consequently, any nonsingular submatrix B of A can be extended to an n × n submatrix by (a) appending unit vectors from the topmost n rows of A to B and (b) for each unit vector appended in step (a), appending the column to B in which this unit vector has its nonzero entry. By Laplace expansion, the n × n submatrix we obtain has the same determinant in absolute value. Therefore, if we identify any submatrix of A with determinant larger than one in absolute value, we can transform A once more to (3) with respect to its corresponding n × n submatrix which yields A_{n,n} > 1 as desired. To find a subdeterminant of A of absolute value larger than one, if present, test A for total unimodularity. If the test fails, it returns a desired subdeterminant. If the test returns that A is TU, then a = b = 1 and A is already of the form \(\left [\begin {array}{cc}L & {}^{0}\!/_{1} \end {array}\right ]\) for L TU.
The nth column is not divisible by any integer larger than one as otherwise, all n × n subdeterminants of A would be divisible by this integer, contradicting \(\gcd (\{a,b\}) = 1\). In particular, since A_{n,n} > 1, A_{⋅,n} is not divisible by A_{n,n}. This implies that there exists an entry A_{k,n}≠ 0 such that A_{k,n}≠A_{n,n}. Thus, there exist two n × n subdeterminants, \(\det (A_{[n],\cdot })\) and \(\det (A_{[n1] \cup k,\cdot })\), of different absolute value. This allows us to draw two conclusions: First, the precondition A_{n,n} > 1 which we have established in the previous paragraph implies a > b. We may therefore assume for the rest of the proof that a ≥ 3 as {a,b}≠{2,1}. Secondly, it follows that l = 0: For the purpose of contradiction assume that l ≥ 1. Then the aforementioned two subdeterminants \(\det (A_{[n],\cdot })\) and \(\det (A_{[n1] \cup k,\cdot })\) are both divisible by \({\prod }_{i=1}^{l} \delta _{i} > 1\). Since one of those subdeterminants must be equal to ± a and the other must be equal to ± b, this contradicts our assumption of \(\gcd (\{a,b\}) = 1\).
As a consequence of l = 0, the topmost n − 1 rows of A form unit vectors. Thus, any subdeterminant of rows n,…,m of A which includes elements of the nth column can be extended to an n × n subdeterminant of the same absolute value by appending an appropriate subset of these unit vectors. To further analyze the structure of A we will study the 2 × 2 subdeterminants
$$ \theta^{h}_{i,j} := \det \left[\begin{array}{cc} A_{i,h} & A_{i,n} \\ A_{j,h} & A_{j,n} \end{array}\right] = A_{i,h} \cdot A_{j,n} - A_{i,n} \cdot A_{j,h}, $$
where i,j ≥ n, i≠j and h ≤ n − 1. It holds that \({\theta }_{i,j}^{h} \in \{\pm a, \pm b, 0\}\).
Claim 1
There exists a sequence of elementary column operations and row permutations which turn A into the form
$$ \left[\begin{array}{cc} {}^{0}\!/_{\pm 1} & a \\ {}^{0}\!/_{\pm 1} & b \\ {}^{0}\!/_{\pm 1} & 0 \end{array}\right], \qquad (4) $$
where \(\left [\begin {array}{cc}{}^{0}\!/_{\pm 1} & a \end {array}\right ]\), \(\left [\begin {array}{cc}{}^{0}\!/_{\pm 1} & b \end {array}\right ]\) and \(\left [\begin {array}{cc}{}^{0}\!/_{\pm 1} & 0 \end {array}\right ]\) are submatrices consisting of rows whose first n − 1 entries lie in {0,± 1}, and whose nth entry is equal to a, b or 0, respectively.
Proof of Claim 1
We start by analyzing the nth column of A. To this end, note that \(\det (A_{[n1] \cup k,\cdot }) = A_{k,n} \in \{a,b,0\}\) for k ≥ n. It follows that \(A_{\cdot ,n} \in \{a,b,0\}^{m}\) as A_{⋅,n} ≥ 0. In addition, by what we have observed two paragraphs earlier, at least one entry of A_{⋅,n} must be equal to a and at least one entry must be equal to b. Sort the rows of A by their respective entry in the nth column as in (4).
In the remaining proof we show that after column operations, \(A_{\cdot ,[n-1]} \in \{0, \pm 1\}^{m \times (n-1)}\). To this end, let h ∈ [n − 1] be an arbitrary column index. Recall that I_{k} := {i ∈ [m]: A_{i,n} = k}. We begin by noting a few properties which will be used to prove the claim. For i ∈ I_{a} and j ∈ I_{b}, it holds that \(\theta ^{h}_{i,j} = b \cdot A_{i,h} - a \cdot A_{j,h} \in \{\pm a, \pm b, 0\}\). These are Diophantine equations which are solved by
$$ (A_{i,h}, A_{j,h}) \in \left\{(\pm 1 + k \cdot a,\ k \cdot b),\ (k \cdot a,\ \mp 1 + k \cdot b),\ (k \cdot a,\ k \cdot b) \colon k \in \mathbb{Z}\right\}. \qquad (5) $$
Furthermore, for any i_{1},i_{2} ∈ I_{a}, it holds that \(\theta ^{h}_{i_{1},i_{2}} = a \cdot (A_{i_{1},h} - A_{i_{2},h})\). Since this quantity is a multiple of a and since \({\theta }_{i_{1} ,i_{2}}^{h}\in \{\pm a, \pm b, 0\}\), it follows that
$$ A_{i_{1},h} - A_{i_{2},h} \in \{\pm 1, 0\} \quad \text{for all } i_{1}, i_{2} \in I_{a}. \qquad (6) $$
We now perform a column operation on A_{⋅,h}: Let us fix arbitrary indices p ∈ I_{a} and q ∈ I_{b}. The pair (A_{p,h},A_{q,h}) solves one of the three Diophantine equations for a fixed k. Add (−k) ⋅ A_{⋅,n} to A_{⋅,h}. Now, (A_{p,h},A_{q,h}) ∈{(0,∓ 1),(± 1,0),(0,0)}. We claim that as a consequence of this column operation, \(A_{\cdot ,h} \in \{0, \pm 1\}^{m}\).
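The three solution families and the effect of this column operation can be checked numerically (the helper name `shift` is ours; we assume gcd(a,b) = 1 and a pair with b·x − a·y ∈ {±a, ±b, 0}):

```python
def shift(x, y, a, b):
    # given a pair with b*x - a*y in {a, -a, b, -b, 0}, recover the k of its
    # solution family and subtract k*(a, b), as in the column operation
    t = b * x - a * y
    assert t in {a, -a, b, -b, 0}
    # in the families (+-1 + k*a, k*b) one has y = k*b exactly;
    # in (k*a, -+1 + k*b) and (k*a, k*b) one has x = k*a exactly
    k = y // b if t in {b, -b} else x // a
    return x - k * a, y - k * b

# every solution lands on one of the five pairs next to the origin
results = set()
a, b = 5, 3
for k in range(-3, 4):
    for x, y in [(1 + k * a, k * b), (-1 + k * a, k * b),
                 (k * a, k * b - 1), (k * a, k * b + 1), (k * a, k * b)]:
        results.add(shift(x, y, a, b))
print(sorted(results))  # the five pairs near the origin
```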
We begin by showing that \(A_{I_{a},h}\) and \(A_{I_{b},h}\) have entries in {0,± 1}. First, assume for the purpose of contradiction that there exists j ∈ I_{b} such that |A_{j,h}| > 1. This implies that the pair (A_{p,h},A_{j,h}) satisfies (5) for |k|≥ 1. As a ≥ 3, this contradicts that A_{p,h} ∈{0,± 1}. Secondly, assume for the purpose of contradiction that there is i ∈ I_{a} such that |A_{i,h}| > 1.^{Footnote 4} As |A_{p,h}|≤ 1, it follows from (6) that |A_{i,h}| = 2 and that |A_{p,h}| = 1. Therefore, as either A_{p,h} or A_{q,h} must be equal to zero, A_{q,h} = 0. This implies that \(\theta ^{h}_{i,q} = \pm 2 \cdot b\), which is a contradiction to \(\theta ^{h}_{i ,q}\in \{\pm a, \pm b, 0\}\) as A has no duplicative relation, i.e., 2 ⋅ b≠a.
It remains to prove that \(A_{I_{0},h}\) has entries in {0,± 1}. For the purpose of contradiction, assume that there exists i ∈ I_{0}, i ≥ n, such that |A_{i,h}|≥ 2. Choose any s ∈ I_{a}. Then,
$$ \theta^{h}_{s,i} = A_{s,h} \cdot A_{i,n} - A_{s,n} \cdot A_{i,h} = - a \cdot A_{i,h}, $$
which is larger than a in absolute value, a contradiction. □
In the next claim, we establish the desired block structure (1). We will show afterwards that the blocks L and R are TU.
Claim 2
There exists a sequence of row and column permutations which turn A into (1) for matrices L and R with entries in {0,± 1}.
Proof of Claim 2
For reasons of simplicity, we assume that A has no rows of the form \(\left [\begin {array}{cccc} 0 & {\cdots } & 0 & a \end {array}\right ]\) or \(\left [\begin {array}{cccc} 0 & {\cdots } & 0 & b \end {array}\right ]\) as such rows can be appended to A while preserving the properties stated in this claim. We construct an auxiliary graph G = (V,E), where we introduce a vertex for each nonzero entry of A_{⋅,[n− 1]} and connect two vertices if they share the same row or column index. Formally, set
$$ \begin{array}{rcl} V &:=& \{(i,j) \in [m] \times [n-1] \colon A_{i,j} \neq 0\},\\ E &:=& \left\{\{(i,j),(i^{\prime},j^{\prime})\} \subseteq V \colon (i,j) \neq (i^{\prime},j^{\prime}) \text{ and } (i = i^{\prime} \text{ or } j = j^{\prime})\right\}. \end{array} $$
Let \(K_{1}, \ldots , K_{k} \subseteq V\) be the vertex sets corresponding to the connected components in G. For each l ∈ [k], set
$$ \mathbf{I}_{l} := \{i \colon (i,j) \in K_{l}\} \quad \text{and} \quad \mathbf{J}_{l} := \{j \colon (i,j) \in K_{l}\}. $$
These index sets form a partition of [m], resp. [n − 1]: Since every row of A_{⋅,[n− 1]} is nonzero, \(\bigcup _{l \in [k]} \mathbf {I}_{l} = [m]\) and since rank(A) = n, every column of A contains a nonzero entry, i.e., \(\bigcup _{l \in [k]} \mathbf {J}_{l} = [n-1]\). Furthermore, by construction, it holds that I_{p} ∩I_{q} = ∅ and J_{p} ∩J_{q} = ∅ for all p≠q. The entries A_{i,j} for which \((i,j) \notin \bigcup _{l\in [k]} (\mathbf {I}_{l} \times \mathbf {J}_{l})\) are equal to zero. Therefore, sorting the rows and columns of A_{⋅,[n− 1]} with respect to the partition formed by I_{l}, resp. J_{l}, l ∈ [k], yields
$$ A_{\cdot,[n-1]} = \left[\begin{array}{ccc} A_{\mathbf{I}_{1},\mathbf{J}_{1}} & & \\ & \ddots & \\ & & A_{\mathbf{I}_{k},\mathbf{J}_{k}} \end{array}\right]. \qquad (7) $$
In what follows we show that for all l ∈ [k], it holds that either I_{l} ∩ I_{a} = ∅ or that I_{l} ∩ I_{b} = ∅. From this property the claim readily follows: We obtain the form (1) from (7) by permuting the rows and columns such that the blocks which come first are those which correspond to the connected components K_{l} with I_{l} ∩ I_{a}≠∅. For the purpose of contradiction, assume that there exists l ∈ [k] such that I_{l} ∩ I_{a}≠∅ and I_{l} ∩ I_{b}≠∅. Set \({K_{l}^{a}} := \{(i,j) \in K_{l} \colon i \in I_{a}\}\) and \({K_{l}^{b}} := \{(i,j) \in K_{l} \colon i \in I_{b}\}\). Among all paths in the connected component induced by K_{l} which connect the sets \({K_{l}^{a}}\) and \({K_{l}^{b}}\), let P := {(i^{(1)},j^{(1)}),…,(i^{(t)},j^{(t)})} be a shortest one. By construction, (i^{(1)},j^{(1)}) is the only vertex of P which lies in \({K_{l}^{a}}\) and (i^{(t)},j^{(t)}) is the only vertex of P which lies in \({K_{l}^{b}}\). This implies that P starts and ends with a change in the first component, i.e., i^{(1)}≠i^{(2)} and i^{(t− 1)}≠i^{(t)}. Furthermore, since P has minimal length, it follows from the construction of the edges that it alternates between changing first and second components, i.e., for all s = 1,…,t − 2, i^{(s)}≠i^{(s+ 1)} ⇔ j^{(s+ 1)}≠j^{(s+ 2)} and j^{(s)}≠j^{(s+ 1)} ⇔ i^{(s+ 1)}≠i^{(s+ 2)}.
Define \(B := A_{\{i^{(1)}, \ldots , i^{(t)}\}, \{j^{(1)}, \ldots , j^{(t)} ,n\}}\). B is a square matrix with \(\frac {t+2}{2}\) rows and columns for the following reason: P starts with a change of rows, after which the remaining path of length t − 2 starts with a change of columns, ends with a change of rows and alternates between row and column changes in between. Thus, t − 2 is even and in total P changes rows \(1 + \frac {t-2}{2}\) times and columns \(\frac {t-2}{2}\) times. It follows that B has \(1 + \frac {t-2}{2} + 1\) rows and \(\frac {t-2}{2} + 1\) columns excluding the column with entries of A_{⋅,n}. Permute the rows and columns of B such that they are ordered with respect to the order i^{(1)},…,i^{(t)} and j^{(1)},…,j^{(t)},n. We claim that B is of the form
$$ B = \left[\begin{array}{ccccc} \pm 1 & & & & a\\ \pm 1 & \pm 1 & & & \\ & \ddots & \ddots & & \\ & & \pm 1 & \pm 1 & \\ & & & \pm 1 & b \end{array}\right]. $$
To see this, first observe that the entries on the main diagonal and the diagonal below are equal to ± 1 and that the entries \(B_{1,\frac {t+2}{2}}\) and \(B_{\frac {t+2}{2},\frac {t+2}{2}}\) are equal to a or b, respectively, by construction. As we have observed above, (i^{(1)},j^{(1)}) is the only vertex of P which lies in \({K_{l}^{a}}\) and (i^{(t)},j^{(t)}) is the only vertex of P which lies in \({K_{l}^{b}}\), implying that all other entries of the rightmost column, \(B_{2,\frac {t+2}{2}}, \ldots , B_{\frac {t+2}{2} - 1,\frac {t+2}{2}}\), are equal to zero. For the purpose of contradiction, assume that any other entry of B, say B_{v,w}, is nonzero. Let us consider the case that w > v, i.e., that B_{v,w} lies above the main diagonal of B. Among all vertices of P whose corresponding entries lie in the same row as B_{v,w}, let (i^{(s)},j^{(s)}) be the vertex with minimal s. Among all vertices of P whose corresponding entries lie in the same column as B_{v,w}, let \((i^{(s^{\prime })},j^{(s^{\prime })})\) be the vertex with maximal \(s^{\prime }\). Then, consider the following path: Connect (i^{(1)},j^{(1)}) with (i^{(s)},j^{(s)}) along P; then take the edge to the vertex corresponding to B_{v,w}; take the edge to \((i^{(s^{\prime })},j^{(s^{\prime })})\); then connect \((i^{(s^{\prime })},j^{(s^{\prime })})\) with (i^{(t)},j^{(t)}) along P. This path is shorter than P, a contradiction. An analogous argument yields a contradiction if w < v − 1, i.e., if B_{v,w} lies in the lower-triangular part of B. Thus, B is of the form stated above. By Laplace expansion applied to the last column of B, \(\det (B) = \pm a \pm b \notin \{\pm a, \pm b, 0\}\) as 2 ⋅ b≠a. Since B can be extended to an n × n submatrix of A of the same determinant in absolute value, this is a contradiction.
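The determinant computation at the end of this argument can be verified on concrete instances of the bidiagonal form (our own construction, with all signs chosen as + 1 and (a,b) = (5,3), which has no duplicative relation):

```python
def det(M):
    # integer determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def path_matrix(a, b, s):
    # s x s matrix: 1 on the main diagonal (except the last entry) and on
    # the diagonal below, a in the top-right corner, b in the bottom-right
    B = [[0] * s for _ in range(s)]
    for i in range(s - 1):
        B[i][i] = 1
    for i in range(1, s):
        B[i][i - 1] = 1
    B[0][s - 1] = a
    B[s - 1][s - 1] = b
    return B

a, b = 5, 3
for s in (3, 4, 5):
    d = abs(det(path_matrix(a, b, s)))
    # |det| = a + b or a - b, never an element of {a, b, 0} since a != 2*b
    assert d in {a + b, a - b} and d not in {a, b, 0}
```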
Regarding the computational running time of this operation, note that G can be created and its connected components can be calculated in time polynomial in n and m. A subsequent permutation of the rows and columns as noted above yields the desired form. □
It remains to show that L and R are TU. We will prove the former; the proof of the latter property is analogous. Assume for the purpose of contradiction that L is not TU. As the entries of L are all ± 1 or 0, it contains a submatrix \(\tilde {A}\) of determinant ± 2 [18, Theorem 19.3]. Extend \(\tilde {A}\) by appending the corresponding entries of A_{⋅,n} and by appending an arbitrary row of \(A_{I_{b},\cdot }\). We obtain a submatrix of the form
$$ \left[\begin{array}{cc} \tilde A & {}^{0}\!/_{a} \\ 0 & b \end{array}\right], $$
whose determinant in absolute value is 2 ⋅ b, a contradiction as A has no duplicative relations. Similarly, one obtains a submatrix of determinant ± 2 ⋅ a if R is not TU.
Proof of Theorem 2
In this section, the following recognition problem will be of central importance: For numbers a ≥ b ≥ c ≥ 0, we say that an algorithm tests for {a,b,c}-modularity if, given an input matrix \(A \in \mathbb {Z}^{m \times n}\) (m ≥ n), it checks whether D(A) = {a,b,c}. As a first step we state the following lemma which allows us to reduce the \(\gcd \) in all subsequent recognition algorithms. The proof uses a similar technique as was used in [11, Remark 5].
Lemma 2 (cf. [11, Remark 5])
Let a ≥ b > 0 and \(\gamma := \gcd (\{a,b\})\). There exists an algorithm with running time polynomial in n, m and \(\log \|A\|_{\infty }\) which reduces testing for {a,b,0}-modularity to testing for \(\left \{\frac a \gamma ,\frac b \gamma ,0\right \}\)-modularity or returns that \(\gcd (D(A)) \neq \gcd (\{a,b\})\), where A is the input matrix.
Proof
Calculate the transformation matrices which transform A to its Smith normal form: Find \(P \in \mathbb {Z}^{m \times m}\) and \(Q \in \mathbb {Z}^{n \times n}\) unimodular such that
$$ PAQ = \left[\begin{array}{c} S \\ 0 \end{array}\right], $$
where \(S \in \mathbb {Z}^{n \times n}\) is a diagonal matrix satisfying \({\prod }_{i=1}^{n} S_{i,i} = \gcd (D(A))\), cf. [18]. The matrices P and Q can be calculated in time polynomial in n, m and \(\log \|A\|_{\infty }\), see [20]. It must hold that \({\prod }_{i=1}^{n} S_{i,i} = \gamma \), otherwise \(\gcd (D(A)) \neq \gcd (\{a,b\})\), i.e., we have found a certificate that A is not {a,b,0}-modular. Since S and P are invertible, we obtain that
$$ A Q S^{-1} = P^{-1} \left[\begin{array}{c} \mathcal I_{n} \\ 0 \end{array}\right]. $$
As P is unimodular, P^{− 1} is integral, i.e., AQS^{− 1} is integral. Since Q is unimodular, AQ corresponds to performing elementary column operations on A, i.e., D(AQ) = D(A). As AQS^{− 1} is integral, the ith column of AQ is divisible by S_{i,i}, 1 ≤ i ≤ n. Multiplying A by S^{− 1} from the right corresponds to performing these divisions. We conclude that \(D(AQS^{-1}) = \frac {1}{\gcd (D(A))} \cdot D(A)\). Testing A for {a,b,0}-modularity is therefore equivalent to testing AQS^{− 1} for \(\left \{\frac a \gamma , \frac b \gamma , 0\right \}\)-modularity. □
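The division step can be illustrated on a small example (our own; for this particular A we have Q = I and S = 2·I, so ∏ S_{i,i} = gcd(D(A)) = 4):

```python
from itertools import combinations
from math import gcd

def det(M):
    # integer determinant by Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def D(A):
    # set of absolute values of all n x n subdeterminants
    n = len(A[0])
    return {abs(det([A[i] for i in I])) for I in combinations(range(len(A)), n)}

A0 = [[1, 0], [0, 1], [1, 1], [2, 0]]     # D(A0) = {0, 1, 2}, gcd 1
A = [[2 * x for x in row] for row in A0]  # for this A: Q = I, S = 2 * I

assert D(A) == {4 * v for v in D(A0)}     # each 2 x 2 subdeterminant scales by 4
assert gcd(*D(A)) == 4                    # prod of S_{i,i} = gcd(D(A))
# dividing column i by S_{i,i} realizes D(A Q S^{-1}) = (1/gcd(D(A))) * D(A)
assert D([[x // 2 for x in row] for row in A]) == {v // 4 for v in D(A)}
```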
We proceed by using the decomposition established in Theorem 1 to construct an algorithm which tests for {a,b,0}-modularity if a ≥ b > 0 and if 2 ⋅ b≠a, i.e., if there are no duplicative relations. We will see that this algorithm quickly yields a proof of Theorem 2. The main difference between the following algorithm and Theorem 2 is that in the former, a and b are fixed input values while in Theorem 2, a, b (and c) also have to be determined.
Lemma 3
Let a ≥ b > 0 such that 2 ⋅ b≠a. There exists an algorithm with running time polynomial in n, m and \(\log \|A\|_{\infty }\) which tests for {a,b,0}-modularity or returns one of the following certificates: |D(A)|≥ 4, \(\gcd (D(A)) \neq \gcd (\{a,b\})\) or a set \(D^{\prime } \subsetneq \{a,b,0\}\) such that \(D(A) = D^{\prime }\).
Proof
Applying Lemmata 1 and 2 allows us to assume w.l.o.g. that 0 ∈ D(A) and that \(\gcd (\{a,b\}) = 1\). Note that the case a = b implies a = b = 1. Then, testing for {1,0}-modularity can be accomplished by first transforming A to (3) and by subsequently testing whether the transformed matrix is TU.
Thus, assume that a > b. Since \(\gcd (\{a,b\}) = 1\), 2 ⋅ b≠a ⇔{a,b}≠{2,1}. Therefore, the numbers a and b fulfill the prerequisites of Theorem 1. Follow the proof of Theorem 1 to transform A to (1). If the matrix is {a,b,0}-modular, we will arrive at a representation of the form (1). Otherwise, the proof of Theorem 1 (as it is constructive) exhibits a certificate of the following form: |D(A)|≥ 4, \(\gcd (D(A)) \neq \gcd (\{a,b\})\) or a set \(D^{\prime } \subsetneq \{a,b,0\}\) such that \(D(A) = D^{\prime }\). Without loss of generality, we may further assume that at least one n × n subdeterminant of A is equal to ± a and that at least one n × n subdeterminant of A is equal to ± b, i.e., that \(\{a,b,0\} \subseteq D(A)\).
Next, we show that i) holds if and only if both ii) and iii) hold, where

i) every nonsingular n × n submatrix of A has determinant ± a or ± b,

ii) every nonsingular n × n submatrix of \(A_{I_{a} \cup I_{0}, \cdot }\) has determinant ± a,

iii) every nonsingular n × n submatrix of \(A_{I_{b} \cup I_{0}, \cdot }\) has determinant ± b.
\(A_{I_{0},\cdot }\) contains the first n − 1 unit vectors and hence, \(A_{I_{a} \cup I_{0}, \cdot }\) and \(A_{I_{b} \cup I_{0}, \cdot }\) have full column rank.
We first show that ii) and iii) follow from i). Let us start with iii). For the purpose of contradiction, assume that i) holds but not iii). By construction, the nth column of \(A_{I_{b} \cup I_{0},\cdot }\) is divisible by b. Denote by \(A^{\prime }\) the matrix which we obtain by dividing the last column of \(A_{I_{b} \cup I_{0} ,\cdot }\) by b. The entries of \(A^{\prime }\) are all equal to ± 1 or 0. As we have assumed that iii) fails, there exists a nonsingular n × n submatrix of \(A^{\prime }\) whose determinant is not equal to ± 1. In particular, \(A^{\prime }\) is not TU. By [18, Theorem 19.3], as the entries of \(A^{\prime }\) are all ± 1 or 0 but it is not TU, it contains a submatrix of determinant ± 2. Since \(A^{\prime }_{\cdot ,[n-1]}\) is TU, this submatrix must involve entries of the nth column of \(A^{\prime }\). Thus, it corresponds to a submatrix of \(A_{I_{b} \cup I_{0} ,\cdot }\) of determinant ± 2 ⋅ b. Append a subset of the first n − 1 unit vectors to extend this submatrix to an n × n submatrix. This n × n submatrix is also an n × n submatrix of A. Its determinant is ± 2 ⋅ b, which is not contained in {±a,±b,0} because 2 ⋅ b ≠ a, a contradiction to i). For ii), the same argument yields an n × n subdeterminant of ± 2 ⋅ a, which also contradicts i).
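The fact from [18, Theorem 19.3] used above — a {0,± 1} matrix that is not TU contains a square submatrix of determinant ± 2 — can be checked by exhaustive search on small instances. A minimal sketch (our own naming, exponential time, for illustration only):

```python
from itertools import combinations

def det(M):
    # integer determinant by Laplace expansion (demo sizes only)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def square_subdets(M):
    # yields (rows, cols, det) for every square submatrix of M
    m, n = len(M), len(M[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                yield rows, cols, det([[M[i][j] for j in cols] for i in rows])

def is_TU(M):
    return all(d in (-1, 0, 1) for _, _, d in square_subdets(M))

def witness(M):
    # for a {0, +-1} matrix that is not TU, some submatrix has determinant +-2
    for rows, cols, d in square_subdets(M):
        if abs(d) == 2:
            return rows, cols

M = [[1, 1], [-1, 1]]
print(is_TU(M), witness(M))  # False ((0, 1), (0, 1))
```

Here the non-TU matrix is itself the ± 2 witness; in general `witness` scans all square submatrices.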
Next, we prove that i) holds if both ii) and iii) hold. This follows from the 1sum structure of A. Let B be any nonsingular n × n submatrix of A. B is of the form
where \(C \in \mathbb {Z}^{m_{1} \times n_{1}}\), \(D \in \mathbb {Z}^{m_{2} \times n_{2}}\) for n_{1}, n_{2}, m_{1} and m_{2} satisfying n_{1} + n_{2} + 1 = n = m_{1} + m_{2}. As B is nonsingular, n_{i} ≤ m_{i} and m_{i} ≤ n_{i} + 1, i ∈{1,2}. Thus, n_{i} ≤ m_{i} ≤ n_{i} + 1, i ∈{1,2}, and we identify two possible cases which are symmetric: m_{1} = n_{1} + 1, m_{2} = n_{2} and m_{1} = n_{1}, m_{2} = n_{2} + 1. We start with the analysis of the former. By Laplace expansion applied to the last column,
The latter determinant is zero as m_{1} = n_{1} + 1. As the former matrix is block-diagonal, \(|\det B| = |\det [C ~~{}^{0}\!/_{a}]| \cdot |\det D|\). B is nonsingular and D is TU, therefore \(|\det D| = 1\). [C  ^{0} /_{a}] is a submatrix of \(A_{I_{a} \cup I_{0},\cdot }\) which can be extended to an n × n submatrix of the same determinant in absolute value by appending a subset of the n − 1 unit vectors contained in \(A_{I_{a} \cup I_{0},\cdot }\). Therefore, by ii), \(|\det B| = |\det [C \mid {}^{0}\!/_{a}]| = a\). In the case m_{1} = n_{1}, m_{2} = n_{2} + 1, a symmetric analysis leads to \(|\det B| = |\det [ D ~~ {}^{0}\!/_{a}]| = b\) by iii).
To test for ii), let \(A^{\prime }\) be the matrix which we obtain by dividing the nth column of \(A_{I_{a} \cup I_{0}, \cdot }\) by a. Then, ii) holds if and only if every nonsingular n × n submatrix of \(A^{\prime }\) has determinant ± 1. Transform \(A^{\prime }\) to (3). The topmost n rows of \(A^{\prime }\) must form a unit matrix. Therefore, ii) is equivalent to \(A^{\prime }\) being TU, which can be tested efficiently. Testing for iii) can be done analogously. □
We now have all the necessary ingredients to prove Theorem 2. In essence, what remains is a technique to find sample values a, b and c for which we test whether A is {a,b,c}modular using our previously established algorithms.
Proof of Theorem 2
Apply Lemma 1 for d = 3. Either the algorithm calculates D(A), or it gives a certificate that |D(A)| ≥ 4 or that 0 ∈ D(A). In the former two cases we are done. Assume therefore that 0 ∈ D(A). By assumption, A has full column rank. Thus, find k_{1} ∈ D(A), k_{1} ≠ 0, using Gaussian elimination. Apply Lemma 3 to check whether D(A) = {k_{1},0}. Otherwise, |D(A)| ≥ 3. At this point, a short technical argument is needed to determine an element k_{2} ∈ D(A) ∖{k_{1},0}.
To do so, we first apply a technique from the proof of Lemma 2. This technique allows us to reduce finding k_{2} ∈ D(A) ∖{k_{1},0} to finding \(k_{2} \in D(A^{\prime }) \setminus \big \{\frac {k_{1}}{\gcd (D(A))},0\big \}\) for a matrix \(A^{\prime } \in \mathbb {Z}^{m \times n}\) which satisfies \(\gcd (D(A^{\prime })) = 1\): Calculate the Smith normal form of A, i.e., calculate \(P \in \mathbb {Z}^{m \times m}\) and \(Q \in \mathbb {Z}^{n \times n}\) unimodular such that \(PAQ = \left [\begin {array}{c} S \\ 0 \end {array}\right ]\), where S is a diagonal matrix and \({\prod }_{i=1}^{n} S_{i,i} = \gcd (D(A))\). This can be done in time polynomial in n, m and \(\log \|A\|_{\infty }\), see [20]. Then, the matrix \(A^{\prime } := AQS^{-1}\) is integral, satisfies \(D(A^{\prime }) = \frac {1}{\gcd (D(A))} \cdot D(A)\) and consequently has the desired property.
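A transparent (though exponential-time) way to obtain the diagonal of the Smith normal form is via determinantal divisors: d_{k} is the gcd of all k × k minors (with d_{0} = 1), and the kth diagonal entry equals d_{k}/d_{k−1}. The sketch below is illustrative only; polynomial-time algorithms are given in [20]:

```python
from itertools import combinations
from math import gcd

def det(M):
    # integer determinant by Laplace expansion (demo sizes only)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def smith_diagonal(A):
    # d_k = gcd of all k x k minors (d_0 = 1); k-th diagonal entry is d_k / d_{k-1}
    n = len(A[0])
    diag, prev = [], 1
    for k in range(1, n + 1):
        d = 0
        for rows in combinations(range(len(A)), k):
            for cols in combinations(range(n), k):
                d = gcd(d, det([[A[i][j] for j in cols] for i in rows]))
        diag.append(d // prev)
        prev = d
    return diag

print(smith_diagonal([[2, 4], [6, 8]]))          # [2, 4]
print(smith_diagonal([[2, 0], [0, 2], [2, 2]]))  # [2, 2] -- product 4 = gcd(D(A))
```

In the second example the product of the diagonal entries is 4, matching \(\gcd(D(A))\) as stated in the proof.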
Transform \(A^{\prime }\) to (3). Then, \(A^{\prime }_{\cdot ,n} \geq 0\). If \(A^{\prime }_{\cdot ,n}\) has two nonzero entries, say \(A^{\prime }_{p,n}\) and \(A^{\prime }_{q,n}\), such that \(A^{\prime }_{p,n} \neq A^{\prime }_{q,n}\), then \(\det \big (A^{\prime }_{[n-1] \cup p,\cdot }\big ) \neq \det \big (A^{\prime }_{[n-1] \cup q,\cdot }\big )\), both of which are nonzero. Thus, either \(\det \big (A^{\prime }_{[n-1] \cup p,\cdot }\big ) \neq \frac {k_{1}}{\gcd (D(A))}\) or \(\det \big (A^{\prime }_{[n-1] \cup q,\cdot }\big ) \neq \frac {k_{1}}{\gcd (D(A))}\). If such entries do not exist, then \(A^{\prime }_{\cdot ,n} \in \{0,1\}^{m}\) as otherwise, \(A^{\prime }_{\cdot ,n}\) would be divisible by an integer larger than one, contradicting \(\gcd (D(A^{\prime })) = 1\). This implies that \(A^{\prime }_{[n],\cdot } = \mathcal I_{n}\), and in particular that every nonsingular submatrix of \(A^{\prime }\) can be extended to an n × n submatrix of the same determinant in absolute value. Test \(A^{\prime }\) for total unimodularity. Since we know that \(|D(A^{\prime })| = |D(A)| \geq 3\), \(A^{\prime }\) is not TU, i.e., the algorithm returns a submatrix of determinant at least 2 in absolute value. The absolute value of this subdeterminant and \(\det (\mathcal I_{n}) = 1\) are two nonzero elements of \(D(A^{\prime })\), one of which cannot be equal to \(\frac {k_{1}}{\gcd (D(A))}\).
Assume w.l.o.g. that k_{1} > k_{2}. If 2 ⋅ k_{2} = k_{1}, then A has duplicative relations. If not, test A for {k_{1},k_{2},0}-modularity using Lemma 3. As \(\{k_{1}, k_{2},0\} \subseteq D(A)\), this algorithm either returns that D(A) = {k_{1},k_{2},0} or a certificate of the form |D(A)| ≥ 4 or \(\gcd (D(A)) \neq \gcd (\{k_{1},k_{2}\})\). In the former two cases we are done, and in the third case it also follows that |D(A)| ≥ 4. □
Proof of Theorem 3
One ingredient of the proof of Theorem 3 is the following result by Gribanov, Malyshev and Pardalos [11], which reduces the standard form IP (2) to an IP in inequality form in dimension n − m such that the subdeterminants of the two constraint matrices are in one-to-one correspondence.
Lemma 4
([11, Corollary 1.1, Remark 5, Theorem 3] ^{Footnote 5})
In time polynomial in n, m and \(\log \|B\|_{\infty }\), (2) can be reduced to the inequality form IP
$$ \max \{h^{\mathrm{T}} y \colon Cy \leq g,~ y \in \mathbb{Z}^{n-m}\}, $$(8)
where \(h \in \mathbb {Z}^{n-m}\), \(g \in \mathbb {Z}^{n}\) and \(C \in \mathbb {Z}^{n \times (n-m)}\), with \(D(C) = \frac {1}{\gcd (D(B^{\mathrm {T}}))} \cdot D(B^{\mathrm {T}})\).
To prove this reduction, the authors apply a theorem by Shevchenko and Veselov [24] which was originally published in Russian. For completeness of presentation, we provide an alternative but similar proof of Lemma 4 which uses the following well-known determinant identity instead of the aforementioned result.
Lemma 5 (Jacobi’s complementary minor formula, see [4, Lemma A.1e] )
Let \(A \in \mathbb {Z}^{n \times n}\) be invertible and \(I, J \subseteq [n]\), |I| = |J| = k for k ∈ [n]. Then,
$$ \det (A_{I,J}) = (-1)^{{\sum}_{i \in I} i + {\sum}_{j \in J} j} \cdot \det (A) \cdot \det \big ( (A^{-1})_{\overline {J}, \overline {I}} \big ), $$
where \(\overline I := [n] \setminus I\) and \(\overline J := [n] \setminus J\).
Proof (Alternative proof of Lemma 4)
We closely follow the proof of [11]. First, we reformulate (2) in such a way that the \(\gcd \) of the full-rank subdeterminants of the constraint matrix becomes 1. To this end, calculate the Smith normal form of B, i.e., find \(P \in \mathbb {Z}^{m \times m}\) and \(Q \in \mathbb {Z}^{n \times n}\) unimodular such that B = P[S ~~ 0]Q, where \(S \in \mathbb {Z}^{m \times m}\) is a diagonal matrix satisfying \({\prod }_{i=1}^{m} S_{i,i} = \gcd (D(B^{\mathrm {T}}))\). This can be done in time polynomial in m, n and \(\log \|B\|_{\infty }\), see [20]. Thus, \(Bx = b \Leftrightarrow [\mathcal I_{m} ~~ 0] Q x = S^{-1} P^{-1} b\). For simplicity, set \(b^{\prime } := S^{-1}P^{-1}b\). If \(b^{\prime } \notin \mathbb {Z}^{m}\), then (2) is infeasible. Thus, assume that \(b^{\prime } \in \mathbb {Z}^{m}\). Summarizing, solving (2) is equivalent to solving
where due to multiplying by S^{− 1}, \(D(([\mathcal I_{m} ~~ 0] Q)^{\mathrm {T}}) = \frac {1}{\gcd (D(B^{\mathrm {T}}))} \cdot D(B^{\mathrm {T}})\).
Secondly, we reduce (9) to an IP in inequality form. Since Q is unimodular, substituting z := Qx yields
Plugging this identity into (9) yields
where \(h^{\mathrm {T}} := c^{\mathrm {T}} Q^{-1}_{\cdot ,[n] \setminus [m]}\), \(g := Q^{-1}_{\cdot ,[m]} b^{\prime }\) and \(C := Q^{-1}_{\cdot ,[n] \setminus [m]}\).
Recall that \(D\big (([\mathcal I_{m} ~~ 0] Q)^{\mathrm {T}}\big ) = \frac {1}{\gcd (D(B^{\mathrm {T}}))} \cdot D(B^{\mathrm {T}})\). As \([\mathcal I_{m} ~~ 0] Q = Q_{[m],\cdot }\), it remains to show that \(D\big ((Q_{[m],\cdot })^{\mathrm {T}}\big ) = D\big (Q^{-1}_{\cdot ,[n] \setminus [m]}\big )\). Lemma 5 applied to A := Q for I := [m] and \(J\subseteq [n]\), |J| = m, yields
i.e., the claim follows. □
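Lemma 5 can be verified numerically on small instances. The sketch below (our own helper names) computes the inverse exactly over the rationals via the adjugate and checks the identity for a 3 × 3 example with I = J consisting of the first index, where the complementary minor of the inverse must equal \(\det(A_{I,J})/\det(A)\):

```python
from fractions import Fraction

def det(M):
    # exact determinant by Laplace expansion; works for int and Fraction entries
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def inverse(M):
    # exact inverse via the adjugate: (A^{-1})[i][j] = (-1)^{i+j} * minor(j, i) / det(A)
    n, d = len(M), Fraction(det(M))
    return [[(-1) ** (i + j) *
             det([r[:i] + r[i + 1:] for k, r in enumerate(M) if k != j]) / d
             for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]  # det(A) = 18
Ainv = inverse(A)
# I = J = {first index}; the complements are rows/columns {1, 2} (0-indexed)
minor_of_inverse = det([[Ainv[i][j] for j in (1, 2)] for i in (1, 2)])
print(minor_of_inverse, Fraction(A[0][0], det(A)))  # 1/9 1/9
```

Both quantities equal 1/9, in accordance with the lemma (the sign factor is + 1 for this choice of I and J).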
As a second ingredient of the proof of Theorem 3, we will make use of some results on bimodular integer programs (BIPs). BIPs are IPs of the form
where \(c \in \mathbb {Z}^{n}\), \(b \in \mathbb {Z}^{m}\) and \(A \in \mathbb {Z}^{m \times n}\) is bimodular, i.e., rank(A) = n and \(D(A) \subseteq \{2,1,0\}\). As mentioned earlier, [2] proved that BIPs can be solved in strongly polynomial time. Their algorithm uses the following structural result for BIPs by [23] which will also be useful to us.
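For intuition, bimodularity of a small explicit matrix can be checked by brute force over all n × n row subsets. This sketch is ours (and is of course not the polynomial algorithm of [2]); note that rank(A) = n holds iff some n × n subdeterminant is nonzero:

```python
from itertools import combinations

def det(M):
    # integer determinant by Laplace expansion (demo sizes only)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def is_bimodular(A):
    # D(A) contained in {2, 1, 0}, and rank(A) = n via a nonzero subdeterminant
    n = len(A[0])
    subdets = [det([A[i] for i in rows]) for rows in combinations(range(len(A)), n)]
    return all(abs(d) in (0, 1, 2) for d in subdets) and any(subdets)

print(is_bimodular([[1, 0], [0, 1], [1, 2]]))  # True
print(is_bimodular([[1, 0], [0, 1], [2, 3]]))  # False
```

The second matrix fails because one of its 2 × 2 subdeterminants has absolute value 3.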
Theorem 4
([23, Theorem 2], as formulated in [2, Theorem 2.1]) Assume that the linear relaxation \({\max \limits } \{c^{\mathrm {T}} x \colon Ax \leq b,~x \in \mathbb {R}^{n}\}\) of a BIP is feasible, bounded and has a unique optimal vertex solution v. Denote by \(I \subseteq [m]\) the indices of the constraints which are tight at v, i.e., A_{I,⋅}v = b_{I}. Then, an optimal solution x^{∗} of \({\max \limits } \{c^{\mathrm {T}} x \colon A_{I,\cdot } x \leq b_{I},~x \in \mathbb {Z}^{n}\}\) is also optimal for the BIP.
Proof of Theorem 3
Using Lemma 4, we reduce the standard form IP (2) to (8). Note that \(\gcd (D(C)) = 1\). Let us denote (8) with objective vector \(h \in \mathbb {Z}^{n-m}\) by \(\mathtt{IP}_{\leq}(h)\) and its natural linear relaxation by \(\mathtt{LP}_{\leq}(h)\). We apply Theorem 2 to C and perform a case-by-case analysis depending on the output.

i)
The algorithm calculates and returns D(C). If 0 ∉ D(C), C is nondegenerate and \(\mathtt{IP}_{\leq}(h)\) can be solved using the algorithm in [1]. Thus, assume that 0 ∈ D(C). C has no duplicative relations. As \(\gcd (D(C)) = 1\), this implies that C is {a,b,0}-modular for some a ≥ b > 0, where \(\gcd (\{a,b\}) = 1\) and (a,b) ≠ (2,1). Thus, C satisfies the assumptions of Theorem 1. As a consequence of Theorem 1, there exist elementary column operations which transform C such that its first n − m − 1 columns are TU, i.e., there is \(U \in \mathbb {Z}^{(n-m) \times (n-m)}\) unimodular such that CU = [T ∣ d], where T is TU and \(d \in \mathbb {Z}^{n}\). Substituting z := U^{− 1}y yields the equivalent problem
$$ \max\{h^{\mathrm{T}} U z \colon [T ~~ d] z \leq g,~ z \in \mathbb{Z}^{n-m}\}, $$(10)where we have used that \(y = Uz \in \mathbb {Z}^{n-m} \Leftrightarrow z \in \mathbb {Z}^{n-m}\) as U preserves integrality. Let z^{∗} be an optimal solution to the mixed-integer linear program
$$ \max \{h^{\mathrm{T}} U z \colon [T ~~ d] z \leq g,~z \in \mathbb{R}^{n-m},~ z_{n-m} \in \mathbb{Z}\}, $$(11)which can be found in polynomial time [18, Chapter 18.4]. If no such solution exists, (10) is infeasible. Fixing \(z_{n-m} := z^{\ast }_{n-m}\) in (11) induces an LP in dimension n − m − 1. Let \(\bar z\) be a vertex solution to this LP, which can be found efficiently (see, for example, [13]).^{Footnote 6} Since T is TU, \(\bar z \in \mathbb {Z}^{n-m-1}\). The solution \([\bar z ~~ z^{\ast }_{n-m}]\) has the same objective value as z^{∗} and is optimal for (10) since it is integral and (11) is a relaxation of (10).

ii)
The algorithm returns that |D(C)| ≥ 4. Then, |D(B^{T})| = |D(C)| ≥ 4.

iii)
The algorithm returns a duplicative relation, i.e., \(\{2 \cdot k, k\} \subseteq D(C)\), k > 0. This case is more involved because we do not have any information as to which other elements might be contained in D(C).
Assume w.l.o.g. that \(\mathtt{IP}_{\leq}(h)\) is feasible and that \(\mathtt{LP}_{\leq}(h)\) is bounded. We postpone the unbounded case to the end of the proof. Calculate an optimal vertex solution v to \(\mathtt{LP}_{\leq}(h)\). If \(v \in \mathbb {Z}^{n-m}\), then v is also optimal for \(\mathtt{IP}_{\leq}(h)\). Thus, assume that \(v \not \in \mathbb {Z}^{n-m}\) and let \(I \subseteq [n]\) be the indices of the tight constraints at v, i.e., C_{I,⋅}v = g_{I}. In what follows, we prove that we may assume w.l.o.g. that (a) 0 ∈ D(C), (b) k = 1, and (c) every nonzero (n − m) × (n − m) subdeterminant of C_{I,⋅} is equal to ± 2.^{Footnote 7}

(a)
From Lemma 1 applied to C for d = 3 we obtain three possible results: |D(C)| ≥ 4, 0 ∉ D(C), or 0 ∈ D(C). In the first case we are done, and in the second case, C is nondegenerate and \(\mathtt{IP}_{\leq}(h)\) can be solved using the algorithm in [1]. Therefore, w.l.o.g., 0 ∈ D(C).

(b)
If \(\{2 \cdot k,k,0\} \subseteq D(C)\) for k > 1, it follows from \(\gcd (D(C)) = 1\) that |D(C)| ≥ 4. Therefore, w.l.o.g., k = 1.

(c)
Since \(v \notin \mathbb {Z}^{n-m}\), it holds that 1 ∉ D(C_{I,⋅}) as otherwise, \(v \in \mathbb {Z}^{n-m}\) by Cramer’s rule. Apply Theorem 2 once more, but this time to C_{I,⋅}. If the algorithm returns that |D(C_{I,⋅})| ≥ 4, then |D(C)| ≥ 4. If the algorithm returns a duplicative relation, i.e., \(\{2 \cdot s,s\} \subseteq D(C_{I,\cdot })\), then s ≠ 1 as 1 ∉ D(C_{I,⋅}). Since by (a) and (b), \(\{2,1,0\} \subseteq D(C)\), it follows that \(\{2\cdot s, 2, 1, 0\} \subseteq D(C)\). Thus, |D(C)| ≥ 4. If the algorithm calculates and returns D(C_{I,⋅}), then it either finds that every nonzero (n − m) × (n − m) subdeterminant of C_{I,⋅} is equal to ± 2, or it finds an element t ∈ D(C_{I,⋅}) ∖{2,0}. In the latter case, t ≠ 1 as 1 ∉ D(C_{I,⋅}), implying that \(\{t,2,1,0\} \subseteq D(C)\) and |D(C)| ≥ 4.
Let \(\mathtt {IP}^{\text {cone}}_{{\leq }}(h) := {\max \limits } \{h^{\mathrm {T}} y \colon C_{I,\cdot } y \leq g_{I},~ y \in \mathbb {Z}^{n-m}\}\). As C_{I,⋅} is bimodular, this is a BIP. By possibly perturbing the vector h (e.g. by adding \(\frac {1}{M} \cdot {\sum }_{i \in I} C_{i,\cdot }\) for a sufficiently large M > 0), we can assume that v is the unique optimal solution to \(\mathtt{LP}_{\leq}(h)\), which will allow us to apply Theorem 4. Solve \(\mathtt {IP}^{\text {cone}}_{{\leq }}(h)\) using the algorithm by [2]. If \(\mathtt {IP}^{\text {cone}}_{{\leq }}(h)\) is infeasible, so is \(\mathtt{IP}_{\leq}(h)\). Let \(y \in \mathbb {Z}^{n-m}\) be an optimal solution to \(\mathtt {IP}^{\text {cone}}_{{\leq }}(h)\). It follows that either y is also optimal for \(\mathtt{IP}_{\leq}(h)\) or |D(C)| ≥ 4: If y is feasible for \(\mathtt{IP}_{\leq}(h)\), it is also optimal since \(\mathtt {IP}^{\text {cone}}_{{\leq }}(h)\) is a relaxation of \(\mathtt{IP}_{\leq}(h)\). If C is bimodular, Theorem 4 states that y is feasible for \(\mathtt{IP}_{\leq}(h)\), i.e., it is optimal for \(\mathtt{IP}_{\leq}(h)\). Thus, if y is not feasible for \(\mathtt{IP}_{\leq}(h)\), then D(C) contains an element which is neither 0 nor 1 nor 2. As \(\{2,1,0\} \subseteq D(C)\) by (a) and (b), this implies that |D(C)| ≥ 4.
It remains to explain why we may assume that \(\mathtt{LP}_{\leq}(h)\) is bounded. If not, \(\mathtt{IP}_{\leq}(h)\) is either infeasible or unbounded. More precisely, \(\mathtt{IP}_{\leq}(h)\) is unbounded if and only if \(\{y \in \mathbb {Z}^{n-m} \colon Cy \leq g\}\) is feasible. We reduce the feasibility test of this set to a bounded IP of the same form as above: Set s := C_{1,⋅}. By construction, \(\mathtt{LP}_{\leq}(s)\) is bounded. Solve \(\mathtt{IP}_{\leq}(s)\) using our algorithm above. Either we determine a feasible point of \(\{y \in \mathbb {Z}^{n-m} \colon Cy \leq g\}\), in which case \(\mathtt{IP}_{\leq}(h)\) is unbounded, or we find that this set is infeasible, or we find that |D(C)| ≥ 4.
□
Notes
For reasons of readability, we will stick to the notation that a ≥ b ≥ c although the order of these elements is irrelevant.
Note that we use D(B^{T}) instead of D(B) since B has full row rank and we refer to its m × m subdeterminants.
The proof of Theorem 3 contains an algorithm which solves such IPs in inequality form efficiently if the condition \(\gcd (a,b) = 1\) is fulfilled.
We cannot use the same approach as for the first case as b could be equal to 1 or 2.
Note that in [11], the one-to-one correspondence between D(C) and D(B^{T}) is not explicitly stated in Corollary 1.1 but follows from Theorem 3.
As T has full column rank, the feasible region is pointed, i.e., such a vertex exists.
In particular, (c) implies that C_{I,⋅} is bimodular.
References
Artmann, S., Eisenbrand, F., Glanzer, C., Oertel, T., Vempala, S., Weismantel, R.: A note on nondegenerate integer programs with small subdeterminants. Oper. Res. Lett. 44, 635–639 (2016)
Artmann, S., Weismantel, R., Zenklusen, R.: A strongly polynomial algorithm for bimodular integer linear programming. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, pp 1206–1219. Association for Computing Machinery, New York (2017)
Bonifas, N., Di Summa, M., Eisenbrand, F., Hähnle, N., Niemeier, M.: On subdeterminants and the diameter of polyhedra. Discrete Comput Geom. 52, 102–115 (2014)
Caracciolo, S., Sokal, A.D., Sportiello, A.: Algebraic/combinatorial proofs of Cayleytype identities for derivatives of determinants and pfaffians. Adv. Appl. Math. 50, 474–594 (2013)
Conforti, M., Fiorini, S., Huynh, T., Joret, G., Weltge, S.: The stable set problem in graphs with bounded genus and bounded odd cycle packing number. In: Chawla, S. (ed.) Proceedings of the 2020 ACMSIAM Symposium on Discrete Algorithms (SODA), pp 2896–2915. SIAM (2020)
Conforti, M., Fiorini, S., Huynh, T., Weltge, S.: Extended formulations for stable set polytopes of graphs without two disjoint odd cycles. In: Bienstock, D., Zambelli, G. (eds.) Integer Programming and Combinatorial Optimization, pp 104–116. Springer International Publishing, Cham (2020)
Dyer, M., Frieze, A.: Random walks, totally unimodular matrices, and a randomised dual simplex algorithm. Math. Program. 64, 1–16 (1994)
Eisenbrand, F., Vempala, S.: Geometric random edge. Math. Program. 164, 325–339 (2017)
Glanzer, C., Stallknecht, I., Weismantel, R.: On the recognition of a,b,cmodular matrices. In: Singh, M., Williamson, D.P. (eds.) Integer Programming and Combinatorial Optimization, pp 238–251. Springer International Publishing, Cham (2021)
Glanzer, C., Weismantel, R., Zenklusen, R.: On the number of distinct rows of a matrix with bounded subdeterminants. SIAM J. Discrete Math. 32, 1706–1720 (2018)
Gribanov, D.V., Malyshev, D.S., Pardalos, P.M.: A note on the parametric integer programming in the average case: sparsity, proximity, and FPTalgorithms. arXiv:2002.01307v3 (2020)
Gribanov, D.V., Veselov, S.I.: On integer programming with bounded determinants. Optim. Lett. 10, 1169–1177 (2016)
Grötschel, M., Lovász, L., Schrijver A.: Geometric Algorithms and Combinatorial Optimization, vol. 2. Springer, Berlin (2012)
Hupp, L.M.: Integer and mixedinteger reformulations of stochastic, resourceconstrained, and quadratic matching problems. Ph.D. thesis, FriedrichAlexanderUniversität ErlangenNürnberg, Erlangen (2017)
Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms, 5th edn. SpringerVerlag, Berlin (2012)
Nägele, M., Sudakov, B., Zenklusen, R.: Submodular minimization under congruency constraints. Combinatorica 39, 1351–1386 (2019)
Paat, J., Schlöter, M., Weismantel, R.: The integrality number of an integer program. In: Bienstock, D., Zambelli, G. (eds.) Integer Programming and Combinatorial Optimization, pp 338–350. Springer International Publishing, Cham (2020)
Schrijver, A.: Theory of Linear and Integer Programming. Wiley, New York (1986)
Seymour, P.: Decomposition of regular matroids. J. Comb. Theory, Ser. B 28, 305–359 (1980)
Storjohann, A.: Algorithms for matrix canonical forms. Ph.D. thesis, ETH Zürich, Zürich (2000). https://doi.org/10.3929/ethz-a-004141007
Storjohann, A., Labahn, G.: Asymptotically fast computation of Hermite normal forms of integer matrices. In: Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation, ISSAC ’96. https://doi.org/10.1145/236869.237083, pp 259–266. Association for Computing Machinery, New York (1996)
Truemper, K.: A decomposition theory for matroids V. Testing of matrix total unimodularity. J. Comb. Theory, Ser. B 49, 241–281 (1990)
Veselov, S.I., Chirkov, A.J.: Integer program with bimodular matrix. Discrete Optim. 6, 220–222 (2009)
Veselov, S.I., Shevchenko, V.N.: Bounds for the maximal distance between the points of certain integer lattices (in russian). CombinatorialAlgebraic Methods in Applied Mathematics, Izdat. Gor’kov. Univ., Gorki, 26–33 (1980)
Walter, M., Truemper, K.: Implementation of a unimodularity test. Math. Program. Comput. 5, 57–73 (2013)
Acknowledgements
This work was partially supported by the Einstein Foundation Berlin. We are grateful to Miriam Schlöter for proofreading the manuscript, and to the anonymous reviewers for several helpful comments.
Funding
Open Access funding provided by ETH Zurich.
Additional information
We dedicate this paper to Bernd Sturmfels on the occasion of his 60th birthday.
This is the extended version of a paper published in the Proceedings of the 22nd Conference on Integer Programming and Combinatorial Optimization [9]. It includes all deferred proofs and several examples of {a,b,c}modular matrices.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Glanzer, C., Stallknecht, I., Weismantel, R.: Notes on {a,b,c}-modular matrices. Vietnam J. Math. 50, 469–485 (2022). https://doi.org/10.1007/s10013-021-00520-9
Keywords
 Integer optimization
 Recognition algorithm
 Bounded subdeterminants
 Total unimodularity
Mathematics Subject Classification (2010)
 90C10
 68Q25