Abstract
Given two matroids \(\mathcal {M}_{1} = (E, \mathcal {B}_{1})\) and \(\mathcal {M}_{2} = (E, \mathcal {B}_{2})\) on a common ground set E with base sets \(\mathcal {B}_1\) and \(\mathcal {B}_2\), some integer \(k \in \mathbb {N}\), and two cost functions \(c_{1}, c_{2} :E \rightarrow \mathbb {R}\), we consider the optimization problem of finding a basis \(X \in \mathcal {B}_{1}\) and a basis \(Y \in \mathcal {B}_{2}\) minimizing the cost \(\sum _{e\in X} c_1(e)+\sum _{e\in Y} c_2(e)\) subject to either a lower bound constraint \(|X \cap Y| \ge k\), an upper bound constraint \(|X \cap Y| \le k\), or an equality constraint \(|X \cap Y| = k\) on the size of the intersection of the two bases X and Y. The problem with lower bound constraint turns out to be a generalization of the Recoverable Robust Matroid problem under interval uncertainty representation, for which the existence of a strongly polynomial-time algorithm was left as an open question in Hradovich et al. (J Comb Optim 34(2):554–573, 2017). We show that the two problems with lower and upper bound constraints on the size of the intersection can be reduced to weighted matroid intersection, and can thus be solved with a strongly polynomial-time primal-dual algorithm. We also present a strongly polynomial primal-dual algorithm that computes a minimum cost solution for every feasible size of the intersection k in one run, with asymptotic running time equal to one run of Frank’s matroid intersection algorithm. Additionally, we discuss generalizations of the problems from matroids to polymatroids, and from two to three or more matroids. We obtain a strongly polynomial-time algorithm for the recoverable robust polymatroid base problem with interval uncertainties.
1 Introduction
Matroids are fundamental and well-studied structures in combinatorial optimization. Recall that a matroid \(\mathcal {M}\) is a tuple \(\mathcal {M}=(E,\mathcal {F})\), consisting of a finite ground set E and a family of subsets \(\mathcal {F}\subseteq 2^E\), called the independent sets, satisfying (i) \(\emptyset \in \mathcal {F}\), (ii) if \(F\in \mathcal {F}\) and \(F'\subset F\), then \(F'\in \mathcal {F}\), and (iii) if \(F,F'\in \mathcal {F}\) with \(|F'|>|F|\), then there exists some element \(e\in F'\setminus {F}\) satisfying \(F\cup \{e\}\in \mathcal {F}\). As usual when dealing with matroids, we assume that a matroid is specified via an independence oracle that, given \(S\subseteq E\) as input, checks whether or not \(S\in \mathcal {F}\). Any inclusion-wise maximal set in the independence system \(\mathcal {F}\) is called a basis of \(\mathcal {F}\). Note that the set of bases \(\mathcal {B}=\mathcal {B}(\mathcal {M})\) of a matroid \(\mathcal {M}\) uniquely defines its independence system via \(\mathcal {F}(\mathcal {B})=\{F\subseteq E\mid F\subseteq B \text{ for } \text{ some } B\in \mathcal {B}\}.\) Because of their rich structure, matroids allow for various different characterizations (see, e.g., [18]). In particular, matroids can be characterized algorithmically as the only downward-closed structures for which a simple greedy algorithm is guaranteed to return a basis \(B\in \mathcal {B}\) of minimum cost \(c(B)=\sum _{e\in B} c(e)\) for any linear cost function \(c:E\rightarrow \mathbb {R}_+.\) Moreover, the problem of finding a min-cost common basis of two matroids \(\mathcal {M}_1=(E,\mathcal {B}_1)\) and \(\mathcal {M}_2=(E,\mathcal {B}_2)\) on the same ground set, or the problem of maximizing a linear function over the intersection \(\mathcal {F}_1\cap \mathcal {F}_2\) of two matroids \(\mathcal {M}_1=(E,\mathcal {F}_1)\) and \(\mathcal {M}_2=(E,\mathcal {F}_2)\), can be solved efficiently with a strongly polynomial primal-dual algorithm (cf. [5]).
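As an illustration of this greedy characterization, here is a minimal Python sketch, assuming the matroid is given by an independence oracle; the callable `is_independent` and the instance below are illustrative assumptions, not objects from the paper.

```python
# A minimal sketch of the matroid greedy algorithm, assuming an
# independence oracle `is_independent` (a hypothetical interface).
def greedy_min_cost_basis(ground_set, cost, is_independent):
    """Scan elements by increasing cost; keep an element whenever adding it
    preserves independence. For matroids this returns a min-cost basis."""
    basis = set()
    for e in sorted(ground_set, key=cost):
        if is_independent(basis | {e}):
            basis.add(e)
    return basis
```

For instance, on the uniform matroid of rank 2 over \(\{a,b,c\}\) (independence oracle `lambda S: len(S) <= 2`) with costs \(a\mapsto 3, b\mapsto 1, c\mapsto 2\), the scan keeps b and c and rejects a.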
Optimization over the intersection of three matroids, however, is easily seen to be NP-hard. See [14] for the most recent progress on approximation results for the latter problem.
Optimization over matroids, their generalization to polymatroids, or over the intersection of two (poly)matroids captures a wide range of interesting problems. In this paper, we introduce and study yet another variant of matroid optimization problems. Our problems can be seen as a variant or generalization of matroid intersection: we aim at minimizing the sum of two linear cost functions over two bases chosen from two matroids on the same ground set, with an additional cardinality constraint on the intersection of the two bases. As it turns out, the problems with lower and upper bound constraints are computationally equivalent to matroid intersection, while the problem with equality constraint seems to be strictly more general, and to lie somehow on the boundary of efficiently solvable combinatorial optimization problems. Interestingly, while the problems on matroids with lower, upper, or equality constraint on the intersection can be shown to be solvable in strongly polynomial time, the extension of the problems to polymatroids is solvable in strongly polynomial time for the lower bound constraint, but NP-hard for both the upper bound and the equality constraint.
The model Given two matroids \(\mathcal {M}_{1} = (E, \mathcal {B}_{1})\) and \(\mathcal {M}_{2} = (E, \mathcal {B}_{2})\) on a common ground set E with base sets \(\mathcal {B}_1\) and \(\mathcal {B}_2\), some integer \(k \in \mathbb {N}\), and two cost functions \(c_{1}, c_{2} :E \rightarrow \mathbb {R}\), we consider the optimization problem of finding a basis \(X \in \mathcal {B}_{1}\) and a basis \(Y \in \mathcal {B}_{2}\) minimizing \(c_{1}(X) + c_{2}(Y)\) subject to either a lower bound constraint \(|X \cap Y| \ge k\), an upper bound constraint \(|X \cap Y| \le k\), or an equality constraint \(|X \cap Y| = k\) on the size of the intersection of the two bases X and Y. Here, as usual, we write \(c_1(X)=\sum _{e\in X} c_1(e)\) and \(c_2(Y)=\sum _{e\in Y} c_2(e)\) to shorten notation. Let us denote the following problem by \((P_{= k})\):
$$\begin{aligned} \min \{c_1(X)+c_2(Y)\mid X\in \mathcal {B}_1,\ Y\in \mathcal {B}_2,\ |X\cap Y| = k\}. \end{aligned}$$
Accordingly, if \(|X \cap Y| = k\) is replaced by either the upper bound constraint \(|X \cap Y| \le k\) or the lower bound constraint \(|X \cap Y| \ge k\), the problem is called \((P_{\le k})\) or \((P_{\ge k})\), respectively. Clearly, it only makes sense to consider integers k in the range between 0 and \(K:=\min \{{\text {rk}}(\mathcal {M}_1), {\text {rk}}(\mathcal {M}_2)\}\), where \({\text {rk}}(\mathcal {M}_i)\) for \(i\in \{1,2\}\) is the rank of matroid \(\mathcal {M}_i\), i.e., the common cardinality of all bases of \(\mathcal {M}_i\), which is unique due to the augmentation property (iii). For details on matroids, we refer to [18].
Related literature on the Recoverable Robust Matroid Problem: Problem \((P_{\ge k})\) in the special case where \(\mathcal {B}_1=\mathcal {B}_2\) is known and well-studied under the name Recoverable Robust Matroid Problem under Interval Uncertainty Representation; see [1, 8, 9] and Sect. 4 below. For this special case of \((P_{\ge k})\), Büsing [1] presented an algorithm which is exponential in k. In 2017, Hradovich, Kasperski, and Zieliński [9] proved that the problem can be solved in polynomial time via an iterative relaxation algorithm and asked for a strongly polynomial time algorithm. Shortly after, the same authors presented in [8] a strongly polynomial time primal-dual algorithm for the special case of the problem on a graphical matroid. The question whether a strongly polynomial time algorithm for \((P_{\ge k})\) with \(\mathcal {B}_1=\mathcal {B}_2\) exists remained open.
In a recent preprint, which builds upon the preprint of this paper, Iwamasa and Takazawa [11] generalize the problems \((P_{\le k})\), \((P_{\ge k})\), \((P_{= k})\) to nonlinear convex cost functions and analyze them within the framework of discrete convex analysis [17]. Combining this with the ideas in this paper, they obtain generalizations of our results.
Our contribution. In Sect. 2, we show that both \((P_{\le k})\) and \((P_{\ge k})\) can be polynomially reduced to weighted matroid intersection. Since weighted matroid intersection can be solved in strongly polynomial time by a very elegant primal-dual algorithm [5], this answers the open question raised in [9] affirmatively.
As we can solve \((P_{\le k})\) and \((P_{\ge k})\) in strongly polynomial time via a combinatorial algorithm, the question arises whether or not the problem with equality constraint \((P_{=k})\) can be solved in strongly polynomial time as well, and whether there is an efficient parametric algorithm to compute the whole parametric curve with respect to k. In Sect. 3, we provide a strongly polynomial primal-dual algorithm that constructs an optimal solution for \((P_{=k})\) for every feasible value of k. This algorithm can also be used to solve an extension, called \((P^{\triangle })\), of the Recoverable Robust Matroid problem under Interval Uncertainty Representation. Our approach to solve \((P^{\triangle })\) uses the capability to solve \((P_{=k})\) for every k as an essential subroutine, see Sect. 4.
Then, in Sect. 5, we consider the generalization of problems \((P_{\le k})\), \((P_{\ge k})\), and \((P_{= k})\) from matroids to polymatroids, with lower bound, upper bound, or equality constraint, respectively, on the size of the meet \(x \wedge y :=\sum _{e\in E} \min \{x_e, y_e\}\). Interestingly, as it turns out, the generalization of \((P_{\ge k})\) can be solved in strongly polynomial time via reduction to a polymatroidal flow problem, while the generalizations of \((P_{\le k})\) and \((P_{= k})\) can be shown to be weakly NP-hard, already for uniform polymatroids.
Finally, in Sect. 6, we discuss the generalization of our matroid problems from two to three or more matroids. That is, we consider n matroids \(\mathcal {M}_i=(E,\mathcal {B}_i)\), \(i\in [n]\), and n linear cost functions \(c_i:E\rightarrow \mathbb {R}\), for \(i\in [n]\). The task is to find n bases \(X_i\in \mathcal {B}_i\), \(i\in [n]\), minimizing the cost \(\sum _{i=1}^n c_i(X_i)\) subject to a cardinality constraint on the size of the intersection \(|\bigcap _{i=1}^n X_i|\). When we are given an upper bound on the size of this intersection, we can find an optimal solution in polynomial time. However, when we have an equality or a lower bound constraint on the size of the intersection, the problem becomes strongly NP-hard.
2 Reduction of \((P_{\le k})\) and \((P_{\ge k})\) to weighted matroid intersection
We first note that \((P_{\le k})\) and \((P_{\ge k})\) are computationally equivalent. To see this, consider any instance \((\mathcal {M}_1, \mathcal {M}_2, k, c_1, c_2)\) of \((P_{\ge k})\), where \(\mathcal {M}_1=(E, \mathcal {B}_1)\), and \(\mathcal {M}_2=(E, \mathcal {B}_2)\) are two matroids on the same ground set E with base sets \(\mathcal {B}_1\) and \(\mathcal {B}_2\), respectively. Define \(c_2^*=-c_2\), \(k^*={\text {rk}}(\mathcal {M}_1)-k\), and let \(\mathcal {M}_2^*=(E, \mathcal {B}_2^*)\) with \(\mathcal {B}_2^*=\{E\setminus {Y}\mid Y\in \mathcal {B}_2\}\) be the dual matroid of \(\mathcal {M}_2\). Since

(i)
\(|X\cap Y| \ge k \iff |X\cap (E\setminus {Y})| =|X|-|X\cap Y| \le {\text {rk}}(\mathcal {M}_1)-k=k^*\), and

(ii)
\(c_1(X)+c_2(Y)=c_1(X)+c_2(E)-c_2(E\setminus {Y})= c_1(X)+c^*_2(E\setminus {Y})+c_2(E)\),
where \(c_2(E)\) is a constant, it follows that (X, Y) is a minimizer of \((P_{\ge k})\) for the given instance \((\mathcal {M}_1, \mathcal {M}_2, k, c_1, c_2)\) if and only if \((X, E\setminus {Y})\) is a minimizer of \((P_{\le k^*})\) for \((\mathcal {M}_1, \mathcal {M}^*_2, k^*, c_1, c^*_2)\). Similarly, it can be shown that any problem of type \((P_{\le k})\) polynomially reduces to an instance of type \((P_{\ge k^*})\).
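The dualization argument above can be sketched as a small instance transformation; the oracle-style interface below (`is_basis2` as a basis oracle for \(\mathcal {M}_2\)) is an illustrative assumption.

```python
def dualize_instance(E, rk1, is_basis2, c2, k):
    """Transform a (P_{>=k}) instance into the equivalent (P_{<=k*}) instance:
    replace M2 by its dual matroid M2* (whose bases are the complements of
    bases of M2), negate c2, and set k* = rk(M1) - k."""
    is_basis2_dual = lambda Y: is_basis2(E - Y)   # bases of the dual matroid M2*
    c2_star = {e: -c2[e] for e in E}              # c2* = -c2
    k_star = rk1 - k
    return is_basis2_dual, c2_star, k_star
```

A pair (X, Y) then has \(|X\cap Y|\ge k\) exactly when \((X, E\setminus Y)\) has intersection of size at most \(k^*\), with objective values differing only by the constant \(c_2(E)\).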
Note that, for a set S and an element e we abbreviate \(S \cup \{e\}\) by \(S+e\) and \(S \setminus \{e\}\) by \(Se\).
Theorem 1
Both problems, \((P_{\le k})\) and \((P_{\ge k})\), can be reduced to weighted matroid intersection.
Proof
By our observation above, it suffices to show that \((P_{\le k})\) can be reduced to weighted matroid intersection. Let \({\tilde{E}} := E_{1} \mathbin {\dot{\cup }}E_{2}\), where \(E_{1}, E_{2}\) are two copies of our original ground set E. We consider \(\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1}), \mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})\), two special types of matroids on this new ground set \({\tilde{E}}\), where \(\mathcal {F}_{1}, \mathcal {F}_{2}, {\tilde{\mathcal {F}}}_{1}, {\tilde{\mathcal {F}}}_{2}\) are the sets of independent sets of \(\mathcal {M}_{1}, \mathcal {M}_{2}, \mathcal {N}_{1}, \mathcal {N}_{2}\) respectively. Firstly, let \(\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1})\) be the direct sum of \(\mathcal {M}_{1}\) on \(E_{1}\) and \(\mathcal {M}_{2}\) on \(E_{2}\). That is, for \(A \subseteq {\tilde{E}}\) it holds that \(A \in {\tilde{\mathcal {F}}}_{1}\) if and only if \(A \cap E_{1} \in \mathcal {F}_{1}\) and \(A \cap E_{2} \in \mathcal {F}_{2}\).
The second matroid \(\mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})\) is defined as follows: we call \(e_{1} \in E_{1}\) and \(e_{2} \in E_{2}\) a pair if \(e_{1}\) and \(e_{2}\) are copies of the same element in E. If \(e_{1}, e_{2}\) are a pair, then we call \(e_{2}\) the sibling of \(e_{1}\) and vice versa. Then
$$\begin{aligned} {\tilde{\mathcal {F}}}_{2} :=\{A \subseteq {\tilde{E}} \mid A \text{ contains } \text{ at } \text{ most } k \text{ pairs}\}. \end{aligned}$$
For any \(A \subseteq {\tilde{E}}\), the sets \(X=A\cap E_1\) and \(Y=A\cap E_2\) form a feasible solution for \((P_{\le k})\) if and only if A is a basis in matroid \(\mathcal {N}_1\) and independent in matroid \(\mathcal {N}_2.\) Thus, \((P_{\le k})\) is equivalent to the weighted matroid intersection problem
$$\begin{aligned} \max \{w(A)\mid A\in {\tilde{\mathcal {F}}}_{1}\cap {\tilde{\mathcal {F}}}_{2}\} \end{aligned}$$
with weight function
$$\begin{aligned} w(e) := {\left\{ \begin{array}{ll} C - c_1(e) &{} \text{ if } e\in E_1,\\ C - c_2(e) &{} \text{ if } e\in E_2, \end{array}\right. } \end{aligned}$$
for some constant \(C>0\) chosen large enough to ensure that A is a basis in \(\mathcal {N}_1\). To see that \(\mathcal {N}_2\) is indeed a matroid, we first observe that \({\tilde{\mathcal {F}}}_{2}\) is nonempty and downward-closed (i.e., \(A\in {\tilde{\mathcal {F}}}_{2}\) and \(B\subset A\) imply \(B\in {\tilde{\mathcal {F}}}_{2}\)). To see that \({\tilde{\mathcal {F}}}_{2}\) satisfies the matroid-characterizing augmentation property,
take any two independent sets \(A,B\in {\tilde{\mathcal {F}}}_{2}\). If A cannot be augmented from B, i.e., if \(A+e\not \in {\tilde{\mathcal {F}}}_{2}\) for every \(e\in B\setminus {A}\), then A must contain exactly k pairs, and for each \(e\in B\setminus {A}\), the sibling of e must be contained in A. This implies \(|B| \le |A|\), i.e., \(\mathcal {N}_2\) is a matroid. \(\square \)
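On tiny instances, the reduction can be sanity-checked by brute force. The sketch below (an illustration, not an efficient algorithm; all names are ours) enumerates subsets of the doubled ground set and keeps exactly those that are bases of \(\mathcal {N}_1\) and independent in \(\mathcal {N}_2\).

```python
from itertools import combinations

def pairs_via_reduction(E, bases1, bases2, k):
    """Enumerate A on the doubled ground set E1 ∪̇ E2 (copies tagged 1 and 2),
    keep A whose restriction to E1 is a basis of M1 and to E2 a basis of M2
    (i.e., A is a basis of the direct sum N1) and which contains at most k
    sibling pairs (i.e., A is independent in N2); read off (X, Y)."""
    tilde = [(e, i) for e in E for i in (1, 2)]
    out = set()
    for r in range(len(tilde) + 1):
        for A in combinations(tilde, r):
            X = frozenset(e for e, i in A if i == 1)
            Y = frozenset(e for e, i in A if i == 2)
            if X in bases1 and Y in bases2 and len(X & Y) <= k:
                out.add((X, Y))
    return out
```

For two copies of the uniform matroid of rank 2 on a three-element set and \(k=1\), this recovers exactly the pairs (X, Y) with \(|X\cap Y|\le 1\), matching direct enumeration.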
Weighted matroid intersection is known to be solvable within strongly polynomial time (e.g., see Frank [5]). Hence, both \((P_{\le k})\) and \((P_{\ge k})\) can be solved in strongly polynomial time.
The same result can be obtained by a reduction to independent matching (see Appendix A), which in bipartite graphs is known to be solvable within strongly polynomial time as well (see [10]).
3 A strongly polynomial primaldual algorithm for \((P_{=k})\)
We saw in the previous section that both problems, \((P_{\le k})\) and \((P_{\ge k})\), can be solved in strongly polynomial time via a weighted matroid intersection algorithm. This leads to the question whether we can solve the problem \((P_{=k})\) with equality constraint on the size of the intersection efficiently as well, and whether a parametric algorithm exists, which computes the whole parametric curve with respect to k.
3.1 The algorithm
In this section, we describe a primaldual strongly polynomial algorithm for \((P_{=k})\). Our algorithm can be seen as a generalization of the algorithm presented by Hradovich et al. in [9]. However, the analysis of our algorithm turns out to be much simpler than the one in [9].
Let us consider the following piecewise linear concave curve
$$\begin{aligned} {\text {val}}(\lambda ) :=\min \{c_1(X)+c_2(Y)-\lambda |X\cap Y| \mid X\in \mathcal {B}_1,\ Y\in \mathcal {B}_2\}, \end{aligned}$$
which depends on the parameter \(\lambda \ge 0\).
Note that \({\text {val}}(\lambda )+k \lambda \) is the Lagrangian relaxation of problem \((P_{=k})\). Observe that any base pair \((X,Y)\in \mathcal {B}_1\times \mathcal {B}_2\) determines a line \(L_{(X,Y)}(\lambda )\) that hits the vertical axis at \(c_1(X)+c_2(Y)\) and has slope \(-|X\cap Y|\). Thus, val\((\lambda )\) is the lower envelope of all such lines. It follows that every base pair \((X,Y)\in \mathcal {B}_1\times \mathcal {B}_2\) whose line intersects the curve val\((\lambda )\) in either a segment or a breakpoint, and which satisfies \(|X\cap Y|=k\), is a minimizer of \((P_{=k})\).
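For small instances, \({\text {val}}(\lambda )\) can be evaluated by brute force as the lower envelope of the lines \(L_{(X,Y)}(\lambda ) = c_1(X)+c_2(Y)-\lambda |X\cap Y|\); a toy sketch:

```python
def val(lam, base_pairs, c1, c2):
    """Lower envelope of the lines L_{(X,Y)}(λ) = c1(X) + c2(Y) - λ·|X ∩ Y|,
    evaluated over an explicit list of base pairs (a toy sketch)."""
    return min(sum(c1[e] for e in X) + sum(c2[e] for e in Y) - lam * len(X & Y)
               for X, Y in base_pairs)
```

For two base pairs with intersection sizes 1 and 0, the envelope is attained by the cheaper pair for small \(\lambda \) and switches to the pair with larger intersection (steeper slope) once \(\lambda \) passes the breakpoint.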
Sketch of our algorithm. We first solve the problem
$$\begin{aligned} \min \{c_1(X)+c_2(Y)\mid X\in \mathcal {B}_1,\ Y\in \mathcal {B}_2\} \end{aligned}$$
without any constraint on the intersection. Note that this problem can be solved with a matroid greedy algorithm. Let \(({\bar{X}}, {\bar{Y}})\) be an optimal solution of this problem.

1.
If \(|{\bar{X}}\cap {\bar{Y}}|=k\), we are done, as \(({\bar{X}}, {\bar{Y}})\) is optimal for \((P_{=k})\).

2.
Else, if \(|{\bar{X}}\cap {\bar{Y}}|=k'<k\), our algorithm starts with the optimal solution \(({\bar{X}}, {\bar{Y}})\) for \((P_{=k'})\), and iteratively increases \(k'\) by one until \(k'=k\). Our algorithm maintains as invariant an optimal solution \(({\bar{X}}, {\bar{Y}})\) for the current problem \((P_{=k'})\), together with some dual optimal solution \(({\bar{\alpha }}, {\bar{\beta }})\) satisfying the optimality conditions, stated in Theorem 2 below, for the current breakpoint \({\bar{\lambda }}\). Details of the algorithm are described below.

3.
Else, if \(|{\bar{X}}\cap {\bar{Y}}|>k\), we instead consider an instance of \((P_{=k^*})\) for \(k^*={\text {rk}}(\mathcal {M}_1)-k\), costs \(c_1\) and \(c_2^*=-c_2\), and the two matroids \(\mathcal {M}_1=(E, \mathcal {B}_1)\) and \(\mathcal {M}_2^*=(E, \mathcal {B}_2^*)\). As seen above, an optimal solution \((X, E\setminus {Y})\) of problem \((P_{=k^*})\) corresponds to an optimal solution (X, Y) of our original problem \((P_{=k})\), and vice versa. Moreover, \(|{\bar{X}}\cap {\bar{Y}}|>k\) for the initial base pair \(({\bar{X}}, {\bar{Y}})\) implies that \(|{\bar{X}}\cap (E\setminus {{\bar{Y}}})|=|{\bar{X}}|-|{\bar{X}}\cap {\bar{Y}}|<k^*\). Thus, starting with the initial feasible solution \(({\bar{X}}, E\setminus {{\bar{Y}}})\) for \((P_{=k^*})\), we can iteratively increase \(|{\bar{X}}\cap (E\setminus {{\bar{Y}}})|\) until \(|{\bar{X}}\cap (E\setminus {{\bar{Y}}})|=k^*,\) as described in step 2.
Note that a slight modification of the algorithm allows to compute the optimal solutions \(({\bar{X}}_k, {\bar{Y}}_k)\) for all \(k\in \{0, \ldots , K\}\) in only two runs: run the algorithm for \(k=0\) and for \(k=K\).
An optimality condition. The following optimality condition turns out to be crucial for the design of our algorithm.
Theorem 2
(Sufficient pair optimality conditions) For fixed \(\lambda \ge 0\), base pair \((X,Y)\in \mathcal {B}_1\times \mathcal {B}_2\) is a minimizer of \({\text {val}}(\lambda )\) if there exist \(\alpha ,\beta \in \mathbb {R}^{E}_+\) such that

(i)
X is a min cost basis for \(c_{1} - \alpha \), and Y is a min cost basis for \(c_{2} - \beta \);

(ii)
\(\alpha _{e} = 0\) for \(e \in X\setminus {Y}\), and \(\beta _{e} = 0\) for \(e \in Y\setminus {X}\);

(iii)
\(\alpha _{e} + \beta _{e} = \lambda \) for each \( e \in E\).
The proof of Theorem 2 can be found in Appendix B.
Construction of the auxiliary digraph. Given a tuple \((X,Y,\alpha ,\beta , \lambda )\) satisfying the optimality conditions stated in Theorem 2, we construct the auxiliary digraph \(D=D(X,Y,\alpha ,\beta )\) with redblue colored arcs as follows (see Fig. 2):

one vertex for each element in E;

a red arc (e, f) if \(e\not \in X\), \(X-f+e \in \mathcal {B}_1\), and \(c_1(e)-\alpha _e=c_1(f)-\alpha _f\); and

a blue arc (f, g) if \(g\not \in Y\), \(Y-f+g\in \mathcal {B}_2\), and \(c_2(g)-\beta _g=c_2(f)-\beta _f.\)
Note: Although not depicted in Fig. 2, there might well be blue arcs going from \(Y\setminus {X}\) to either \(E\setminus {(X\cup Y)}\) or \(X\setminus {Y}\), or red arcs going from \(Y\setminus {X}\) to \(X\setminus {Y}\).
Observe that any red arc (e, f) represents a move in \(\mathcal {B}_1\) from X to \( X \cup \{e\} \setminus \{f\}\in \mathcal {B}_1\). To shorten notation, we write \(X \oplus (e, f) := X \cup \{e\} \setminus \{f\}\). Analogously, any blue arc (e, f) represents a move from Y to \(Y \cup \{f\} \setminus \{e\}\in \mathcal {B}_2\). As above, we write \(Y \oplus (e, f) := Y \cup \{f\} \setminus \{e\}\). Given a red-blue alternating path P in D, we denote by \(X' = X \oplus P\) the set obtained from X by performing all moves corresponding to red arcs, and, accordingly, by \(Y' = Y \oplus P\) the set obtained from Y by performing all moves corresponding to blue arcs.
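A brute-force sketch of the digraph construction, assuming basis oracles for both matroids; the function and oracle names are illustrative.

```python
def build_auxiliary_digraph(E, X, Y, c1, c2, alpha, beta, is_basis1, is_basis2):
    """Arcs of D(X, Y, α, β): a red arc (e, f) encodes the exchange X - f + e
    in M1 at equal reduced cost c1 - α; a blue arc (f, g) encodes the exchange
    Y - f + g in M2 at equal reduced cost c2 - β."""
    red = {(e, f) for e in E - X for f in X
           if is_basis1((X - {f}) | {e}) and c1[e] - alpha[e] == c1[f] - alpha[f]}
    blue = {(f, g) for g in E - Y for f in Y
            if is_basis2((Y - {f}) | {g}) and c2[g] - beta[g] == c2[f] - beta[f]}
    return red, blue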
A. Frank proved the following result about sequences of moves, to show correctness of the weighted matroid intersection algorithm.
Lemma 1
(Frank [5, 12, Lemma 13.35]) Let \(\mathcal {M}= (E, \mathcal {F})\) be a matroid, \(c :E \rightarrow \mathbb {R}\) and \(X \in \mathcal {F}\). Let \(x_{1}, \dots , x_{l} \in X\) and \(y_{1}, \dots , y_{l} \notin X\) with

1.
\(X + y_{j} - x_{j} \in \mathcal {F}\) and \(c(x_{j}) = c(y_{j})\) for \(j=1,\dots ,l\), and

2.
\(X + y_{j} - x_{i} \notin \mathcal {F}\) or \(c(x_{i}) > c(y_{j})\) for \(1 \le i,j \le l\) with \(i \ne j\).
Then \((X \setminus \{x_{1}, \dots , x_{l}\}) \cup \{y_{1}, \dots , y_{l}\} \in \mathcal {F}\).
This suggests the following definition.
Augmenting paths. We call any shortest (w.r.t. the number of arcs) red-blue alternating path linking a vertex in \(Y\setminus {X}\) to a vertex in \(X\setminus {Y}\) an augmenting path.
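A shortest such path can be found by breadth-first search over states (vertex, colour of the next arc); the sketch below additionally assumes, as a modelling choice on our part, that an augmenting path starts with a red arc and then strictly alternates colours.

```python
from collections import deque

def shortest_augmenting_path(red, blue, sources, targets):
    """BFS sketch for a shortest red-blue alternating path from Y \\ X
    (`sources`) to X \\ Y (`targets`); arcs are given as sets of pairs."""
    succ = {'red': {}, 'blue': {}}
    for colour, arcs in (('red', red), ('blue', blue)):
        for u, v in arcs:
            succ[colour].setdefault(u, []).append(v)
    queue = deque((s, 'red', (s,)) for s in sources)
    seen = {(s, 'red') for s in sources}
    while queue:
        u, colour, path = queue.popleft()
        for v in succ[colour].get(u, []):
            new_path = path + (v,)
            if v in targets:
                return new_path                  # BFS level order => shortest
            nxt = 'blue' if colour == 'red' else 'red'
            if (v, nxt) not in seen:
                seen.add((v, nxt))
                queue.append((v, nxt, new_path))
    return None                                  # no augmenting path exists
```

The alternation is tracked in the BFS state, so a vertex may be visited once per expected arc colour.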
Lemma 2
If P is an augmenting path in D, then

\(X'=X\oplus P\) is a min cost basis in \(\mathcal {B}_1\) w.r.t. costs \(c_1-\alpha \),

\(Y'=Y\oplus P\) is a min cost basis in \(\mathcal {B}_2\) w.r.t. costs \(c_2-\beta \), and

\(|X'\cap Y'|=|X\cap Y|+1\).
Proof
By Lemma 1, we know that \(X'=X\oplus P\) is a min cost basis in \(\mathcal {B}_1\) w.r.t. costs \(c_1-\alpha \), and \(Y'=Y\oplus P\) is a min cost basis in \(\mathcal {B}_2\) w.r.t. costs \(c_2-\beta \). The fact that the intersection increases by one follows directly from the construction of the digraph. \(\square \)
Primal update: Given \((X,Y,\alpha ,\beta , \lambda )\) satisfying the optimality conditions and the associated digraph D, we update (X, Y) to \((X', Y')\) with \(X'=X\oplus P\), and \(Y'=Y\oplus P\), as long as some augmenting path P exists in D. It follows by construction and Lemma 1 that in each iteration \((X',Y',\alpha , \beta , \lambda )\) satisfies the optimality conditions and that \(|X'\cap Y'|=|X\cap Y|+1\).
Dual update: If D admits no augmenting path and \(|X\cap Y| < k\), let R denote the set of vertices/elements which are reachable from \(Y\setminus {X}\) on some red-blue alternating path. Note that \(Y\setminus {X}\subseteq R\) and \((X\setminus {Y})\cap R=\emptyset \). For each \(e\in E\), define the residual costs
$$\begin{aligned} {\bar{c}}_1(e) := c_1(e)-\alpha _e \quad \text{ and } \quad {\bar{c}}_2(e) := c_2(e)-\beta _e. \end{aligned}$$
Note that, by optimality of X and Y w.r.t. \({\bar{c}}_1\) and \({\bar{c}}_2\), respectively, we have that \({\bar{c}}_1(e)\ge {\bar{c}}_1(f)\) whenever \(X-f+e\in \mathcal {B}_1\), and \({\bar{c}}_2(e)\ge {\bar{c}}_2(f)\) whenever \(Y-f+e\in \mathcal {B}_2\).
We compute a “step length” \(\delta >0\) as follows: compute \(\delta _1\) and \(\delta _2\) via
$$\begin{aligned} \delta _1&:= \min \{{\bar{c}}_1(e)-{\bar{c}}_1(f) \mid e\in R\setminus {X},\ f\in X\setminus {R},\ X-f+e\in \mathcal {B}_1\},\\ \delta _2&:= \min \{{\bar{c}}_2(g)-{\bar{c}}_2(f) \mid g\in E\setminus {(R\cup Y)},\ f\in Y\cap R,\ Y-f+g\in \mathcal {B}_2\}. \end{aligned}$$
It is possible that the sets over which the minima are calculated are empty. In these cases we define the corresponding minimum to be \(\infty \). In the special case where \(\mathcal {M}_{1} = \mathcal {M}_{2}\) this case cannot occur.
Since neither a red nor a blue arc goes from R to \(E\setminus {R}\), we know that both \(\delta _1\) and \(\delta _2\) are strictly positive, so that \(\delta :=\min \{\delta _1, \delta _2\}>0\). Now, update
$$\begin{aligned} \alpha '_e := {\left\{ \begin{array}{ll} \alpha _e +\delta &{} \text{ if } e\in R,\\ \alpha _e &{} \text{ otherwise, } \end{array}\right. } \qquad \beta '_e := {\left\{ \begin{array}{ll} \beta _e &{} \text{ if } e\in R,\\ \beta _e +\delta &{} \text{ otherwise. } \end{array}\right. } \end{aligned}$$
Lemma 3
\((X,Y,\alpha ', \beta ')\) satisfies the optimality conditions for \(\lambda '=\lambda +\delta \).
Proof
By construction, we have for each \(e\in E\)

\(\alpha _e'+\beta _e'=\alpha _e+\beta _e+\delta = \lambda +\delta =\lambda '\).

\(\alpha _e'=0\) for \(e\in X\setminus {Y}\), since \(\alpha _e=0\) and \(e \notin R\) (as \((X\setminus {Y})\cap R=\emptyset \)).

\(\beta '_e=0\) for \(e \in Y \setminus {X}\), since \(\beta _e=0\) and \((Y\setminus {X})\subseteq R\).
Moreover, by construction and the choice of \(\delta \), we observe that X and Y are optimal for \(c_1-\alpha '\) and \(c_2-\beta '\), since

\(c_1(e)-\alpha '_e\ge c_1(f)-\alpha '_f\) whenever \(X-f+e \in \mathcal {B}_1\),

\(c_2(g)-\beta '_g\ge c_2(f)-\beta '_f\) whenever \(Y-f+g \in \mathcal {B}_2\).
To see this, suppose for the sake of contradiction that \(c_1(e)-\alpha '_e < c_1(f)-\alpha '_f\) for some pair \(\{e,f\}\) with \(e\not \in X\), \(f\in X\) and \(X-f+e \in \mathcal {B}_1\). Then \(e\in R\), \(f\not \in R\), \(\alpha '_e =\alpha _e+\delta \), and \(\alpha '_f=\alpha _f,\) implying \(\delta > c_1(e) - \alpha _e - c_1(f) + \alpha _f ={\bar{c}}_1(e)-{\bar{c}}_{1}(f),\) in contradiction to our choice of \(\delta \). Similarly, it can be shown that Y is optimal w.r.t. \(c_2-\beta '\). Thus, \((X,Y,\alpha ',\beta ')\) satisfies the optimality conditions for \(\lambda '=\lambda +\delta \). \(\square \)
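The dual update step itself is a one-line transformation; a sketch with illustrative names:

```python
def dual_update(E, R, alpha, beta, delta):
    """One dual step: raise α by δ on the reachable set R and β by δ off R,
    so α_e + β_e grows by exactly δ for every e, matching λ' = λ + δ."""
    alpha2 = {e: alpha[e] + (delta if e in R else 0) for e in E}
    beta2 = {e: beta[e] + (0 if e in R else delta) for e in E}
    return alpha2, beta2
```

Since every element lies either in R or outside it, the invariant \(\alpha _e+\beta _e=\lambda \) is preserved with the new value \(\lambda '\).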
J. Edmonds proved the following feasibility condition for the unweighted matroid intersection problem.
Lemma 4
(Edmonds [4]) Consider the digraph \({\tilde{D}} = D(X,Y, 0, 0)\) for cost functions \(c_1 = c_2 = 0\) (the unweighted case). If there exists no augmenting path in \({\tilde{D}}\), then \(|X \cap Y|\) is maximum among all \(X \in \mathcal {B}_1, Y \in \mathcal {B}_2\).
Based on this result we show the following feasibility condition for our problem.
Lemma 5
If \(\delta = \infty \) and \(|X \cap Y| < k\), the given instance is infeasible.
Proof
This follows from the fact that \(\delta = \infty \) if and only if \((X \setminus Y) \cap R = \emptyset \), even if we construct the graph \(D'\) without the condition that red arcs satisfy \(c_{1}(e) - \alpha _{e} = c_{1}(f) - \alpha _{f}\) and blue arcs satisfy \(c_{2}(g) - \beta _{g} = c_{2}(f) - \beta _{f}\).
Non-existence of such a path implies infeasibility of the instance by Lemma 4. \(\square \)
Lemma 6
If \((X,Y,\alpha , \beta , \lambda )\) satisfies the optimality conditions and \(\delta < \infty \), a primal update can be performed after at most \(|E|\) dual updates.
Proof
With each dual update, at least one more vertex enters the set \(R'\) of reachable elements in digraph \(D'=D(X,Y,\alpha ', \beta ')\). \(\square \)
The primaldual algorithm. Summarizing, we obtain the following algorithm.
Input: \(\mathcal {M}_1=(E,\mathcal {B}_1)\), \(\mathcal {M}_2=(E,\mathcal {B}_2)\), \(c_1, c_2:E\rightarrow \mathbb {R}\), \(k\in \mathbb {N}\)
Output: Optimal solution (X, Y) of \((P_{=k})\)

1.
Compute an optimal solution (X, Y) of
$$\begin{aligned} \min \{c_1(X)+c_2(Y)\mid X\in \mathcal {B}_1, Y\in \mathcal {B}_2\}. \end{aligned}$$ 
2.
If \(|X\cap Y|=k\), return (X, Y) as an optimal solution of \((P_{=k})\).

3.
Else, if \(|X\cap Y|>k\), run the algorithm on \(\mathcal {M}_1\), \(\mathcal {M}^*_2\), \(c_1\), \(c_2^*:=-c_2\), and \(k^*:={\text {rk}}(\mathcal {M}_1)-k\).

4.
Else, set \(\lambda :=0\), \(\alpha := 0, \beta := 0\).

5.
While \(|X\cap Y|<k\), do:

Construct the auxiliary digraph D based on \((X,Y,\lambda , \alpha , \beta )\).

If there exists an augmenting path P in D, update primal
$$\begin{aligned} X':=X\oplus P, \quad Y':=Y\oplus P. \end{aligned}$$ 
Else, compute step length \(\delta \) as described above.
If \(\delta = \infty \), terminate with the message “infeasible instance”.
Else, set \(\lambda :=\lambda +\delta \) and update dual:
$$\begin{aligned} \alpha _e := {\left\{ \begin{array}{ll} \alpha _e +\delta &{} \text{ if } e \text{ reachable, }\\ \alpha _e &{} \text{ otherwise. } \end{array}\right. } \quad \beta _e := {\left\{ \begin{array}{ll} \beta _e &{} \text{ if } e \text{ reachable, }\\ \beta _e +\delta &{} \text{ otherwise. } \end{array}\right. } \end{aligned}$$ 
Iterate with \((X,Y,\lambda , \alpha , \beta )\)


6.
Return (X, Y).
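For testing an implementation of the algorithm above on tiny instances, a brute-force reference solver for \((P_{=k})\) is handy; this sketch is ours and makes no claim of efficiency.

```python
def solve_eq_brute_force(bases1, bases2, c1, c2, k):
    """Brute-force reference for (P_{=k}): scan all base pairs with
    |X ∩ Y| = k and return (cost, X, Y) of a cheapest one, or None."""
    best = None
    for X in bases1:
        for Y in bases2:
            if len(X & Y) == k:
                cost = sum(c1[e] for e in X) + sum(c2[e] for e in Y)
                if best is None or cost < best[0]:
                    best = (cost, X, Y)
    return best
```

Comparing its optimal value against the primal-dual algorithm's output over random small matroids is a simple correctness check.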
As a consequence of our considerations, the following theorem follows.
Theorem 3
The algorithm above solves \((P_{=k})\) using at most \(k \cdot |E|\) primal or dual augmentations. Moreover, the entire sequence of optimal solutions \((X_k,Y_k)\) for all \((P_{=k})\) with \(k=0,1,\dots ,K\) can be computed within \(|E|^{2}\) primal or dual augmentations.
Proof
By running the algorithm for both \(k=0\) and \(k=K\), where we set \(K:=\min \{{\text {rk}}(\mathcal {M}_1), {\text {rk}}(\mathcal {M}_2)\}\), we obtain optimal bases \((X_k,Y_k)\) for \((P_{=k})\) for all \(k=0,1,\dots ,K\) within \(|E|^{2}\) primal or dual augmentations. \(\square \)
Remark 1
Note that an alternative method to solve \((P_{=k})\) for a given fixed k is to compute solutions \((X_1, Y_1)\) and \((X_2, Y_2)\) for \((P_{\le k})\) and \((P_{\ge k})\), respectively, which can be done by running a matroid intersection algorithm twice. If one of those problems is infeasible, it directly follows that \((P_{=k})\) is infeasible. Otherwise, it holds that \(|X_1 \cap Y_1| \le k \le |X_2 \cap Y_2|\). We can prove that both \(X_1, X_2\) are minimum cost bases with respect to \(c_1\) and both \(Y_1, Y_2\) are minimum cost bases with respect to \(c_2\). Using a pivot strategy similar to the one in the algorithm above, one can then pivot from \(X_1\) to \(X_2\) and from \(Y_2\) to \(Y_1\) while maintaining this property, and obtain a pair of bases (X, Y) such that \(|X \cap Y| = k\).
Note that this approach does not show how to directly obtain the whole parametric curve with respect to k.
Comparison to the results in [11] We would also like to point out that in a recent preprint, which is partially based on the preprint of the current paper, Iwamasa and Takazawa [11] show that problem \((P_{=k})\) can also be solved by combining the ideas described in Appendix A for solving \((P_{\le k})\) and \((P_{\ge k})\) with Murota’s theory of valuated matroid intersection, as described in [15, 16]. We note that the approach in [11] could also be extended to obtain the whole parametric curve in a similar fashion as we do here. In addition, Iwamasa and Takazawa in [11] were able to generalize our results on polymatroids, as described in Sect. 5, using similar techniques as the ones described in Sect. 5.
One of the main contributions of [11] is the structured analysis and modelling (using DCA) of matroid base problems and their generalizations with intersection constraints. They also present some relations of the problems studied in this paper with the Combinatorial Optimization Problem with Interaction Costs (COPIC) studied in [2, 13].
The algorithm to solve \((P_{=k})\) as stated in [11] is similar to our algorithm stated above. The main difference is that the algorithm in [11] is based on a reduction to the valuated independent assignment problem, called \(\text {VIAP}(k)\), and the use of the existing augmenting path algorithm described in [15, 16].
There are two main differences between the algorithm in [11] and the algorithm described in this section. Firstly, the optimality conditions differ slightly. Secondly, the algorithm in [11] uses a different graph for the primal updating procedure and performs the dual updates simultaneously with the primal updates.
4 The recoverable robust matroid basis problem: an application
There is a strong connection between the model described in this paper and the recoverable robust matroid basis problem (RecRobMatroid) studied in [1, 8, 9]. In RecRobMatroid, we are given a matroid \(\mathcal {M}= (E, \mathcal {B})\) on a ground set E with base set \(\mathcal {B}\), a recoverability parameter \(k \in \mathbb {N}\), a first-stage cost function \(c_{1}\), and an uncertainty set \(\mathcal {U}\) containing different scenarios S, where each scenario \(S \in \mathcal {U}\) specifies possible second-stage costs \((c_S(e))_{e \in E}\).
RecRobMatroid consists of two stages: in the first stage, one needs to pick a basis \(X \in \mathcal {B}\). Then, after the actual scenario \(S \in \mathcal {U}\) is revealed, there is a second “recovery” stage, where a second basis Y is picked with the goal to minimize the worst-case cost \(c_{1}(X) + c_{S}(Y)\) under the constraint that Y differs in at most k elements from the original basis X. That is, we require that Y satisfies \(|X\triangle Y| \le k\) or, equivalently, that \(|X \cap Y| \ge {\text {rk}}(\mathcal {M}) - k\). Here, as usual, \(X\triangle Y=(X\setminus {Y}) \cup (Y\setminus {X})\). The recoverable robust matroid basis problem can be written as follows:
$$\begin{aligned} \min _{X\in \mathcal {B}} \Big ( c_1(X) + \max _{S\in \mathcal {U}}\ \min _{\{Y\in \mathcal {B}\,:\, |X\cap Y| \ge {\text {rk}}(\mathcal {M})-k\}} c_S(Y) \Big ). \end{aligned}$$
The main motivation for this model is a decision problem with two consecutive stages. The costs for the first stage are known, but the costs for the second stage are uncertain. In addition, due to environmental constraints, the changes between the first and the second stage solution are bounded by the recoverability parameter k. For instance, in the case of transversal matroids the two stages could correspond to the morning and the afternoon shift, and one has to select the sets of workers able to perform the given tasks for both shifts. The recoverability parameter then controls how many of the selected workers have to work both shifts. The case of graphic matroids, on the other hand, corresponds to selecting two consecutive sets of communication links in a network while minimizing usage costs in both stages, with an additional bound on the number of links one can change between the two stages.
There are several ways in which the uncertainty set \(\mathcal {U}\) for the costs of the second stage can be represented. One popular way is the interval uncertainty representation. In this representation, we are given functions \(c': E \rightarrow \mathbb {R}\), \(d: E \rightarrow \mathbb {R}_+\) and assume that the uncertainty set \(\mathcal {U}\) can be represented by a set of \(|E|\) intervals:
In the worst-case scenario \({\bar{S}}\) we have \(c_{{\bar{S}}}(e) =c'(e) + d(e)\) for all \(e \in E\). When we define \(c_2(e):= c_{{\bar{S}}}(e)\) and \(k' = {\text {rk}}(\mathcal {M}) - k\), it is clear that the RecRobMatroid problem under interval uncertainty representation (RecRobMatroidInt, for short) is a special case of \((P_{\ge k'})\), in which \(\mathcal {B}_1=\mathcal {B}_2\).
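As a quick illustration, this reduction can be sketched in a few lines; the function and variable names below are ours, not from the paper.

```python
# Sketch of the reduction from RecRobMatroidInt to (P_{>= k'}):
# in the worst-case scenario every second-stage cost sits at the upper
# end of its interval, so c2(e) = c'(e) + d(e), and the intersection
# bound becomes k' = rk(M) - k.  All names here are illustrative.

def reduce_recrob_to_pk(c_prime, d, rank, k):
    """Return (c2, k') for the equivalent instance of (P_{>= k'})."""
    c2 = {e: c_prime[e] + d[e] for e in c_prime}
    return c2, rank - k

c_prime = {"a": 1.0, "b": 2.0, "c": 0.5}   # hypothetical interval lower ends
d = {"a": 0.5, "b": 0.0, "c": 1.5}         # hypothetical interval widths
c2, k_prime = reduce_recrob_to_pk(c_prime, d, rank=2, k=1)
assert c2 == {"a": 1.5, "b": 2.0, "c": 2.0} and k_prime == 1
```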
Büsing [1] presented an algorithm for RecRobMatroidInt which is exponential in k. In 2017, Hradovich, Kasperski, and Zieliński [9] proved that RecRobMatroidInt can be solved in polynomial time via an iterative relaxation algorithm and asked for a strongly polynomial time algorithm. Shortly after, the same authors presented in [8] a strongly polynomial time primal-dual algorithm for the special case of RecRobMatroidInt on a graphic matroid. The question whether a strongly polynomial time algorithm for RecRobMatroidInt on general matroids exists remained open.
Furthermore, Hradovich, Kasperski, and Zieliński showed that an algorithm for \((P_{\le k})\) can be used to obtain an approximation algorithm for the recoverable robust matroid basis problem with more general uncertainty sets \(\mathcal {U}\). For details see [8, Theorem 5].
Theorem 3 directly implies the following result as a special case.
Corollary 1
RecRobMatroidInt can be solved in strongly polynomial time. In particular, we can compute solutions for all possible choices of the recoverability parameter k using just one run of the primal-dual algorithm.
Knowing solutions for all possible choices of the recoverability parameter k is especially useful if a decision maker has to balance cost against the flexibility to recover. This viewpoint inspires the first of the following variants of RecRobMatroidInt.
Two variants of RecRobMatroidInt. Let us consider two generalizations or variants of RecRobMatroidInt. First, instead of setting a bound on the size of the symmetric difference \(X \triangle Y\) of two bases X, Y, one could alternatively set a penalty on the size of the recovery. Let \(C: \mathbb {N}\rightarrow \mathbb {R}\) be a penalty function which determines the penalty to be paid as a function of the size of the symmetric difference \(|X \triangle Y|\). This leads to the following problem, which we denote by \((P^{\triangle })\).
Clearly, \((P^{\triangle })\) is equivalent to RecRobMatroidInt if \(C(|X\triangle Y|)\) is equal to zero as long as \(|X\triangle Y|\le k\), and \(C(|X\triangle Y|)=\infty \) otherwise. As it turns out, our primal-dual algorithm for solving \((P_{=k})\) can be used to efficiently solve \((P^{\triangle })\).
Corollary 2
Problem \((P^{\triangle })\) can be solved in strongly polynomial time.
Proof
By Theorem 3, optimal solutions \((X_k,Y_k)\) can be computed efficiently for all problems \((P_{=k})_{k\in \{0,1, \ldots , K\}}\) within \(|E|^{2}\) primal or dual augmentations of the algorithm above. It follows that the optimal solution to \((P^{\triangle })\) is a minimizer of \(c_1(X_k)+c_2(Y_k)+C(k^{\triangle })\) over \(k \in \{0,1,\ldots ,K\}\), where \(k^{\triangle } = {\text {rk}}(\mathcal {M}_1) + {\text {rk}}(\mathcal {M}_2) - 2k\). \(\square \)
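The final minimization step in this proof can be sketched as follows; the per-k optimal costs are assumed to be given by one run of the primal-dual algorithm, and all names are ours, not from the paper.

```python
# A minimal sketch of the last step in the proof of Corollary 2,
# assuming cost[k] = c1(X_k) + c2(Y_k) is available for every
# k in {0, ..., K}.  C is the penalty function on |X triangle Y|.

def best_recovery_size(cost, C, rk1, rk2):
    """Return the k minimizing cost[k] + C(rk1 + rk2 - 2k)."""
    return min(cost, key=lambda k: cost[k] + C(rk1 + rk2 - 2 * k))

# Example with a hypothetical linear penalty on the symmetric difference.
cost = {0: 10, 1: 8, 2: 7, 3: 9}           # invented per-k optimal costs
k_star = best_recovery_size(cost, C=lambda d: 2 * d, rk1=3, rk2=3)
assert k_star == 3                          # 9 + C(0) = 9 is the minimum
```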
Yet another variant of RecRobMatroidInt or the more general problem \((P^{\triangle })\) would be to aim for the minimum expected second stage cost, instead of the minimum worstcase second stage cost. Suppose, with respect to a given probability distribution per element \(e\in E\), the expected second stage cost on element \(e\in E\) is \(\mathbb {E}(c_S(e))\). By the linearity of expectation, to solve these problems, we could simply solve problem \((P^{\triangle })\) with \(c_2(e):=\mathbb {E}(c_S(e))\).
5 A generalization to polymatroid base polytopes
Recall that a function \(f:2^E \rightarrow \mathbb {R}\) is called submodular if the inequality \(f(U)+f(V) \ge f(U \cup V) + f(U \cap V)\) holds for all \(U, V \subseteq E\). Function f is called monotone if \(f(U) \le f(V)\) for all \(U \subseteq V\), and normalized if \(f(\emptyset ) = 0\). Given a submodular, monotone and normalized function f, the pair (E, f) is called a polymatroid, and f is called the rank function of the polymatroid (E, f). The associated polymatroid base polytope is defined as \(\mathcal {B}(f) := \{x \in \mathbb {R}^{E}_{\ge 0} \mid x(U) \le f(U) \text{ for } \text{ all } U \subseteq E,\ x(E) = f(E)\},\)
where, as usual, \(x(U) := \sum _{e \in U} x_e\) for all \(U\subseteq E\). We refer to the book “Submodular Functions and Optimization” by Fujishige [6] for details on polymatroids and polymatroidal flows as referred to below.
Remark 2
We note that all of the arguments presented in this section work also for the more general setting of submodular systems (cf. [6]), which are defined on arbitrary distributive lattices instead of the Boolean lattice \((2^{E}, \subseteq , \cap , \cup )\).
Whenever f is a submodular function on ground set E with \(f(U) \in \mathbb {N}\) for all \(U \subseteq E\), we call the pair (E, f) an integer polymatroid. Polymatroids generalize matroids in the following sense: if the polymatroid rank function f is integer and additionally satisfies the unit-increase property \(f(U \cup \{e\}) \le f(U) + 1\) for all \(U \subseteq E\) and \(e \in E\),
then the vertices of the associated polymatroid base polytope \(\mathcal {B}(f)\) are exactly the incidence vectors of a matroid \((E,\mathcal {B})\) with \(\mathcal {B}:=\{B\subseteq E \mid f(B)=f(E)\}\). Conversely, the rank function \({\text {rk}}: 2^E\rightarrow \mathbb {R}\) which assigns to every subset \(U\subseteq E\) the maximum cardinality \({\text {rk}}(U)\) of an independent set within U is a polymatroid rank function satisfying the unit-increase property. In particular, bases of a polymatroid base polytope are not necessarily 0-1 vectors anymore. Generalizing set-theoretic intersection and union from sets (a.k.a. 0-1 vectors) to arbitrary vectors can be done via the following binary operations, called meet and join: given two vectors \(x,y\in \mathbb {R}^{E}\) the meet of x and y is \(x\wedge y:=(\min \{x_e,y_e\})_{e\in E}\), and the join of x and y is \(x\vee y:=(\max \{x_e,y_e\})_{e\in E}\). Instead of the size of the intersection, we now talk about the size of the meet, abbreviated by \(|x\wedge y| := \sum _{e\in E}\min \{x_e, y_e\}.\)
Similarly, the size of the join is \(|x\vee y|:=\sum _{e\in E}\max \{x_e, y_e\}.\) Note that using these definitions we have that \(|x|+|y|=|x\wedge y|+|x\vee y|\), where, as usual, for any \(x\in \mathbb {R}^{E}\), we abbreviate \(|x|=\sum _{e\in E}x_e\). It follows that for any base pair \((x,y)\in \mathcal {B}(f_1)\times \mathcal {B}(f_2)\), we have
and
Therefore, it holds that \(|x\wedge y| \ge k\) if and only if both \(\sum _{e\in E :x_e > y_e} (x_e-y_e) \le f_1(E)-k\) and \(\sum _{e\in E :x_e < y_e} (y_e-x_e) \le f_2(E)-k\). The problem described in the next paragraph can be seen as a direct generalization of problem \((P_{\ge k})\) when going from matroid bases to more general polymatroid base polytopes.
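The identities used above are easy to check numerically; the following is our own small sketch, with the two vectors playing the roles of bases (so their component sums stand in for \(f_1(E)\) and \(f_2(E)\)).

```python
# Numerical check of the identities: |x| + |y| = |x meet y| + |x join y|,
# and |x meet y| >= k holds iff both one-sided excesses are at most
# f_i(E) - k, given |x| = f1(E) and |y| = f2(E).

def meet(x, y): return [min(a, b) for a, b in zip(x, y)]
def join(x, y): return [max(a, b) for a, b in zip(x, y)]
def size(x):    return sum(x)

x, y = [3, 0, 2, 5], [1, 4, 2, 0]          # invented example vectors
f1E, f2E, k = size(x), size(y), 3          # so x in B(f1), y in B(f2)

assert size(x) + size(y) == size(meet(x, y)) + size(join(x, y))
assert (size(meet(x, y)) >= k) == (
    sum(a - b for a, b in zip(x, y) if a > b) <= f1E - k
    and sum(b - a for a, b in zip(x, y) if a < b) <= f2E - k)
```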
The model. Let \(f_{1}, f_{2}\) be two polymatroid rank functions with associated polymatroid base polytopes \(\mathcal {B}(f_{1})\) and \(\mathcal {B}(f_{2})\) defined on the same ground set of resources E, let \(c_1, c_2:E\rightarrow \mathbb {R}\) be two cost functions on E, and let k be some integer. The following problem, the Recoverable Polymatroid Basis Problem which we denote by \(({\hat{P}}_{\ge k})\), is a direct generalization of \((P_{\ge k})\) from matroids to polymatroids.
Results obtained for \((P_{\ge k}), (P_{\le k})\) and \((P_{= k})\) directly give us pseudopolynomial algorithms for \(({\hat{P}}_{\ge k}), ({\hat{P}}_{\le k})\) and \(({\hat{P}}_{= k})\).
Corollary 3
If \((E,f_1), (E,f_2)\) are two integer polymatroids, problems \(({\hat{P}}_{\ge k})\), \(({\hat{P}}_{\le k})\) and \(({\hat{P}}_{= k})\) can be solved within pseudopolynomial time.
Proof
Each integer polymatroid can be written as a matroid on a pseudopolynomial number of resources, namely on \(\sum _{e \in E}f(\{e\})\) resources [7]. Hence, the strongly polynomial time algorithms we derived for problems \((P_{\ge k}), (P_{\le k})\) and \((P_{= k})\) can directly be applied, but now have a pseudopolynomial running time. \(\square \)
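The expansion used in this proof can be sanity-checked by brute force on a toy instance. This is our own reading of the construction in [7]: each element e is replaced by \(f(\{e\})\) parallel copies, and a set of copies is independent if, for every \(U \subseteq E\), it contains at most f(U) copies of elements of U. The rank function values below are an invented example.

```python
from itertools import chain, combinations

# Brute-force check that the expansion of a tiny integer polymatroid
# yields a matroid: verify the exchange axiom over all subset pairs.

E = [0, 1]
f = {(): 0, (0,): 2, (1,): 1, (0, 1): 2}   # submodular, monotone, normalized

E_tilde = [(e, j) for e in E for j in range(f[(e,)])]   # parallel copies

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def independent(A):
    # at most f(U) copies of elements of U, for every subset U of E
    return all(sum(1 for (e, j) in A if e in U) <= f[U] for U in powerset(E))

for A in map(set, powerset(E_tilde)):
    if independent(A):
        for B in map(set, powerset(E_tilde)):
            if independent(B) and len(B) > len(A):
                assert any(independent(A | {x}) for x in B - A)
```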
In the following, we first show that \(({\hat{P}}_{\ge k})\) can be reduced to an instance of the polymatroidal flow problem, which is known to be computationally equivalent to a submodular flow problem and can thus be solved in strongly polynomial time. Afterwards, we show that the two problems \(({\hat{P}}_{\le k})\) and \(({\hat{P}}_{= k})\), which can be obtained from \(({\hat{P}}_{\ge k})\) by replacing the constraint \(|x\wedge y|\ge k\) by either \(|x\wedge y|\le k\) or \(|x\wedge y|= k\), respectively, are weakly NP-hard.
5.1 Reduction of polymatroid base problem \(({\hat{P}}_{\ge k})\) to polymatroidal flows
The polymatroidal flow problem can be described as follows: we are given a digraph \(G=(V,A)\), arc costs \(\gamma :A\rightarrow \mathbb {R}\), lower bounds \(l:A\rightarrow \mathbb {R}\), and two submodular functions \(f^+_v\) and \(f^-_v\) for each vertex \(v\in V.\) Function \(f_v^+\) is defined on \(2^{\delta ^{+}(v)}\), the set of subsets of the set \(\delta ^+(v)\) of v-leaving arcs, while \(f_v^-\) is defined on \(2^{\delta ^{-}(v)}\), the set of subsets of the set \(\delta ^-(v)\) of v-entering arcs and
Given a flow \(\varphi :A\rightarrow \mathbb {R},\) the netflow at v is abbreviated by \(\partial \varphi (v):=\sum _{a\in \delta ^-(v)} \varphi (a)-\sum _{a\in \delta ^+(v)} \varphi (a)\). For a set of arcs \(S \subseteq A\), \(\varphi _{S}\) denotes the vector \((\varphi (a))_{a \in S}\). The associated polymatroidal flow problem can now be formulated as follows.
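As a small illustration of the netflow definition (our own; the arc and vertex names are hypothetical):

```python
# Netflow at a vertex, as defined above: total inflow minus total
# outflow.  Arcs are (tail, head) pairs; phi maps arcs to flow values.

def netflow(v, phi):
    inflow = sum(f for (tail, head), f in phi.items() if head == v)
    outflow = sum(f for (tail, head), f in phi.items() if tail == v)
    return inflow - outflow

phi = {("s", "v"): 3, ("v", "t"): 2, ("u", "v"): 1}   # toy flow
assert netflow("v", phi) == 2                          # (3 + 1) - 2
```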
As described in Fujishige’s book (see [6], page 127ff), the polymatroidal flow problem is computationally equivalent to submodular flows and can thus be solved in strongly polynomial time.
Theorem 4
The Recoverable Polymatroid Basis Problem \(({\hat{P}}_{\ge k})\) can be reduced to the Polymatroidal Flow Problem.
Proof
Given an instance \((E, f_1, f_2, c_1, c_2, k)\) of \(({\hat{P}}_{\ge k})\), we construct an equivalent instance \((G, \gamma , l, (f_{v}^{+})_{v \in V}, (f_{v}^{-})_{v \in V})\) of the Polymatroidal Flow Problem as shown in Fig. 3. The graph G consists of 6 special vertices \(s, u_{1}, u_{2}, v_{1}, v_{2}, t\) and of \(6|E|\) additional vertices denoted by \(u^{X}_{e}\), \(v^{X}_{e}\), \(u^{Y}_{e}\), \(v^{Y}_{e}\), \(u^{Z}_{e}\), \(v^{Z}_{e}\) for each \(e \in E\). The arc set consists of arcs \((s,u_{1})\), \((s, u_{2})\), \((v_{1}, t)\), \((v_{2},t)\), (t, s). In addition, we have arcs \((u_{1}, u^{X}_{e})\) and \((u_{1}, u^{Z}_{e})\) for each \(e \in E\), \((u_{2}, u^{Y}_{e})\) for each \(e \in E\), \((v^{X}_{e}, v_{1})\) for each \(e \in E\), and \((v^{Z}_{e}, v_{2})\), \((v^{Y}_{e}, v_{2})\) for each \(e \in E\). In addition, for each \(e \in E\) we have three sets of arcs \((u^{X}_{e}, v^{X}_{e})\), \((u^{Y}_{e}, v^{Y}_{e})\), \((u^{Z}_{e}, v^{Z}_{e})\), which we denote by \(E_{X}, E_{Y}, E_{Z}\), respectively. We set \(\gamma ((u^{X}_{e}, v^{X}_{e})) := c_{1}(e)\), \(\gamma ((u^{Y}_{e}, v^{Y}_{e})) := c_{2}(e)\), \(\gamma ((u^{Z}_{e}, v^{Z}_{e})) := c_{1}(e) + c_{2}(e)\), and \(\gamma (a) := 0\) for all other arcs a. We enforce lower bounds on the flow on two arcs by setting \(l((s,u_{1})) := f_{1}(E)\) and \(l((v_{2}, t)) := f_{2}(E)\), and \(l(a):=0\) for all other arcs a. To model upper bounds on the flow along the arcs \((v_{1}, t)\) and \((s,u_{2})\), we add the polymatroidal constraints on \(\varphi _{\delta ^{+}(v_{1})}\) and \(\varphi _{\delta ^{-}(u_{2})}\) and set \(f_{v_{1}}^{+}(\{(v_{1}, t)\}) := f_{1}(E) - k\) and \(f_{u_{2}}^{-}(\{(s,u_{2})\}) := f_{2}(E) - k\). We also set
All other polymatroidal constraints are set to the trivial polymatroid, hence arbitrary in- and outflows are allowed.
We show that the constructed instance of the Polymatroidal Flow Problem is equivalent to the given instance of the Recoverable Polymatroid Basis Problem \(({\hat{P}}_{\ge k})\).
Consider the two designated vertices \(u_1\) and \(v_2\) such that \(\delta ^+(u_1)\) are the red arcs, and \(\delta ^-(v_2)\) are the green arcs in Fig. 3. Take any feasible polymatroidal flow \(\varphi \) and let \({\tilde{x}} := \varphi _{\delta ^{+}(u_{1})}\) denote the restriction of \(\varphi \) to the red arcs, and \({\tilde{y}} := \varphi _{\delta ^{-}(v_{2})}\) denote the restriction of \(\varphi \) to the green arcs. Note that there is a unique arc entering \(u_1\), namely \((s,u_1)\). Observe that the constraints \(\varphi (s,u_1)\ge f_{1}(E)\) and \(\varphi _{\delta ^{+}(u_1)}\in P({f}_{u_{1}}^{+})\) for the flow going into \(E_{X}\) and \(E_{Z}\) imply that the flow vector \({\tilde{x}}\) on the red arcs belongs to \( \mathcal {B}({f}_{u_{1}}^{+})\). Analogously, the flow vector \({\tilde{y}}\) satisfies \({\tilde{y}} \in \mathcal {B}({f}_{v_{2}}^{-})\). By setting \(x(e) := {\tilde{x}}( (u_{1}, u_{e}^{X}) ) + {\tilde{x}}( (u_{1}, u_{e}^{Z}) )\) and \(y(e) := {\tilde{y}}( (v_{e}^{Y}, v_{2}) ) + {\tilde{y}}( (v_{e}^{Z}, v_{2}) )\) for each \(e \in E\), we have that the cost of the polymatroid flow can be rewritten as
The constraint \(|\varphi _{\delta ^{+}(u_2)}|\le f_2(E)-k\) on the inflow into \(E_{Y}\), and the constraint \(|\varphi _{\delta ^{-}(v_1)}|\le f_1(E)-k\) on the outflow out of \(E_{X}\) are equivalent to \(|x\wedge y| \ge k\). Hence, the equivalence follows. \(\square \)
Note that \(({\hat{P}}_{\ge k})\) is computationally equivalent to the problem of minimizing \(c_1^T x + c_2^T y\) over \((x,y)\in \mathcal {B}(f_1)\times \mathcal {B}(f_2)\) subject to \(\Vert x-y\Vert _1 \le k'\) with \(k' := f_1(E)+f_2(E)-2k\),
which we denote by \(({\hat{P}}_{\Vert \cdot \Vert _{1}})\), because of the direct connection \(|x|+|y|=2 |x\wedge y| +\Vert x-y\Vert _1\) between \(|x \wedge y|\), the size of the meet of x and y, and the 1-norm of \(x-y\). It is an interesting open question whether this problem remains tractable if one replaces \(\Vert x-y\Vert _{1} \le k'\) by arbitrary norms or, specifically, by the 2-norm. We conjecture that methods based on convex optimization could work in this case, likely leading to a polynomial, but not strongly polynomial, running time.
5.2 Hardness of polymatroid basis problems \(({\hat{P}}_{\le k})\) and \(({\hat{P}}_{= k})\)
Let us consider the decision problem associated to problem \(({\hat{P}}_{\le k})\), which can be formulated as follows: given an instance \((f_1,f_2, c_1, c_2, k)\) of \(({\hat{P}}_{\le k})\) together with some target value \(T\in \mathbb {R}\), decide whether or not there exists a base pair \((x,y)\in \mathcal {B}(f_1)\times \mathcal {B}(f_2)\) with \(|x\wedge y|\le k\) of cost \(c_1^Tx+c_2^Ty \le T.\) Clearly, this decision problem belongs to the complexity class NP, since we can verify in polynomial time whether a given pair (x, y) of vectors satisfies the following three conditions:
(i) \(|x\wedge y| \le k\),
(ii) \(c_1^Tx+c_2^Ty\le T\), and
(iii) \((x,y)\in \mathcal {B}(f_1)\times \mathcal {B}(f_2)\).
To verify (iii), we assume, as usual, the existence of an evaluation oracle.
Reduction from partition. To show that the problem is NP-complete, we show that any instance of the NP-complete problem partition can be polynomially reduced to an instance of \(({\hat{P}}_{\le k})\)-decision. Recall the problem partition: given a set E of n real numbers \(a_1, \ldots , a_n\), the task is to decide whether or not the n numbers can be partitioned into two sets L and R with \(E=L\cup R\) and \(L\cap R=\emptyset \) such that \(\sum _{j\in L}a_j=\sum _{j\in R}a_j\).
Given an instance \(a_1, \ldots , a_n\) of partition with \(B:=\sum _{j\in E} a_j\), we construct the following polymatroid rank function: \(f(U) := \min \{\sum _{j\in U} a_j,\ B/2\}\) for all \(U \subseteq E\).
It is not hard to see that f is indeed a polymatroid rank function as it is normalized, monotone, and submodular. Moreover, we observe that an instance of partition \(a_1, \ldots , a_n\) is a yes-instance if and only if there exist two bases x and y in polymatroid base polytope \(\mathcal {B}(f)\) satisfying \(|x\wedge y| \le 0.\)
Similarly, it can be shown that any instance of partition can be reduced to an instance of the decision problem associated to \(({\hat{P}}_{= k})\), since an instance of partition is a yes-instance if and only if for the polymatroid rank function f as constructed above there exist two bases x and y in polymatroid base polytope \(\mathcal {B}(f)\) satisfying \(|x\wedge y| = 0.\)
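One direction of this reduction can be checked by brute force on a toy instance. The sketch below uses our reading of the constructed rank function, \(f(U)=\min \{\sum _{j\in U}a_j,\ B/2\}\): an equal-sum partition (L, R) yields the two bases \(x = a\cdot \mathbb {1}_L\) and \(y = a\cdot \mathbb {1}_R\) with meet of size 0.

```python
from itertools import chain, combinations

# Verify, for a small yes-instance of partition, that the two vectors
# induced by an equal-sum partition are bases of B(f) with meet 0.

def f(U, a, B):
    # assumed rank function: f(U) = min(sum_{j in U} a_j, B/2)
    return min(sum(a[j] for j in U), B / 2)

def is_base(x, a, B):
    n = len(a)
    subsets = chain.from_iterable(combinations(range(n), r) for r in range(n + 1))
    return sum(x) == B / 2 and all(sum(x[j] for j in U) <= f(U, a, B) for U in subsets)

a = [3, 1, 2, 2]                  # yes-instance: {3, 1} versus {2, 2}
B = sum(a)
x = [3, 1, 0, 0]                  # first  half of the partition
y = [0, 0, 2, 2]                  # second half of the partition
assert is_base(x, a, B) and is_base(y, a, B)
assert sum(min(u, v) for u, v in zip(x, y)) == 0   # |x meet y| = 0
```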
6 More than two matroids
Another straightforward generalization of the matroid problems \((P_{\le k})\), \((P_{\ge k})\), and \((P_{= k})\) is to consider more than two matroids, and a constraint on the intersection of the bases of those matroids. Given matroids \(\mathcal {M}_{1} = (E, \mathcal {B}_{1}), \dots \), \(\mathcal {M}_{M} = (E, \mathcal {B}_{M})\) on a common ground set E, some integer \(k \in \mathbb {N}\), and cost functions \(c_{1}, \dots , c_{M} :E \rightarrow \mathbb {R}\), we consider the optimization problem \((P_{\le k}^{M})\)
Analogously, we define the problems \((P_{\ge k}^{M})\) and \((P_{= k}^{M})\) by replacing \(\le k\) by \(\ge k\) and \(=k\) respectively.
It is easy to observe that both variants \((P_{\ge k}^{M})\) and \((P_{= k}^{M})\) are NP-hard already for the case \(M=3\), since even the feasibility question admits an easy reduction from the NP-hard matroid intersection problem for three matroids.
Interestingly, this is different for \((P^{M}_{\le k})\). A direct generalization of the reduction for \((P_{\le k})\) to weighted matroid intersection (for two matroids) shown in Sect. 2 works again.
Theorem 5
Problem \((P^{M}_{\le k})\) can be reduced to weighted matroid intersection.
Proof
Let \({\tilde{E}} := E_{1} \mathbin {\dot{\cup }}\dots \mathbin {\dot{\cup }}E_{M}\), where \(E_{1}, \dots , E_{M}\) are M copies of our original ground set E. We consider two special matroids \(\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1})\) and \(\mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})\) on this new ground set \({\tilde{E}}\), where \(\mathcal {F}_{1}, \dots , \mathcal {F}_{M}, {\tilde{\mathcal {F}}}_{1}, {\tilde{\mathcal {F}}}_{2}\) denote the sets of independent sets of \(\mathcal {M}_{1}, \dots , \mathcal {M}_{M}, \mathcal {N}_{1}, \mathcal {N}_{2}\), respectively. Firstly, let \(\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1})\) be the direct sum of \(\mathcal {M}_{1}\) on \(E_{1}\), ..., \(\mathcal {M}_{M}\) on \(E_{M}\). That is, for \(A \subseteq {\tilde{E}}\) it holds that \(A \in {\tilde{\mathcal {F}}}_{1}\) if and only if \(A \cap E_{i} \in \mathcal {F}_{i}\) for all \(i=1,\dots ,M\).
The second matroid \(\mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})\) is defined as follows: we call \(e_{1} \in E_{1}, \dots \), \(e_{M} \in E_{M}\) a line if \(e_{1}, \dots , e_{M}\) are copies of the same element in E. If \(e_{i}\) and \(e_{i'}\) are part of the same line, then we call \(e_{i}\) a sibling of \(e_{i'}\) and vice versa. Then \({\tilde{\mathcal {F}}}_{2} := \{A \subseteq {\tilde{E}} \mid A \text{ contains } \text{ at } \text{ most } k \text{ lines}\}.\)
For any \(A \subseteq {\tilde{E}}\), the sets \(X_{i}=A\cap E_{i}\), \(i=1,\dots ,M\), form a feasible solution for \((P^{M}_{\le k})\) if and only if A is a basis in matroid \(\mathcal {N}_1\) and independent in matroid \(\mathcal {N}_2.\) Thus, \((P^{M}_{\le k})\) is equivalent to the weighted matroid intersection problem
with weight function \( w(e_i) := C - c_i(e)\) for each \(i \in \{1,\dots ,M\}\), for some constant \(C>0\) chosen large enough to ensure that A is a basis in \(\mathcal {N}_1\). To see that \(\mathcal {N}_2\) is indeed a matroid, we first observe that \({\tilde{\mathcal {F}}}_{2}\) is nonempty and downward-closed (i.e., \(A\in {\tilde{\mathcal {F}}}_{2}\) and \(B\subset A\) implies \(B\in {\tilde{\mathcal {F}}}_{2}\)). To see that \({\tilde{\mathcal {F}}}_{2}\) satisfies the matroid-characterizing augmentation property (for any \(A,B\in {\tilde{\mathcal {F}}}_{2}\) with \(|A| < |B|\) there exists some \(e\in B\setminus A\) with \(A\cup \{e\}\in {\tilde{\mathcal {F}}}_{2}\)), take any two independent sets \(A,B\in {\tilde{\mathcal {F}}}_{2}\). If A cannot be augmented from B, i.e., if \(A+e\not \in {\tilde{\mathcal {F}}}_{2}\) for every \(e\in B\setminus {A}\), then A must contain exactly k lines, and for each \(e\in B\setminus {A}\), the \(M-1\) siblings of e must be contained in A. This implies \(|B| \le |A|\), i.e., \(\mathcal {N}_2\) is a matroid. \(\square \)
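The exchange argument above can be sanity-checked by brute force on a tiny instance; the family "at most k complete lines" below is our reading of \({\tilde{\mathcal {F}}}_{2}\).

```python
from itertools import chain, combinations

# Brute-force check, on a tiny instance, that the family
#   F2 = { A subset of E~ : A contains at most k complete lines }
# satisfies the matroid exchange axiom:
#   |A| < |B| implies some e in B \ A augments A.

M, n, k = 3, 2, 1                          # three copies of a 2-element set
E_tilde = [(i, e) for i in range(M) for e in range(n)]

def num_lines(A):
    # a line is complete when all M copies of an element are present
    return sum(all((i, e) in A for i in range(M)) for e in range(n))

def independent(A):
    return num_lines(A) <= k

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

for A in map(set, powerset(E_tilde)):
    if independent(A):
        for B in map(set, powerset(E_tilde)):
            if independent(B) and len(B) > len(A):
                assert any(independent(A | {x}) for x in B - A)
```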
References
Büsing, C.: Recoverable Robustness in Combinatorial Optimization. Cuvillier Verlag, New York (2011)
Chassein, A., Goerigk, M.: On the complexity of min-max-min robustness with two alternatives and budgeted uncertainty. Discrete Appl. Math. 3, 76–109 (2020)
Cunningham, W.H., Geelen, J.F.: The optimal pathmatching problem. Combinatorica 17(3), 315–337 (1997)
Edmonds, J.: Submodular functions, matroids and certain polyhedra. In: Guy, R., Hanani, H., Sauer, N., Schönheim, J. (eds.) Combinatorial Structures and Their Applications, pp. 69–87. Gordon and Breach (1970)
Frank, A.: A weighted matroid intersection algorithm. J. Algorithms 2(4), 328–336 (1981)
Fujishige, S.: Submodular Functions and Optimization, vol. 58. Elsevier, Oxford (2005)
Helgason, T.: Aspects of the theory of hypermatroids. In: Berge, C., RayChaudhuri, D. (eds.) Hypergraph Seminar, pp. 191–213. Springer, Berlin (1974)
Hradovich, M., Kasperski, A., Zieliński, P.: Recoverable robust spanning tree problem under interval uncertainty representations. J. Comb. Optim. 34(2), 554–573 (2017)
Hradovich, M., Kasperski, A., Zieliński, P.: The recoverable robust spanning tree problem with interval costs is polynomially solvable. Optim. Lett. 11(1), 17–30 (2017)
Iri, M., Tomizawa, N.: An algorithm for finding an optimal independent assignment. J. Oper. Res. Soc. Japan 19(1), 32–57 (1976)
Iwamasa, Y., Takazawa, K.: Optimal matroid bases with intersection constraints: Valuated matroids, M-convex functions, and their applications. arXiv preprint arXiv:2003.02424 (2020)
Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms, 4th edn. Springer Publishing Company, Incorporated (2007)
Lendl, S., Ćustić, A., Punnen, A.P.: Combinatorial optimization with interaction costs: complexity and solvable cases. Discrete Optim. 33, 101–117 (2019)
Linhares, A., Olver, N., Swamy, C., Zenklusen, R.: Approximate multimatroid intersection via iterative refinement. In: Lodi, A., Nagarajan, V. (eds.) Integer Programming and Combinatorial Optimization, pp. 299–312. Springer International Publishing, Cham (2019)
Murota, K.: Valuated matroid intersection I: optimality criteria. SIAM J. Discrete Math. 9(4), 545–561 (1996)
Murota, K.: Valuated matroid intersection II: algorithms. SIAM J. Discrete Math. 9(4), 562–576 (1996)
Murota, K.: Discrete convex analysis. Math. Program. 83(1–3), 313–371 (1998)
Oxley, J.G.: Matroid Theory, vol. 3. Oxford University Press, Oxford (2006)
Acknowledgements
We would like to thank András Frank, Jannik Matuschke, Tom McCormick, Rico Zenklusen, Satoru Iwata, Mohit Singh, Michel Goemans, and Gyula Pap for fruitful discussions at the HIM workshop in Bonn, and at the workshop on Combinatorial Optimization in Oberwolfach. We would also like to thank Björn Tauer and Thomas Lachmann for several helpful discussions about this topic. We also thank the anonymous referees for helpful comments that improved our paper.
Funding
Open access funding provided by University of Graz.
Stefan Lendl acknowledges the support of the Austrian Science Fund (FWF): W1230.
Appendices
Reduction from \((P_{\ge k})\) to independent bipartite matching
As mentioned in the introduction, one can also solve problem \((P_{\ge k})\) (and, hence, also problem \((P_{\le k})\)) by reduction to a special case of the basic path-matching problem [3], which is also known under the name independent bipartite matching problem [10]. The basic path-matching problem can be solved in strongly polynomial time, since it is a special case of the submodular flow problem.
Definition 1
(Independent bipartite matching) We are given two matroids \(\mathcal {M}_1=(E_1,\mathcal {B}_1)\) and \(\mathcal {M}_2=(E_2,\mathcal {B}_2)\) and a weighted, bipartite graph G on node sets \(E_1\) and \(E_2\). The task is to find a minimum weight matching in G such that the elements that are matched in \(E_1\) form a basis in \(\mathcal {M}_1\) and the elements matched in \(E_2\) form a basis in \(\mathcal {M}_2\).
Consider an instance of \((P_{\ge k})\), where we are given two matroids \(\mathcal {M}_1, \mathcal {M}_2\) on common ground set E. We create an instance of the independent matching problem as follows: Define a bipartite graph \(G=(E_{1}, E_{2}, A)\), where \(E_{1}\) contains a distinct copy of E and an additional set \(U_{1}\) of exactly \(k_2:={\text {rk}}_{2}(E)-k\) elements. Let \({\tilde{\mathcal {M}}}_{1}\) be the direct sum of \(\mathcal {M}_{1}\) on the copy of E in \(E_{1}\) and the unrestricted matroid of all subsets of \(U_{1}\). The set \(E_{2}\) and the matroid \({\tilde{\mathcal {M}}}_{2}\) are defined symmetrically, and \(U = U_{1} \cup U_{2}\). The set A of edges in G contains an edge \(\{e, e'\}\) if \(e, e'\) are copies of the same element of E in \(E_{1}, E_{2}\). In addition, we add all edges \(\{e, u\}\) such that e is a copy of some element of E in \(E_{1}\) and \(u \in U_{2}\), or e is a copy of some element of E in \(E_{2}\) and \(u \in U_{1}\). See Fig. 4 for an illustration of the constructed bipartite graph.
Observe that every feasible solution to the independent bipartite matching instance matches a basis of \({\tilde{\mathcal {M}}}_{1}\) with a basis of \({\tilde{\mathcal {M}}}_{2}\). For \(i=1,2\), these bases consist of a basis in \(\mathcal {M}_{i}\) and all elements in \(U_{i}\). Hence, at most \({\text {rk}}_{i}(E) - k\) elements can be matched with an arbitrary element of U, so at least k elements need to be matched using edges \(\{e, e'\}\). This implies that the size of the intersection of the corresponding bases in \(\mathcal {M}_{1}, \mathcal {M}_{2}\) is at least k.
Proof of Theorem 2
Consider the following linear relaxation \((P_{\lambda })\) of an integer programming formulation of problem val\((\lambda )\). The letters in square brackets indicate the associated dual variables.
The dual program is then \((D_{\lambda })\):
Applying the strong LPduality to the two inner problems which correspond to dual variables \((w,\mu )\) and \((v,\nu )\), respectively, yields
Thus, replacing the two inner problems by their respective duals, we can rewrite \((D_{\lambda })\) as follows:
Now, take any tuple \((X,Y, \alpha , \beta )\) satisfying the optimality conditions (i), (ii) and (iii) for \(\lambda \). Observe that the incidence vectors x and y of X and Y, respectively, together with the incidence vector z of the intersection \(X\cap Y\), form a feasible solution of the primal LP, while \(\alpha \) and \(\beta \) yield a feasible solution of the dual LP. Since
the objective values of the primal and dual feasible solutions coincide. It follows that any tuple \((X,Y,\alpha , \beta , \lambda )\) satisfying optimality conditions (i), (ii) and (iii) must be optimal for val\((\lambda )\) and its dual. \(\square \)
Cite this article
Lendl, S., Peis, B. & Timmermans, V. Matroid bases with cardinality constraints on the intersection. Math. Program. 194, 661–684 (2022). https://doi.org/10.1007/s10107-021-01642-1
Keywords: Matroids · Robust optimization · Polymatroids · Intersection constraints
Mathematics Subject Classification: 90C27 Combinatorial optimization