Matroid bases with cardinality constraints on the intersection

Given two matroids M1 = (E, B1) and M2 = (E, B2) on a common ground set E with base sets B1 and B2, some integer k ∈ N, and two cost functions c1, c2 : E → R, we consider the optimization problem to find a basis X ∈ B1 and a basis Y ∈ B2 minimizing the cost Σ_{e∈X} c1(e) + Σ_{e∈Y} c2(e) subject to either a lower bound constraint |X ∩ Y| ≥ k, an upper bound constraint |X ∩ Y| ≤ k, or an equality constraint |X ∩ Y| = k on the size of the intersection of the two bases X and Y. The problem with the lower bound constraint turns out to be a generalization of the Recoverable Robust Matroid problem under interval uncertainty representation, for which the existence of a strongly polynomial-time algorithm was posed as an open question in Hradovich et al. (J Comb Optim 34(2):554–573, 2017).
We show that the two problems with lower and upper bound constraints on the size of the intersection can be reduced to weighted matroid intersection, and thus be solved with a strongly polynomial-time primal-dual algorithm. We also present a strongly polynomial, primal-dual algorithm that computes a minimum cost solution for every feasible size of the intersection k in one run with asymptotic running time equal to one run of Frank’s matroid intersection algorithm. Additionally, we discuss generalizations of the problems from matroids to polymatroids, and from two to three or more matroids. We obtain a strongly polynomial time algorithm for the recoverable robust polymatroid base problem with interval uncertainties.


Introduction
Matroids are fundamental and well-studied structures in combinatorial optimization. Recall that a matroid M is a tuple M = (E, F) consisting of a finite ground set E and a family of subsets F ⊆ 2^E, called the independent sets, satisfying (i) ∅ ∈ F, (ii) if F ∈ F and F′ ⊆ F, then F′ ∈ F, and (iii) if F, F′ ∈ F with |F′| > |F|, then there exists some element e ∈ F′ \ F satisfying F ∪ {e} ∈ F. As usual when dealing with matroids, we assume that a matroid is specified via an independence oracle that, given S ⊆ E as input, checks whether or not S ∈ F. Any inclusion-wise maximal set in an independence system F is called a basis of F. Note that the set of bases B = B(M) of a matroid M uniquely defines its independence system via F(B) = {F ⊆ E | F ⊆ B for some B ∈ B}. Because of their rich structure, matroids allow for various different characterizations (see, e.g., [18]). In particular, matroids can be characterized algorithmically as the only downward-closed structures for which a simple greedy algorithm is guaranteed to return a basis B ∈ B of minimum cost c(B) = Σ_{e∈B} c(e) for any linear cost function c : E → R+. Moreover, the problem of finding a min-cost common basis of two matroids M1 = (E, B1) and M2 = (E, B2) on the same ground set, or the problem of maximizing a linear function over the intersection F1 ∩ F2 of two matroids M1 = (E, F1) and M2 = (E, F2), can be solved efficiently with a strongly polynomial primal-dual algorithm (cf. [5]). Optimization over the intersection of three matroids, however, is easily seen to be NP-hard. See [14] for the most recent progress on approximation results for the latter problem.
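For illustration, the matroid greedy algorithm mentioned above can be sketched as follows (a minimal sketch assuming access to an independence oracle; the uniform-matroid oracle in the example is only a toy instance, not part of the text):

```python
def greedy_min_cost_basis(E, cost, is_independent):
    """Matroid greedy: scan elements in order of increasing cost and add an
    element whenever independence is preserved.  On a matroid this returns
    a minimum-cost basis."""
    basis = set()
    for e in sorted(E, key=cost):
        if is_independent(basis | {e}):
            basis.add(e)
    return basis

# Toy example: the uniform matroid of rank 3 on E = {0,...,4}, whose
# independent sets are all sets of size at most 3.
E = range(5)
cost = {0: 5.0, 1: 1.0, 2: 3.0, 3: 2.0, 4: 4.0}.__getitem__
B = greedy_min_cost_basis(E, cost, lambda S: len(S) <= 3)
print(sorted(B))  # the three cheapest elements: [1, 2, 3]
```

The oracle abstraction matches the independence-oracle model assumed in the text: the greedy never inspects the matroid beyond membership queries.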
Optimization on matroids, their generalization to polymatroids, or on the intersection of two (poly-)matroids captures a wide range of interesting problems. In this paper, we introduce and study yet another variant of matroid-optimization problems. Our problems can be seen as a variant or generalization of matroid intersection: we aim at minimizing the sum of two linear cost functions over two bases chosen from two matroids on the same ground set, with an additional cardinality constraint on the intersection of the two bases. As it turns out, the problems with lower and upper bound constraints are computationally equivalent to matroid intersection, while the problem with equality constraint seems to be strictly more general and to lie somewhere on the boundary of efficiently solvable combinatorial optimization problems. Interestingly, while the problems on matroids with lower, upper, or equality constraint on the intersection can be shown to be solvable in strongly polynomial time, the extension of the problems towards polymatroids is solvable in strongly polynomial time for the lower bound constraint, but NP-hard for both the upper bound and the equality constraint.
The model Given two matroids M1 = (E, B1) and M2 = (E, B2) on a common ground set E with base sets B1 and B2, some integer k ∈ N, and two cost functions c1, c2 : E → R, we consider the optimization problem to find a basis X ∈ B1 and a basis Y ∈ B2 minimizing c1(X) + c2(Y) subject to either a lower bound constraint |X ∩ Y| ≥ k, an upper bound constraint |X ∩ Y| ≤ k, or an equality constraint |X ∩ Y| = k on the size of the intersection of the two bases X and Y. Here, as usual, we write c1(X) = Σ_{e∈X} c1(e) and c2(Y) = Σ_{e∈Y} c2(e) to shorten notation. Let us denote the following problem by (P=k):

min c1(X) + c2(Y)
s.t. X ∈ B1, Y ∈ B2, |X ∩ Y| = k.
Accordingly, if |X ∩ Y| = k is replaced by either the upper bound constraint |X ∩ Y| ≤ k or the lower bound constraint |X ∩ Y| ≥ k, the problem is called (P≤k) or (P≥k), respectively. Clearly, it only makes sense to consider integers k in the range between 0 and K := min{rk(M1), rk(M2)}, where rk(Mi), for i ∈ {1, 2}, is the rank of matroid Mi, i.e., the common cardinality of all bases of Mi, which is well-defined due to the augmentation property (iii). For details on matroids, we refer to [18].
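To make the model concrete, the three problem variants can be solved by plain enumeration on tiny instances (an illustrative brute-force sketch, exponential in general; the instance data are ours, not from the text):

```python
from itertools import combinations, product

def solve_by_enumeration(B1, B2, c1, c2, k, mode="eq"):
    """Brute-force solver for (P=k) ('eq'), (P<=k) ('le') and (P>=k) ('ge').
    B1, B2 are explicit lists of bases given as frozensets."""
    best = None
    for X, Y in product(B1, B2):
        m = len(X & Y)
        if (mode == "eq" and m != k) or (mode == "le" and m > k) or \
           (mode == "ge" and m < k):
            continue
        val = sum(c1[e] for e in X) + sum(c2[e] for e in Y)
        if best is None or val < best[0]:
            best = (val, X, Y)
    return best  # None if no feasible base pair exists

# Two copies of the uniform matroid of rank 2 on E = {0, 1, 2}.
E = (0, 1, 2)
bases = [frozenset(S) for S in combinations(E, 2)]
c1 = {0: 1, 1: 2, 2: 3}
c2 = {0: 3, 1: 2, 2: 1}
val, X, Y = solve_by_enumeration(bases, bases, c1, c2, k=2, mode="ge")
print(val)  # 8: the constraint |X ∩ Y| >= 2 forces X = Y here
```

Note how the lower bound constraint forces the two (otherwise independently optimized) bases to coincide on this instance, raising the cost above the unconstrained optimum of 6.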
Related literature on the Recoverable Robust Matroid Problem: Problem (P≥k) in the special case where B1 = B2 is known and well-studied under the name Recoverable Robust Matroid Problem Under Interval Uncertainty Representation, see [1,8,9] and Sect. 4 below. For this special case of (P≥k), Büsing [1] presented an algorithm which is exponential in k. In 2017, Hradovich, Kasperski, and Zieliński [9] proved that the problem can be solved in polynomial time via some iterative relaxation algorithm and asked for a strongly polynomial time algorithm. Shortly after, the same authors presented in [8] a strongly polynomial time primal-dual algorithm for the special case of the problem on a graphical matroid. The question whether a strongly polynomial time algorithm for (P≥k) with B1 = B2 exists remained open.
In a recent preprint, which builds upon the preprint of this paper, Iwamasa and Takazawa [11] generalize the problems (P ≤k ), (P ≥k ), (P =k ) to nonlinear convex cost functions and analyze them within the framework of discrete convex analysis [17]. Combining this with the ideas in this paper, they obtain generalizations of our results.
Our contribution. In Sect. 2, we show that both, (P ≤k ) and (P ≥k ), can be polynomially reduced to weighted matroid intersection. Since weighted matroid intersection can be solved in strongly polynomial time by some very elegant primal-dual algorithm [5], this answers the open question raised in [9] affirmatively.
As we can solve (P ≤k ) and (P ≥k ) in strongly polynomial time via some combinatorial algorithm, the question arises whether or not the problem with equality constraint (P =k ) can be solved in strongly polynomial time as well, and whether there is an efficient parametric algorithm to compute the whole parametric curve with respect to k. In Sect. 3, we provide a strongly polynomial, primal-dual algorithm that constructs an optimal solution for (P =k ) for every feasible value of k. This algorithm can also be used to solve an extension, called (P ), of the Recoverable Robust Matroid problem under Interval Uncertainty Representation. Our approach to solve (P ) uses the capability to solve (P =k ) for every k as an essential subroutine, see Sect. 4.
Then, in Sect. 5, we consider the generalization of problems (P≤k), (P≥k), and (P=k) from matroids to polymatroids with lower bound, upper bound, or equality constraint, respectively, on the size of the meet |x ∧ y| := Σ_{e∈E} min{x_e, y_e}. Interestingly, as it turns out, the generalization of (P≥k) can be solved in strongly polynomial time via reduction to some polymatroidal flow problem, while the generalizations of (P≤k) and (P=k) can be shown to be weakly NP-hard, already for uniform polymatroids.
Finally, in Sect. 6, we discuss the generalization of our matroid problems from two to three or more matroids. That is, we consider n matroids Mi = (E, Bi), i ∈ [n], and n linear cost functions ci : E → R, for i ∈ [n]. The task is to find n bases Xi ∈ Bi, i ∈ [n], minimizing the cost Σ_{i=1}^{n} ci(Xi) subject to a cardinality constraint on the size of the intersection |∩_{i=1}^{n} Xi|. When we are given an upper bound on the size of this intersection, we can find an optimal solution in polynomial time. However, when we have an equality or a lower bound constraint on the size of the intersection, the problem becomes strongly NP-hard.

Reduction of (P ≤k ) and (P ≥k ) to weighted matroid intersection
We first note that (P≤k) and (P≥k) are computationally equivalent. To see this, consider two matroids M1 = (E, B1) and M2 = (E, B2) on the same ground set E with base sets B1 and B2, respectively. Replace M2 by its dual matroid M2* = (E, B2*), whose bases are exactly the complements of the bases of M2, and replace c2 by c2* := −c2. Since |X ∩ (E \ Y)| = |X| − |X ∩ Y| = rk(M1) − |X ∩ Y| for every X ∈ B1 and Y ∈ B2, the lower bound constraint |X ∩ Y| ≥ k translates into the upper bound constraint |X ∩ (E \ Y)| ≤ k* for k* := rk(M1) − k. Hence, any problem of type (P≥k) polynomially reduces to an instance of type (P≤k*). Similarly, it can be shown that any problem of type (P≤k) polynomially reduces to an instance of type (P≥k*).
Note that, for a set S and an element e we abbreviate S ∪ {e} by S + e and S \ {e} by S − e.
Theorem 1 Both problems, (P ≤k ) and (P ≥k ), can be reduced to weighted matroid intersection.
Proof By our observation above, it suffices to show that (P≤k) can be reduced to weighted matroid intersection. Let Ẽ := E1 ∪ E2 be the disjoint union of E1 and E2, where E1, E2 are two copies of our original ground set E. We consider N1 = (Ẽ, F̃1) and N2 = (Ẽ, F̃2), two special types of matroids on this new ground set Ẽ, where F1, F2, F̃1, F̃2 are the sets of independent sets of M1, M2, N1, N2, respectively. Firstly, let N1 = (Ẽ, F̃1) be the direct sum of M1 on E1 and M2 on E2. That is, for A ⊆ Ẽ it holds that A ∈ F̃1 if and only if A ∩ E1 ∈ F1 and A ∩ E2 ∈ F2.
The second matroid N2 = (Ẽ, F̃2) is defined as follows: we call e1 ∈ E1 and e2 ∈ E2 a pair if e1 and e2 are copies of the same element in E. If e1, e2 are a pair, then we call e2 the sibling of e1, and vice versa. Then F̃2 consists of all sets A ⊆ Ẽ that contain at most k pairs, and the costs are shifted by some constant C > 0 chosen large enough to ensure that an optimal common independent set A is a basis in N1. To see that N2 is indeed a matroid, we first observe that F̃2 is non-empty and downward-closed (i.e., A ∈ F̃2 and B ⊆ A implies B ∈ F̃2). To see that F̃2 satisfies the matroid-characterizing augmentation property, take any two independent sets A, B ∈ F̃2. If A cannot be augmented from B, i.e., if A + e ∉ F̃2 for every e ∈ B \ A, then A must contain exactly k pairs, and for each e ∈ B \ A, the sibling of e must be contained in A. This implies |B| ≤ |A|; hence N2 is a matroid.
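The augmentation argument above can be sanity-checked by brute force on a tiny instance (an illustrative sketch; the oracle `indep` encodes the pair-bounded family from the construction, and all names are ours):

```python
from itertools import combinations

def check_exchange(ground, indep):
    """Brute-force check of the matroid augmentation axiom: for all
    independent A, B with |B| > |A| there is e in B \\ A with A + e
    independent."""
    sets = [frozenset(S) for r in range(len(ground) + 1)
            for S in combinations(ground, r) if indep(frozenset(S))]
    for A in sets:
        for B in sets:
            if len(B) > len(A) and not any(indep(A | {e}) for e in B - A):
                return False
    return True

# Ground set: two copies (i, 1), (i, 2) of each of n elements; a "pair"
# consists of both copies of the same element.  Independent sets of N2:
# sets containing at most k pairs.
n, k = 3, 1
ground = [(i, copy) for i in range(n) for copy in (1, 2)]
def indep(A):
    pairs = sum(1 for i in range(n) if (i, 1) in A and (i, 2) in A)
    return pairs <= k

print(check_exchange(ground, indep))  # True: the family is a matroid
```

Exhaustive checks of this kind are of course only feasible on toy ground sets, but they are a convenient way to validate such hand-crafted matroid constructions.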
Weighted matroid intersection is known to be solvable within strongly polynomial time (e.g., see Frank [5]). Hence, both (P ≤k ) and (P ≥k ) can be solved in strongly polynomial time.
The same result can be obtained by a reduction to independent matching (see Appendix A), which in bipartite graphs is known to be solvable within strongly polynomial time as well (see [10]).

The algorithm
In this section, we describe a primal-dual strongly polynomial algorithm for (P =k ).
Our algorithm can be seen as a generalization of the algorithm presented by Hradovich et al. in [9]. However, the analysis of our algorithm turns out to be much simpler than the one in [9]. Let us consider the following piecewise linear concave curve, which depends on the parameter λ ≥ 0:

val(λ) = min{c1(X) + c2(Y) − λ|X ∩ Y| : (X, Y) ∈ B1 × B2}.

Note that val(λ) + kλ is the Lagrangian relaxation of problem (P=k). Observe that any base pair (X, Y) ∈ B1 × B2 determines a line L(X,Y)(λ) = c1(X) + c2(Y) − λ|X ∩ Y| that hits the vertical axis at c1(X) + c2(Y) and has slope −|X ∩ Y|. Thus, val(λ) is the lower envelope of all such lines. It follows that every base pair (X, Y) ∈ B1 × B2 whose line touches the curve val(λ) in either a segment or a breakpoint, and which satisfies |X ∩ Y| = k, is a minimizer of (P=k).
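The lower-envelope picture can be sketched numerically as follows (a toy illustration; the three lines stand for hypothetical base pairs with intercept c1(X) + c2(Y) and slope −|X ∩ Y|, the numbers are not from the text):

```python
def val(lines, lam):
    """Lower envelope of the lines L(lam) = intercept - slope_mag * lam,
    where each line corresponds to a base pair (X, Y) with
    intercept = c1(X) + c2(Y) and slope_mag = |X ∩ Y|."""
    return min(b - m * lam for b, m in lines)

# Hypothetical base pairs with costs 6, 8, 11 and intersection
# sizes 0, 1, 2.
lines = [(6, 0), (8, 1), (11, 2)]
# For small lam the cheap pair with empty intersection is optimal;
# as lam grows, pairs with larger intersection take over.
print(val(lines, 0.0))  # 6
print(val(lines, 4.0))  # min(6, 4, 3) = 3
```

In this picture, a pair with |X ∩ Y| = k is optimal for (P=k) exactly when its line attains the envelope at some λ, which is what the primal-dual algorithm below exploits.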
Sketch of our algorithm. We first solve the problem without any constraint on the intersection. Note that this problem can be solved with the matroid greedy algorithm. Let (X̄, Ȳ) be an optimal solution of this problem.
1. If |X̄ ∩ Ȳ| = k, we are done, as (X̄, Ȳ) is optimal for (P=k).
2. Else, if |X̄ ∩ Ȳ| =: k̄ < k, our algorithm starts with the optimal solution (X̄, Ȳ) for (P=k̄) and iteratively increases k̄ by one until k̄ = k. Our algorithm maintains as invariant an optimal solution (X̄, Ȳ) for the current problem (P=k̄), together with some dual optimal solution (ᾱ, β̄) satisfying the optimality conditions, stated in Theorem 2 below, for the current breakpoint λ̄. Details of the algorithm are described below.
3. Else, if |X̄ ∩ Ȳ| > k, we instead consider an instance of (P=k*) for k* = rk(M1) − k, costs c1 and c2* = −c2, and the two matroids M1 = (E, B1) and M2* = (E, B2*). As seen above, an optimal solution (X, E \ Y) of problem (P=k*) corresponds to an optimal solution (X, Y) of our original problem (P=k), and vice versa.

Note that a slight modification of the algorithm allows us to compute the optimal solutions (X̄k, Ȳk) for all k ∈ {0, . . . , K} in only two runs: run the algorithm for k = 0 and for k = K.
An optimality condition. The following optimality condition turns out to be crucial for the design of our algorithm.

Theorem 2 (Sufficient pair optimality conditions) For fixed
The proof of Theorem 2 can be found in Appendix B.
Construction of the auxiliary digraph. Given a tuple (X, Y, α, β, λ) satisfying the optimality conditions stated in Theorem 2, we construct the auxiliary digraph D = D(X, Y, α, β) with red-blue colored arcs (see Fig. 2). Although not depicted in Fig. 2, there might well be further blue arcs. Observe that any red arc (e, f) represents a move in B1 from X to X ∪ {e} \ {f} ∈ B1. Given a red-blue alternating path P in D, we denote by X′ = X ⊕ P the set obtained from X by performing all moves corresponding to red arcs, and, accordingly, by Y′ = Y ⊕ P the set obtained from Y by performing all moves corresponding to blue arcs.
A. Frank proved the following result about sequences of moves to show correctness of the weighted matroid intersection algorithm.

Lemma 1 (Frank [5], [12, Lemma 13.35]) Let M = (E, F) be a matroid, c : E → R, and let X ∈ F be of minimum cost among all independent sets of cardinality |X|. Let x1, . . . , xl ∈ X and y1, . . . , yl ∉ X be such that (i) X − xi + yi ∈ F and c(xi) = c(yi) for all i, and (ii) X − xj + yi ∉ F for all i < j. Then X′ = (X \ {x1, . . . , xl}) ∪ {y1, . . . , yl} ∈ F, and X′ is of minimum cost among all independent sets of cardinality |X′|.

This suggests the following definition.
Augmenting paths. We call any shortest (w.r.t. number of arcs) red-blue alternating path linking a vertex in Y \ X to a vertex in X \ Y an augmenting path.

Lemma 2 If P is an augmenting path in D, then X′ = X ⊕ P is a minimum cost basis in B1 w.r.t. costs c1 − α, Y′ = Y ⊕ P is a minimum cost basis in B2 w.r.t. costs c2 − β, and |X′ ∩ Y′| = |X ∩ Y| + 1.
Proof By Lemma 1, we know that X′ = X ⊕ P is a minimum cost basis in B1 w.r.t. costs c1 − α, and Y′ = Y ⊕ P is a minimum cost basis in B2 w.r.t. costs c2 − β. The fact that the intersection increases by one follows directly from the construction of the digraph.
Primal update: Given (X, Y, α, β, λ) satisfying the optimality conditions and the associated digraph D, we update (X, Y) to (X′, Y′) with X′ = X ⊕ P and Y′ = Y ⊕ P, as long as some augmenting path P exists in D. It follows from the construction and Lemma 1 that in each iteration (X′, Y′, α, β, λ) satisfies the optimality conditions and that |X′ ∩ Y′| = |X ∩ Y| + 1.
Dual update: If D admits no augmenting path and |X ∩ Y| < k, let R denote the set of vertices/elements which are reachable from Y \ X on some red-blue alternating path. For each e ∈ E define the residual costs c̃1 := c1 − α and c̃2 := c2 − β. Note that, by optimality of X and Y w.r.t. c̃1 and c̃2, respectively, every exchange move has nonnegative residual cost. We compute a "step length" δ > 0 as follows: compute δ1 as the minimum residual c̃1-cost of a move in M1 leaving R, and δ2 as the minimum residual c̃2-cost of a move in M2 leaving R. It is possible that the sets over which the minima are calculated are empty; in these cases we define the corresponding minimum to be ∞. In the special case where M1 = M2 this case cannot occur.
Since neither a red nor a blue arc goes from R to E \ R, we know that both δ1 and δ2 are strictly positive, so that δ := min{δ1, δ2} > 0. Now, update

α′_e = α_e + δ if e ∈ R, and α′_e = α_e else,
β′_e = β_e if e ∈ R, and β′_e = β_e + δ else.

Lemma 3 (X, Y, α′, β′) satisfies the optimality conditions for λ′ = λ + δ.
Proof By construction and the choice of δ, we observe that X remains optimal w.r.t. c1 − α′, and Y remains optimal w.r.t. c2 − β′. To see the former, suppose for the sake of contradiction that X is not optimal w.r.t. c1 − α′. Then some exchange move has negative residual cost after the update, which means that this move would have determined a step length strictly smaller than δ1, in contradiction to our choice of δ. Similarly, it can be shown that Y is optimal w.r.t. c2 − β′. Thus, (X, Y, α′, β′) satisfies the optimality conditions for λ′ = λ + δ.
J. Edmonds proved the following feasibility condition for the non-weighted matroid intersection problem.

Lemma 4 (Edmonds [4]) Consider the digraph D̃
Based on this result we show the following feasibility condition for our problem.
Proof This follows from the fact that δ = ∞ if and only if (X \ Y) ∩ R = ∅, even if we construct the graph D without the tightness condition on the red edges. Non-existence of such a path implies infeasibility of the instance by Lemma 4.

Lemma 6
If (X , Y , α, β, λ) satisfies the optimality conditions and δ < ∞, a primal update can be performed after at most |E| dual updates.
Proof With each dual update, at least one more vertex enters the set R of reachable elements.

The primal-dual algorithm. Summarizing, we obtain the following algorithm.
1. Compute an optimal solution (X, Y) of the problem without a constraint on the intersection, together with duals α, β and a breakpoint λ satisfying the optimality conditions.
2. While |X ∩ Y| < k:
- If the auxiliary digraph D(X, Y, α, β) contains an augmenting path P, perform a primal update (X, Y) ← (X ⊕ P, Y ⊕ P).
- Else, compute step length δ as described above. If δ = ∞, stop; the instance is infeasible. Otherwise, perform a dual update with step length δ.
As a consequence of our considerations, the following theorem follows.

Theorem 3
The algorithm above solves (P=k) using at most k × |E| primal or dual augmentations. Moreover, the entire sequence of optimal solutions (Xk, Yk) for all (P=k) with k = 0, 1, . . . , K can be computed within |E|² primal or dual augmentations.
Proof By running the algorithm for both k = 0 and k = K, where we set K := min{rk(M1), rk(M2)}, we obtain optimal bases (Xk, Yk) for (P=k) for all k = 0, 1, . . . , K within |E|² primal or dual augmentations.

Remark 1
Note that an alternative method to solve (P=k) for a given fixed k is to compute solutions (X1, Y1) and (X2, Y2) for both (P≤k) and (P≥k), respectively, which can be done by running a matroid intersection algorithm twice. If one of those problems is infeasible, it directly follows that (P=k) is infeasible. Otherwise, it holds that |X1 ∩ Y1| ≤ k ≤ |X2 ∩ Y2|. We can prove that both X1, X2 are minimum cost bases with respect to c1 and both Y1, Y2 are minimum cost bases with respect to c2. Using a similar pivot strategy as in the algorithm above, one can then pivot from X1 to X2 and from Y2 to Y1 maintaining this property and obtain a pair of bases (X, Y) such that |X ∩ Y| = k.
Note, that this approach does not show how to directly obtain the whole parametric curve with respect to k.
Comparison to the results in [11] We would also like to point out that in a recent preprint, which is partially based on the preprint of the current paper, Iwamasa and Takazawa [11] show that problem (P=k) can also be solved by combining the ideas described in Appendix A for solving (P≤k) and (P≥k) with Murota's theory of valuated matroid intersection, as described in [15,16]. We note that the approach in [11] could also be extended to obtain the whole parametric curve in a similar fashion as we do here. In addition, Iwamasa and Takazawa [11] were able to generalize our results on polymatroids, described in Sect. 5, using similar techniques.
One of the main contributions of [11] is the structured analysis and modelling (using DCA) of matroid base problems and their generalizations with intersection constraints. They also present some relations of the problems studied in this paper with the Combinatorial Optimization Problem with Interaction Costs (COPIC) studied in [2,13].
The algorithm to solve (P=k) as stated in [11] is similar to our algorithm stated above. It is based on a reduction to the valuated independent assignment problem, called VIAP(k), and uses the existing augmenting path algorithm described in [15,16]. There are two main differences between the algorithm in [11] and our algorithm described in this section. Firstly, the optimality conditions differ slightly. Secondly, the algorithm in [11] uses a different graph for the primal updating procedure and performs the dual updates simultaneously with their primal update.

The recoverable robust matroid basis problem: an application
There is a strong connection between the model described in this paper and the recoverable robust matroid basis problem (RecRobMatroid) studied in [1,8], and [9]. In RecRobMatroid, we are given a matroid M = (E, B) on a ground set E with base set B, the recoverability parameter k ∈ N, a first stage cost function c1, and an uncertainty set U that contains different scenarios S, where each scenario S ∈ U gives a possible second stage cost function cS = (cS(e))_{e∈E}.
RecRobMatroid then consists of two stages: in the first stage, one needs to pick a basis X ∈ B. Then, after the actual scenario S ∈ U is revealed, there is a second "recovery" stage, where a second basis Y is picked with the goal of minimizing the worst-case cost c1(X) + cS(Y) under the constraint that Y differs in at most k elements from the original basis X. That is, we require that Y satisfies |Y \ X| ≤ k or, equivalently, that |X ∩ Y| ≥ rk(M) − k. Here, as usual, X Δ Y := (X \ Y) ∪ (Y \ X) denotes the symmetric difference of two sets X and Y. The recoverable robust matroid basis problem can be written as follows:

min_{X ∈ B} ( c1(X) + max_{S ∈ U} min{ cS(Y) : Y ∈ B, |Y \ X| ≤ k } ).

The main motivation for this model is a decision problem with two consecutive stages. The costs for the first stage are known, but the costs for the second stage are uncertain. In addition, due to environmental constraints, the changes between the first and second stage solutions are bounded by the recoverability parameter k. For instance, in the case of transversal matroids the two stages could correspond to the morning and afternoon shift, and one has to select the sets of workers able to perform the given tasks for both shifts. With the recoverability parameter one can configure how many workers have to be selected that work both shifts. On the other hand, the case of graphic matroids corresponds to selecting two consecutive sets of communication links in a network while minimizing usage costs in both stages, with an additional bound on the number of links one can change between the two stages.
There are several ways in which the uncertainty set U for the costs of the second stage can be represented. One popular way is the interval uncertainty representation. In this representation, we are given functions c : E → R and d : E → R+ and assume that the uncertainty set U can be represented by a set of |E| intervals:

U := { (cS(e))_{e∈E} : cS(e) ∈ [c(e), c(e) + d(e)] for all e ∈ E }.

In the worst-case scenario S̄ we have cS̄(e) = c(e) + d(e) for all e ∈ E. When we define c2(e) := cS̄(e) and k′ := rk(M) − k, it is clear that the RecRobMatroid problem under interval uncertainty representation (RecRobMatroid-Int, for short) is a special case of (P≥k′), in which B1 = B2.
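The reduction from RecRobMatroid-Int to the lower-bound problem can be sketched in a few lines (an illustrative sketch; function and variable names are ours):

```python
def recrob_to_P_geq(c, d, rank, k):
    """RecRobMatroid-Int -> (P>=k'): use the worst-case scenario
    c2(e) = c(e) + d(e) and translate the recovery bound |Y \\ X| <= k
    into the intersection bound |X ∩ Y| >= rank - k."""
    c2 = {e: c[e] + d[e] for e in c}
    return c2, rank - k

# The two formulations of the recovery constraint agree for bases of
# equal size |X| = |Y| = rank:
X, Y, rank, k = {1, 2, 3}, {2, 3, 4}, 3, 1
assert (len(Y - X) <= k) == (len(X & Y) >= rank - k)

c = {e: float(e) for e in range(1, 5)}
d = {e: 0.5 for e in range(1, 5)}
c2, k_prime = recrob_to_P_geq(c, d, rank, k)
print(c2[1], k_prime)  # 1.5 2
```

The key point is that under interval uncertainty the inner maximization over scenarios is attained by the single worst-case scenario, so the robust problem collapses to one deterministic instance of (P≥k′).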
Büsing [1] presented an algorithm for RecRobMatroid-Int which is exponential in k. In 2017, Hradovich, Kasperski, and Zieliński [9] proved that RecRobMatroid-Int can be solved in polynomial time via some iterative relaxation algorithm and asked for a strongly polynomial time algorithm. Shortly after, the same authors presented in [8] a strongly polynomial time primal-dual algorithm for the special case of RecRobMatroid-Int on a graphical matroid. The question whether a strongly polynomial time algorithm for RecRobMatroid-Int on general matroids exists remained open.
Furthermore, Hradovich, Kasperski, and Zieliński showed that an algorithm for (P≤k) can be used to obtain an approximation algorithm for the recoverable robust matroid basis problem with more general uncertainty sets U. For details see [8, Theorem 5].
Theorem 3 directly implies the following result as a special case.

Corollary 1 RecRobMatroid-Int can be solved in strongly polynomial time. In particular we can compute solutions for all possible choices of the recoverability parameter k using just one run of the primal-dual algorithm.
Knowing solutions for all possible choices of recoverability parameter k is specifically useful if a decision maker has to balance between cost and capability of recoverability. This viewpoint inspires the first of the following variants of RecRobMatroid-Int.

Two variants of RecRobMatroid-Int.
Let us consider two generalizations or variants of RecRobMatroid-Int. First, instead of setting a bound on the size of the symmetric difference |X Δ Y| of two bases X, Y, one could alternatively set a penalty on the size of the recovery. Let C : N → R be a penalty function which determines the penalty that needs to be paid as a function of the size of the symmetric difference |X Δ Y|. This leads to the following problem, which we denote by (P):

min { c1(X) + c2(Y) + C(|X Δ Y|) : X ∈ B, Y ∈ B }.
Clearly, (P) is equivalent to RecRobMatroid-Int if C(|X Δ Y|) is equal to zero as long as |X Δ Y| ≤ k, and C(|X Δ Y|) = ∞ otherwise. As it turns out, our primal-dual algorithm for solving (P=k) can be used to efficiently solve (P).

Corollary 2 Problem (P ) can be solved in strongly-polynomial time.
Proof By Theorem 3, optimal solutions (Xk, Yk) can be computed efficiently for all problems (P=k), k ∈ {0, 1, . . . , K}, within |E|² primal or dual augmentations of the algorithm above. It follows that the optimal solution to (P) is a minimizer of

min_{k ∈ {0,1,...,K}} c1(Xk) + c2(Yk) + C(|Xk Δ Yk|).

Yet another variant of RecRobMatroid-Int, or of the more general problem (P), would be to aim for the minimum expected second stage cost instead of the minimum worst-case second stage cost. Suppose, with respect to a given probability distribution per element e ∈ E, the expected second stage cost on element e ∈ E is E(cS(e)). By linearity of expectation, to solve these problems we can simply solve problem (P) with c2(e) := E(cS(e)).
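Once the parametric curve is available, the final minimization over k in the proof above is a one-liner (an illustrative sketch with hypothetical parametric costs; for B1 = B2 = B of rank `rank` one has |Xk Δ Yk| = 2(rank − k)):

```python
def solve_penalty_variant(opt, penalty, rank):
    """Given opt[k] = c1(X_k) + c2(Y_k), the optimal base-pair cost for
    every intersection size k (as produced by the parametric run of the
    primal-dual algorithm), an optimal solution of (P) minimizes
    opt[k] + penalty(|X_k Δ Y_k|) with |X_k Δ Y_k| = 2 * (rank - k)."""
    return min(range(len(opt)),
               key=lambda k: opt[k] + penalty(2 * (rank - k)))

# Hypothetical parametric costs for k = 0..3 and a linear recovery penalty.
opt = [6.0, 6.5, 8.0, 11.0]
best_k = solve_penalty_variant(opt, lambda s: 1.0 * s, rank=3)
print(best_k)  # 2
```

This mirrors the structure of Corollary 2: all the algorithmic work is in producing the curve; combining it with any penalty function C is then trivial.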

A generalization to polymatroid base polytopes
Recall that a function f : 2^E → R is called submodular if f(U) + f(V) ≥ f(U ∪ V) + f(U ∩ V) for all U, V ⊆ E, monotone if f(U) ≤ f(V) for all U ⊆ V ⊆ E, and normalized if f(∅) = 0. Given a submodular, monotone and normalized function f, the pair (E, f) is called a polymatroid, and f is called the rank function of the polymatroid (E, f). The associated polymatroid base polytope is defined as

B(f) := { x ∈ R^E_{≥0} : x(U) ≤ f(U) for all U ⊆ E, x(E) = f(E) },

where, as usual, x(U) := Σ_{e∈U} x_e for all U ⊆ E. We refer to the book "Submodular Functions and Optimization" by Fujishige [6] for details on polymatroids and on the polymatroidal flows referred to below.

Remark 2
We note that all of the arguments presented in this section work also for the more general setting of submodular systems (cf. [6]), which are defined on arbitrary distributive lattices instead of the Boolean lattice (2 E , ⊆, ∩, ∪).
Whenever f is a submodular function on ground set E with f(U) ∈ N for all U ⊆ E, we call the pair (E, f) an integer polymatroid. Polymatroids generalize matroids in the following sense: if the polymatroid rank function f is integer and additionally satisfies the unit-increase property f(U + e) ≤ f(U) + 1 for all U ⊆ E and e ∈ E, then the vertices of the associated polymatroid base polytope B(f) are exactly the incidence vectors of a matroid (E, B) with B := {B ⊆ E | f(B) = f(E)}. Conversely, the rank function rk : 2^E → R of a matroid, which assigns to every subset U ⊆ E the maximum cardinality rk(U) of an independent set within U, is a polymatroid rank function satisfying the unit-increase property. In particular, bases of a polymatroid base polytope are not necessarily 0-1 vectors anymore. Generalizing set-theoretic intersection and union from sets (a.k.a. 0-1 vectors) to arbitrary vectors can be done via the following binary operations, called meet and join: given two vectors x, y ∈ R^{|E|}, the meet of x and y is x ∧ y := (min{x_e, y_e})_{e∈E}, and the join of x and y is x ∨ y := (max{x_e, y_e})_{e∈E}. Instead of the size of the intersection, we now talk about the size of the meet, abbreviated by |x ∧ y| := Σ_{e∈E} min{x_e, y_e}. Similarly, the size of the join is |x ∨ y| := Σ_{e∈E} max{x_e, y_e}. Note that with these definitions we have |x| + |y| = |x ∧ y| + |x ∨ y|, where, as usual, for any x ∈ R^{|E|} we abbreviate |x| = Σ_{e∈E} x_e. It follows that for any base pair (x, y) ∈ B(f1) × B(f2) we have Σ_{e∈E: x_e > y_e}(x_e − y_e) = |x| − |x ∧ y| = f1(E) − |x ∧ y| and, analogously, Σ_{e∈E: x_e < y_e}(y_e − x_e) = f2(E) − |x ∧ y|. Therefore, it holds that |x ∧ y| ≥ k if and only if both Σ_{e∈E: x_e > y_e}(x_e − y_e) ≤ f1(E) − k and Σ_{e∈E: x_e < y_e}(y_e − x_e) ≤ f2(E) − k. The problem described in the next paragraph can be seen as a direct generalization of problem (P≥k) when going from matroid bases to more general polymatroid base polytopes.
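The meet/join identity above is easy to verify componentwise (a small sketch with hypothetical vectors, not data from the text):

```python
def meet(x, y):  # componentwise minimum, x ∧ y
    return [min(a, b) for a, b in zip(x, y)]

def join(x, y):  # componentwise maximum, x ∨ y
    return [max(a, b) for a, b in zip(x, y)]

size = sum  # |x| = sum of the entries of x

x, y = [2, 0, 3, 1], [1, 2, 3, 0]
# |x| + |y| = |x ∧ y| + |x ∨ y|, since min(a,b) + max(a,b) = a + b
assert size(x) + size(y) == size(meet(x, y)) + size(join(x, y))
print(size(meet(x, y)), size(join(x, y)))  # 4 8
```

For 0-1 vectors the meet and join reduce to set intersection and union, which is exactly how the matroid problems embed into the polymatroid setting.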
The model. Let f1, f2 be two polymatroid rank functions with associated polymatroid base polytopes B(f1) and B(f2), defined on the same ground set of resources E, let c1, c2 : E → R be two cost functions on E, and let k be some integer. The following problem, the Recoverable Polymatroid Basis Problem, which we denote by (P≥k), is a direct generalization of (P≥k) from matroids to polymatroids:

min { c1ᵀx + c2ᵀy : x ∈ B(f1), y ∈ B(f2), |x ∧ y| ≥ k }.

Corollary 3 If f1 and f2 are rank functions of integer polymatroids, then the polymatroid versions of (P≥k), (P≤k), and (P=k) can be solved in pseudo-polynomial time.
Proof Each integer polymatroid can be written as a matroid on a pseudo-polynomial number of resources, namely on ∑_{e∈E} f({e}) resources [7]. Hence, the strongly polynomial time algorithms we derived for problems (P_≥k), (P_≤k), and (P_=k) can directly be applied, but now have a pseudo-polynomial running time.
In the following, we first show that (P ≥k ) can be reduced to an instance of the polymatroidal flow problem, which is known to be computationally equivalent to a submodular flow problem and can thus be solved in strongly polynomial time. Afterwards, we show that the two problems (P ≤k ) and (P =k ), which can be obtained from (P ≥k ) by replacing constraint |x ∧ y| ≥ k by either |x ∧ y| ≤ k, or |x ∧ y| = k, respectively, are weakly NP-hard.

Reduction of polymatroid base problem (P ≥k ) to polymatroidal flows.
The polymatroidal flow problem can be described as follows: we are given a digraph G = (V, A), arc costs γ : A → ℝ, lower bounds l : A → ℝ, and, for every vertex v ∈ V, two submodular functions f_v⁻ : 2^{δ⁻(v)} → ℝ, defined on the set of subsets of the set δ⁻(v) of v-entering arcs, and f_v⁺ : 2^{δ⁺(v)} → ℝ, defined on the set of subsets of the set δ⁺(v) of v-leaving arcs. Given a flow ϕ : A → ℝ, the net-flow at v is abbreviated by ∂ϕ(v) := ∑_{a∈δ⁺(v)} ϕ(a) − ∑_{a∈δ⁻(v)} ϕ(a). For a set of arcs S ⊆ A, ϕ|_S denotes the vector (ϕ(a))_{a∈S}. The associated polymatroidal flow problem can now be formulated as follows:

min ∑_{a∈A} γ(a) ϕ(a)
s.t. ∂ϕ(v) = 0 for all v ∈ V,
     ϕ(a) ≥ l(a) for all a ∈ A,
     ϕ|_{δ⁻(v)} ∈ P(f_v⁻) and ϕ|_{δ⁺(v)} ∈ P(f_v⁺) for all v ∈ V.
As described in Fujishige's book (see [6], page 127ff), the polymatroidal flow problem is computationally equivalent to submodular flows and can thus be solved in strongly polynomial time.
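The net-flow notation can be made concrete on a toy instance; the digraph and flow values below are illustrative only:

```python
# Net-flow at a vertex v: flow on v-leaving arcs minus flow on v-entering
# arcs. A flow is conserved at v exactly when the net-flow vanishes.

def net_flow(v, phi):
    """phi maps arcs (tail, head) to flow values."""
    out_flow = sum(val for (tail, head), val in phi.items() if tail == v)
    in_flow = sum(val for (tail, head), val in phi.items() if head == v)
    return out_flow - in_flow

# A circulation on the triangle s -> a -> b -> s is conserved everywhere.
phi = {("s", "a"): 2.0, ("a", "b"): 2.0, ("b", "s"): 2.0}

for v in ("s", "a", "b"):
    assert net_flow(v, phi) == 0.0
```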

Theorem 4 The Recoverable Polymatroid Basis Problem (P ≥k ) can be reduced to the Polymatroidal Flow Problem.
Proof Given an instance (E, f_1, f_2, c_1, c_2, k) of (P_≥k), we construct an instance of the Polymatroidal Flow Problem as shown in Fig. 3. The graph G consists of the six special vertices s, u_1, u_2, v_1, v_2, t. The arc set consists of the arcs (s, u_1), (s, u_2), (v_1, t), (v_2, t), and (t, s). In addition, for each e ∈ E we have three sets of vertices and arcs, corresponding to the copies E_X, E_Y, and E_Z of the ground set. On the arcs leaving u_1 and entering v_2 we add the polymatroidal constraints on ϕ|_{δ⁺(u_1)} and ϕ|_{δ⁻(v_2)} induced by f_1 and f_2, respectively. On the arcs (v_1, t) and (s, u_2) we add the polymatroidal constraints on ϕ|_{δ⁺(v_1)} and ϕ|_{δ⁻(u_2)} and set f⁺_{v_1}({(v_1, t)}) := f_1(E) − k. We also set f⁻_{u_2}({(s, u_2)}) := f_2(E) − k. All other polymatroidal constraints are set to the trivial polymatroid; hence arbitrary in- and outflows are allowed. We show that the constructed instance of the Polymatroidal Flow Problem is equivalent to the given instance of the Recoverable Polymatroid Basis Problem (P_≥k).
Consider the two designated vertices u_1 and v_2 such that δ⁺(u_1) are the red arcs, and δ⁻(v_2) are the green arcs in Fig. 3. Take any feasible polymatroidal flow ϕ and let x̃ := ϕ|_{δ⁺(u_1)} denote the restriction of ϕ to the red arcs, and ỹ := ϕ|_{δ⁻(v_2)} denote the restriction of ϕ to the green arcs. Note that there is a unique arc entering u_1, which we denote by (s, u_1). Observe that the constraints ϕ(s, u_1) ≥ f_1(E) and ϕ|_{δ⁺(u_1)} ∈ P(f⁺_{u_1}) for the flow going into E_X and E_Z imply that the flow vector x̃ on the red arcs belongs to B(f⁺_{u_1}). Analogously, the flow vector ỹ satisfies ỹ ∈ B(f⁻_{v_2}). By setting x(e) := x̃((u_1, u^X_e)) + x̃((u_1, u^Z_e)) and y(e) := ỹ((v^Y_e, v_2)) + ỹ((v^Z_e, v_2)) for each e ∈ E, we have that the cost of the polymatroidal flow can be rewritten as ∑_{e∈E} c_1(e) x(e) + ∑_{e∈E} c_2(e) y(e).
The constraint ϕ(s, u_2) ≤ f_2(E) − k on the inflow into E_Y, and the constraint ϕ(v_1, t) ≤ f_1(E) − k on the outflow of E_X, together guarantee that at least k units of flow are routed through the common copies E_Z, so that |x ∧ y| ≥ k. Hence, the equivalence follows.
Note that (P_≥k) is computationally equivalent to the variant in which the constraint |x ∧ y| ≥ k is replaced by a bound on ‖x − y‖_1, which we denote by (P_{‖·‖_1}), because of the direct connection |x| + |y| = 2|x ∧ y| + ‖x − y‖_1 between |x ∧ y|, the size of the meet of x and y, and the 1-norm of x − y. It is an interesting open question whether this problem is also tractable if one replaces ‖x − y‖_1 ≤ k by arbitrary norms or, specifically, the 2-norm. We conjecture that methods based on convex optimization could work in this case, likely leading to a polynomial, but not strongly polynomial, running time.
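The identity linking the meet and the 1-norm can be checked directly; the vectors are illustrative:

```python
# Identity |x| + |y| = 2|x ∧ y| + ||x - y||_1. For base pairs, |x| = f1(E)
# and |y| = f2(E) are fixed, so a lower bound on the meet size is the same
# as an upper bound on the 1-norm of the difference.

def meet_size(x, y):
    return sum(min(a, b) for a, b in zip(x, y))

def l1(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

x = [3, 1, 0, 2]
y = [1, 1, 2, 2]

# Holds entrywise, since a + b = 2 * min{a, b} + |a - b|.
assert sum(x) + sum(y) == 2 * meet_size(x, y) + l1(x, y)
```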

Hardness of polymatroid basis problems (P ≤k ) and (P =k )
Let us consider the decision problem associated to problem (P_≤k), which can be formulated as follows: given an instance (f_1, f_2, c_1, c_2, k) of (P_≤k) together with some target value T ∈ ℝ, decide whether or not there exists a base pair (x, y) ∈ B(f_1) × B(f_2) with |x ∧ y| ≤ k and total cost at most T. Clearly, this decision problem belongs to the complexity class NP, since we can verify in polynomial time whether a given pair (x, y) of vectors satisfies the following three conditions: (i) the total cost of (x, y) is at most T, (ii) |x ∧ y| ≤ k, and (iii) x ∈ B(f_1) and y ∈ B(f_2). To verify (iii), we assume, as usual, the existence of an evaluation oracle.
Reduction from partition. To show that the problem is NP-complete, we show that any instance of the NP-complete problem partition can be polynomially reduced to an instance of (P_≤k)-decision. Recall the problem partition: given a set E of n real numbers a_1, …, a_n, the task is to decide whether or not the n numbers can be partitioned into two sets L and R with E = L ∪ R and L ∩ R = ∅ such that ∑_{j∈L} a_j = ∑_{j∈R} a_j. Given an instance a_1, …, a_n of partition with B := ∑_{j∈E} a_j, we construct the polymatroid rank function f(U) := min{∑_{j∈U} a_j, B/2} for U ⊆ E. It is not hard to see that f is indeed a polymatroid rank function, as it is normalized, monotone, and submodular. Moreover, we observe that an instance a_1, …, a_n of partition is a yes-instance if and only if there exist two bases x and y in the polymatroid base polytope B(f) satisfying |x ∧ y| ≤ 0. Similarly, it can be shown that any instance of partition can be reduced to an instance of the decision problem associated to (P_=k), since an instance of partition is a yes-instance if and only if for the polymatroid rank function f constructed above there exist two bases x and y in the polymatroid base polytope B(f) satisfying |x ∧ y| = 0.
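The reduction can be sanity-checked by brute force on a tiny yes-instance; here we use the rank function f(U) = min{∑_{j∈U} a_j, B/2}, one natural choice satisfying the stated properties (normalized, monotone, submodular), which is an assumption of this sketch:

```python
from itertools import chain, combinations

# Brute-force check of the partition reduction. Assumed rank function:
# f(U) = min(sum of a_j over U, B/2), a truncated modular function.

a = [1, 2, 3]          # yes-instance of partition: {1, 2} versus {3}
B = sum(a)
E = range(len(a))

def f(U):
    return min(sum(a[j] for j in U), B / 2)

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

# f is normalized, monotone, and submodular (verified by enumeration).
assert f(()) == 0
assert all(f(U) <= f(V) for U in subsets(E) for V in subsets(E)
           if set(U) <= set(V))
assert all(f(tuple(set(U) | set(V))) + f(tuple(set(U) & set(V)))
           <= f(U) + f(V)
           for U in subsets(E) for V in subsets(E))

# The partition L = {0, 1}, R = {2} yields two bases of B(f) whose meet
# has size zero: put a_j on the chosen side, zero elsewhere.
x = [1, 2, 0]
y = [0, 0, 3]
for z in (x, y):
    assert all(sum(z[j] for j in U) <= f(U) for U in subsets(E))
    assert sum(z) == f(tuple(E))
assert sum(min(p, q) for p, q in zip(x, y)) == 0
```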

More than two matroids
Another straightforward generalization of the matroid problems (P_≤k), (P_≥k), and (P_=k) is to consider more than two matroids, and a constraint on the intersection of the bases of those matroids. Given matroids M_1 = (E, B_1), …, M_M = (E, B_M) on a common ground set E, cost functions c_1, …, c_M : E → ℝ, and an integer k, problem (P^M_≤k) asks for bases B_i ∈ B_i for i = 1, …, M minimizing ∑_{i=1}^{M} c_i(B_i) subject to |B_1 ∩ ⋯ ∩ B_M| ≤ k. Analogously, we define the problems (P^M_≥k) and (P^M_=k) by replacing ≤ k by ≥ k and = k, respectively.
It is easy to observe that both variants (P M ≥k ) and (P M =k ) are NP-hard already for the case M = 3, since even for the feasibility question there is an easy reduction from the matroid intersection problem for three matroids.
Interestingly, this is different for (P^M_≤k). A direct generalization of the reduction for (P_≤k) to weighted matroid intersection (for two matroids) shown in Sect. 2 works again. To see that N_2 is indeed a matroid, we first observe that F̂_2 is non-empty and downward-closed (i.e., A ∈ F̂_2 and B ⊆ A imply B ∈ F̂_2). To see that F̂_2 satisfies the matroid-characterizing augmentation property, that A, B ∈ F̂_2 with |A| < |B| implies the existence of some e ∈ B \ A with A + e ∈ F̂_2, take any two independent sets A, B ∈ F̂_2 with |A| < |B|. If A cannot be augmented from B, i.e., if A + e ∉ F̂_2 for every e ∈ B \ A, then A must contain exactly k lines, and for each e ∈ B \ A, the M − 1 siblings of e must be contained in A. This implies |B| ≤ |A|, a contradiction. Hence, N_2 is a matroid.
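The augmentation property invoked here can be verified by brute force on small ground sets. The generic checker below is exercised on a uniform matroid as a hypothetical small example, not on the construction N_2 itself:

```python
from itertools import combinations

# Augmentation property: for all A, B in the family with |A| < |B|,
# some element e of B \ A can be added to A while staying in the family.
def has_augmentation_property(fam):
    for A in fam:
        for B in fam:
            if len(A) < len(B) and not any(A | {e} in fam for e in B - A):
                return False
    return True

# Downward closure: removing any element of an independent set keeps it
# independent.
def is_downward_closed(fam):
    return all(A - {e} in fam for A in fam for e in A)

# Uniform matroid U(2, 4): all subsets of {0, 1, 2, 3} of size at most 2.
E = range(4)
fam = {frozenset(S) for r in range(3) for S in combinations(E, r)}

assert frozenset() in fam
assert is_downward_closed(fam)
assert has_augmentation_property(fam)
```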

A Reduction from (P ≥k ) to independent bipartite matching
As mentioned in the introduction, one can also solve problem (P ≥k ) (and, hence, also problem (P ≤k )) by reduction to a special case of the basic path-matching problem [3], which is also known under the name independent bipartite matching problem [10]. The basic path-matching problem can be solved in strongly polynomial time, since it is a special case of the submodular flow problem.
Definition 1 (Independent bipartite matching) We are given two matroids M 1 = (E 1 , B 1 ) and M 2 = (E 2 , B 2 ) and a weighted, bipartite graph G on node sets E 1 and E 2 . The task is to find a minimum weight matching in G such that the elements that are matched in E 1 form a basis in M 1 and the elements matched in E 2 form a basis in M 2 .
Consider an instance of (P_≥k), where we are given two matroids M_1, M_2 on a common ground set E. We create an instance of the independent matching problem as follows: define a bipartite graph G = (E_1, E_2, A), where E_1 contains a distinct copy of E and an additional set U_1 of exactly k_2 := rk_2(E) − k elements. Let M̂_1 be the direct sum of M_1, on the copy of E in E_1, and the unrestricted matroid of all subsets of U_1. The set E_2 and the matroid M̂_2 are defined symmetrically, and U := U_1 ∪ U_2. The set A of edges in G contains an edge {e, e′} if e and e′ are copies of the same element of E in E_1 and E_2, respectively. In addition, we add all edges {e, u} where e is a copy of some element of E in E_1 and u ∈ U_2, or e is a copy of some element of E in E_2 and u ∈ U_1. See Fig. 4 for an illustration of the constructed bipartite graph.
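The construction of the bipartite graph can be sketched concretely; the ground set, ranks, and k below are assumptions for a toy instance:

```python
# Sketch of the bipartite graph G = (E1, E2, A) from the reduction.
# Ground set E, ranks rk1, rk2, and k are illustrative assumptions.

E = ["a", "b", "c"]
rk1, rk2, k = 2, 2, 1

# E1 holds a copy of E plus rk2 - k extra elements U1; symmetrically for E2.
E1 = [("E1", e) for e in E] + [("U1", i) for i in range(rk2 - k)]
E2 = [("E2", e) for e in E] + [("U2", i) for i in range(rk1 - k)]

edges = []
# One edge {e, e'} joining the two copies of each ground-set element.
edges += [(("E1", e), ("E2", e)) for e in E]
# Every copy of E in E1 is joined to every node of U2, and vice versa.
edges += [(("E1", e), ("U2", i)) for e in E for i in range(rk1 - k)]
edges += [(("U1", i), ("E2", e)) for e in E for i in range(rk2 - k)]

assert len(E1) == len(E) + rk2 - k
assert len(E2) == len(E) + rk1 - k
```

Since only rk_i(E) − k of the copies of E on each side can be matched into U, any feasible matching must use at least k of the copy-to-copy edges, which is exactly the intersection bound.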
Observe that every feasible solution to the independent bipartite matching instance matches a basis of M̂_1 with a basis of M̂_2. For i = 1, 2, these bases consist of a basis of M_i together with all elements of U_i. Hence, at most rk_i(E) − k of the elements of the basis of M_i can be matched with an element of U, so at least k elements need to be matched using edges of the form {e, e′}. This implies that the size of the intersection of the corresponding bases of M_1 and M_2 is at least k.
Now, take any tuple (X, Y, α, β) satisfying the optimality conditions (i), (ii), and (iii) for λ. Observe that the incidence vectors x and y of X and Y, respectively, together with the incidence vector z of the intersection X ∩ Y, form a feasible solution of the primal LP, while α and β yield a feasible solution of the dual LP. Since the objective values of these primal and dual feasible solutions coincide, it follows that any tuple (X, Y, α, β, λ) satisfying optimality conditions (i), (ii), and (iii) must be optimal for val(λ) and its dual.