## 1 Introduction

Matroids are fundamental and well-studied structures in combinatorial optimization. Recall that a matroid $$\mathcal {M}$$ is a tuple $$\mathcal {M}=(E,\mathcal {F})$$, consisting of a finite ground set E and a family of subsets $$\mathcal {F}\subseteq 2^E$$, called the independent sets, satisfying (i) $$\emptyset \in \mathcal {F}$$, (ii) if $$F\in \mathcal {F}$$ and $$F'\subset F$$, then $$F'\in \mathcal {F}$$, and (iii) if $$F,F'\in \mathcal {F}$$ with $$|F'|>|F|$$, then there exists some element $$e\in F'\setminus {F}$$ satisfying $$F\cup \{e\}\in \mathcal {F}$$. As usual when dealing with matroids, we assume that a matroid is specified via an independence oracle, that, given $$S\subseteq E$$ as input, checks whether or not $$S\in \mathcal {F}$$. Any inclusion-wise maximal set in an independence system $$\mathcal {F}$$ is called a basis of $$\mathcal {F}$$. Note that the set of bases $$\mathcal {B}=\mathcal {B}(\mathcal {M})$$ of a matroid $$\mathcal {M}$$ uniquely defines its independence system via $$\mathcal {F}(\mathcal {B})=\{F\subseteq E\mid F\subseteq B \text{ for } \text{ some } B\in \mathcal {B}\}.$$ Because of their rich structure, matroids allow for various different characterizations (see, e.g., ). In particular, matroids can be characterized algorithmically as the only downward-closed structures for which a simple greedy algorithm is guaranteed to return a basis $$B\in \mathcal {B}$$ of minimum cost $$c(B)=\sum _{e\in B} c(e)$$ for any linear cost function $$c:E\rightarrow \mathbb {R}_+.$$ Moreover, both the problem of finding a min-cost common basis of two matroids $$\mathcal {M}_1=(E,\mathcal {B}_1)$$ and $$\mathcal {M}_2=(E,\mathcal {B}_2)$$ on the same ground set, and the problem of maximizing a linear function over the intersection $$\mathcal {F}_1\cap \mathcal {F}_2$$ of two matroids $$\mathcal {M}_1=(E,\mathcal {F}_1)$$ and $$\mathcal {M}_2=(E,\mathcal {F}_2)$$, can be solved efficiently with a strongly polynomial primal-dual algorithm (cf. ).
Optimization over the intersection of three matroids, however, is easily seen to be NP-hard. See  for the most recent progress on approximation results for the latter problem.
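For concreteness, the greedy characterization mentioned above can be sketched in a few lines of Python. This is our own illustration, not taken from the paper; the uniform-matroid oracle used in the example is an assumption made for demonstration only.

```python
def greedy_min_cost_basis(E, is_independent, cost):
    """Greedily build a min-cost basis: scan elements by increasing cost."""
    basis = set()
    for e in sorted(E, key=lambda x: cost[x]):
        if is_independent(basis | {e}):   # keep e iff independence is preserved
            basis.add(e)
    return basis

# Example: the uniform matroid of rank 2 on E, in which a set is
# independent iff it has at most 2 elements (illustration only).
E = ["a", "b", "c", "d"]
cost = {"a": 3, "b": 1, "c": 2, "d": 5}
uniform_rank2 = lambda S: len(S) <= 2

print(sorted(greedy_min_cost_basis(E, uniform_rank2, cost)))  # -> ['b', 'c']
```

For matroids, and only for matroids among downward-closed set systems, this greedy scan is guaranteed to return a min-cost basis.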

Optimization on matroids, on their generalization to polymatroids, or on the intersection of two (poly-)matroids captures a wide range of interesting problems. In this paper, we introduce and study yet another variant of matroid-optimization problems. Our problems can be seen as a variant or generalization of matroid intersection: we aim at minimizing the sum of two linear cost functions over two bases chosen from two matroids on the same ground set, with an additional cardinality constraint on the intersection of the two bases. As it turns out, the problems with lower and upper bound constraints are computationally equivalent to matroid intersection, while the problem with equality constraint seems to be strictly more general and to lie somewhere on the boundary of efficiently solvable combinatorial optimization problems. Interestingly, while the problems on matroids with lower, upper, or equality constraint on the intersection can be shown to be solvable in strongly polynomial time, the extension of the problems towards polymatroids is solvable in strongly polynomial time for the lower bound constraint, but NP-hard for both the upper bound and the equality constraint.

The model. Given two matroids $$\mathcal {M}_{1} = (E, \mathcal {B}_{1})$$ and $$\mathcal {M}_{2} = (E, \mathcal {B}_{2})$$ on a common ground set E with base sets $$\mathcal {B}_1$$ and $$\mathcal {B}_2$$, an integer $$k \in \mathbb {N}$$, and two cost functions $$c_{1}, c_{2} :E \rightarrow \mathbb {R}$$, we consider the optimization problem to find a basis $$X \in \mathcal {B}_{1}$$ and a basis $$Y \in \mathcal {B}_{2}$$ minimizing $$c_{1}(X) + c_{2}(Y)$$ subject to either a lower bound constraint $$|X \cap Y| \ge k$$, an upper bound constraint $$|X \cap Y| \le k$$, or an equality constraint $$|X \cap Y| = k$$ on the size of the intersection of the two bases X and Y. Here, as usual, we write $$c_1(X)=\sum _{e\in X} c_1(e)$$ and $$c_2(Y)=\sum _{e\in Y} c_2(e)$$ to shorten notation. Let us denote the following problem by $$(P_{= k})$$.

\begin{aligned} \min&c_{1}(X) + c_{2}(Y)\\ \text {s.t. }&X \in \mathcal {B}_{1}\\&Y \in \mathcal {B}_{2}\\&|X \cap Y| = k \end{aligned}

Accordingly, if $$|X \cap Y| = k$$ is replaced by either the upper bound constraint $$|X \cap Y| \le k$$, or the lower bound constraint $$|X \cap Y| \ge k$$, the problem is called $$(P_{\le k})$$ or $$(P_{\ge k})$$, respectively. Clearly, it only makes sense to consider integers k in the range between 0 and $$K:=\min \{{\text {rk}}(\mathcal {M}_1), {\text {rk}}(\mathcal {M}_2)\}$$, where $${\text {rk}}(\mathcal {M}_i)$$ for $$i\in \{1,2\}$$ is the rank of matroid $$\mathcal {M}_i$$, i.e., the common cardinality of all bases of $$\mathcal {M}_i$$, which is well defined due to the augmentation property (iii). For details on matroids, we refer to .

Related literature on the Recoverable Robust Matroid Problem: Problem $$(P_{\ge k})$$ in the special case where $$\mathcal {B}_1=\mathcal {B}_2$$ is known and well-studied under the name Recoverable Robust Matroid Problem Under Interval Uncertainty Representation, see [1, 8, 9] and Sect. 4 below. For this special case of $$(P_{\ge k})$$, Büsing  presented an algorithm whose running time is exponential in k. In 2017, Hradovich, Kasperski, and Zieliński  proved that the problem can be solved in polynomial time via an iterative relaxation algorithm and asked for a strongly polynomial time algorithm. Shortly after, the same authors presented in  a strongly polynomial time primal-dual algorithm for the special case of the problem on a graphical matroid. The question whether a strongly polynomial time algorithm for $$(P_{\ge k})$$ with $$\mathcal {B}_1=\mathcal {B}_2$$ exists remained open.

In a recent preprint, which builds upon the preprint of this paper, Iwamasa and Takazawa  generalize the problems $$(P_{\le k})$$, $$(P_{\ge k})$$, $$(P_{= k})$$ to nonlinear convex cost functions and analyze them within the framework of discrete convex analysis . Combining this with the ideas in this paper, they obtain generalizations of our results.

Our contribution. In Sect. 2, we show that both $$(P_{\le k})$$ and $$(P_{\ge k})$$ can be polynomially reduced to weighted matroid intersection. Since weighted matroid intersection can be solved in strongly polynomial time by an elegant primal-dual algorithm , this answers the open question raised in  affirmatively.

As we can solve $$(P_{\le k})$$ and $$(P_{\ge k})$$ in strongly polynomial time via a combinatorial algorithm, the question arises whether the problem with equality constraint $$(P_{=k})$$ can be solved in strongly polynomial time as well, and whether there is an efficient parametric algorithm to compute the whole parametric curve with respect to k. In Sect. 3, we provide a strongly polynomial primal-dual algorithm that constructs an optimal solution of $$(P_{=k})$$ for every feasible value of k. This algorithm can also be used to solve an extension, called $$(P^{\triangle })$$, of the Recoverable Robust Matroid problem under Interval Uncertainty Representation. Our approach to solving $$(P^{\triangle })$$ uses the capability to solve $$(P_{=k})$$ for every k as an essential subroutine, see Sect. 4.

Then, in Sect. 5, we consider the generalization of problems $$(P_{\le k})$$, $$(P_{\ge k})$$, and $$(P_{= k})$$ from matroids to polymatroids with lower bound, upper bound, or equality constraint, respectively, on the size of the meet $$|x \wedge y| :=\sum _{e\in E} \min \{x_e, y_e\}$$. Interestingly, as it turns out, the generalization of $$(P_{\ge k})$$ can be solved in strongly polynomial time via reduction to some polymatroidal flow problem, while the generalizations of $$(P_{\le k})$$ and $$(P_{= k})$$ can be shown to be weakly NP-hard, already for uniform polymatroids.

Finally, in Sect. 6, we discuss the generalization of our matroid problems from two to three or more matroids. That is, we consider n matroids $$\mathcal {M}_i=(E,\mathcal {B}_i)$$, $$i\in [n]$$, and n linear cost functions $$c_i:E\rightarrow \mathbb {R}$$, for $$i\in [n]$$. The task is to find n bases $$X_i\in \mathcal {B}_i$$, $$i\in [n]$$, minimizing the cost $$\sum _{i=1}^n c_i(X_i)$$ subject to a cardinality constraint on the size of the intersection $$|\bigcap _{i=1}^n X_i|$$. When we are given an upper bound on the size of this intersection, we can find an optimal solution in polynomial time. However, when we have an equality or a lower bound constraint on the size of the intersection, the problem becomes strongly NP-hard.

## 2 Reduction of $$(P_{\le k})$$ and $$(P_{\ge k})$$ to weighted matroid intersection

We first note that $$(P_{\le k})$$ and $$(P_{\ge k})$$ are computationally equivalent. To see this, consider any instance $$(\mathcal {M}_1, \mathcal {M}_2, k, c_1, c_2)$$ of $$(P_{\ge k})$$, where $$\mathcal {M}_1=(E, \mathcal {B}_1)$$, and $$\mathcal {M}_2=(E, \mathcal {B}_2)$$ are two matroids on the same ground set E with base sets $$\mathcal {B}_1$$ and $$\mathcal {B}_2$$, respectively. Define $$c_2^*=-c_2$$, $$k^*={\text {rk}}(\mathcal {M}_1)-k$$, and let $$\mathcal {M}_2^*=(E, \mathcal {B}_2^*)$$ with $$\mathcal {B}_2^*=\{E\setminus {Y}\mid Y\in \mathcal {B}_2\}$$ be the dual matroid of $$\mathcal {M}_2$$. Since

1. (i)

$$|X\cap Y| \ge k \iff |X\cap (E\setminus {Y})| =|X|-|X\cap Y| \le {\text {rk}}(\mathcal {M}_1)-k=k^*$$, and

2. (ii)

$$c_1(X)+c_2(Y)=c_1(X)+c_2(E)-c_2(E\setminus {Y})= c_1(X)+c^*_2(E\setminus {Y})+c_2(E)$$,

where $$c_2(E)$$ is a constant, it follows that $$(X, Y)$$ is a minimizer of $$(P_{\ge k})$$ for the given instance $$(\mathcal {M}_1, \mathcal {M}_2, k, c_1, c_2)$$ if and only if $$(X, E\setminus {Y})$$ is a minimizer of $$(P_{\le k^*})$$ for $$(\mathcal {M}_1, \mathcal {M}^*_2, k^*, c_1, c^*_2)$$. Similarly, it can be shown that any problem of type $$(P_{\le k})$$ polynomially reduces to an instance of type $$(P_{\ge k^*})$$.
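Both identities underlying this reduction are elementary and easy to verify numerically; the following sketch (our own illustration, not part of the paper) checks them on random sets:

```python
import random

random.seed(0)
E = list(range(6))
c2 = {e: random.randint(-5, 5) for e in E}
c2_star = {e: -c2[e] for e in E}          # the negated cost function c2*

for _ in range(100):
    X = set(random.sample(E, 3))
    Y = set(random.sample(E, 3))
    comp_Y = set(E) - Y                   # the complement E \ Y
    # (i) complementing Y flips the constraint on the intersection size:
    assert len(X & comp_Y) == len(X) - len(X & Y)
    # (ii) the objective shifts only by the constant c2(E):
    assert sum(c2[e] for e in Y) == sum(c2[e] for e in E) + sum(c2_star[e] for e in comp_Y)

print("identities verified")
```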

Note that, for a set S and an element e, we abbreviate $$S \cup \{e\}$$ by $$S+e$$ and $$S \setminus \{e\}$$ by $$S-e$$.

### Theorem 1

Both problems, $$(P_{\le k})$$ and $$(P_{\ge k})$$, can be reduced to weighted matroid intersection.

### Proof

By our observation above, it suffices to show that $$(P_{\le k})$$ can be reduced to weighted matroid intersection. Let $${\tilde{E}} := E_{1} \mathbin {\dot{\cup }}E_{2}$$, where $$E_{1}, E_{2}$$ are two copies of our original ground set E. We consider $$\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1}), \mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})$$, two special types of matroids on this new ground set $${\tilde{E}}$$, where $$\mathcal {F}_{1}, \mathcal {F}_{2}, {\tilde{\mathcal {F}}}_{1}, {\tilde{\mathcal {F}}}_{2}$$ are the sets of independent sets of $$\mathcal {M}_{1}, \mathcal {M}_{2}, \mathcal {N}_{1}, \mathcal {N}_{2}$$ respectively. Firstly, let $$\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1})$$ be the direct sum of $$\mathcal {M}_{1}$$ on $$E_{1}$$ and $$\mathcal {M}_{2}$$ on $$E_{2}$$. That is, for $$A \subseteq {\tilde{E}}$$ it holds that $$A \in {\tilde{\mathcal {F}}}_{1}$$ if and only if $$A \cap E_{1} \in \mathcal {F}_{1}$$ and $$A \cap E_{2} \in \mathcal {F}_{2}$$.

The second matroid $$\mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})$$ is defined as follows: we call $$e_{1} \in E_{1}$$ and $$e_{2} \in E_{2}$$ a pair, if $$e_{1}$$ and $$e_{2}$$ are copies of the same element in E. If $$e_{1}, e_{2}$$ are a pair then we call $$e_{2}$$ the sibling of $$e_{1}$$ and vice versa. Then

\begin{aligned} {\tilde{\mathcal {F}}}_{2} := \{ A \subseteq {\tilde{E}} : A \text{ contains at most } k \text{ pairs}\}. \end{aligned}

For any $$A \subseteq {\tilde{E}}$$, the sets $$X=A\cap E_1$$ and $$Y=A\cap E_2$$ form a feasible solution for $$(P_{\le k})$$ if and only if A is a basis in matroid $$\mathcal {N}_1$$ and independent in matroid $$\mathcal {N}_2.$$ Thus, $$(P_{\le k})$$ is equivalent to the weighted matroid intersection problem

\begin{aligned} \max \{w(A) :A\in {\tilde{\mathcal {F}}}_{1} \cap {\tilde{\mathcal {F}}}_{2}\}, \end{aligned}

with weight function

\begin{aligned} w(e)= {\left\{ \begin{array}{ll} C-c_1(e) & \text{ if } e\in E_1,\\ C-c_2(e) & \text{ if } e\in E_2, \end{array}\right. } \end{aligned}

for some constant $$C>0$$ chosen large enough to ensure that every maximum-weight common independent set A is a basis in $$\mathcal {N}_1$$. To see that $$\mathcal {N}_2$$ is indeed a matroid, we first observe that $${\tilde{\mathcal {F}}}_{2}$$ is non-empty and downward-closed (i.e., $$A\in {\tilde{\mathcal {F}}}_{2}$$ and $$B\subset A$$ imply $$B\in {\tilde{\mathcal {F}}}_{2}$$). To see that $${\tilde{\mathcal {F}}}_{2}$$ satisfies the matroid-characterizing augmentation property

\begin{aligned} A,B\in {\tilde{\mathcal {F}}}_{2} \text{ with } |A| < |B| \text{ implies } \exists e\in B\setminus {A} \text{ with } A+e \in {\tilde{\mathcal {F}}}_{2}, \end{aligned}

take any two independent sets $$A,B\in {\tilde{\mathcal {F}}}_{2}$$. If A cannot be augmented from B, i.e., if $$A+e\not \in {\tilde{\mathcal {F}}}_{2}$$ for every $$e\in B\setminus {A}$$, then A must contain exactly k pairs, and for each $$e\in B\setminus {A}$$, the sibling of e must be contained in A. This implies $$|B| \le |A|$$, i.e., $$\mathcal {N}_2$$ is a matroid. $$\square$$
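The two independence oracles used in this reduction are straightforward to implement. In the sketch below (our own illustration), elements of $${\tilde{E}}$$ are represented as pairs (e, i), where $$i \in \{1, 2\}$$ indicates the copy of the ground set:

```python
def make_oracles(indep1, indep2, k):
    """Independence oracles for N1 (direct sum) and N2 (at most k pairs)."""
    def indep_N1(A):
        A1 = {e for (e, i) in A if i == 1}   # restriction to the first copy
        A2 = {e for (e, i) in A if i == 2}   # restriction to the second copy
        return indep1(A1) and indep2(A2)

    def indep_N2(A):
        elems = [e for (e, _) in A]
        pairs = len(elems) - len(set(elems))  # elements present in both copies
        return pairs <= k

    return indep_N1, indep_N2

# Tiny example: both M1 and M2 are the uniform matroid of rank 2 on {0,1,2}.
u2 = lambda S: len(S) <= 2
n1, n2 = make_oracles(u2, u2, k=1)
A = {(0, 1), (1, 1), (0, 2), (2, 2)}   # X = {0,1}, Y = {0,2}; one pair (element 0)
print(n1(A), n2(A))  # -> True True
```

With these two oracles, any weighted matroid intersection routine can be run on the new ground set as described in the proof.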

Weighted matroid intersection is known to be solvable in strongly polynomial time (see, e.g., Frank ). Hence, both $$(P_{\le k})$$ and $$(P_{\ge k})$$ can be solved in strongly polynomial time.

The same result can be obtained by a reduction to independent matching (see Appendix A), which in bipartite graphs is known to be solvable in strongly polynomial time as well (see ).

## 3 A strongly polynomial primal-dual algorithm for $$(P_{=k})$$

We saw in the previous section that both problems, $$(P_{\le k})$$ and $$(P_{\ge k})$$, can be solved in strongly polynomial time via a weighted matroid intersection algorithm. This leads to the question whether we can solve the problem $$(P_{=k})$$ with equality constraint on the size of the intersection efficiently as well, and whether a parametric algorithm exists, which computes the whole parametric curve with respect to k.

### 3.1 The algorithm

In this section, we describe a primal-dual strongly polynomial algorithm for $$(P_{=k})$$. Our algorithm can be seen as a generalization of the algorithm presented by Hradovich et al. in . However, the analysis of our algorithm turns out to be much simpler than the one in .

Let us consider the following piecewise linear concave curve

\begin{aligned} \text{ val }{(\lambda )}=\min _{X\in \mathcal {B}_1, Y\in \mathcal {B}_2} c_1(X)+c_2(Y)-\lambda |X\cap Y|, \end{aligned}

which depends on parameter $$\lambda \ge 0$$.

Note that $${\text {val}}(\lambda )+k \lambda$$ is the Lagrangian relaxation of problem $$(P_{=k})$$. Observe that any base pair $$(X,Y)\in \mathcal {B}_1\times \mathcal {B}_2$$ determines a line $$L_{(X,Y)}(\lambda )=c_1(X)+c_2(Y)-\lambda |X\cap Y|$$ that hits the vertical axis at $$c_1(X)+c_2(Y)$$ and has slope $$-|X\cap Y|$$. Thus, val$$(\lambda )$$ is the lower envelope of all such lines. It follows that every base pair $$(X,Y)\in \mathcal {B}_1\times \mathcal {B}_2$$ whose line touches the curve val$$(\lambda )$$ in either a segment or a breakpoint, and which satisfies $$|X\cap Y|=k$$, is a minimizer of $$(P_{=k})$$.
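On tiny instances, the curve val$$(\lambda )$$ can be evaluated by brute force as exactly this lower envelope; a sketch (our own illustration, using uniform matroids for concreteness):

```python
from itertools import combinations

def val(lam, bases1, bases2, c1, c2):
    """Lower envelope: minimize c1(X) + c2(Y) - lam * |X & Y| over base pairs."""
    return min(
        sum(c1[e] for e in X) + sum(c2[e] for e in Y) - lam * len(set(X) & set(Y))
        for X in bases1 for Y in bases2
    )

E = range(4)
bases = list(combinations(E, 2))   # bases of the uniform matroid U_{2,4}
c1 = {0: 1, 1: 2, 2: 3, 3: 4}
c2 = {0: 4, 1: 3, 2: 2, 3: 1}
# val is piecewise linear and concave in lambda:
print([val(lam, bases, bases, c1, c2) for lam in (0, 1, 2, 3)])  # -> [6, 6, 5, 4]
```

On this instance the envelope is formed by the lines of slope 0, -1, and -2, with breakpoints at $$\lambda = 1$$ and $$\lambda = 3$$.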

Sketch of our algorithm. We first solve the problem

\begin{aligned} \min \{c_1(X)+c_2(Y)\mid X\in \mathcal {B}_1, \ Y\in \mathcal {B}_2\}, \end{aligned}

without any constraint on the intersection. Note that this problem can be solved with a matroid greedy algorithm. Let $$({\bar{X}}, {\bar{Y}})$$ be an optimal solution of this problem.

1. 1.

If $$|{\bar{X}}\cap {\bar{Y}}|=k$$, we are done as $$({\bar{X}}, {\bar{Y}})$$ is optimal for $$(P_{=k})$$.

2. 2.

Else, if $$|{\bar{X}}\cap {\bar{Y}}|=k'<k$$, our algorithm starts with the optimal solution $$({\bar{X}}, {\bar{Y}})$$ for $$(P_{=k'})$$, and iteratively increases $$k'$$ by one until $$k'=k$$. Our algorithm maintains as invariant an optimal solution $$({\bar{X}}, {\bar{Y}})$$ for the current problem $$(P_{=k'})$$, together with some dual optimal solution $$({\bar{\alpha }}, {\bar{\beta }})$$ satisfying the optimality conditions, stated in Theorem 2 below, for the current breakpoint $${\bar{\lambda }}$$. Details of the algorithm are described below.

3. 3.

Else, if $$|{\bar{X}}\cap {\bar{Y}}|>k$$, we instead consider an instance of $$(P_{=k^*})$$ for $$k^*={\text {rk}}(\mathcal {M}_1)-k$$, costs $$c_1$$ and $$c_2^*=-c_2$$, and the two matroids $$\mathcal {M}_1=(E, \mathcal {B}_1)$$ and $$\mathcal {M}_2^*=(E, \mathcal {B}_2^*)$$. As seen above, an optimal solution $$(X, E\setminus {Y})$$ of problem $$(P_{=k^*})$$ corresponds to an optimal solution $$(X, Y)$$ of our original problem $$(P_{=k})$$, and vice versa. Moreover, $$|{\bar{X}}\cap {\bar{Y}}|>k$$ for the initial base pair $$({\bar{X}}, {\bar{Y}})$$ implies that $$|{\bar{X}}\cap (E\setminus {{\bar{Y}}})|=|{\bar{X}}|-|{\bar{X}}\cap {\bar{Y}}|<k^*$$. Thus, starting with the initial feasible solution $$({\bar{X}}, E\setminus {{\bar{Y}}})$$ for $$(P_{=k^*})$$, we can iteratively increase $$|{\bar{X}}\cap (E\setminus {{\bar{Y}}})|$$ until $$|{\bar{X}}\cap (E\setminus {{\bar{Y}}})|=k^*,$$ as described in step 2.

Note that a slight modification of the algorithm allows us to compute the optimal solutions $$({\bar{X}}_k, {\bar{Y}}_k)$$ for all $$k\in \{0, \ldots , K\}$$ in only two runs: run the algorithm for $$k=0$$ and for $$k=K$$.

An optimality condition. The following optimality condition turns out to be crucial for the design of our algorithm.

### Theorem 2

(Sufficient pair optimality conditions) For fixed $$\lambda \ge 0$$, base pair $$(X,Y)\in \mathcal {B}_1\times \mathcal {B}_2$$ is a minimizer of $${\text {val}}(\lambda )$$ if there exist $$\alpha ,\beta \in \mathbb {R}^{|E|}_+$$ such that

1. (i)

X is a min cost basis for $$c_{1} - \alpha$$, and Y is a min cost basis for $$c_{2} - \beta$$;

2. (ii)

$$\alpha _{e} = 0$$ for $$e \in X\setminus {Y}$$, and $$\beta _{e} = 0$$ for $$e \in Y\setminus {X}$$;

3. (iii)

$$\alpha _{e} + \beta _{e} = \lambda$$ for each $$e \in E$$.

The proof of Theorem 2 can be found in Appendix B.
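On tiny instances, the three conditions of Theorem 2 can be checked directly by brute force; a sketch of such a verifier (our own illustration, all names are ours):

```python
from itertools import combinations

def is_optimal_pair(X, Y, alpha, beta, lam, bases1, bases2, c1, c2, E):
    """Check conditions (i)-(iii) of Theorem 2 (tiny instances only)."""
    red1 = lambda S: sum(c1[e] - alpha[e] for e in S)   # residual cost c1 - alpha
    red2 = lambda S: sum(c2[e] - beta[e] for e in S)    # residual cost c2 - beta
    cond_i = (red1(X) == min(red1(B) for B in bases1)   # (i) min-cost bases
              and red2(Y) == min(red2(B) for B in bases2))
    cond_ii = (all(alpha[e] == 0 for e in set(X) - set(Y))   # (ii)
               and all(beta[e] == 0 for e in set(Y) - set(X)))
    cond_iii = all(alpha[e] + beta[e] == lam for e in E)     # (iii)
    return cond_i and cond_ii and cond_iii

# Tiny example: both matroids are U_{2,3}; with lam = 0 and alpha = beta = 0,
# a pair of individually min-cost bases satisfies all three conditions.
E = [0, 1, 2]
bases = list(combinations(E, 2))
c1 = {0: 1, 1: 2, 2: 3}
c2 = {0: 3, 1: 1, 2: 2}
zero = {e: 0 for e in E}
print(is_optimal_pair((0, 1), (1, 2), zero, zero, 0, bases, bases, c1, c2, E))  # -> True
```

By Theorem 2, a pair passing this check minimizes val$$(\lambda )$$ for the given $$\lambda$$.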

Construction of the auxiliary digraph. Given a tuple $$(X,Y,\alpha ,\beta , \lambda )$$ satisfying the optimality conditions stated in Theorem 2, we construct the auxiliary digraph $$D=D(X,Y,\alpha ,\beta )$$ with red-blue colored arcs as follows (see Fig. 2):

• one vertex for each element in E;

• a red arc $$(e,f)$$ if $$e\not \in X$$, $$f\in X$$, $$X-f+e \in \mathcal {B}_1$$, and $$c_1(e)-\alpha _e=c_1(f)-\alpha _f$$; and

• a blue arc $$(f,g)$$ if $$f\in Y$$, $$g\not \in Y$$, $$Y-f+g\in \mathcal {B}_2$$, and $$c_2(g)-\beta _g=c_2(f)-\beta _f.$$

Note: Although not depicted in Fig. 2, there might well be blue arcs going from $$Y\setminus {X}$$ to either $$E\setminus {(X\cup Y)}$$ or $$X\setminus {Y}$$, or red arcs going from $$Y\setminus {X}$$ to $$X\setminus {Y}$$.

Observe that any red arc $$(e,f)$$ represents a move in $$\mathcal {B}_1$$ from X to $$X \cup \{e\} \setminus \{f\}\in \mathcal {B}_1$$. To shorten notation, we write $$X \oplus (e, f) := X \cup \{e\} \setminus \{f\}$$. Analogously, any blue arc $$(e,f)$$ represents a move from Y to $$Y \cup \{f\} \setminus \{e\}\in \mathcal {B}_2$$. Like above, we write $$Y\oplus (e, f) := Y \cup \{f\} \setminus \{e\}$$. Given a red-blue alternating path P in D, we denote by $$X' = X \oplus P$$ the set obtained from X by performing all moves corresponding to red arcs, and, accordingly, by $$Y' = Y \oplus P$$ the set obtained from Y by performing all moves corresponding to blue arcs.
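A minimal sketch of this construction (our own illustration; since exchanging single elements preserves cardinality, an independence oracle suffices to certify the basis exchanges):

```python
def build_digraph(E, X, Y, alpha, beta, c1, c2, indep1, indep2):
    """Red and blue arcs of the auxiliary digraph D(X, Y, alpha, beta)."""
    red, blue = set(), set()
    for e in E:
        for f in E:
            if (e not in X and f in X and indep1((X - {f}) | {e})
                    and c1[e] - alpha[e] == c1[f] - alpha[f]):
                red.add((e, f))      # move X -> X + e - f
            if (e in Y and f not in Y and indep2((Y - {e}) | {f})
                    and c2[f] - beta[f] == c2[e] - beta[e]):
                blue.add((e, f))     # move Y -> Y + f - e
    return red, blue

# Tiny example: both matroids U_{2,3}, zero dual variables.
u2 = lambda S: len(S) <= 2
E = {0, 1, 2}
X, Y = {0, 1}, {1, 2}
zero = {e: 0 for e in E}
c1 = {0: 1, 1: 2, 2: 2}
c2 = {0: 2, 1: 2, 2: 2}
red, blue = build_digraph(E, X, Y, zero, zero, c1, c2, u2, u2)
print(sorted(red), sorted(blue))  # -> [(2, 1)] [(1, 0), (2, 0)]
```

In this example the only red arc is (2, 1), since element 2 has the same residual cost as element 1 but not as element 0.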

A. Frank proved the following result about sequences of moves, to show correctness of the weighted matroid intersection algorithm.

### Lemma 1

(Frank [5, 12, Lemma 13.35]) Let $$\mathcal {M}= (E, \mathcal {F})$$ be a matroid, $$c :E \rightarrow \mathbb {R}$$ and $$X \in \mathcal {F}$$. Let $$x_{1}, \dots , x_{l} \in X$$ and $$y_{1}, \dots , y_{l} \notin X$$ with

1. 1.

$$X + y_{j} - x_{j} \in \mathcal {F}$$ and $$c(x_{j}) = c(y_{j})$$ for $$j=1,\dots ,l$$, and

2. 2.

$$X + y_{j} -x_{i} \notin \mathcal {F}$$ or $$c(x_{i}) > c(y_{j})$$ for $$1 \le i,j \le l$$ with $$i \ne j$$.

Then $$(X \setminus \{x_{1}, \dots , x_{l}\}) \cup \{y_{1}, \dots , y_{l}\} \in \mathcal {F}$$.

This suggests the following definition.

Augmenting paths. We call any shortest (w.r.t. number of arcs) red-blue alternating path linking a vertex in $$Y\setminus {X}$$ to a vertex in $$X\setminus {Y}$$ an augmenting path.

### Lemma 2

If P is an augmenting path in D, then

• $$X'=X\oplus P$$ is a min cost basis in $$\mathcal {B}_1$$ w.r.t. costs $$c_1-\alpha$$,

• $$Y'=Y\oplus P$$ is a min cost basis in $$\mathcal {B}_2$$ w.r.t. costs $$c_2-\beta$$, and

• $$|X'\cap Y'|=|X\cap Y|+1$$.

### Proof

By Lemma 1, we know that $$X'=X\oplus P$$ is a min cost basis in $$\mathcal {B}_1$$ w.r.t. costs $$c_1-\alpha$$, and $$Y'=Y\oplus P$$ is a min cost basis in $$\mathcal {B}_2$$ w.r.t. costs $$c_2-\beta$$. The fact that the intersection increases by one follows directly from the construction of the digraph. $$\square$$

Primal update: Given $$(X,Y,\alpha ,\beta , \lambda )$$ satisfying the optimality conditions and the associated digraph D, we update $$(X, Y)$$ to $$(X', Y')$$ with $$X'=X\oplus P$$ and $$Y'=Y\oplus P$$, as long as some augmenting path P exists in D. It follows from Lemma 2 that in each iteration $$(X',Y',\alpha , \beta , \lambda )$$ satisfies the optimality conditions and that $$|X'\cap Y'|=|X\cap Y|+1$$.

Dual update: If D admits no augmenting path, and $$|X\cap Y| < k$$, let R denote the set of vertices/elements which are reachable from $$Y\setminus {X}$$ on some red-blue alternating path. Note that $$Y\setminus {X}\subseteq R$$ and $$(X\setminus {Y})\cap R=\emptyset$$. For each $$e\in E$$ define the residual costs

\begin{aligned} {\bar{c}}_1(e):=c_1(e)-\alpha _e \quad \text{ and } \quad {\bar{c}}_2(e):=c_2(e)-\beta _e. \end{aligned}

Note that, by optimality of X and Y w.r.t. $${\bar{c}}_1$$ and $${\bar{c}}_2$$, respectively, we have that $${\bar{c}}_1(e)\ge {\bar{c}}_1(f)$$ whenever $$X-f+e\in \mathcal {B}_1$$, and $${\bar{c}}_2(e)\ge {\bar{c}}_2(f)$$ whenever $$Y-f+e\in \mathcal {B}_2$$.

We compute a “step length” $$\delta >0$$ as follows: Compute $$\delta _1$$ and $$\delta _2$$ via

\begin{aligned} \delta _1&:= \min \{ {\bar{c}}_1(e)-{\bar{c}}_1(f)\mid e\in R\setminus {X},\ f\in X\setminus {R}:X-f+e\in \mathcal {B}_1\},\\ \delta _2&:= \min \{ {\bar{c}}_2(g)-{\bar{c}}_2(f)\mid g\not \in Y \cup R ,\ f\in Y\cap R:Y-f+g\in \mathcal {B}_2\}. \end{aligned}

The sets over which these minima are taken may be empty; in that case we define the corresponding minimum to be $$\infty$$. In the special case where $$\mathcal {M}_{1} = \mathcal {M}_{2}$$, this cannot occur.

Since neither a red nor a blue arc goes from R to $$E\setminus {R}$$, we know that both $$\delta _1$$ and $$\delta _2$$ are strictly positive, so that $$\delta :=\min \{\delta _1, \delta _2\}>0$$. Now, update

\begin{aligned} \alpha '_e= {\left\{ \begin{array}{ll} \alpha _e+\delta & \text{ if } e\in R,\\ \alpha _e & \text{ else, } \end{array}\right. } \quad \text{ and } \quad \beta '_e= {\left\{ \begin{array}{ll} \beta _e & \text{ if } e\in R,\\ \beta _e +\delta & \text{ else. } \end{array}\right. } \end{aligned}
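The step length computation and the dual update above can be sketched as follows (our own illustration; `float("inf")` encodes the empty-minimum case):

```python
def dual_update(E, X, Y, R, alpha, beta, c1, c2, indep1, indep2):
    """Compute delta = min(delta1, delta2) and shift the dual variables."""
    rc1 = {e: c1[e] - alpha[e] for e in E}   # residual costs c1 - alpha
    rc2 = {e: c2[e] - beta[e] for e in E}    # residual costs c2 - beta
    delta1 = min((rc1[e] - rc1[f] for e in R - X for f in X - R
                  if indep1((X - {f}) | {e})), default=float("inf"))
    delta2 = min((rc2[g] - rc2[f] for g in E - Y - R for f in Y & R
                  if indep2((Y - {f}) | {g})), default=float("inf"))
    delta = min(delta1, delta2)
    new_alpha = {e: alpha[e] + (delta if e in R else 0) for e in E}
    new_beta = {e: beta[e] + (0 if e in R else delta) for e in E}
    return delta, new_alpha, new_beta

# Tiny example (U_{2,3} for both matroids); here R = {2} is the reachable set.
u2 = lambda S: len(S) <= 2
E = {0, 1, 2}
X, Y, R = {0, 1}, {1, 2}, {2}
zero = {e: 0 for e in E}
c1 = {0: 1, 1: 2, 2: 4}
c2 = {0: 5, 1: 1, 2: 1}
delta, alpha, beta = dual_update(E, X, Y, R, zero, zero, c1, c2, u2, u2)
print(delta)  # -> 2
```

In this example $$\delta _1 = 2$$ (exchange element 2 for element 1) and $$\delta _2 = 4$$, so the duals of R are raised by 2.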

### Lemma 3

$$(X,Y,\alpha ', \beta ')$$ satisfies the optimality conditions for $$\lambda '=\lambda +\delta$$.

### Proof

By construction, we have for each $$e\in E$$

• $$\alpha _e'+\beta _e'=\alpha _e+\beta _e+\delta = \lambda +\delta =\lambda '$$.

• $$\alpha _e'=0$$ for $$e\in X\setminus {Y}$$, since $$\alpha _e=0$$ and $$e \notin R$$ (as $$(X\setminus {Y})\cap R=\emptyset$$).

• $$\beta '_e=0$$ for $$e \in Y \setminus {X}$$, since $$\beta _e=0$$ and $$(Y\setminus {X})\subseteq R$$.

Moreover, by construction and choice of $$\delta$$, we observe that X and Y are optimal for $$c_1-\alpha '$$ and $$c_2-\beta '$$, since

• $$c_1(e)-\alpha '_e\ge c_1(f)-\alpha '_f$$ whenever $$X-f+e \in \mathcal {B}_1$$,

• $$c_2(g)-\beta '_g\ge c_2(f)-\beta '_f$$ whenever $$Y-f+g \in \mathcal {B}_2$$.

To see this, suppose for the sake of contradiction that $$c_1(e)-\alpha '_e < c_1(f)-\alpha '_f$$ for some pair $$\{e,f\}$$ with $$e\not \in X$$, $$f\in X$$ and $$X-f+e \in \mathcal {B}_1$$. Then $$e\in R$$, $$f\not \in R$$, $$\alpha '_e =\alpha _e+\delta$$, and $$\alpha '_f=\alpha _f,$$ implying $$\delta > c_1(e) -\alpha _e- c_1(f) + \alpha _f ={\bar{c}}_1(e)-{\bar{c}}_{1}(f)\ge \delta _1,$$ in contradiction to our choice of $$\delta$$. Similarly, it can be shown that Y is optimal w.r.t. $$c_2-\beta '$$. Thus, $$(X,Y,\alpha ',\beta ')$$ satisfies the optimality conditions for $$\lambda '=\lambda +\delta$$. $$\square$$

J. Edmonds proved the following feasibility condition for the non-weighted matroid intersection problem.

### Lemma 4

(Edmonds ) Consider the digraph $${\tilde{D}} = D(X,Y, 0, 0)$$ for cost functions $$c_1 = c_2 = 0$$ (non-weighted case). If there exists no augmenting path in $${\tilde{D}}$$ then $$|X \cap Y|$$ is maximum among all $$X \in \mathcal {B}_1, Y \in \mathcal {B}_2$$.

Based on this result we show the following feasibility condition for our problem.

### Lemma 5

If $$\delta = \infty$$ and $$|X \cap Y| < k$$, then the given instance is infeasible.

### Proof

Note that $$\delta = \infty$$ holds if and only if no exchange leaves the set R at all, i.e., even if we construct the digraph without the conditions $$c_{1}(e) - \alpha _{e} = c_{1}(f) - \alpha _{f}$$ on red arcs and $$c_{2}(g) - \beta _{g} = c_{2}(f) - \beta _{f}$$ on blue arcs, no element of $$X \setminus Y$$ is reachable from $$Y \setminus X$$. Hence, there is no augmenting path in the digraph $${\tilde{D}}$$ of Lemma 4, so that $$|X \cap Y|$$ is already maximum among all base pairs. Since $$|X \cap Y| < k$$, the instance is infeasible. $$\square$$

### Lemma 6

If $$(X,Y,\alpha , \beta , \lambda )$$ satisfies the optimality conditions and $$\delta < \infty$$, a primal update can be performed after at most |E| dual updates.

### Proof

With each dual update, at least one more vertex enters the set $$R'$$ of reachable elements in digraph $$D'=D(X,Y,\alpha ', \beta ')$$. $$\square$$

The primal-dual algorithm. Summarizing, we obtain the following algorithm.

Input: $$\mathcal {M}_1=(E,\mathcal {B}_1)$$, $$\mathcal {M}_2=(E,\mathcal {B}_2)$$, $$c_1, c_2:E\rightarrow \mathbb {R}$$, $$k\in \mathbb {N}$$

Output: Optimal solution $$(X, Y)$$ of $$(P_{=k})$$

1. 1.

Compute an optimal solution $$(X, Y)$$ of

\begin{aligned} \min \{c_1(X)+c_2(Y)\mid X\in \mathcal {B}_1, Y\in \mathcal {B}_2\}. \end{aligned}
2. 2.

If $$|X\cap Y|=k$$, return $$(X, Y)$$ as optimal solution of $$(P_{=k})$$.

3. 3.

Else, if $$|X\cap Y|>k$$, run algorithm on $$\mathcal {M}_1$$, $$\mathcal {M}^*_2$$, $$c_1$$, $$c_2^*:=-c_2$$, and $$k^*:={\text {rk}}(\mathcal {M}_1)-k$$.

4. 4.

Else, set $$\lambda :=0$$, $$\alpha := 0, \beta := 0$$.

5. 5.

While $$|X\cap Y|<k$$, do:

• Construct auxiliary digraph D based on $$(X,Y,\lambda , \alpha , \beta )$$.

• If there exists an augmenting path P in D, update primal

\begin{aligned} X':=X\oplus P, \quad Y':=Y\oplus P. \end{aligned}
• Else, compute step length $$\delta$$ as described above.

If $$\delta = \infty$$, terminate with the message “infeasible instance”.

Else, set $$\lambda :=\lambda +\delta$$ and update dual:

\begin{aligned} \alpha _e := {\left\{ \begin{array}{ll} \alpha _e +\delta & \text{ if } e \text{ reachable, }\\ \alpha _e & \text{ otherwise. } \end{array}\right. } \quad \beta _e := {\left\{ \begin{array}{ll} \beta _e & \text{ if } e \text{ reachable, }\\ \beta _e +\delta & \text{ otherwise. } \end{array}\right. } \end{aligned}
• Iterate with $$(X,Y,\lambda , \alpha , \beta )$$

6. 6.

Return $$(X, Y)$$.
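For validating implementations of this algorithm on tiny instances, a brute-force reference solver for $$(P_{=k})$$ is handy (our own illustration, not part of the paper):

```python
from itertools import combinations

def solve_Pk_bruteforce(E, bases1, bases2, c1, c2, k):
    """Reference solver for (P_{=k}) by enumeration -- tiny instances only."""
    best = None
    for X in bases1:
        for Y in bases2:
            if len(set(X) & set(Y)) != k:
                continue
            cost = sum(c1[e] for e in X) + sum(c2[e] for e in Y)
            if best is None or cost < best[0]:
                best = (cost, set(X), set(Y))
    return best   # None if (P_{=k}) is infeasible

E = range(4)
bases = list(combinations(E, 2))   # uniform matroid U_{2,4} for both matroids
c1 = {0: 1, 1: 2, 2: 3, 3: 4}
c2 = {0: 4, 1: 3, 2: 2, 3: 1}
print([solve_Pk_bruteforce(E, bases, bases, c1, c2, k)[0] for k in (0, 1, 2)])  # -> [6, 7, 10]
```

Note that the optimal value is convex in k here, in line with the concavity of the Lagrangian curve val$$(\lambda )$$.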

As a consequence of our considerations, we obtain the following theorem.

### Theorem 3

The algorithm above solves $$(P_{=k})$$ using at most $$k \times |E|$$ primal or dual augmentations. Moreover, the entire sequence of optimal solutions $$(X_k,Y_k)$$ for all $$(P_{=k})$$ with $$k=0,1,\dots ,K$$ can be computed within $$|E|^{2}$$ primal or dual augmentations.

### Proof

By running the algorithm for both $$k=0$$ and $$k=K$$, where we set $$K:=\min \{{\text {rk}}(\mathcal {M}_1), {\text {rk}}(\mathcal {M}_2)\}$$, we obtain optimal bases $$(X_k,Y_k)$$ for $$(P_{=k})$$ for all $$k=0,1,\dots ,K$$ within $$|E|^{2}$$ primal or dual augmentations. $$\square$$

### Remark 1

Note that an alternative method to solve $$(P_{=k})$$ for a given fixed k is to compute solutions $$(X_1, Y_1), (X_2, Y_2)$$ for both $$(P_{\le k})$$ and $$(P_{\ge k})$$, respectively, which can be done by running a matroid intersection algorithm twice. If one of those problems is infeasible, it directly follows that $$(P_{=k})$$ is infeasible. Otherwise, it holds that $$|X_1 \cap Y_1| \le k \le |X_2 \cap Y_2|$$. One can prove that both $$X_1, X_2$$ are minimum cost bases with respect to $$c_1$$, and that both $$Y_1, Y_2$$ are minimum cost bases with respect to $$c_2$$. Using a similar pivot strategy as in the algorithm above, one can then pivot from $$X_1$$ to $$X_2$$ and from $$Y_2$$ to $$Y_1$$ maintaining this property, and obtain a pair of bases $$(X, Y)$$ such that $$|X \cap Y| = k$$.

Note that this approach does not show how to directly obtain the whole parametric curve with respect to k.

Comparison to the results in . We would also like to point out that in a recent preprint, which is partially based on the preprint of the current paper, Iwamasa and Takazawa  show that problem $$(P_{=k})$$ can also be solved by combining the ideas described in Appendix A for solving $$(P_{\le k})$$ and $$(P_{\ge k})$$ with Murota’s theory of valuated matroid intersection, as described in [15, 16]. We note that the approach in  could also be extended to obtain the whole parametric curve in a similar fashion as we do here. In addition, Iwamasa and Takazawa  were able to generalize our results on polymatroids (Sect. 5) using similar techniques.

One of the main contributions of  is the structured analysis and modelling (using DCA) of matroid base problems and their generalizations with intersection constraints. They also present some relations of the problems studied in this paper with the Combinatorial Optimization Problem with Interaction Costs (COPIC) studied in [2, 13].

The algorithm to solve $$(P_{=k})$$ as stated in  is similar to our algorithm stated above. The main difference is that the algorithm in  is based on a reduction to the valuated independent assignment problem, called $$\text {VIAP}(k)$$, and the use of the existing augmenting path algorithm described in [15, 16].

There are two main differences of the algorithm in  to our algorithm described in this section. Firstly, the optimality conditions differ slightly. Secondly, the algorithm in  uses a different graph for the primal updating procedure and performs the dual updates simultaneously with their primal update.

## 4 The recoverable robust matroid basis problem: an application

There is a strong connection between the model described in this paper and the recoverable robust matroid basis problem (RecRobMatroid) studied in [1, 8, 9]. In RecRobMatroid, we are given a matroid $$\mathcal {M}= (E, \mathcal {B})$$ on a ground set E with base set $$\mathcal {B}$$, a recoverability parameter $$k \in \mathbb {N}$$, a first-stage cost function $$c_{1}$$, and an uncertainty set $$\mathcal {U}$$ that contains different scenarios S, where each scenario $$S \in \mathcal {U}$$ gives a possible second-stage cost vector $$S=(c_S(e))_{e \in E}$$.

RecRobMatroid then consists of two stages: in the first stage, one needs to pick a basis $$X \in \mathcal {B}$$. Then, after the actual scenario $$S \in \mathcal {U}$$ is revealed, there is a second “recovery” stage, in which a second basis Y is picked with the goal of minimizing the worst-case cost $$c_{1}(X) + c_{S}(Y)$$ under the constraint that Y differs from the original basis X in at most k elements. That is, we require that Y satisfies $$|Y\setminus {X}| \le k$$ or, equivalently, that $$|X \cap Y| \ge {\text {rk}}(\mathcal {M}) - k$$ (note that $$|Y\setminus {X}|={\text {rk}}(\mathcal {M})-|X\cap Y|$$, since X and Y are bases of the same matroid). The recoverable robust matroid basis problem can be written as follows:

\begin{aligned} \min _{X \in \mathcal {B}} \left( c_1(X) + \max _{S \in \mathcal {U}} \min _{\begin{array}{c} Y \in \mathcal {B}\\ |X \cap Y| \ge {\text {rk}}(\mathcal {M}) - k \end{array} } c_S(Y) \right) . \end{aligned}
(1)
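For intuition, the nested min–max–min in (1) can be evaluated by brute force on a toy instance. The sketch below assumes a hypothetical uniform matroid $$U(2,4)$$ (whose bases are all 2-subsets of a 4-element ground set) and a small finite uncertainty set; all numbers are made up for illustration.

```python
from itertools import combinations

# Toy instance (hypothetical): uniform matroid U(2,4) -- bases are all 2-subsets.
E = range(4)
bases = [frozenset(B) for B in combinations(E, 2)]
rk = 2
k = 1  # recoverability parameter: require |X ∩ Y| >= rk - k = 1

c1 = {0: 1, 1: 2, 2: 3, 3: 4}
# A small finite uncertainty set: each scenario is a second stage cost vector.
scenarios = [{0: 5, 1: 1, 2: 1, 3: 5}, {0: 1, 1: 5, 2: 5, 3: 1}]

def cost(c, S):
    return sum(c[e] for e in S)

def recrob(X):
    # worst-case second stage cost after recovering to a basis Y
    # that shares at least rk - k elements with X
    return max(min(cost(cS, Y) for Y in bases if len(X & Y) >= rk - k)
               for cS in scenarios)

best = min(cost(c1, X) + recrob(X) for X in bases)
print(best)  # 5, attained at X = {0, 1}
```

Evaluating all six bases shows how the recovery constraint couples the two stages: a cheap first-stage basis may still be good even if no single basis is cheap in every scenario.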

The main motivation for this model is a decision problem with two consecutive stages. The costs for the first stage are known, but the costs for the second stage are uncertain. In addition, due to environmental constraints, the changes between the first and second stage solutions are bounded by the recoverability parameter k. For instance, in the case of transversal matroids the two stages could correspond to the morning and afternoon shift, and one has to select the sets of workers able to perform the given tasks for both shifts. The recoverability parameter then controls how many of the selected workers have to work both shifts. Similarly, the case of graphic matroids corresponds to selecting two consecutive sets of communication links in a network while minimizing the usage costs of both stages, with an additional bound on the number of links one can change between the two stages.

There are several ways in which the uncertainty set $$\mathcal {U}$$ for the costs of the second stage can be represented. One popular way is the interval uncertainty representation. In this representation, we are given functions $$c': E \rightarrow \mathbb {R}$$, $$d: E \rightarrow \mathbb {R}_+$$ and assume that the uncertainty set $$\mathcal {U}$$ can be represented by a set of |E| intervals:

\begin{aligned} \mathcal {U} = \left\{ S = (c_S(e))_{e \in E} \mid c_S(e) \in [c'(e), c'(e) + d(e)] \ \forall e \in E \right\} . \end{aligned}

In the worst-case scenario $${\bar{S}}$$ we have for all $$e \in E$$ that $$c_{{\bar{S}}}(e) =c'(e) + d(e)$$. When we define $$c_2(e):= c_{{\bar{S}}}(e)$$ and $$k' = {\text {rk}}(\mathcal {M}) - k$$, it is clear that the RecRobMatroid problem under interval uncertainty representation (RecRobMatroid-Int, for short) is a special case of $$(P_{\ge k'})$$, in which $$\mathcal {B}_1=\mathcal {B}_2$$.
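Since the inner minimum $$\min _Y c_S(Y)$$ is componentwise nondecreasing in the costs, the adversary's optimum over the box is attained at its top corner $$c'+d$$. This can be sanity-checked numerically; the sketch below samples scenarios from the box on a hypothetical $$U(2,4)$$ instance and confirms that none beats the top corner.

```python
import random
from itertools import combinations

# Hypothetical instance: uniform matroid U(2,4), interval uncertainty [c', c'+d].
E = range(4)
bases = [frozenset(B) for B in combinations(E, 2)]
rk, k = 2, 1

cp = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}   # c'
d  = {0: 2.0, 1: 0.5, 2: 1.0, 3: 3.0}
c2 = {e: cp[e] + d[e] for e in E}        # worst-case scenario costs c' + d

def inner_min(X, cS):
    # cheapest recovery basis Y with |X ∩ Y| >= rk - k under scenario costs cS
    return min(sum(cS[e] for e in Y) for Y in bases if len(X & Y) >= rk - k)

random.seed(0)
X = frozenset({0, 1})
worst = inner_min(X, c2)
# Sampled scenarios from the box never exceed the top-corner value:
for _ in range(1000):
    cS = {e: cp[e] + random.uniform(0, d[e]) for e in E}
    assert inner_min(X, cS) <= worst
print("worst-case scenario is c' + d")
```

This monotonicity argument is exactly what justifies replacing the whole uncertainty set by the single scenario $${\bar{S}}$$.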

Büsing  presented an algorithm for RecRobMatroid-Int which is exponential in k. In 2017, Hradovich, Kasperski, and Zieliński  proved that RecRobMatroid-Int can be solved in polynomial time via an iterative relaxation algorithm and asked for a strongly polynomial time algorithm. Shortly after, the same authors presented in  a strongly polynomial time primal-dual algorithm for the special case of RecRobMatroid-Int on a graphic matroid. The question whether a strongly polynomial time algorithm exists for RecRobMatroid-Int on general matroids remained open.

Furthermore, Hradovich, Kasperski, and Zieliński showed that an algorithm for $$(P_{\le k})$$ can be used to obtain an approximation algorithm for the recoverable robust matroid basis problem with more general uncertainty sets $$\mathcal {U}$$. For details, see [8, Theorem 5].

Theorem 3 directly implies the following result as a special case.

### Corollary 1

RecRobMatroid-Int can be solved in strongly polynomial time. In particular, solutions for all possible choices of the recoverability parameter k can be computed using just one run of the primal-dual algorithm.

Knowing solutions for all possible choices of the recoverability parameter k is particularly useful if a decision maker has to balance cost against recoverability. This viewpoint inspires the first of the following variants of RecRobMatroid-Int.

Two variants of RecRobMatroid-Int. Let us consider two generalizations of RecRobMatroid-Int. First, instead of setting a bound on the size of the symmetric difference $$|X \triangle Y|$$ of two bases X, Y, one could alternatively set a penalty on the size of the recovery. Let $$C: \mathbb {N}\rightarrow \mathbb {R}$$ be a penalty function which determines the penalty to be paid as a function of the size of the symmetric difference $$|X \triangle Y|$$. This leads to the following problem, which we denote by $$(P^{\triangle })$$.

\begin{aligned} \min&c_{1}(X) + c_{2}(Y) + C(|X \triangle Y|)\\ \text {s.t. }&X,Y \in \mathcal {B} \end{aligned}

Clearly, $$(P^{\triangle })$$ is equivalent to RecRobMatroid-Int if $$C(|X\triangle Y|)$$ is equal to zero as long as $$|X\triangle Y|\le k$$, and $$C(|X\triangle Y|)=\infty$$ otherwise. As it turns out, our primal-dual algorithm for solving $$(P_{=k})$$ can be used to efficiently solve $$(P^{\triangle })$$.

### Corollary 2

Problem $$(P^{\triangle })$$ can be solved in strongly-polynomial time.

### Proof

By Theorem 3, optimal solutions $$(X_k,Y_k)$$ for all problems $$(P_{=k})_{k\in \{0,1, \ldots , K\}}$$ can be computed within $$|E|^{2}$$ primal or dual augmentations of the algorithm above. It follows that an optimal solution to $$(P^{\triangle })$$ is given by a minimizer of

\begin{aligned} \min \{ c_1(X_k) + c_2 (Y_k) + C(k^{\triangle })\mid k \in \{0, \dots , K\} \}, \end{aligned}

where $$k^{\triangle } = {\text {rk}}(\mathcal {M}_1) + {\text {rk}}(\mathcal {M}_2) - 2k$$. $$\square$$
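The minimization in this proof can be imitated by brute force on a toy instance: enumerate optimal base pairs for every intersection size k, then add the penalty. The sketch below uses a hypothetical instance in which both matroids are $$U(2,4)$$ (so $$K = 2$$) and the costs and penalty function are made up.

```python
from itertools import combinations

# Hypothetical instance: both matroids are U(2,4), so rk(M1) = rk(M2) = 2, K = 2.
E = range(4)
bases = [frozenset(B) for B in combinations(E, 2)]
c1 = {0: 1, 1: 2, 2: 3, 3: 4}
c2 = {0: 4, 1: 3, 2: 2, 3: 1}
cost = lambda c, S: sum(c[e] for e in S)
C = lambda t: 1.5 * t  # penalty on |X △ Y|

# cheapest base pair (X_k, Y_k) for every intersection size k = |X ∩ Y|
opt = {}
for X in bases:
    for Y in bases:
        k = len(X & Y)
        v = cost(c1, X) + cost(c2, Y)
        if k not in opt or v < opt[k]:
            opt[k] = v

# (P^△): minimize c1(X) + c2(Y) + C(|X △ Y|), with |X △ Y| = rk1 + rk2 - 2k
best = min(v + C(2 + 2 - 2 * k) for k, v in opt.items())
print(best)  # 10.0
```

Here a brute-force enumeration replaces the primal-dual algorithm of Theorem 3, but the final step, picking the best trade-off over k, is the same.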

Yet another variant of RecRobMatroid-Int, or of the more general problem $$(P^{\triangle })$$, is to aim for the minimum expected second stage cost instead of the minimum worst-case second stage cost. Suppose that, with respect to a given probability distribution for each element $$e\in E$$, the expected second stage cost on element $$e\in E$$ is $$\mathbb {E}(c_S(e))$$. By linearity of expectation, it suffices to solve problem $$(P^{\triangle })$$ with $$c_2(e):=\mathbb {E}(c_S(e))$$.

## 5 A generalization to polymatroid base polytopes

Recall that a function $$f:2^E \rightarrow \mathbb {R}$$ is called submodular if the inequality $$f(U)+f(V) \ge f(U \cup V) + f(U \cap V)$$ holds for all $$U, V \subseteq E$$. Function f is called monotone if $$f(U) \le f(V)$$ for all $$U \subseteq V$$, and normalized if $$f(\emptyset ) = 0$$. Given a submodular, monotone and normalized function f, the pair (E, f) is called a polymatroid, and f is called the rank function of the polymatroid (E, f). The associated polymatroid base polytope is defined as:

\begin{aligned} \mathcal {B}(f) := \left\{ x \in \mathbb {R}_{+}^{|E|} \mid x(U) \le f(U) \; \forall U \subseteq E, \; x(E) = f(E)\right\} , \end{aligned}

where, as usual, $$x(U) := \sum _{e \in U} x_e$$ for all $$U\subseteq E$$. We refer to the book “Submodular Functions and Optimization” by Fujishige  for details on polymatroids and the polymatroidal flows referred to below.

### Remark 2

We note that all of the arguments presented in this section work also for the more general setting of submodular systems (cf. ), which are defined on arbitrary distributive lattices instead of the Boolean lattice $$(2^{E}, \subseteq , \cap , \cup )$$.

Whenever f is a submodular function on ground set E with $$f(U) \in \mathbb {N}$$ for all $$U \subseteq E$$, we call the pair (E, f) an integer polymatroid. Polymatroids generalize matroids in the following sense: if the polymatroid rank function f is integer-valued and additionally satisfies the unit-increase property

\begin{aligned} f(S\cup \{e\})\le f(S)+1\quad \forall S\subseteq E,\ e\in E, \end{aligned}

then the vertices of the associated polymatroid base polytope $$\mathcal {B}(f)$$ are exactly the incidence vectors of the bases of a matroid $$(E,\mathcal {B})$$ with $$\mathcal {B}:=\{B\subseteq E \mid f(B)=f(E)\}$$. Conversely, the rank function $${\text {rk}}: 2^E\rightarrow \mathbb {R}$$ which assigns to every subset $$U\subseteq E$$ the maximum cardinality $${\text {rk}}(U)$$ of an independent set within U is a polymatroid rank function satisfying the unit-increase property. In general, however, bases of a polymatroid base polytope are no longer $$0-1$$ vectors. Generalizing set-theoretic intersection and union from sets (a.k.a. $$0-1$$ vectors) to arbitrary vectors can be done via the following two binary operations, called meet and join: given two vectors $$x,y\in \mathbb {R}^{|E|}$$, the meet of x and y is $$x\wedge y:=(\min \{x_e,y_e\})_{e\in E}$$, and the join of x and y is $$x\vee y:=(\max \{x_e,y_e\})_{e\in E}$$. Instead of the size of the intersection, we now talk about the size of the meet, abbreviated by

\begin{aligned} |x\wedge y|:=\sum _{e\in E} \min \{x_e, y_e\}. \end{aligned}

Similarly, the size of the join is $$|x\vee y|:=\sum _{e\in E}\max \{x_e, y_e\}.$$ Note that using these definitions we have that $$|x|+|y|=|x\wedge y|+|x\vee y|$$, where, as usual, for any $$x\in \mathbb {R}^{|E|}$$, we abbreviate $$|x|=\sum _{e\in E}x_e$$. It follows that for any base pair $$(x,y)\in \mathcal {B}(f_1)\times \mathcal {B}(f_2)$$, we have

\begin{aligned} |x|=f_1(E)=\sum _{e\in E:x_e>y_e} (x_e-y_e) + |x\wedge y| \end{aligned}

and

\begin{aligned} |y|=f_2(E)=\sum _{e\in E:y_e>x_e} (y_e-x_e) + |x\wedge y|. \end{aligned}

Therefore, it holds that $$|x\wedge y| \ge k$$ if and only if both, $$\sum _{e\in E :x_e > y_e} (x_e-y_e) \le f_1(E)-k$$ and $$\sum _{e\in E :x_e < y_e} (y_e-x_e) \le f_2(E)-k$$. The problem described in the next paragraph can be seen as a direct generalization of problem $$(P_{\ge k})$$ when going from matroid bases to more general polymatroid base polytopes.
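The identity $$|x|+|y|=|x\wedge y|+|x\vee y|$$ and the resulting two-inequality characterization of $$|x\wedge y| \ge k$$ can be sanity-checked numerically on random integer vectors, where $$|x|$$ and $$|y|$$ play the roles of $$f_1(E)$$ and $$f_2(E)$$:

```python
import random

random.seed(1)
n = 6
x = [random.randint(0, 5) for _ in range(n)]
y = [random.randint(0, 5) for _ in range(n)]

meet = sum(min(a, b) for a, b in zip(x, y))
join = sum(max(a, b) for a, b in zip(x, y))
assert sum(x) + sum(y) == meet + join  # |x| + |y| = |x ∧ y| + |x ∨ y|

# |x ∧ y| >= k  iff  sum_{x_e > y_e}(x_e - y_e) <= |x| - k
#               and  sum_{y_e > x_e}(y_e - x_e) <= |y| - k
for k in range(meet + 3):
    lhs = meet >= k
    rhs = (sum(a - b for a, b in zip(x, y) if a > b) <= sum(x) - k and
           sum(b - a for a, b in zip(x, y) if b > a) <= sum(y) - k)
    assert lhs == rhs
print("identities verified")
```

Each inequality alone is already equivalent to $$|x\wedge y| \ge k$$, since $$\sum _{x_e>y_e}(x_e-y_e) = |x| - |x\wedge y|$$; the code checks the conjunction used in the text.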

The model. Let $$f_{1}, f_{2}$$ be two polymatroid rank functions with associated polymatroid base polytopes $$\mathcal {B}(f_{1})$$ and $$\mathcal {B}(f_{2})$$ defined on the same ground set of resources E, let $$c_1, c_2:E\rightarrow \mathbb {R}$$ be two cost functions on E, and let k be some integer. The following problem, the Recoverable Polymatroid Basis Problem which we denote by $$({\hat{P}}_{\ge k})$$, is a direct generalization of $$(P_{\ge k})$$ from matroids to polymatroids.

\begin{aligned} \min&\sum _{e \in E} c_{1}(e) x(e) + \sum _{e \in E} c_{2}(e) y(e)\\ \text {s.t. }&x \in \mathcal {B}(f_{1})\\&y \in \mathcal {B}(f_{2})\\&|x\wedge y| \ge k\\ \end{aligned}

Results obtained for $$(P_{\ge k}), (P_{\le k})$$ and $$(P_{= k})$$ directly give us pseudo-polynomial algorithms for $$({\hat{P}}_{\ge k}), ({\hat{P}}_{\le k})$$ and $$({\hat{P}}_{= k})$$.

### Corollary 3

If $$(E,f_1), (E,f_2)$$ are two integer polymatroids, problems $$({\hat{P}}_{\ge k})$$, $$({\hat{P}}_{\le k})$$ and $$({\hat{P}}_{= k})$$ can be solved within pseudo-polynomial time.

### Proof

Each integer polymatroid can be written as a matroid on a pseudo-polynomial number of resources, namely on $$\sum _{e \in E}f(\{e\})$$ resources . Hence, the strongly polynomial time algorithms we derived for problems $$(P_{\ge k}), (P_{\le k})$$ and $$(P_{= k})$$ can directly be applied, but now have a pseudo-polynomial running time. $$\square$$
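The expansion used in this proof can be made concrete: each element e is replaced by $$f(\{e\})$$ parallel copies, and a set of copies is independent if, for every $$U\subseteq E$$, it contains at most f(U) copies of elements of U. The following sketch checks the augmentation axiom of the expanded system by brute force on a hypothetical two-element integer polymatroid:

```python
from itertools import combinations, chain

# Hypothetical integer polymatroid on E = {0, 1}: f({0}) = 1, f({1}) = 2, f(E) = 2.
E = [0, 1]
f = {frozenset(): 0, frozenset({0}): 1, frozenset({1}): 2, frozenset({0, 1}): 2}

# One copy (e, j) per element e and j < f({e}); in total Sum_e f({e}) = 3 copies.
copies = [(e, j) for e in E for j in range(f[frozenset({e})])]

def indep(A):
    # A is independent iff it has at most f(U) copies of elements of U, for all U
    return all(sum(1 for (e, _) in A if e in U) <= f[U] for U in f)

all_sets = [frozenset(s) for s in
            chain.from_iterable(combinations(copies, r) for r in range(len(copies) + 1))]
I = [A for A in all_sets if indep(A)]
for A in I:
    for B in I:
        if len(A) < len(B):
            assert any(indep(A | {x}) for x in B - A)  # augmentation axiom
print("expansion is a matroid on", len(copies), "resources")
```

The ground set of the expanded matroid has $$\sum _{e\in E} f(\{e\})$$ elements, which is exactly the pseudo-polynomial blow-up mentioned in the proof.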

In the following, we first show that $$({\hat{P}}_{\ge k})$$ can be reduced to an instance of the polymatroidal flow problem, which is known to be computationally equivalent to a submodular flow problem and can thus be solved in strongly polynomial time. Afterwards, we show that the two problems $$({\hat{P}}_{\le k})$$ and $$({\hat{P}}_{= k})$$, which can be obtained from $$({\hat{P}}_{\ge k})$$ by replacing constraint $$|x\wedge y|\ge k$$ by either $$|x\wedge y|\le k$$, or $$|x\wedge y|= k$$, respectively, are weakly NP-hard.

### 5.1 Reduction of polymatroid base problem $$({\hat{P}}_{\ge k})$$ to polymatroidal flows.

The polymatroidal flow problem can be described as follows: we are given a digraph $$G=(V,A)$$, arc costs $$\gamma :A\rightarrow \mathbb {R}$$, lower bounds $$l:A\rightarrow \mathbb {R}$$, and two submodular functions $$f^+_v$$ and $$f^-_v$$ for each vertex $$v\in V.$$ Function $$f_v^+$$ is defined on $$2^{\delta ^{+}(v)}$$, the set of subsets of the set $$\delta ^+(v)$$ of v-leaving arcs, while $$f_v^-$$ is defined on $$2^{\delta ^{-}(v)}$$, the set of subsets of the set $$\delta ^-(v)$$ of v-entering arcs, and

\begin{aligned} P(f_v^+) = \left\{ x \in \mathbb {R}^{\delta ^{+}(v)} :x(U) \le f_v^+(U) \; \forall U \subseteq \delta ^{+}(v) \right\} ,\\ P(f_v^-) = \left\{ x \in \mathbb {R}^{\delta ^{-}(v)} :x(U) \le f_v^-(U) \; \forall U \subseteq \delta ^{-}(v) \right\} . \end{aligned}

Given a flow $$\varphi :A\rightarrow \mathbb {R},$$ the net-flow at v is abbreviated by $$\partial \varphi (v):=\sum _{a\in \delta ^-(v)} \varphi (a)-\sum _{a\in \delta ^+(v)} \varphi (a)$$. For a set of arcs $$S \subseteq A$$, $$\varphi |_{S}$$ denotes the vector $$(\varphi (a))_{a \in S}$$. The associated polymatroidal flow problem can now be formulated as follows.

\begin{aligned} \min&\sum _{a \in A} \gamma (a) \varphi (a)&\\ \text {s.t. }&l(a) \le \varphi (a)&(a \in A)\\&\partial \varphi (v) = 0&(v \in V)\\&\varphi |_{\delta ^{+}(v)} \in P(f^{+}_{v})&(v \in V)\\&\varphi |_{\delta ^{-}(v)} \in P(f^{-}_{v})&(v \in V) \end{aligned}

As described in Fujishige’s book (see , page 127ff), the polymatroidal flow problem is computationally equivalent to submodular flows and can thus be solved in strongly polynomial time.

### Theorem 4

The Recoverable Polymatroid Basis Problem $$({\hat{P}}_{\ge k})$$ can be reduced to the Polymatroidal Flow Problem.

### Proof

Given an instance $$(E, f_1, f_2, c_1, c_2, k)$$ of $$({\hat{P}}_{\ge k})$$, we construct an equivalent instance $$(G, \gamma , l, (f_{v}^{+})_{v \in V}, (f_{v}^{-})_{v \in V})$$ of the Polymatroidal Flow Problem as shown in Fig. 3. The graph G consists of 6 special vertices $$s, u_{1}, u_{2}, v_{1}, v_{2}, t$$ and of 6|E| additional vertices denoted by $$u^{X}_{e}$$, $$v^{X}_{e}$$, $$u^{Y}_{e}$$, $$v^{Y}_{e}$$, $$u^{Z}_{e}$$, $$v^{Z}_{e}$$ for each $$e \in E$$. The arc set contains the arcs $$(s,u_{1})$$, $$(s, u_{2})$$, $$(v_{1}, t)$$, $$(v_{2},t)$$, and (t, s). In addition, we have arcs $$(u_{1}, u^{X}_{e})$$ and $$(u_{1}, u^{Z}_{e})$$ for each $$e \in E$$, $$(u_{2}, u^{Y}_{e})$$ for each $$e \in E$$, $$(v^{X}_{e}, v_{1})$$ for each $$e \in E$$, and $$(v^{Z}_{e}, v_{2})$$ and $$(v^{Y}_{e}, v_{2})$$ for each $$e \in E$$. Finally, for each $$e \in E$$ we have the three arcs $$(u^{X}_{e}, v^{X}_{e})$$, $$(u^{Y}_{e}, v^{Y}_{e})$$, $$(u^{Z}_{e}, v^{Z}_{e})$$; the sets of these arcs over all $$e \in E$$ are denoted by $$E_{X}, E_{Y}, E_{Z}$$, respectively. We set $$\gamma ((u^{X}_{e}, v^{X}_{e})) := c_{1}(e)$$, $$\gamma ((u^{Y}_{e}, v^{Y}_{e})) := c_{2}(e)$$, $$\gamma ((u^{Z}_{e}, v^{Z}_{e})) := c_{1}(e) + c_{2}(e)$$, and $$\gamma (a) := 0$$ for all other arcs a. We enforce lower bounds on the flow on the arcs $$(s,u_{1})$$ and $$(v_{2},t)$$ by setting $$l((s,u_{1})) := f_{1}(E)$$, $$l((v_{2}, t)) := f_{2}(E)$$, and $$l(a):=0$$ for all other arcs a. To model upper bounds on the flow along the arcs $$(v_{1}, t)$$ and $$(s,u_{2})$$, we add polymatroidal constraints on $$\varphi |_{\delta ^{+}(v_{1})}$$ and $$\varphi |_{\delta ^{-}(u_{2})}$$ and set $$f_{v_{1}}^{+}(\{(v_{1}, t)\}) := f_{1}(E) - k$$ and $$f_{u_{2}}^{-}(\{(s,u_{2})\}) := f_{2}(E) - k$$. We also set

\begin{aligned} f_{u_{1}}^{+}(S)&:= f_{1}(\{e \in E :(u_{1}, u^{X}_{e}) \in S \text { or } (u_{1}, u^{Z}_{e}) \in S\}) \quad \forall S \subseteq \delta ^{+}(u_{1}),\\ f_{v_{2}}^{-}(S)&:= f_{2}(\{e \in E :(v^{Y}_{e}, v_{2}) \in S \text { or } (v^{Z}_{e}, v_{2}) \in S\}) \quad \forall S \subseteq \delta ^{-}(v_{2}). \end{aligned}

All other polymatroidal constraints are set to the trivial polymatroid, hence arbitrary in- and outflows are allowed.

We show that the constructed instance of the Polymatroidal Flow Problem is equivalent to the given instance of the Recoverable Polymatroid Basis Problem $$({\hat{P}}_{\ge k})$$.

Consider the two designated vertices $$u_1$$ and $$v_2$$ such that $$\delta ^+(u_1)$$ are the red arcs, and $$\delta ^-(v_2)$$ are the green arcs in Fig. 3. Take any feasible polymatroidal flow $$\varphi$$ and let $${\tilde{x}} := \varphi |_{\delta ^{+}(u_{1})}$$ denote the restriction of $$\varphi$$ to the red arcs, and $${\tilde{y}} := \varphi |_{\delta ^{-}(v_{2})}$$ the restriction of $$\varphi$$ to the green arcs. Note that $$(s,u_1)$$ is the unique arc entering $$u_1$$. Observe that the constraints $$\varphi ((s,u_1))\ge f_{1}(E)$$ and $$\varphi |_{\delta ^{+}(u_1)}\in P({f}_{u_{1}}^{+})$$ on the flow going into $$E_{X}$$ and $$E_{Z}$$ imply that the flow vector $${\tilde{x}}$$ on the red arcs belongs to $$\mathcal {B}({f}_{u_{1}}^{+})$$. Analogously, the flow vector $${\tilde{y}}$$ satisfies $${\tilde{y}} \in \mathcal {B}({f}_{v_{2}}^{-})$$. By setting $$x(e) := {\tilde{x}}( (u_{1}, u_{e}^{X}) ) + {\tilde{x}}( (u_{1}, u_{e}^{Z}) )$$ and $$y(e) := {\tilde{y}}( (v_{e}^{Y}, v_{2}) ) + {\tilde{y}}( (v_{e}^{Z}, v_{2}) )$$ for each $$e \in E$$, the cost of the polymatroidal flow can be rewritten as

\begin{aligned} \sum _{e \in E} c_{1}(e) x(e) + \sum _{e \in E} c_{2}(e) y(e). \end{aligned}

The constraint $$\varphi ((s,u_{2}))\le f_2(E)-k$$ on the flow entering $$E_{Y}$$ and the constraint $$\varphi ((v_{1},t))\le f_1(E)-k$$ on the flow leaving $$E_{X}$$ are together equivalent to $$|x\wedge y| \ge k$$. Hence, the equivalence follows. $$\square$$

Note that $$({\hat{P}}_{\ge k})$$ is computationally equivalent to

\begin{aligned} \min&\sum _{e \in E} c_{1}(e) x(e) + \sum _{e \in E} c_{2}(e) y(e)\\ \text {s.t. }&x \in \mathcal {B}(f_{1})\\&y \in \mathcal {B}(f_{2})\\&\Vert x - y\Vert _{1} \le k' \end{aligned}

which we denote by $$({\hat{P}}_{\Vert \cdot \Vert _{1}})$$, because of the direct relation $$|x|+|y|=2 |x\wedge y| +\Vert x-y\Vert _1$$ between $$|x \wedge y|$$, the size of the meet of x and y, and the 1-norm of $$x-y$$. It is an interesting open question whether this problem remains tractable if one replaces $$\Vert x-y\Vert _{1} \le k'$$ by a constraint in an arbitrary norm or, specifically, in the 2-norm. We conjecture that methods based on convex optimization could work in this case, likely leading to a polynomial, but not strongly polynomial, running time.

### 5.2 Hardness of polymatroid basis problems $$({\hat{P}}_{\le k})$$ and $$({\hat{P}}_{= k})$$

Let us consider the decision problem associated to problem $$({\hat{P}}_{\le k})$$ which can be formulated as follows: given an instance $$(f_1,f_2, c_1, c_2, k)$$ of $$({\hat{P}}_{\le k})$$ together with some target value $$T\in \mathbb {R}$$, decide whether or not there exists a base pair $$(x,y)\in \mathcal {B}(f_1)\times \mathcal {B}(f_2)$$ with $$|x\wedge y|\le k$$ of cost $$c_1^Tx+c_2^Ty \le T.$$ Clearly, this decision problem belongs to the complexity class NP, since we can verify in polynomial time whether a given pair (x, y) of vectors satisfies the following three conditions:

1. (i) $$|x\wedge y| \le k$$,

2. (ii) $$c_1^Tx+c_2^Ty\le T$$, and

3. (iii) $$(x,y)\in \mathcal {B}(f_1)\times \mathcal {B}(f_2)$$.

To verify (iii), we assume, as usual, the existence of an evaluation oracle.
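A certificate verifier for (i)–(iii) can be sketched as follows. Membership in the base polytopes is tested here by brute force over all subsets, which is exponential in |E| but suffices to illustrate the oracle-based check; the instance at the bottom (the rank function of $$U(2,3)$$) and all helper names are purely illustrative.

```python
from itertools import combinations, chain

def subsets(E):
    return [frozenset(s) for s in
            chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))]

def in_base_polytope(x, f, E):
    # brute-force membership test x ∈ B(f); an evaluation oracle for f is assumed
    return (all(v >= 0 for v in x.values())
            and sum(x.values()) == f(frozenset(E))
            and all(sum(x[e] for e in U) <= f(U) for U in subsets(E)))

def verify(x, y, f1, f2, c1, c2, E, k, T):
    meet = sum(min(x[e], y[e]) for e in E)              # (i)  |x ∧ y| <= k
    cost = sum(c1[e] * x[e] + c2[e] * y[e] for e in E)  # (ii) cost <= T
    return (meet <= k and cost <= T
            and in_base_polytope(x, f1, E)              # (iii)
            and in_base_polytope(y, f2, E))

# tiny example: f is the rank function of the uniform matroid U(2,3)
E = [0, 1, 2]
f = lambda U: min(len(U), 2)
x = {0: 1, 1: 1, 2: 0}
y = {0: 0, 1: 1, 2: 1}
one = {e: 1 for e in E}
print(verify(x, y, f, f, one, one, E, k=1, T=4))  # prints True
```

NP membership only needs that such a certificate can be checked in polynomial time given the oracle; the exhaustive subset loop is just a stand-in for that check on small instances.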

Reduction from partition. To show that the problem is NP-complete, we show that any instance of the NP-complete problem partition can be polynomially reduced to an instance of $$({\hat{P}}_{\le k})$$-decision. Recall the problem partition: given a set E of n real numbers $$a_1, \ldots , a_n$$, the task is to decide whether or not the n numbers can be partitioned into two sets L and R with $$E=L\cup R$$ and $$L\cap R=\emptyset$$ such that $$\sum _{j\in L}a_j=\sum _{j\in R}a_j$$.

Given an instance $$a_1, \ldots , a_n$$ of partition with $$B:=\sum _{j\in E} a_j$$, we construct the following polymatroid rank function

\begin{aligned} f(U)=\min \left\{ \sum _{j\in U} a_j, \frac{B}{2}\right\} \quad \forall U\subseteq E. \end{aligned}

It is not hard to see that f is indeed a polymatroid rank function as it is normalized, monotone, and submodular. Moreover, we observe that an instance of partition $$a_1, \ldots , a_n$$ is a yes-instance if and only if there exist two bases x and y in polymatroid base polytope $$\mathcal {B}(f)$$ satisfying $$|x\wedge y| \le 0.$$
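The claimed properties of f, and the correspondence between balanced partitions and base pairs with empty meet, can be verified by brute force on a small yes-instance (the numbers $$a = (1, 2, 3)$$ below are a hypothetical example with balanced partition $$L=\{1,2\}$$, $$R=\{3\}$$):

```python
from itertools import combinations, chain

a = [1, 2, 3]                       # yes-instance of partition: {1, 2} vs {3}
E = range(len(a))
B = sum(a)
f = lambda U: min(sum(a[j] for j in U), B // 2)   # the rank function from the text

sets = [frozenset(s) for s in
        chain.from_iterable(combinations(E, r) for r in range(len(a) + 1))]

assert f(frozenset()) == 0                                         # normalized
assert all(f(U) <= f(V) for U in sets for V in sets if U <= V)     # monotone
assert all(f(U) + f(V) >= f(U | V) + f(U & V)
           for U in sets for V in sets)                            # submodular

# A balanced partition L, R yields two bases with disjoint supports:
L, R = {0, 1}, {2}
x = [a[j] if j in L else 0 for j in E]
y = [a[j] if j in R else 0 for j in E]
assert sum(x) == f(frozenset(E)) == sum(y)                         # x(E) = y(E) = f(E)
assert all(sum(x[j] for j in U) <= f(U) for U in sets)             # x ∈ B(f)
assert all(sum(y[j] for j in U) <= f(U) for U in sets)             # y ∈ B(f)
assert sum(min(u, v) for u, v in zip(x, y)) == 0                   # |x ∧ y| = 0
print("f is a polymatroid rank function; disjoint bases found")
```

Submodularity also follows directly from f being the minimum of a modular function and a constant; the brute-force check simply confirms it on this instance.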

Similarly, it can be shown that any instance of partition can be reduced to an instance of the decision problem associated to $$({\hat{P}}_{= k})$$, since an instance of partition is a yes-instance if and only if, for the polymatroid rank function f constructed above, there exist two bases x and y in the polymatroid base polytope $$\mathcal {B}(f)$$ satisfying $$|x\wedge y| = 0.$$

## 6 More than two matroids

Another straightforward generalization of the matroid problems $$(P_{\le k})$$, $$(P_{\ge k})$$, and $$(P_{= k})$$ is to consider more than two matroids, and a constraint on the intersection of the bases of those matroids. Given matroids $$\mathcal {M}_{1} = (E, \mathcal {B}_{1}), \dots$$, $$\mathcal {M}_{M} = (E, \mathcal {B}_{M})$$ on a common ground set E, some integer $$k \in \mathbb {N}$$, and cost functions $$c_{1}, \dots , c_{M} :E \rightarrow \mathbb {R}$$, we consider the optimization problem $$(P_{\le k}^{M})$$

\begin{aligned} \min&\sum _{i=1}^{M}c_{i}(X_{i})\\ \text {s.t. }&X_{i} \in \mathcal {B}_{i} \quad \forall i =1,\dots ,M\\&\Big |\bigcap _{i=1}^{M} X_{i}\Big | \le k. \end{aligned}

Analogously, we define the problems $$(P_{\ge k}^{M})$$ and $$(P_{= k}^{M})$$ by replacing $$\le k$$ by $$\ge k$$ and $$=k$$ respectively.

It is easy to observe that both variants $$(P_{\ge k}^{M})$$ and $$(P_{= k}^{M})$$ are NP-hard already for $$M=3$$, since already the feasibility question admits an easy reduction from the matroid intersection problem for three matroids.

Interestingly, this is different for $$(P^{M}_{\le k})$$. A direct generalization of the reduction for $$(P_{\le k})$$ to weighted matroid intersection (for two matroids) shown in Sect. 2 works again.

### Theorem 5

Problem $$(P^{M}_{\le k})$$ can be reduced to weighted matroid intersection.

### Proof

Let $${\tilde{E}} := E_{1} \mathbin {\dot{\cup }}\dots \mathbin {\dot{\cup }}E_{M}$$, where $$E_{1}, \dots , E_{M}$$ are M copies of our original ground set E. We consider two special matroids $$\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1})$$ and $$\mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})$$ on this new ground set $${\tilde{E}}$$, where $$\mathcal {F}_{1}, \dots , \mathcal {F}_{M}, {\tilde{\mathcal {F}}}_{1}, {\tilde{\mathcal {F}}}_{2}$$ denote the sets of independent sets of $$\mathcal {M}_{1}, \dots , \mathcal {M}_{M}, \mathcal {N}_{1}, \mathcal {N}_{2}$$, respectively. Firstly, let $$\mathcal {N}_{1} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{1})$$ be the direct sum of $$\mathcal {M}_{1}$$ on $$E_{1}$$, ..., $$\mathcal {M}_{M}$$ on $$E_{M}$$. That is, for $$A \subseteq {\tilde{E}}$$ it holds that $$A \in {\tilde{\mathcal {F}}}_{1}$$ if and only if $$A \cap E_{i} \in \mathcal {F}_{i}$$ for all $$i=1,\dots ,M$$.

The second matroid $$\mathcal {N}_{2} = ({\tilde{E}}, {\tilde{\mathcal {F}}}_{2})$$ is defined as follows: we call elements $$e_{1} \in E_{1}, \dots , e_{M} \in E_{M}$$ a line if $$e_{1}, \dots , e_{M}$$ are copies of the same element in E. If $$e_{i}$$ and $$e_{i'}$$ are part of the same line, then we call $$e_{i}$$ a sibling of $$e_{i'}$$ and vice versa. Then

\begin{aligned} {\tilde{\mathcal {F}}}_{2} := \{ A \subseteq {\tilde{E}} :A \text{ contains at most } k \text{ lines}\}. \end{aligned}

For any $$A \subseteq {\tilde{E}}$$, the sets $$X_{i}=A\cap E_{i}$$, $$i=1,\dots ,M$$, form a feasible solution for $$(P^{M}_{\le k})$$ if and only if A is a basis in matroid $$\mathcal {N}_1$$ and independent in matroid $$\mathcal {N}_2.$$ Thus, $$(P^{M}_{\le k})$$ is equivalent to the weighted matroid intersection problem

\begin{aligned} \max \{w(A) :A\in {\tilde{\mathcal {F}}}_{1} \cap {\tilde{\mathcal {F}}}_{2}\}, \end{aligned}

with weight function $$w(e_i) := C-c_i(e)$$ for each copy $$e_i \in E_i$$, $$i \in \{1,\dots ,M\}$$, where $$C>0$$ is a constant chosen large enough to ensure that a maximizer A is a basis in $$\mathcal {N}_1$$. To see that $$\mathcal {N}_2$$ is indeed a matroid, we first observe that $${\tilde{\mathcal {F}}}_{2}$$ is non-empty and downward-closed (i.e., $$A\in {\tilde{\mathcal {F}}}_{2}$$ and $$B\subset A$$ imply $$B\in {\tilde{\mathcal {F}}}_{2}$$). To see that $${\tilde{\mathcal {F}}}_{2}$$ satisfies the matroid-characterizing augmentation property

\begin{aligned} A,B\in {\tilde{\mathcal {F}}}_{2} \text{ with } |A| < |B| \text{ implies } \exists e\in B\setminus {A} \text{ with } A+e \in {\tilde{\mathcal {F}}}_{2}, \end{aligned}

take any two independent sets $$A,B\in {\tilde{\mathcal {F}}}_{2}$$ with $$|A| < |B|$$. If A cannot be augmented from B, i.e., if $$A+e\not \in {\tilde{\mathcal {F}}}_{2}$$ for every $$e\in B\setminus {A}$$, then A must contain exactly k lines, and for each $$e\in B\setminus {A}$$, the $$M-1$$ siblings of e must be contained in A. This implies $$|B| \le |A|$$, contradicting $$|A| < |B|$$. Hence, $$\mathcal {N}_2$$ is a matroid. $$\square$$
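The "at most k lines" matroid $$\mathcal {N}_2$$ is easy to implement as an independence oracle. The sketch below, on a hypothetical instance with $$M = 3$$ copies of a two-element ground set and $$k = 1$$, re-verifies the augmentation property from this proof by brute force:

```python
from itertools import combinations, chain

# Hypothetical instance: M = 3 copies of E = {0, 1}, k = 1.
E = [0, 1]
M, k = 3, 1
Etil = [(i, e) for i in range(M) for e in E]   # (copy index, element)

def lines(A):
    # a line consists of all M copies of one element of E
    return sum(1 for e in E if all((i, e) in A for i in range(M)))

def indep2(A):
    # independence oracle of N2: at most k lines
    return lines(A) <= k

# brute-force check of the augmentation axiom for N2
all_sets = [frozenset(s) for s in
            chain.from_iterable(combinations(Etil, r) for r in range(len(Etil) + 1))]
indep = [A for A in all_sets if indep2(A)]
for A in indep:
    for B in indep:
        if len(A) < len(B):
            assert any(indep2(A | {e}) for e in B - A)
print("N2 satisfies the matroid axioms on this instance")
```

Combined with a direct-sum oracle for $$\mathcal {N}_1$$, this yields exactly the two oracles needed to run a weighted matroid intersection algorithm on the reduction of Theorem 5.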