1 Introduction

Practical solutions to optimization problems are often inherently incremental in the sense that they evolve historically instead of being established in a one-shot fashion. This is especially true when solutions are expensive and need time and repeated investments to be implemented, for example when optimizing the layout of logistics and other infrastructures. In this paper, we propose a theoretical framework to capture incremental maximization problems in some generality.

We describe an incremental problem by a set \(U \) containing the possible elements of a solution, and an objective function \(f:2^{U}\rightarrow {\mathbb {R}}_{\ge 0}\) that assigns to each solution \(S\subseteq U \) some non-negative value f(S). We consider problems of the form

$$\begin{aligned} \max&\; f(S) \nonumber \\ \text {s.t.}&\; \left| S\right| \le k \nonumber \\&\; S\subseteq U, \end{aligned}$$
(1)

where \(k\in {\mathbb {N}}\) grows over time.

An incremental solution \(\mathbf {S}\) is given by an order \((s_{1},s_{2},\dots )\) in which the elements of \(U \) are to be added to the solution over time. A good incremental solution needs to provide a good solution after k steps, for every k, compared to an optimum solution \(S^{\star }_{k} \) with k elements, where we let \(S^{\star }_{k} \in \arg \max _{S\subseteq U,\left| S\right| =k}f(S)\) and \(f^{\star }_{k} := f(S^{\star }_{k})\). Formally, we measure the quality of an incremental solution by its competitive ratio. We let \(S_{k}:=\{s_{1},\dots ,s_{k}\}\subseteq U \) denote the set of the first k elements of \(\mathbf {S}\), and we say that \(\mathbf {S}\) is (strictly) \(\rho \)-competitive if

$$\begin{aligned} \max _{k\in \{1,\dots ,\left| U \right| \}}\frac{f^{\star }_{k}}{f(S_{k})}\le \rho . \end{aligned}$$

An algorithm is called \(\rho \)-competitive if it always produces a \(\rho \)-competitive solution, and its competitive ratio is the infimum over all \(\rho \ge 1\) such that it is \(\rho \)-competitive. Notice that we do not require the algorithm to run in polynomial time.
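To make the definition concrete, here is a small Python sketch (our own illustrative code, not from the paper) that computes the strict competitive ratio of a given incremental order by brute force; it is exponential in \(\left| U\right| \) and only meant for tiny instances:

```python
from itertools import combinations

def competitive_ratio(universe, f, order):
    """Strict competitive ratio of the incremental order: max over k of
    f*_k / f(S_k), where f*_k is found by brute force over all k-subsets."""
    ratio = 0.0
    for k in range(1, len(universe) + 1):
        best_k = max(f(set(c)) for c in combinations(universe, k))  # f*_k
        value_k = f(set(order[:k]))                                 # f(S_k)
        ratio = max(ratio, best_k / value_k)
    return ratio
```

For a modular objective (a plain sum of element weights), adding elements in order of decreasing weight is 1-competitive.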

While all cardinality constrained optimization problems can be viewed in an incremental setting, clearly not all such problems admit good incremental solutions. For example, consider a cardinality constrained formulation of the classical maximum s-t-flow problem: For a given graph \(G=(V,E)\), two vertices \(s,t\in V\) and capacities \(u:E\rightarrow {\mathbb {R}}_{\ge 0}\), we ask for a subset \(E'\subseteq E\) of cardinality \(k\in {\mathbb {N}}\) such that the maximum flow in the subgraph \((V,E')\) is maximized. The example in Fig. 1 shows that we cannot hope for an incremental solution that is simultaneously close to optimal for cardinalities 1 and 2.

Fig. 1 Example showing that the s-t-flow problem does not always admit good incremental solutions, where \(\varepsilon >0\) is arbitrarily small

In order to derive general bounds on the competitive ratio of incremental problems, we need to restrict the class of objective functions f that we consider. Intuitively, the unbounded competitive ratio in the flow example comes from the fact that we have to invest in the s-t-path of capacity 1 as soon as possible, but this path only yields its payoff once it is completed after two steps.

In order to prevent this and similar behaviors, we can, for example, require f to be monotone (i.e., \(f(S)\le f(T)\) if \(S\subseteq T\)) and sub-additive (i.e., \(f(S)+f(T)\ge f(S\cup T)\)). Many important optimization problems satisfy these weak conditions, and we give a short list of examples below. We will see that all these (and many more) problems admit incremental solutions with a bounded competitive ratio. More specifically, we develop a general 2.618-competitive incremental algorithm that can be applied to a broad class of problems, including all problems mentioned below. We illustrate in detail how to apply our model to obtain an incremental variant of the matching problem, and then list incremental versions of other important problems that are obtained analogously.

  • \({\textsc {Maximum Weighted Matching}}\): Consider a graph \(G = (V,E)\) with edge weights \(w:E \rightarrow {\mathbb {R}}_{\ge 0}\). If we think of edges as potential connections and edge weights as potential payoffs, then it is not enough to find the final matching, because we cannot construct the edges all at once: the goal is to find a sequence of edges that achieves a high payoff in the short, the medium, and the long term. In terms of our formal framework, we add edges to a set S one at a time, with \(U =E\), and f(S) is the maximum weight of a matching \(M \subseteq S\). In order to be \(\rho \)-competitive, we need that, after k steps for every k, our solution S of cardinality k is no worse than a factor of \(\rho \) away from the optimum solution of cardinality k, i.e., \(f(S) \ge f(S^{\star }_k)/\rho \). This model captures the setting where the infrastructure (e.g., the matching, the knapsack, the covering, or the flow) must be built up over time. The online model would be too restrictive in this setting because here we know our options in advance. Note that, as we add more edges, the set of edges S only needs to contain a large matching M, but does not have to be a matching itself. The matching M can change to an arbitrary subset of S from one cardinality to the next and does not have to stay consistent. This ensures that f(S) is monotonically increasing, and is in keeping with the infrastructure setting, where the potential regret present in the online model does not apply: building more infrastructure can only help, since once it is built, we can change how it is used. Accordingly, in all the problems below, the set S does not have to be a valid solution to the cardinality constrained problem at hand, but rather needs to contain a good solution as a subset. The objective f(S) is consistently defined to be the value of the best solution that is a subset of S. Notice that this approach can easily be generalized to Maximum b-Matching.

  • Maximum Weighted d-Dimensional Matching: Given sets \(V_1,\dots ,V_d\), vectors \(E \subseteq V_1 \times \dots \times V_d\), and weights \(w:E \rightarrow {\mathbb {R}}_{\ge 0}\), we ask for an incremental subset \(S \subseteq E\) where f(S) is the maximum weight of a d-dimensional matching in S, i.e., the maximum weight of a subset \(M \subseteq S\) with \(v_{i}\ne v_{i}'\) for all \(e=(v_{1},\dots ,v_{d})\in M\), \(e'=(v_{1}',\dots ,v_{d}')\in M{\setminus }\{e\}\), and all \(i \in \{1,\dots ,d\}\).

  • \({\textsc {Set Packing}}\): Given a set of weighted sets \({\mathcal {X}}\) we ask for an incremental subset \(S\subseteq {\mathcal {X}}\) where f(S) is the maximum weight of mutually disjoint subsets in S. This problem captures many well-known problems such as Maximum Hypergraph Matching and Maximum Independent Set.

  • \({\textsc {Maximum Coverage}}\): Given a set of weighted sets \({\mathcal {X}}\subseteq 2^{U}\) over a universe of elements U, we ask for an incremental subset \(S\subseteq {\mathcal {X}}\), where f(S) is the weight of the elements in \(\bigcup _{X \in S} X\). This problem captures maximization versions of clustering and location problems. We can include opening costs \(c :{\mathcal {X}} \rightarrow \mathbb R_{\ge 0}\) by letting f(S) be the maximum over all subsets \(S'\subseteq S\) of the number (or weight) of the sets in \(S'\) minus their opening costs.

  • \({\textsc {Knapsack}}\): Given a set X of items, associated sizes \(s:X\rightarrow {\mathbb {R}}_{\ge 0}\) and values \(v:X\rightarrow {\mathbb {R}}_{\ge 0}\), and a knapsack of capacity 1, we ask for an incremental subset \(S\subseteq X\), where f(S) is the largest value \(\sum _{x\in S'}v(x)\) of any subset \(S'\subseteq S\) with \(\sum _{x\in S'}s(x)\le 1\). This problem can be generalized to Multi-Dimensional Knapsack by letting item sizes be vectors and letting the knapsack have a capacity in every dimension.

  • \({\textsc {Disjoint Paths}}\): Given a graph \(G = (V,E)\), a set of pairs \({\mathcal {X}} \subseteq V^2\) with weights \(w:{\mathcal {X}} \rightarrow {\mathbb {R}}_{\ge 0}\), we ask for an incremental subset \(S \subseteq {\mathcal {X}}\), where f(S) is the maximum weight of a subset \(S'\subseteq S\), such that G contains mutually disjoint paths between every pair in \(S'\).

  • Maximum Bridge-Flow: We argued above that the maximum s-t-flow problem is not amenable to the incremental setting because it does not pay off to build paths partially.

    To overcome this, we consider a natural restriction of the flow problem where most edges are freely available to be used, and only the edges of a directed s-t-cut need to be built incrementally. If there are no edges over the reversely oriented cut, every s-t-path contains exactly one edge that needs to be built, and we never have to invest multiple steps to establish a single path. This problem captures logistical problems where links need to be established between two clusters, such as when bridges need to be built across a river, cables across an ocean, or when warehouses need to be opened in a supplier-warehouse-consumer network. Formally, given a directed graph \(G=(V,E)\) with capacities \(u:E\rightarrow {\mathbb {R}}_{\ge 0}\), vertices \(s,t\in V\), and a directed s-t-cut \(C\subseteq E\) induced by a partition \((U,W)\) of V such that the directed cut induced by \((W,U)\) is empty, we ask for an incremental subset \(S\subseteq C\) where f(S) is the value of a maximum flow in the subgraph \((V,E{\setminus }(C{\setminus } S))\).
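All objectives above follow the same pattern: f(S) is the value of the best feasible solution contained in S. As one concrete instance, the Knapsack objective from the list can be evaluated by brute force as in the following sketch (function and variable names are ours; exponential time, for illustration only):

```python
from itertools import combinations

def knapsack_objective(S, size, value, capacity=1.0):
    """Incremental Knapsack objective: f(S) is the largest total value of a
    subset S' of S whose total size fits into the knapsack capacity.
    Brute force over all subsets of S."""
    items = list(S)
    best = 0.0
    for r in range(len(items) + 1):
        for sub in combinations(items, r):
            if sum(size[x] for x in sub) <= capacity:
                best = max(best, sum(value[x] for x in sub))
    return best
```

Note that S itself may violate the capacity; only the best feasible subset \(S'\subseteq S\) counts, which is what makes f monotone.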

It is easy to verify that the objective functions of all the problems mentioned above (and many more) have the following properties for all \(S,T \subseteq U \):

  1. (monotonicity): \(S\subseteq T\Rightarrow f(S)\le f(T)\),

  2. (sub-additivity): \(f(S)+f(T)\ge f(S\cup T)\),

  3. (accountability): \(\exists s\in S:f(S{\setminus }\{s\})\ge f(S)-f(S)/\left| S\right| \).

To our knowledge, the accountability property has not been named before. Intuitively, it ensures that, in every set \(S\subseteq U \), at least one element contributes no more than the average to the value of S. While it is easy to formulate artificial problems that have monotonicity and sub-additivity but no accountability, or vice-versa, we were not able to identify any natural problems of this kind. This justifies our reliance on accountability in our general incremental algorithm. Note that both sub-additivity and accountability are implied by submodularity.
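For small ground sets, all three properties can be checked exhaustively. The following sketch (our own illustration, exponential in \(\left| U\right| \)) does exactly that:

```python
from itertools import combinations

def check_properties(universe, f):
    """Brute-force check of monotonicity, sub-additivity, and accountability
    of f over all subsets of a small ground set."""
    subsets = [frozenset(c) for r in range(len(universe) + 1)
               for c in combinations(universe, r)]
    monotone = all(f(S) <= f(T) for S in subsets for T in subsets if S <= T)
    subadditive = all(f(S) + f(T) >= f(S | T) for S in subsets for T in subsets)
    accountable = all(any(f(S - {s}) >= f(S) - f(S) / len(S) for s in S)
                      for S in subsets if S)
    return monotone, subadditive, accountable
```

For example, a modular objective such as \(f(S)=\left| S\right| \) satisfies all three properties, whereas the superadditive \(f(S)=\left| S\right| ^2\) is monotone but neither sub-additive nor accountable.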

Observe that a \(\rho \)-competitive incremental algorithm immediately yields a \(\rho \)-approximation algorithm for the underlying cardinality constrained problem, with the caveat that the resulting approximation algorithm might not be efficient, since we make no demands on the runtime of the incremental algorithm. The converse is rarely the case, since approximation algorithms usually do not construct their solution in an incremental fashion. A prominent exception is the greedy algorithm, which is inherently incremental in the sense that it picks elements one-by-one such that each pick increases the objective by the maximum amount possible. This type of greedy algorithm has been studied as an approximation algorithm for many cardinality constrained problems, and approximation ratios translate immediately to competitive ratios for the incremental version of the corresponding problem. In particular, the greedy algorithm is known to have competitive ratio (exactly) \(\frac{e}{e-1}\approx 1.58\) if the objective function f is monotone and submodular [32]. Note, however, that of all the incremental problems listed above, only Maximum Coverage (without opening costs) has a submodular objective function. It is also known that if we relax the submodularity requirement and allow f to be the minimum of two monotone (sub-)modular functions, the greedy algorithm can be arbitrarily bad [25]. We provide a different relaxation of submodularity that captures Maximum (Weighted) d-Dimensional Matching, Maximum (Weighted) (b-)Matching, and Maximum Bridge-Flow, and where the greedy algorithm has a bounded competitive/approximation ratio.

Our Results. As our first result, we show that a large class of incremental problems admits a bounded competitive ratio. We remark that our upper bound does not require sub-additivity, while our lower bound still works for sub-additive objectives.

Theorem 1

Every incremental problem with monotone, accountable objective admits a \((1+\varphi )\)-competitive algorithm, where \(\varphi \) is the golden ratio and \((1+\varphi )\approx 2.618\). No general deterministic algorithm for this class of problems has a competitive ratio of 2.18 or better.

Again, note that we make no guarantees regarding the running time of our incremental algorithm. In fact, our algorithm relies on the ability to compute the optimum of the underlying cardinality constrained problem for increasing cardinalities. If we can provide an efficient approximation of this optimum, we get an efficient incremental algorithm in the following sense.

Corollary 1

If there is a polynomial time \(\alpha \)-approximation algorithm for a cardinality constrained problem with monotone, accountable objective, then we can design a polynomial time \(\alpha (1+\varphi )\)-competitive incremental algorithm.

We also analyze the approximation/competitive ratio of the greedy algorithm. We observe that for many incremental problems like Knapsack, Maximum Independent Set, and Disjoint Paths, the greedy algorithm has an unbounded competitive ratio (Observation 4). On the other hand, we define a relaxation of submodularity called \(\alpha \)-augmentability (see Definition 2 below) under which the greedy algorithm has a bounded competitive ratio. In particular, this relaxation captures Maximum Weighted \(\alpha \)-Dimensional Matching (Proposition 4), and, for \(\alpha =2\), it captures cardinality constrained versions of Maximum (Weighted) (b-)Matching (Proposition 5) and Maximum Bridge-Flow (Proposition 6). We get the following result, where the tight lower bound for \(\alpha =2\) is obtained for Maximum Bridge-Flow, and the lower bound of \(\alpha \) for \(\alpha \in {\mathbb {N}}\) is obtained for Maximum Weighted \(\alpha \)-Dimensional Matching.

Theorem 2

For every cardinality constrained problem with a monotone and \(\alpha \)-augmentable objective, the greedy algorithm has approximation/competitive ratio at most \(\alpha \frac{ e^\alpha }{e^{\alpha }-1}\). This bound is tight for problems with 2-augmentable objectives, and, the greedy algorithm has competitive ratio at least \(\alpha \) for \(\alpha \)-augmentable objectives with \(\alpha \in {\mathbb {N}}\).

We point out that submodularity implies 1-augmentability (Proposition 2) which, in turn, implies accountability (Proposition 3), but not vice-versa (Observation 5). In particular, Theorem 2 generalizes the bound of \(\frac{e}{e-1} \approx 1.58\) that is known for submodular functions to the larger class of 1-augmentable functions. For \(\alpha =2\), our tight bound is \(\frac{2 e^2}{e^2-1} \approx 2.313\). For \(\alpha \rightarrow \infty \) our lower bound of \(\alpha \) becomes tight, since \(\frac{ e^\alpha }{e^{\alpha }-1} \rightarrow 1\). We note that the greedy algorithm is well known to yield an \(\alpha \)-approximation for Maximum Weighted \(\alpha \)-Dimensional Matching.

Related Work. Most work on incremental settings has focused on cardinality constrained minimization problems. A prominent exception is the robust matching problem, introduced by Hassin and Rubinstein [19]. This problem asks for a weighted matching M with the property that, for every value k, the total weight of the \(\min (k,|M|)\) heaviest edges of M comes close to the weight of a maximum weight matching of cardinality at most k. Note that this differs from our definition of incremental matchings in that the robust matching problem demands that the “incremental” solution consists of a matching, while we allow any edge set that contains a heavy matching as a subset. Since their model is stricter, all of the following competitive ratios carry over to our setting. Note that, in contrast to our setting, the objective function of the robust matching problem is submodular, and hence the greedy algorithm has competitive ratio at most \(\frac{e}{e-1}\approx 1.58\) [32]. Hassin and Rubinstein [19] gave an improved, deterministic algorithm that achieves competitive ratio \(\sqrt{2}\approx 1.414\). They also gave a tight example for this ratio of \(\sqrt{2}\), which also works in our incremental setting. Fujita et al. [14] extended this result to matroid intersection, and Kakimura and Makino [21] showed that every independence system allows for a \(\sqrt{\mu }\)-competitive solution, with \(\mu \) being the extendibility of the system. Matuschke et al. [28] describe a randomized algorithm for this problem that, under the assumption that the adversary does not know the outcome of the randomness, has competitive ratio \(\ln (4)\approx 1.386\).

A variant of the knapsack problem with a similar notion of robustness was proposed by Kakimura et al. [22]. In this problem, a knapsack solution needs to be computed such that, for every k, the value of the k most valuable items in the knapsack compares well with the optimum solution using k items. Kakimura et al. [22] restrict themselves to polynomial time algorithms and show that under this restriction a bounded competitive ratio is possible only if the rank quotient of the knapsack system is bounded. In contrast, our results show that if we do not restrict the running time and if we only require our solution to contain a good packing with k items for every k, then we can be \((1+\varphi )\)-competitive using our generic algorithm, even for generalizations like Multi-Dimensional Knapsack. If we restrict the running time and use the well-known PTAS for the knapsack problem [20, 26], we still get a \((1+\varphi )(1+\varepsilon )\)-competitive algorithm. Megow and Mestre [29] and Disser et al. [9] considered another variant of the knapsack problem that asks for an order in which to pack the items that works well for every knapsack capacity. Kobayashi and Takazawa [23] study randomized strategies for cardinality robustness in the knapsack problem.

Hartline and Sharp [18] considered an incremental variant of the maximum flow problem where capacities increase over time. This is in contrast to our framework where the cardinality of the solution increases.

Incremental solutions for cardinality constrained minimization problems have been studied extensively, in particular for clustering [4, 8], k-median [5, 13, 31], minimum spanning tree [2, 15], and facility location [16]. An important result in this domain is the incremental framework given by Lin et al. [27]. This general framework allows one to devise algorithms for every incremental minimization problem for which a suitable augmentation subroutine can be formulated. Lin et al. [27] used their framework to match or improve many of the known specialized bounds for the problems above and to derive new bounds for covering problems. In contrast to their result, our incremental framework allows for a general algorithm that works out-of-the-box for a broad class of incremental maximization problems and yields a constant (relatively small) competitive ratio.

Abstractly, incremental problems can be seen as optimization problems under uncertainty. Various approaches to handling uncertain input data have been proposed, ranging from robust and stochastic optimization to streaming and exploration. On this level, incremental problems can be seen as a special case of online optimization problems, i.e., problems where the input data arrives over time (see [3, 12]). Whereas online optimization in general assumes adversarial input, incremental problems restrict the freedom of the adversary to deciding when to stop, i.e., the adversary may choose the cardinality k while all other data is fixed and known to the algorithm. Online problems with such a “non-adaptive” adversary have been studied in other contexts [6, 11, 17]. Note that online problems demand irrevocable decisions in every time step – a requirement that may be overly restrictive in many settings where solutions develop over a long time period. In contrast, our incremental model only requires a growing solution “infrastructure” and allows the actual solution to change arbitrarily over time within this infrastructure.

Regarding the greedy algorithm as an approximation algorithm for maximization problems, we mentioned above that it is well-known to achieve a competitive ratio of \(\frac{e}{e-1}\) for monotone and submodular objectives [32]. Theorem 2 generalizes this result, since 1-augmentability is a relaxation of submodularity (Proposition 2). The submodularity ratio was proposed by Das and Kempe [7] as a somewhat similar relaxation of submodularity, with a submodularity ratio greater than zero implying an approximation guarantee for the greedy algorithm. We note that, while a submodularity ratio of 1 implies submodularity and thus 1-augmentability, otherwise, there is no relationship between a constant submodularity ratio and \(\alpha \)-augmentability for constant \(\alpha \).

Another important setting where the greedy algorithm yields an approximation is that of independence systems with bounded rank quotient [24], where elements of the ground set are weighted and the objective function value of a set is given by the maximum sum of the weights over all independent subsets. This setting can be seen as a relaxation of matroids, i.e., independence systems with rank quotient 1, for which the greedy algorithm computes an optimum solution [10, 33]. In fact, an independence system that is the intersection of k matroids has rank quotient at least 1/k [24]. In particular, this implies that the greedy algorithm yields a d-approximation for the Maximum Weighted d-Dimensional Matching problem. Another relaxation of matroids is given by k-extendible systems, for which the greedy algorithm yields a k-approximation [30]. Again, while a rank quotient of 1 (i.e., the matroid case) implies submodularity of the objective function [32] and thus 1-augmentability, otherwise, there is no relationship between a bounded rank quotient or bounded extendibility and \(\alpha \)-augmentability for bounded \(\alpha \).

2 A competitive algorithm for accountable problems

In this section, we show the first part of Theorem 1, i.e., we give an incremental algorithm that is \((1+\varphi )\)-competitive, with \(1+\varphi \approx 2.618\), for all problems with a monotone and accountable objective. For convenience, we define the density \(\delta _S\) of a set \(S\subseteq U \) via \(\delta _S := f(S)/\left| S\right| \), and we let \(\delta ^{\star }_{k} := \delta _{S^{\star }_{k}}\) denote the optimum density for cardinality k. Our algorithm relies on the following two observations that follow from the accountability of the objective function. We note that, throughout this section, we do not require sub-additivity.

Lemma 1

If the objective f is accountable, then, for every k, there is an ordering \((s^{\star }_1,s^{\star }_2,\dots ,s^{\star }_k)\) of \(S^{\star }_{k} \), such that \(\delta _{\{s^{\star }_1,\dots ,s^{\star }_i\}} \ge \delta _{\{s^{\star }_1,\dots ,s^{\star }_{i+1}\}}\) for all \(i\in \{1,\dots ,k-1\}\). We call \((s^{\star }_1,s^{\star }_2,\dots ,s^{\star }_k)\) a greedy order of \(S^{\star }_{k} \).

Proof

By accountability of f, there is an element \(s^{\star }_k \in S^{\star }_{k} \) for which

$$\begin{aligned} \delta _{S^{\star }_{k} {\setminus } \{s^{\star }_k\}} = \frac{f(S^{\star }_{k} {\setminus }\{s^{\star }_k\})}{k - 1} \ge \frac{f(S^{\star }_{k})}{k} = \delta _{S^{\star }_{k}}. \end{aligned}$$

We can repeat this argument for \(s^{\star }_{k-1} \in S^{\star }_{k} {\setminus }\{s^{\star }_k\}\), \(s^{\star }_{k-2} \in S^{\star }_{k} {\setminus }\{s^{\star }_k, s^{\star }_{k-1}\}\), etc. to obtain the desired ordering. \(\square \)
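The proof is constructive: starting from \(S^{\star }_{k} \), repeatedly peel off an element whose removal keeps the value of the remainder as high as possible, then reverse the removal sequence. A Python sketch of this procedure (names are ours, for illustration):

```python
def greedy_order(S, f):
    """Greedy order of S as in Lemma 1: repeatedly remove the element whose
    removal leaves the most valuable (hence densest) remainder, then reverse."""
    remaining, removed = set(S), []
    while remaining:
        # all remainders have the same size, so maximizing value maximizes density
        s = max(remaining, key=lambda x: f(remaining - {x}))
        removed.append(s)
        remaining.remove(s)
    removed.reverse()
    return removed
```

For a modular objective, this simply sorts the elements by decreasing weight.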

Lemma 2

If the objective f is monotone and accountable, we have \(\delta ^{\star }_{k'} \ge \delta ^{\star }_{k} \) for all \(1 \le k' \le k\).

Proof

Fix any cardinality \(k > 1\). By accountability of the objective function f, there is an element \(s^{\star } \in S^{\star }_{k} \) with

$$\begin{aligned} \delta ^{\star }_{k} = \frac{f(S^{\star }_{k})}{k} \le \frac{f(S^{\star }_{k} {\setminus }\{s^{\star }\})}{k-1} \le \frac{f(S^{\star }_{k-1})}{k-1} = \delta ^{\star }_{k-1}. \end{aligned}$$

It follows that \(\delta ^{\star }_{k} \) is monotonically decreasing in k. \(\square \)

Now, we define \(k_0 := 1\) and \(k_i := \lceil (1+\varphi )k_{i-1}\rceil \) for all positive integers i. Our algorithm operates in phases \(i\in \{0,1,\dots \}\). In each phase i, we add the elements of the optimum solution \(S^{\star }_{k_i} \) of cardinality \(k_i\) to our incremental solution in greedy order (Lemma 1). Note that we allow the algorithm to add elements multiple times (without effect) in order to not complicate the analysis needlessly (of course we would only improve the algorithm by skipping over duplicates). In the following, we denote by \(t_i\) the number of steps (possibly without effect) until the end of phase i, i.e., we let \(t_0 := k_0\) and \(t_i := t_{i-1} + k_i\).
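The phase structure can be sketched as follows, assuming access to an oracle that returns an optimum solution for a given cardinality already in greedy order (function names are ours; duplicates are deliberately kept, as in the analysis):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, about 1.618

def incremental_order(opt_in_greedy_order, n):
    """Emit the first n steps of the algorithm: in phase i, append S*_{k_i}
    in greedy order, where k_0 = 1 and k_i = ceil((1 + PHI) * k_{i-1})."""
    order, k = [], 1
    while len(order) < n:
        order.extend(opt_in_greedy_order(k))
        k = math.ceil((1 + PHI) * k)
    return order[:n]
```

With this recursion, the phase cardinalities come out as \(k_0,k_1,k_2,k_3,\dots = 1,3,8,21,\dots \).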

Lemma 3

For every phase \(i\in \{0,1,\dots \}\), we have \(t_i \le \varphi k_i\).

Proof

We use induction over i, with the case \(i=0\) being trivial, since \(t_0 = k_0\). Now assume that \(t_{i-1} \le \varphi k_{i-1}\) for some \(i\ge 1\). Using the property \(\frac{\varphi }{\varphi + 1} = \varphi - 1\) of the golden ratio, we get

$$\begin{aligned} t_i = t_{i-1} + k_i \le \varphi k_{i-1} + k_{i} \le \frac{\varphi }{\varphi + 1} k_i + k_i = \varphi k_i. \end{aligned}$$

\(\square \)

Finally, we show that the solution \(\mathbf {S}\) computed by our algorithm is \((1+\varphi )\)-competitive. As before, we let \(S_{k} \) denote the set of the first k elements of \(\mathbf {S}\).

Theorem 3

If the objective f is monotone and accountable, then, for every cardinality k, we have \(f(S_{k}) \ge f^{\star }_{k} / (1+\varphi )\).

Proof

We use induction over k. The claim is true for \(k=t_0=1\), since \(S_{1} = S^{\star }_{1} \) by definition of the algorithm. For the inductive step, we prove that if the claim is true for \(k=t_{i-1}\), then it remains true for all \(k\in \{t_{i-1} + 1, \dots , t_i\}\). Recall that \(k_i = \lceil (1+\varphi )k_{i-1}\rceil \). By Lemma 3, we have

$$\begin{aligned} t_{i-1} \le \varphi k_{i-1}< k_i < t_{i-1} + k_i = t_i, \end{aligned}$$

and we can therefore distinguish the following cases.

Case 1: \(t_{i-1}< k < k_{i}\). Since \(k > t_{i-1}\), our algorithm has already completed phase \(i-1\) and added all elements of \(S^{\star }_{k_{i-1}} \), so we have \(f(S_{k}) \ge f^{\star }_{k_{i-1}} \). Because k is an integer and \(k < k_i = \lceil (1+\varphi )k_{i-1}\rceil \), we have that \(k < (1+\varphi )k_{i-1}\). By Lemma 2, we thus have

$$\begin{aligned} f^{\star }_{k} = \delta ^{\star }_{k} \cdot k < \delta ^{\star }_{k_{i-1}} \cdot (1+\varphi )k_{i-1} = (1+\varphi )f^{\star }_{k_{i-1}} \le (1+\varphi )f(S_{k}). \end{aligned}$$

Case 2: \(k_i \le k \le t_i\). At time k, our algorithm has already added the first \(k - t_{i-1}\) elements of \(S^{\star }_{k_i} \). Since the algorithm adds the elements of \(S^{\star }_{k_i} \) in greedy order, we have \(f(S_{k}) \ge (k - t_{i-1})\delta ^{\star }_{k_i} \). On the other hand, since \(k \ge k_i\), by Lemma 2 we have \(f^{\star }_{k} = k\cdot \delta ^{\star }_{k} \le k\cdot \delta ^{\star }_{k_i} \). In order to complete the proof, it is thus sufficient to show that \(k \le (1+\varphi )(k-t_{i-1})\). To see this, let \(k = k_i + k'\) for some non-negative integer \(k'\). Because \(t_{i-1}\) is integral, Lemma 3 implies \(t_{i-1} \le \lfloor \varphi k_{i-1} \rfloor \). Since \(\varphi \) is irrational and \(k_{i-1}\) is integral, \(\varphi k_{i-1}\) cannot be integral, thus

$$\begin{aligned} k - t_{i-1} = k' + k_i - t_{i-1} \ge k' + \lceil (1+\varphi )k_{i-1}\rceil - \lfloor \varphi k_{i-1} \rfloor = k' + k_{i-1} + 1. \end{aligned}$$

This completes the proof, since

$$\begin{aligned} (1+\varphi )(k - t_{i-1}) \ge (1+\varphi )(k' + k_{i-1} + 1) > k' + (1+\varphi )k_{i-1} + 1 \ge k' + k_i = k. \end{aligned}$$

\(\square \)

Corollary 1 follows if we replace \(S^{\star }_{k_i} \) by an \(\alpha \)-approximate solution for cardinality \(k_i\).

3 Lower bound on the best-possible competitive ratio

In this section, we show the second part of Theorem 1, i.e., we give a lower bound on the best-possible competitive ratio for the maximization of incremental problems with monotone, sub-additive, and accountable objective functions. For this purpose, we define the Region Choosing problem. In this problem, we are given \(N \) disjoint sets \(R_1,\dots ,R_N \), called regions, with region \(R_i\) containing i elements with a value of \(\delta (i)\) each. We say that \(\delta (i)\) is the density of region \(R_i\). The total value of all elements in the region \(R_i\) is \(v(i) := i \cdot \delta (i)\) for all \(i \in \left\{ 1,\dots ,N \right\} \).

The objective is to compute an incremental solution \(S \subseteq U:= \bigcup _{i=1}^{N}R_{i}\) such that the maximum value of the items from a single region in S is large. Formally, the objective function is given by \(f(S) := \max _{i\in \left\{ 1,\dots ,N \right\} }\left| R_{i}\cap S\right| \cdot \delta (i)\).
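Encoding each element as a pair (region index, position within the region), this objective can be evaluated as in the following sketch (encoding and names are ours):

```python
def region_choosing_objective(S, delta, N):
    """f(S) = max over regions i of |R_i intersect S| * delta(i), with the
    elements of S encoded as pairs (region index, position within region)."""
    best = 0.0
    for i in range(1, N + 1):
        count = sum(1 for (region, _) in S if region == i)
        best = max(best, count * delta(i))
    return best
```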

Observation 1

Region Choosing has a monotone, sub-additive, and accountable objective function.

Proof

The objective function of Region Choosing is monotone by definition.

Let \(S,T\) be two solutions to an instance of the Region Choosing problem, and consider a region \(R_{S\cup T} \in \arg \max _{i \in \{1,\dots ,N\}} |R_i \cap (S\cup T)|\cdot \delta (i)\) of maximum value in the solution \(S \cup T\). Let \(v_X\) denote the total value of the items from region \(R_{S \cup T}\) in solution X. Then,

$$\begin{aligned} f(S\cup T) = v_{S \cup T} \le v_S + v_T \le f(S) + f(T), \end{aligned}$$

which proves sub-additivity.

To show accountability, let \(R_S\) be the region of maximum value in solution S. We need to find \(s \in S\) with \(f(S{\setminus }\{s\}) \ge f(S) - f(S)/|S|\). If \(S {\setminus } R_S = \emptyset \), then, for every \(s\in S\), we have

$$\begin{aligned} f(S{\setminus }\{s\}) = f(S) - f(S)/|S|. \end{aligned}$$

Otherwise, for every \(s \in S{\setminus } R_S\), we have

$$\begin{aligned} f(S{\setminus }\{s\}) = f(S) \ge f(S) - f(S)/|S|. \end{aligned}$$

\(\square \)

We state the following tight lower bound without proof.

Proposition 1

The algorithm of the previous section has competitive ratio at least \(\varphi + 1\) for Region Choosing for \(N \rightarrow \infty \) and with \(\delta (i)=1\) for all \(i \in \{1 ,\dots , N\}\).

For our general lower bound, we set \(\delta (i) := i^{\beta -1}\) for some \(\beta \in (0,1)\) that we will choose later. For this choice of \(\beta \), we have \(\delta (i) < \delta (j)\) and \(v(i) > v(j)\) for \(1 \le j < i \le N \). Also, for \(N \rightarrow \infty \) we have \(\lim _{i\rightarrow \infty } v(i) = \infty \). We call instances of the Region Choosing problem of this form \(\beta \)-decreasing. Observe that in every \(\beta \)-decreasing instance the optimum solution of cardinality \(i \le N \) is to take all i elements from region \(R_i\). This solution has value \(f^{\star }_{i} = i^\beta \).

In order to impose a lower bound on the best-possible competitive ratio for \(\beta \)-decreasing instances, we need some insights into the structure of incremental solutions with an optimal competitive ratio. First, consider a solution that picks only \(i' < i\) elements from region \(R_i\). In this case, we could have picked \(i'\) elements from region \(R_{i'}\) instead – this would only improve the solution, since densities are decreasing. Second, if we take i elements from region \(R_i\), it is always beneficial to take them in an uninterrupted sequence before taking any elements from a region \(R_j\) with \(j>i\): our objective depends only on the region with the most value, therefore it never helps to take elements from different regions in an alternating fashion. This leads us to the following observation.

Observation 2

For every \(\beta \)-decreasing instance of Region Choosing there is an incremental solution with optimal competitive ratio of the following structure: For \(k_0< k_1< \dots < k_m\in {\mathbb {N}}\) with \(m\in {\mathbb {N}}\), it takes \(k_0\) elements from region \(R_{k_0}\), followed by \(k_1\) elements from \(R_{k_1}\), and so on, until finally \(k_m\) elements from region \(R_{k_m}\) are chosen.

Thus, we can describe an algorithm for the Region Choosing problem by an increasing sequence of region indices \(k_0,\dots ,k_m\). Note that, in order to have a bounded competitive ratio for \(N \rightarrow \infty \), we must have \(m\rightarrow \infty \), since \(\lim _{i\rightarrow \infty }v(i) = \infty \). We are interested in a cardinality for which an incremental solution given by \(k_0,\dots ,k_m\) has a bad competitive ratio. We define

$$\begin{aligned} \alpha _i := \frac{1}{k_i}\sum _{j=0}^i k_j \qquad \forall i \in \left\{ 0,\dots ,m \right\} . \end{aligned}$$

Observe that \(\alpha _i > 1\) for all \(i\in \{1,\dots ,m\}\). We know that the value of the optimum solution for cardinality \(\alpha _i k_i\) is \(v(\alpha _i k_i) = (\alpha _i k_i)^\beta \), whereas the incremental solution only achieves a value of \(v(k_i) = (k_i)^\beta \). This allows us to derive the following necessary condition on the \(\alpha _i\)-values of \(\rho \)-competitive solutions.

Observation 3

If an incremental solution defined by a sequence \(k_0,\dots ,k_m\) is \(\rho \)-competitive for some \(\rho \ge 1\), then, for all \(i \in \left\{ 0,\dots ,m \right\} \), we must have

$$\begin{aligned} \rho \ge \frac{v(\alpha _i k_i)}{v(k_i)} = \left( \frac{\alpha _i k_i}{k_i}\right) ^\beta = \alpha _i^\beta \quad \Longleftrightarrow \quad \alpha _i \le \rho ^{\frac{1}{\beta }}. \end{aligned}$$
(2)
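As an illustration of Observation 3, the \(\alpha _i\)-values and the resulting bound on \(\rho \) can be computed for a small example (the index sequence below is hypothetical, not taken from the analysis):

```python
# alpha_i = (k_0 + ... + k_i) / k_i for a hypothetical index sequence k_0 < ... < k_3.
ks = [1, 2, 5, 13]
beta = 0.5

def alpha(i: int) -> float:
    return sum(ks[: i + 1]) / ks[i]

# alpha_0 = 1 and alpha_i > 1 for i >= 1, as observed above.
assert alpha(0) == 1.0
assert all(alpha(i) > 1 for i in range(1, len(ks)))

# By condition (2), any rho-competitive solution using this sequence needs
# rho >= alpha_i ** beta for every i.
rho_min = max(alpha(i) ** beta for i in range(len(ks)))
assert 1.27 < rho_min < 1.28
```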

We will exclude a certain range of values of \(\rho \) by showing that we can find a \(\beta \in (0,1)\) such that, for a sufficiently large number of regions \(N \), necessary condition (2) is violated. We do this by showing that, for some \(i^\star \in {\mathbb {N}}\) and some fixed \(\varepsilon > 0\), we have \(\alpha _{i+1} - \alpha _{i} > \varepsilon \) for all \(i\ge i^{\star }\), i.e., as i goes to \(\infty \), condition (2) must eventually be violated. The following definition relates a value of \(\beta \in (0,1)\) to a lower bound on the competitive ratio \(\rho \) for \(\beta \)-decreasing instances.

Definition 1

A pair \((\rho , \beta )\) with \(\rho \ge 1\) and \(\beta \in (0,1)\) is problematic if there is \(\varepsilon >0\) such that for all \(x \in (1,\rho ^{1/\beta }]\) it holds that \(h_{\rho ,\beta }(x)<0\), where

$$\begin{aligned} h_{\rho ,\beta }(x) := (\rho ^{\frac{1}{\beta }} + \varepsilon - x)^\frac{1}{1-\beta } - \frac{x}{x-1+\varepsilon }. \end{aligned}$$

We show that problematic pairs indeed have the intended property.

Lemma 4

If \((\rho , \beta )\) is a problematic pair, then \(\rho \) is a strict lower bound on the competitive ratio of incremental solutions for \(\beta \)-decreasing instances of Region Choosing.

Proof

We fix a problematic pair \((\rho ,\beta )\) and let \(\varepsilon \) be as in Definition 1. Consider a \(\beta \)-decreasing instance of sufficiently large size \(N \) and assume that there is a \(\rho \)-competitive incremental solution for this instance, given by the sequence \(k_0,k_1,\dots ,k_m\). Consider a cardinality \(k = \sum _{j=0}^i k_j\) for any \(i \in \{0,\dots ,m \}\) for which the incremental solution takes all elements from regions \(R_{k_0}\), \(\dots \), \(R_{k_i}\). Assume that we do not take any additional elements for larger cardinalities. We are interested in the first cardinality for which \(\rho \)-competitiveness would be violated, i.e., where \(v(k_i)\) is not enough to be \(\rho \)-competitive. This is the minimal cardinality \(t_i\) with \(f^{\star }_{t_i} = t_i^\beta > \rho k_i^\beta = \rho v(k_i)\), i.e.,

$$\begin{aligned} t_i := \min \left\{ t \in {\mathbb {N}}\ \left| \ t > \rho ^{\frac{1}{\beta }} k_i\right. \right\} \le \rho ^{\frac{1}{\beta }}k_i + 1. \end{aligned}$$

Then, for cardinality \(t_i\), the incremental solution must have taken enough value from a later region to be \(\rho \)-competitive. Without loss of generality, we can assume this region to be \(R_{k_{i+1}}\), otherwise the incremental solution that skips region \(R_{k_{i+1}}\) is also \(\rho \)-competitive, and we can consider this solution instead. It follows that the incremental solution must satisfy

$$\begin{aligned} \left( t_i - \sum _{j=0}^i k_j\right) \delta (k_{i+1}) \ge \frac{1}{\rho } v(t_i) > k_i^\beta , \end{aligned}$$

which, by definition of \(\alpha _i\), implies

$$\begin{aligned} \left( \rho ^{\frac{1}{\beta }} k_i + 1 - \alpha _i k_i\right) \delta (k_{i+1}) > k_i^\beta . \end{aligned}$$
(3)

Defining \(q_i := k_i / k_{i-1}\) for all \(i\in \{1,\dots ,m \}\) gives \(\delta (k_{i+1}) = \delta (q_{i+1} k_i) = \delta (k_i) \cdot q_{i+1}^{\beta -1}\). With this, Eq. (3) can be written as

$$\begin{aligned} v(k_i) = k_i^\beta < \left( \rho ^{\frac{1}{\beta }} + \frac{1}{k_i} - \alpha _i\right) q_{i+1}^{\beta -1} \delta (k_i) k_i = \left( \rho ^{\frac{1}{\beta }} + \frac{1}{k_i} - \alpha _i\right) q_{i+1}^{\beta -1} v(k_i). \end{aligned}$$

Dividing by \(v(k_i)\) yields

$$\begin{aligned} q_{i+1} < \left( \rho ^{\frac{1}{\beta }} + \frac{1}{k_i} - \alpha _i\right) ^{\frac{1}{1-\beta }}. \end{aligned}$$
(4)

Since the incremental solution is \(\rho \)-competitive, we have \(\alpha _i \in (1,\rho ^{\frac{1}{\beta }}]\) by condition (2). Because \((\rho , \beta )\) is a problematic pair, we have

$$\begin{aligned} h_{\rho ,\beta }(\alpha _i) = (\rho ^{\frac{1}{\beta }} +\varepsilon - \alpha _i)^{\frac{1}{1-\beta }} - \frac{\alpha _i}{\alpha _i-1+\varepsilon } < 0. \end{aligned}$$
(5)

Observe that since \(\lim _{i\rightarrow \infty } k_i = \infty \), we can find an \(i^\star \) such that \(\frac{1}{k_i} < \varepsilon \) for all \(i\ge i^\star \). Thus, for \(i\ge i^\star \) Eqs. (5) and (4) imply

$$\begin{aligned} q_{i+1}< (\rho ^{\frac{1}{\beta }} +\varepsilon - \alpha _i)^{\frac{1}{1-\beta }} < \frac{\alpha _i}{\alpha _i - 1 + \varepsilon }. \end{aligned}$$

For \(i\ge i^\star \), we therefore get

$$\begin{aligned} \alpha _{i+1} - \alpha _{i}&= \frac{1}{k_{i+1}} \sum _{j=0}^{i+1} k_j - \alpha _i \\&= 1 + \frac{\alpha _i}{q_{i+1}} - \alpha _i\\&> \varepsilon . \end{aligned}$$

But this implies that, if \(N \) is sufficiently large, condition (2) eventually gets violated, which contradicts the fact that the incremental solution has competitive ratio \(\rho \). \(\square \)

All that remains is to specify a problematic pair in order to obtain a lower bound via Lemma 4. It is easy to verify that (2.18, 0.86) is a problematic pair. Note that the resulting bound of 2.18 can be slightly increased to values just below 2.19.
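The claim that (2.18, 0.86) is a problematic pair can be checked numerically (a sketch, not a proof; the choice \(\varepsilon = 0.001\) and the grid resolution are ours):

```python
# Numerical evidence (not a proof) that (rho, beta) = (2.18, 0.86) is a problematic
# pair in the sense of Definition 1; epsilon = 0.001 is our choice.
rho, beta, eps = 2.18, 0.86, 0.001
A = rho ** (1 / beta)  # right endpoint of the interval (1, rho^(1/beta)]

def h(x: float) -> float:
    return (A + eps - x) ** (1 / (1 - beta)) - x / (x - 1 + eps)

# h must be negative on all of (1, A]; we sample a fine grid.
xs = [1 + (A - 1) * i / 20000 for i in range(1, 20001)]
assert max(h(x) for x in xs) < 0  # the maximum is about -0.1, attained near x = 1.16
```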

Theorem 4

There is no 2.18-competitive incremental Region Choosing algorithm.

4 The greedy algorithm for augmentable problems

In this section, we analyze the greedy algorithm that computes an incremental solution \(\mathbf {S}\) with \(S_{k} =S_{k-1} \cup \{s_{k}\}\), where \(s_{k}\in \arg \max _{s\in U {\setminus }S_{k-1}}f(S_{k-1} \cup \{s\})\) and \(S_{0} =\emptyset \). This algorithm is well-known to have a competitive ratio of \(\frac{e}{e-1}\approx 1.58\) if the objective function f is monotone and submodular [32]. Note that every (non-negative) monotone and submodular function is sub-additive and accountable. On the other hand, in general, the greedy algorithm does not have a bounded competitive ratio for incremental problems.
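For concreteness, the greedy rule can be sketched as follows (the toy modular objective is our own example, and we assume no ties occur):

```python
from itertools import combinations

# Generic greedy for incremental maximization: repeatedly add the element that
# maximizes the objective. A brute-force optimum is included for comparison.
def greedy_order(universe, f):
    order, current = [], set()
    while len(order) < len(universe):
        best = max((s for s in universe if s not in current),
                   key=lambda s: f(current | {s}))
        current.add(best)
        order.append(best)
    return order

def optimum_value(universe, f, k):
    return max(f(set(T)) for T in combinations(universe, k))

# Toy modular objective f(S) = sum of weights; here greedy is optimal for every k.
weights = {"a": 3.0, "b": 2.0, "c": 1.0}
f = lambda S: sum(weights[s] for s in S)
order = greedy_order(set(weights), f)
assert order == ["a", "b", "c"]
assert all(f(set(order[:k])) == optimum_value(set(weights), f, k) for k in (1, 2, 3))
```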

Observation 4

The greedy algorithm has an unbounded competitive ratio for many incremental problems with monotone, sub-additive, and accountable objectives, e.g., for the Knapsack, the Weighted Independent Set, and the Disjoint Paths problem.

Proof

We construct a knapsack instance with three types of items for small \(\varepsilon >0\) and any \(k\in {\mathbb {N}}\): one item of size and value both \(1-\varepsilon \), k items of size \(2\varepsilon \) and value \(1-2\varepsilon \), and k items of size and value both \(\varepsilon ^2\). The greedy algorithm first takes the largest item. After that, no item of size \(2\varepsilon \) fits alongside it, so adding such an item does not increase the maximum value that can be packed into the knapsack of capacity 1 (an item of value \(1-2\varepsilon \) alone is worth less than the largest item), whereas each of the smallest items still fits and further increases it. Greedy therefore continues with the smallest items. Consequently, the greedy solution has value below 1 for cardinality k, while the optimum value approaches k.
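This behavior can be simulated directly (a sketch with the concrete choices \(\varepsilon = 0.1\) and \(k = 4\); the brute-force knapsack oracle is only feasible for such small instances):

```python
from itertools import combinations

# Simulation of the knapsack instance from the proof (our choices: eps = 0.1, k = 4).
eps, k = 0.1, 4
# (size, value) pairs: one big item, k medium items, k tiny items.
items = [(1 - eps, 1 - eps)] + [(2 * eps, 1 - 2 * eps)] * k + [(eps ** 2, eps ** 2)] * k

def f(S):
    """Max value packable into the unit-capacity knapsack using only items indexed by S."""
    best = 0.0
    for r in range(len(S) + 1):
        for T in combinations(S, r):
            if sum(items[i][0] for i in T) <= 1:
                best = max(best, sum(items[i][1] for i in T))
    return best

# Run greedy for k steps: it takes the big item, then tiny items only.
S = []
for _ in range(k):
    S.append(max((i for i in range(len(items)) if i not in S),
                 key=lambda i: f(S + [i])))

opt_k = max(f(list(T)) for T in combinations(range(len(items)), k))
assert f(S) < 1                            # greedy is stuck below value 1 ...
assert opt_k > k * (1 - 2 * eps) - 1e-9    # ... while the optimum is ~ k(1 - 2eps)
```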

We can reproduce the same behavior for the Weighted Independent Set problem by choosing a star of degree k plus k isolated vertices as our input graph, where the center of the star has weight \(1-\varepsilon \), the leaves of the star have weight \(1-2\varepsilon \), and the isolated vertices have weight \(\varepsilon ^2\).

Similarly, for Disjoint Paths, we can choose a long path and many isolated edges as input. The endpoints of the path form a pair of weight \(1-\varepsilon \), each edge along the path is a pair of weight \(1-2\varepsilon \), and each isolated edge has weight \(\varepsilon ^2\). \(\square \)

We will now define a subclass of incremental problems where the competitive ratio of greedy can be bounded. Observe that submodularity of a function \(f:2^{U} \rightarrow {\mathbb {R}}_{\ge 0}\) implies that, for every \(S, T \subseteq U \) with \(T {\setminus } S \ne \emptyset \), there exists an element \(t \in T {\setminus } S\) with \(f(S\cup \{t\}) - f(S) \ge (f(S \cup T) - f(S))/\left| T {\setminus } S\right| \). Accordingly, we can define the following relaxation of submodularity.

Definition 2

We say that \(f:2^{U} \rightarrow {\mathbb {R}}_{\ge 0}\) is \(\alpha \)-augmentable for an \(\alpha > 0\), if for every \(S, T \subseteq U \) with \(T {\setminus } S \ne \emptyset \) there exists an element \(t \in T {\setminus } S\) with

$$\begin{aligned} f(S\cup \{t\}) - f(S) \ge \frac{f(S \cup T) - \alpha f(S)}{\left| T\right| }. \end{aligned}$$
(6)
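On small ground sets, condition (6) can be checked by brute force (a sketch; the modular example function at the end is our own):

```python
from itertools import chain, combinations

# Brute-force test of alpha-augmentability (Definition 2) on a small ground set.
def subsets(U):
    return [frozenset(c) for c in
            chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

def is_augmentable(U, f, alpha):
    for S in subsets(U):
        for T in subsets(U):
            if not (T - S):
                continue
            need = (f(S | T) - alpha * f(S)) / len(T)
            if not any(f(S | {t}) - f(S) >= need - 1e-12 for t in T - S):
                return False
    return True

# A modular function (here: a sum of weights) is submodular, hence 1-augmentable.
w = {1: 2.0, 2: 3.0, 3: 5.0}
f = lambda S: sum(w[e] for e in S)
assert is_augmentable(set(w), f, 1)
```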

4.1 Characterization of augmentability

We first observe that augmentability is a relaxation of submodularity in the following sense. Note also that \(\alpha \)-augmentability implies \(\alpha '\)-augmentability for \(\alpha ' > \alpha \).

Proposition 2

Every submodular set function is 1-augmentable. Not every 1-augmentable set function is submodular.

Proof

Assume that f is submodular, i.e., for all sets \(X,Y\subseteq U \), we have

$$\begin{aligned} f(X)+f(Y)\ge f(X\cup Y)+f(X \cap Y). \end{aligned}$$

For every pair of sets \(S,T\subseteq U \) with \(T{\setminus } S=\{e_{1},\dots ,e_{k}\}\ne \emptyset \) it then follows that

$$\begin{aligned} \sum _{i=1}^{k}f(S\cup \{e_{i}\})&\ge f(S)+f(S\cup \{e_{1},e_{2}\})+\sum _{i=3}^{k}f(S\cup \{e_{i}\})\\&\ge 2f(S)+f(S\cup \{e_{1},e_{2},e_{3}\})+\sum _{i=4}^{k}f(S\cup \{e_{i}\})\\&\cdots \\&\ge (k-1)f(S)+f(S\cup T). \end{aligned}$$

This implies that there exists \(t\in T{\setminus } S\) with

$$\begin{aligned} f(S\cup \{t\})\ge \frac{f(S\cup T)+(k-1)f(S)}{k}=\frac{f(S\cup T)-f(S)}{k}+f(S). \end{aligned}$$

We rewrite this to obtain

$$\begin{aligned} f(S\cup \{t\})-f(S) \ge \frac{f(S\cup T)-f(S)}{|T{\setminus } S|} \ge \frac{f(S\cup T)-f(S)}{|T|}, \end{aligned}$$

i.e., f is 1-augmentable.

It remains to provide a 1-augmentable function that is not submodular. One such function \(f:2^U \rightarrow {\mathbb {R}}_{\ge 0}\) with \(U = \{e_1,e_2,e_3\}\) is given by

$$\begin{aligned} f(S) := {\left\{ \begin{array}{ll} 7, &{} \mathrm {if\,}S=U,\\ 6, &{} \mathrm {if\,}S=\{e_1,e_2\},\\ 5, &{} \mathrm {if\,}e_1 \in S\mathrm {\,and\,} e_2 \notin S,\\ |S|, &{} \mathrm {otherwise}.\\ \end{array}\right. } \end{aligned}$$

It is easy to verify that this function is 1-augmentable, but it is not submodular (not even sub-additive), since \(f(\{e_1,e_3\}) + f(\{e_2\}) < f(\{e_1,e_2,e_3\})\). \(\square \)
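Both claims about this example can be verified exhaustively (a sketch re-encoding the function above with \(U = \{1,2,3\}\)):

```python
from itertools import combinations

# Brute-force check that the example above is 1-augmentable but not submodular.
U = frozenset({1, 2, 3})

def f(S):
    S = frozenset(S)
    if S == U:
        return 7
    if S == frozenset({1, 2}):
        return 6
    if 1 in S and 2 not in S:
        return 5
    return len(S)

def subsets(X):
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def one_augmentable():
    return all(
        any(f(S | {t}) - f(S) >= (f(S | T) - f(S)) / len(T) for t in T - S)
        for S in subsets(U) for T in subsets(U) if T - S)

def submodular():
    return all(f(X) + f(Y) >= f(X | Y) + f(X & Y)
               for X in subsets(U) for Y in subsets(U))

assert one_augmentable()
assert not submodular()   # f({1,3}) + f({2}) = 6 < 7 = f(U)
```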

We show the following relationship between 1-augmentability and accountability. We will see later (Observation 5) that no such relationship holds for \(\alpha \)-augmentability with \(\alpha \ge 2\).

Proposition 3

Every 1-augmentable function \(f:2^U \rightarrow {\mathbb {R}}_{\ge 0}\) is accountable.

Proof

Consider an arbitrary subset \(T \subseteq U\) with \(n := |T|\) and let \(S_0 := \emptyset \), \(T_0 := T\). For \(i= 1,\dots ,n-1\), we incrementally construct sets \(S_i := S_{i-1} \cup \{t_i\}\) and \(T_i := T_{i-1} {\setminus } \{t_i\}\), where we can choose \(t_i \in T_{i-1}{\setminus } S_{i-1} = T_{i-1}\) such that

$$\begin{aligned} f(S_{i-1} \cup \{t_i\}) - f(S_{i-1}) \ge \frac{f(S_{i-1} \cup T_{i-1}) - f(S_{i-1})}{|T_{i-1}|}, \end{aligned}$$

because f is 1-augmentable. With \(T_{i-1} \cup S_{i-1} = T\) and \(|T_{i-1}| = n - i + 1\), this yields

$$\begin{aligned} f(S_i) \ge \frac{f(T) - f(S_{i-1})}{n-i+1} + f(S_{i-1}) = \frac{f(T) + (n-i)f(S_{i-1})}{n-i+1}, \end{aligned}$$

or

$$\begin{aligned} f(T) - f(S_i) \le \left( \frac{n-i}{n-i+1}\right) \left( f(T) - f(S_{i-1})\right) . \end{aligned}$$

Letting \(\{t_n\} := T_{n-1}\) and applying this inequality repeatedly yields

$$\begin{aligned} f(T) - f(T{\setminus } \{t_n\}) \le \bigl (f(T) - f(S_0)\bigr )\cdot \prod _{i=1}^{n-1}\frac{n-i}{n-i+1} \le \frac{f(T)}{|T|}, \end{aligned}$$

where we used that f is non-negative.

We conclude that \(t_n\) satisfies the requirement for accountability. \(\square \)

Our definition of \(\alpha \)-augmentability is meaningful in the sense that it induces an interesting subclass of incremental problems. In particular, we show that it captures the objective function of Maximum (Weighted) d-Dimensional Matching for \(d=\alpha \).

Proposition 4

The objective function of Maximum (Weighted) d-Dimensional Matching is always d-augmentable, but, for every \(\alpha < d\), there exists an instance where it is not \(\alpha \)-augmentable.

Proof

Let an instance of Maximum Weighted d-Dimensional Matching be given by \(V_{1},\dots ,V_{d}\) and \(E\subseteq V_{1}\times \dots \times V_{d}\) with weights \(w :E \rightarrow \mathbb {R}_{\ge 0}\). Consider \(S,T\subseteq E\) with \(T{\setminus } S\ne \emptyset \), and let \(M_{S}\), \(M_{S\cup T}\) be of maximum weight among all matchings that are subsets of S, and \(S\cup T\), respectively. In particular, \(f(S)=w(M_{S})\) and \(f(S\cup T)=w(M_{S\cup T})\).

For \(e=(v_{1},\dots ,v_{d})\in M_{S\cup T}{\setminus } M_{S}\), let \(M_{S,e}:=\{e'=(v_{1}',\dots ,v_{d}')\in M_{S}\,|\,v_{i}=v_{i}'\text { for some }i\in \{1,\dots ,d\}\}\) denote the set of elements of \(M_{S}\) that intersect e. Observe that, since \(M_{S\cup T}\) is a d-dimensional matching, we have \(\bigcup _{e\in M_{S\cup T}{\setminus } M_{S}}M_{S,e}\subseteq M_{S}{\setminus } M_{S\cup T}\), and that every \(e'\in M_{S}\) is contained in at most d such intersection sets. We can lower bound \(f(S\cup \{e\})\) by replacing in \(M_{S}\) the elements \(M_{S,e}\) by the single element e. This yields

$$\begin{aligned} \sum _{e\in M_{S\cup T}{\setminus } M_{S}}\!\!\!\!\!\!\!\!\!\left( f(S\!\cup \!\{e\})\!-\!f(S)\right)&\ge \sum _{e\in M_{S\cup T}{\setminus } M_{S}}\left( w(e)-w(M_{S,e})\right) \\&\ge w(M_{S\cup T}{\setminus } M_{S})-d\cdot w\Bigl ({\textstyle \bigcup }_{e\in M_{S\cup T}{\setminus } M_{S}}M_{S,e}\Bigr )\\&\ge w(M_{S\cup T}{\setminus } M_{S})-d\cdot w(M_{S}{\setminus } M_{S\cup T})\\&=w(M_{S\cup T})\!-\!w(M_{S}\cap M_{S\cup T})\!-\!d\!\cdot \! w(M_{S}{\setminus } M_{S\cup T})\\&=w(M_{S\cup T})-w(M_{S})-(d-1)\cdot w(M_{S}{\setminus } M_{S\cup T})\\&\ge w(M_{S\cup T})-d\cdot w(M_{S})\\&=f(S\cup T)-d\cdot f(S). \end{aligned}$$

Note that every summand on the left-hand side of this expression is non-negative, and summands for \(e\in S\) are zero. It follows that, if \(M_{S \cup T} {\setminus } S \ne \emptyset \), there must exist \(t\in (M_{S\cup T}{\setminus } S)\subseteq T{\setminus } S\) with

$$\begin{aligned} f(S\cup \{t\})-f(S)&\ge \frac{f(S\cup T)-d\cdot f(S)}{|M_{S\cup T}{\setminus } S|}\\&\ge \frac{f(S\cup T)-d\cdot f(S)}{|T{\setminus } S|}\\&\ge \frac{f(S\cup T)-d\cdot f(S)}{|T|}. \end{aligned}$$

If \(M_{S \cup T} {\setminus } S = \emptyset \), then \(f(S \cup T) = f(S)\), and we can choose any \(t \in T {\setminus } S \ne \emptyset \). Thus, f is d-augmentable.

To see that f is not \(\alpha \)-augmentable for \(\alpha \in (0,d)\), consider an (unweighted) instance of Maximum d-Dimensional Matching with \(V_{i}=\{v_{i,1},\dots ,v_{i,d}\}\) for \(i\in \{1,\dots ,d\}\) and \(E=S\cup T\) with \(S:=\{(v_{1,1},v_{2,2},\dots ,v_{d,d})\}\) and \(T:=\bigcup _{i=1}^{d}\{(v_{1,i},\dots ,v_{d,i})\}\). For every \(e\in T{\setminus } S\), we have \(f(S\cup \{e\})=f(S)=1\), and thus

$$\begin{aligned} f(S\cup \{e\})-f(S)=0<\frac{d-\alpha }{d}=\frac{f(S\cup T)-\alpha f(S)}{|T|}. \end{aligned}$$

Hence, f is not \(\alpha \)-augmentable. \(\square \)

We show next that 2-augmentability specifically captures the problems Maximum (Weighted) (b-)Matching and Maximum Bridge-Flow. Recall that, for \(b:V \rightarrow {\mathbb {N}}\), a b-matching in an unweighted graph \(G=(V,E)\) is a set of edges \(M \subseteq E\), such that the degree of every vertex v in \((V,M)\) is upper bounded by b(v). The problem Maximum Weighted b-Matching is defined analogously to Maximum Weighted Matching.

Proposition 5

The objective function of Maximum (Weighted) b-Matching is always 2-augmentable, but, for every \(\alpha < 2\), there exists an instance where it is not \(\alpha \)-augmentable.

Proof

Let \(G=(V,E)\) be a graph, let \(w:E \rightarrow {\mathbb {R}}_{\ge 0}\) be edge weights, let \(b:V \rightarrow {\mathbb {N}}\) be vertex capacities, and let \(f :2^E \rightarrow {\mathbb {R}}_{\ge 0}\) be such that f(S) denotes the maximum weight of a b-matching in the subgraph \((V,S)\) of G. Consider two edge sets \(S,T\subseteq E\) with \(T{\setminus } S\ne \emptyset \) and let \(M_S\subseteq S\) be a maximum weight b-matching in the graph \((V,S)\), i.e., no vertex \(v\in V\) is incident to more than b(v) edges in \(M_S\) and \(w(M_S) = f(S)\), where \(w(X) := \sum _{e\in X} w(e)\). We let \(d_{M_S}:V \rightarrow {\mathbb {N}}\) denote the vertex degrees in the subgraph \((V,M_S)\), and we define \(l_{M_S}(v)\) to be the weight that we lose from the b-matching \(M_S\) if we need to reduce b(v) by one, i.e., for every \(v\in V\) we define

$$\begin{aligned} l_{M_S}(v) := {\left\{ \begin{array}{ll} 0, &{} \text {if }d_{M_S}(v) < b(v), \\ \min _{e=\{v,u\}\in M_S} w(e), &{} \text {if }d_{M_S}(v) = b(v). \end{array}\right. } \end{aligned}$$

Now take any edge \(e=\{u,v\}\) of the maximum weight b-matching \(M_{S\cup T}\) in the graph \((V,S \cup T)\), and assume that we need to add e to \(M_S\) without violating vertex capacities, i.e., we may first need to remove edges from \(M_S\) to make room for e. If e is already part of \(M_S\), we do not need to change the matching, and, in particular, the weight of the matching remains unchanged. Otherwise, by definition of \(l_{M_S}\), we can ensure that the change in weight of the b-matching is at least

$$\begin{aligned} w(e) - l_{M_S}(u) - l_{M_S}(v). \end{aligned}$$

Observe that, since \(e \in M_{S\cup T}\), the weight values yielded by \(l_{M_S}(u)\) and \(l_{M_S}(v)\) must originate from edges in \(M_S {\setminus } M_{S \cup T}\). Therefore, if we sum the above change over all edges in \(M_{S \cup T} {\setminus } M_S\), and let \(b'(v)\) denote the degree of v in the subgraph \((V, M_{S \cup T}{\setminus } M_S)\), we obtain

$$\begin{aligned}&w(M_{S \cup T}{\setminus } M_S) - \sum _{v\in V} b'(v)l_{M_S}(v) \\&\quad \ge w(M_{S \cup T}{\setminus } M_S) - 2w(M_S {\setminus } M_{S \cup T}) \\&\quad = w(M_{S \cup T}) - w(M_{S \cup T} \cap M_S) - 2w(M_S {\setminus } M_{S \cup T}) \\&\quad = w(M_{S \cup T}) - w(M_S) - w(M_S {\setminus } M_{S \cup T}) \\&\quad \ge w(M_{S \cup T}) - 2w(M_S) \\&\quad = f(S \cup T) - 2f(S). \end{aligned}$$

Since \(M_S\) is a maximum weight b-matching in \((V,S)\), no edge in S can have a positive contribution to this sum. If this expression is still positive, there must be an edge \(e \in M_{S \cup T}{\setminus } S\) that increases the weight of \(M_S\) by at least

$$\begin{aligned} (f(S \cup T) - 2f(S)) / |M_{S \cup T}{\setminus } S| \ge (f(S \cup T) - 2f(S)) / |T {\setminus } S|, \end{aligned}$$

and we get

$$\begin{aligned} f(S \cup \{e\}) - f(S) \ge (f(S \cup T) - 2f(S)) / |T {\setminus } S|, \end{aligned}$$

as claimed. Otherwise, the right-hand side is not positive, and the inequality is trivially satisfied by monotonicity of f. Hence, f is 2-augmentable.

To see that the objective function need not be \(\alpha \)-augmentable for \(\alpha < 2\), consider an unweighted path \(P=(V,E)\) of length three with edges \(e_1,e_2,e_3\in E\) in this order along the path. Recall that f(X) is the cardinality of a maximum matching in the subgraph \((V,X)\) for every \(X\subseteq E\). With this, for \(S:=\{e_2\}\) and \(T:=\{e_1,e_3\}\) we have

$$\begin{aligned} f(S \cup \{t\}) - f(S) = 0 < \frac{2 - \alpha }{2} = \frac{f(S \cup T) - \alpha f(S)}{|T|}, \end{aligned}$$

for all \(t \in T{\setminus } S\). Thus f is not \(\alpha \)-augmentable for \(\alpha < 2\). \(\square \)
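The path counterexample can be checked directly (a sketch; we encode the path on vertices 0,…,3 with edges \(e_1=(0,1)\), \(e_2=(1,2)\), \(e_3=(2,3)\)):

```python
from itertools import combinations

# The path counterexample above, checked numerically.
edges = {1: (0, 1), 2: (1, 2), 3: (2, 3)}

def f(X):
    """Maximum matching size using only the edges indexed by X."""
    best = 0
    for r in range(len(X) + 1):
        for M in combinations(X, r):
            vs = [v for i in M for v in edges[i]]
            if len(vs) == len(set(vs)):   # no shared endpoints
                best = max(best, r)
    return best

S, T = {2}, {1, 3}
assert f(S | T) == 2 and f(S) == 1
# No single edge of T helps: all marginal gains are zero ...
assert all(f(S | {t}) - f(S) == 0 for t in T)
# ... hence condition (6) fails for every alpha < 2, since 0 < (2 - alpha)/2.
```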

Proposition 6

The objective function of Maximum Bridge-Flow is 2-augmentable but not \(\alpha \)-augmentable for \(\alpha < 2\).

Proof

Recall that for any subset \(S \subseteq C\), f(S) is the value of the maximum flow using edges in \(E {\setminus }(C {\setminus }S)\). For any set \(X \subseteq C\), let \(G_X\) be the graph that contains all the edges of X, plus all the edges in G that are not in C. Let \(f_S\) be some maximum flow in \(G_S\), and let \(\text {val} (f_S)\) be its value. By definition of the Maximum Bridge-Flow objective function we have \(\text {val} (f_S) = f(S)\).

Now, \(f_S\) can also be viewed as a flow in \(G_{S \cup T}\). Let \(G^S_{S \cup T}\) be the residual graph in \(G_{S \cup T}\) formed by flow \(f_S\). Let \(f_r\) be the maximum flow in \(G^S_{S \cup T}\) (r for residual), and let \(\text {val} (f_r)\) be its value.

By the properties of residual graphs, we know that \(f(S \cup T) = f(S) + \text {val} (f_r)\). Rearranging we get that \(\text {val} (f_r) = f(S \cup T) - f(S)\). Now, by the property of flows, the flow \(f_r\) can be decomposed into source-sink paths and cycles, and we can define \(f_r\) to be a max flow in \(G^S_{S \cup T}\) that contains only paths (no cycles), because such a max flow must always exist. There are two types of paths to consider in the decomposition of \(f_r\): some use backwards edges in S, while others use only forward edges in \(S \cup T\). Note that the total capacity of all backwards residual edges in \(G^S_{S \cup T}\) is \(\text {val} (f_S) = f(S)\). Thus, if we let \(f^{nb}_r\) be the sub-flow of \(f_r\) that uses no backwards edges (nb for no backwards edges), and we let \(\text {val} (f^{nb}_r)\) be its value, then we have

$$\begin{aligned} \text {val} (f^{nb}_r) \ge \text {val} (f_r) - f(S) = f(S \cup T) - 2f(S). \end{aligned}$$
(7)

Now, note that the flow \(f^{nb}_r\) is decomposed into paths which each cross the cut C exactly once, because by the problem definition the cut C is directed one way, and \(f^{nb}_r\) contains no backwards edges with which to go back to the source side of the cut. Moreover, none of these paths use any edges in S; if such a path existed, then it would use no edges in \(C {\setminus }S\) (because it crosses the cut exactly once), so it could have been added to \(f_S\) in \(G_S\), which contradicts \(f_S\) being the maximum flow in \(G_S\). Thus, every flow-path in \(f^{nb}_r\) uses a single edge in \(T {\setminus }S\) and no other edges in C. Now, for any edge \(e \in T\), let \(f^{nb}_r(e)\) be the flow on e in \(f^{nb}_r\). We now argue that \(f(S \cup \left\{ e \right\} ) - f(S) \ge f^{nb}_r(e)\). To see this, let \(G^S_{S \cup \left\{ e \right\} }\) be the residual graph of \(G_{S \cup \left\{ e \right\} }\) defined by flow \(f_S\). By the properties of residual graphs we have that \(f(S \cup \left\{ e \right\} ) - f(S)\) is precisely the value of the maximum flow in \(G^S_{S \cup \left\{ e \right\} }\). But note that all the flow-paths in \(f^{nb}_r\) that go through edge e do not go through any other edge in T, and so they are also valid flow-paths in \(G^S_{S \cup \left\{ e \right\} }\). Thus, the value of the maximum flow in \(G^{S}_{S \cup \left\{ e \right\} }\) is at least \(f^{nb}_r(e)\), as desired.

Since every flow-path in \(f^{nb}_r\) only goes through a single edge in T we have \(\text {val} (f^{nb}_r) = \sum _{e \in T}f^{nb}_r(e)\). Also, since \(f_S\) is a maximum flow in \(G_S\), we have \(f^{nb}_r(e)=0\) for \(e \in S\), and thus \(\text {val} (f^{nb}_r) = \sum _{e \in T{\setminus } S}f^{nb}_r(e)\). Thus, there is some edge \(e \in T\) with \(f^{nb}_r(e) \ge \text {val} (f^{nb}_r) / |T{\setminus } S|\), so by the argument in the paragraph above, \(f(S \cup \left\{ e \right\} ) - f(S) \ge \text {val} (f^{nb}_r) / |T {\setminus } S|\). Equation (7) then completes the proof.

Fig. 2 Example of a Maximum Bridge-Flow instance with unit capacities that is not 1-augmentable

To see that the objective function need not be \(\alpha \)-augmentable for \(\alpha < 2\), consider the graph \(G=(V,E)\) in Fig. 2, where all arcs have capacity 1, and the directed cut is \(C=\{e_1,e_2,e_3\}\). Recall that the objective function of the Bridge-Flow problem is defined such that f(X) is the value of a maximum flow in the graph \((V,(E{\setminus } C)\cup X)\) for \(X\subseteq C\). With this, for \(S:=\{e_2\}\) and \(T:=\{e_1,e_3\}\) we have

$$\begin{aligned} f(S \cup \{t\}) - f(S) = 0 < \frac{2 - \alpha }{2} = \frac{f(S \cup T) - \alpha f(S)}{|T|}, \end{aligned}$$

for all \(t \in T{\setminus } S\). Thus f is not \(\alpha \)-augmentable for \(\alpha < 2\). \(\square \)

4.2 Upper bound for the greedy algorithm

We now show the first part of Theorem 2, i.e., we show an upper bound on the competitive ratio of the greedy algorithm for incremental problems with monotone and \(\alpha \)-augmentable objective functions. Note that, by Proposition 2, our result strengthens the known upper bound of \(\frac{e}{e-1}\) for submodular objectives, by establishing the same bound for the slightly larger class of 1-augmentable set functions.

Theorem 5

For every maximization problem with monotone and \(\alpha \)-augmentable objective, the greedy algorithm is \(\alpha \frac{e^\alpha }{e^\alpha - 1}\)-competitive.

Proof

Fix any \(k \in \{1,\dots ,|U|\}\) and, for \(i \in \{0,\dots ,k\}\), let \(S_{i} \) be our greedy incremental solution after i elements have been added. Then, for \(i \in \{1,\dots ,k\}\), \(\alpha \)-augmentability applied with \(S = S_{i-1} \) and \(T = S^{\star }_{k} \) guarantees the existence of an element \(t \in S^{\star }_{k} {\setminus } S_{i-1} \) such that

$$\begin{aligned} f(S_{i-1} \cup \left\{ t \right\} ) - f(S_{i-1}) \ge \frac{f(S_{i-1} \cup S^{\star }_{k}) - \alpha f(S_{i-1})}{k} \ge \frac{f^{\star }_{k} - \alpha f(S_{i-1})}{k}, \end{aligned}$$
(8)

where we used monotonicity of f.

Since we construct \(S_{i} \) greedily, we have \(f(S_{i}) \ge f(S_{i-1} \cup \left\{ t \right\} )\). With (8) this yields

$$\begin{aligned} f(S_{i}) \ge \left( \frac{\alpha }{k} - 1\right) \left( \frac{1}{\alpha } f^{\star }_{k} - f(S_{i-1}) \right) + \frac{1}{\alpha } f^{\star }_{k}, \end{aligned}$$

or, equivalently,

$$\begin{aligned} \frac{1}{\alpha } f^{\star }_{k} - f(S_{i}) \le \left( 1 - \frac{\alpha }{k}\right) \left( \frac{1}{\alpha } f^{\star }_{k} - f(S_{i-1}) \right) . \end{aligned}$$

Applying this inequality repeatedly, we obtain

$$\begin{aligned} \frac{1}{\alpha } f^{\star }_{k} - f(S_{i}) \le \left( 1 - \frac{\alpha }{k}\right) ^{i}\left( \frac{1}{\alpha } f^{\star }_{k} - f(S_{0}) \right) \le e^{-\frac{ \alpha i}{ k}} \frac{1}{\alpha } f^{\star }_{k}, \end{aligned}$$
(9)

where we used that \(f(S_0) \ge 0\) and \(1 + x \le e^x\) for \(x \in {\mathbb {R}}\).

Finally, setting \(i = k\) in (9) yields

$$\begin{aligned} f(S_{k}) \ge \frac{1}{\alpha } \bigl (1 - e^{-\alpha }\bigr ) f^{\star }_{k}, \end{aligned}$$

or \(\frac{f^{\star }_{k}}{f(S_{k})} \le \alpha \frac{e^{\alpha }}{e^\alpha - 1}\). \(\square \)
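For orientation, the guarantee \(\alpha \frac{e^{\alpha }}{e^{\alpha }-1}\) can be evaluated numerically (a sketch):

```python
import math

# The guarantee alpha * e^alpha / (e^alpha - 1) from Theorem 5 for a few values.
bound = lambda a: a * math.exp(a) / (math.exp(a) - 1)

assert abs(bound(1) - math.e / (math.e - 1)) < 1e-12   # ~1.582, the submodular case
assert bound(2) < 2.314                                # the 2-augmentable case
# The guarantee approaches alpha itself as alpha grows (excess is a / (e^a - 1)).
assert all(bound(a) - a < 0.1 for a in range(5, 10))
```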

With this, we can complement Proposition 3 by showing that there is no relationship between accountability and \(\alpha \)-augmentability for \(\alpha \ge 2\).

Observation 5

A monotone and accountable set function need not be \(\alpha \)-augmentable for any \(\alpha \). Conversely, for every \(\alpha \ge 2\), a monotone and \(\alpha \)-augmentable set function need not be accountable.

Proof

The first part of the statement follows directly from Observation 4 and Theorem 5.

For the second part of the statement, consider ground set \(U = \{e_1,e_2,e_3\}\) and the monotone function \(f:2^U \rightarrow {\mathbb {R}}_{\ge 0}\) given by

$$\begin{aligned} f(S) := {\left\{ \begin{array}{ll} 4, &{}\text {if }S = U,\\ 2, &{}\text {if }S = \{e_1\},\\ |S|, &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$

It is easy to verify that f is 2-augmentable and thus \(\alpha \)-augmentable for \(\alpha \ge 2\). However, f is not accountable, since \(f(U {\setminus } \{e\}) = 2 < 4 - \frac{4}{3} = f(U) - \frac{f(U)}{|U|}\) for all \(e \in U\). \(\square \)
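Both properties of this example can be verified by brute force (a sketch with \(U = \{1,2,3\}\), where \(e_i\) is encoded as i):

```python
from itertools import combinations

# Brute-force check: the example above is 2-augmentable but not accountable.
U = frozenset({1, 2, 3})

def f(S):
    S = frozenset(S)
    if S == U:
        return 4
    if S == frozenset({1}):
        return 2
    return len(S)

def subsets(X):
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

# 2-augmentability: condition (6) with alpha = 2 for all S, T with T \ S nonempty.
assert all(
    any(f(S | {t}) - f(S) >= (f(S | T) - 2 * f(S)) / len(T) for t in T - S)
    for S in subsets(U) for T in subsets(U) if T - S)

# Accountability fails at S = U: no element can be removed cheaply enough.
assert all(f(U - {e}) < f(U) - f(U) / len(U) for e in U)
```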

4.3 Lower bounds for the greedy algorithm

We now show the second part of Theorem 2, i.e., we show lower bounds on the competitive ratio of the greedy algorithm. Note that the upper bound of \(\alpha \frac{e^{\alpha }}{e^{\alpha } - 1}\) shown above converges to \(\alpha \) in the limit \(\alpha \rightarrow \infty \). We first establish an asymptotically tight lower bound of \(\alpha \) for \(\alpha \)-augmentable objectives.

Theorem 6

For \(\alpha \in {\mathbb {N}}\), \(\alpha \ge 2\), the greedy algorithm has a competitive ratio of at least \(\alpha \) for the \(\alpha \)-augmentable problem Maximum Weighted \(\alpha \)-Dimensional Matching.

Proof

Consider an instance of Maximum Weighted \(\alpha \)-Dimensional Matching with \(V_{i}=\{v_{i,1},\dots ,v_{i,2\alpha -1}\}\) for \(i\in \{1,\dots ,\alpha \}\) and \({E=E_{1}\cup E_{2}\cup E_{3}}\), where we set \(E_{1}:=\bigcup _{i=1}^{\alpha }\{(v_{1,i},\dots ,v_{\alpha ,i})\}\), \(E_{2}:=\{(v_{1,1},v_{2,2},\dots ,v_{\alpha ,\alpha })\}\), and \(E_{3}:=\bigcup _{i=\alpha +1}^{2\alpha -1}\{(v_{1,i},\dots ,v_{\alpha ,i})\}\). Let \(w(e)=1\) for \(e\in E_{1}\), let \(w(e)=1+\varepsilon \) for \(e\in E_{2}\), and let \(w(e)=\varepsilon \) for \(e\in E_{3}\), where \(\varepsilon >0\) is arbitrarily small. The greedy algorithm computes an incremental solution \(\mathbf {S}\) that first selects the element in \(E_{2}\), then the elements in \(E_{3}\), and finally the elements in \(E_{1}\). In particular, we have \(f(S_{\alpha }) =1+\alpha \varepsilon \) and \(f_{\alpha }^{\star }=\alpha \). The competitive ratio of the greedy algorithm is thus bounded from below by

$$\begin{aligned} \rho \ge \frac{\alpha }{1+\alpha \varepsilon }\rightarrow _{\varepsilon \rightarrow 0}\alpha . \end{aligned}$$

\(\square \)
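This construction can be simulated for small \(\alpha \) (a sketch with \(\alpha = 3\) and \(\varepsilon = 0.01\); vertices \(v_{i,j}\) are encoded as 0-indexed pairs, and the brute-force matching oracle is our own):

```python
from itertools import combinations

# Simulation of the lower-bound instance for alpha = 3 (3-dimensional matching).
a, eps = 3, 0.01
E1 = [tuple((dim, i) for dim in range(a)) for i in range(a)]             # weight 1
E2 = [tuple((dim, dim) for dim in range(a))]                             # weight 1 + eps
E3 = [tuple((dim, i) for dim in range(a)) for i in range(a, 2 * a - 1)]  # weight eps
edges = E1 + E2 + E3
weight = dict([(e, 1.0) for e in E1] + [(e, 1 + eps) for e in E2]
              + [(e, eps) for e in E3])

def f(S):
    """Max weight of a set of pairwise vertex-disjoint tuples within S."""
    best = 0.0
    for r in range(len(S) + 1):
        for M in combinations(S, r):
            vs = [v for e in M for v in e]
            if len(vs) == len(set(vs)):
                best = max(best, sum(weight[e] for e in M))
    return best

# Greedy for a steps: it takes the E2 tuple first, then E3 tuples.
S = []
for _ in range(a):
    S.append(max((e for e in edges if e not in S), key=lambda e: f(S + [e])))

assert abs(f(S) - (1 + a * eps)) < 1e-9   # greedy value at cardinality a
assert f(edges[:a]) == a                  # optimum of cardinality a: all of E1
```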

For submodular objectives, it is well-known that the greedy algorithm has competitive ratio exactly \(\frac{e}{e - 1}\) [32]. By Proposition 2, the corresponding lower bound carries over to 1-augmentable objectives, which implies that our upper bound is tight beyond the class of all submodular functions. We now show that our upper bound is also tight for 2-augmentable objectives, which may be an indication that the bound is tight in general.

To show this lower bound, we construct the following family of instances for the Maximum Bridge-Flow problem. For \(k \in {\mathbb {N}}\), we define a graph \(G_k = (V_k,E_k)\) with designated nodes s and t by (see Fig. 3)

$$\begin{aligned} V_k&:= \left\{ s,t \right\} \cup \left\{ v^1_i,v^4_i\ \right| \ \left. i=1,\dots ,2k\right\} \cup \left\{ v^2_i,v^3_i\ \right| \ \left. i \in \{1,\dots ,4k\}\right\} ,\\ E_k&:= E^1_k \cup E^\infty _k \cup \bigcup _{i=1}^{2k} E_{k,i} \cup \bigcup _{i=1}^{2k} E'_{k,i},\\ E^1_k&:= \left\{ (s,v^2_i),(v^3_{3k+i},t)\ \right| \ \left. i \in \{1,\dots ,k\}\right\} ,\\ E^\infty _k&:= \left\{ (s,v^2_{3k+i}),(v^2_i,v^3_i),(v^2_{3k+i},v^3_{3k+i}),(v^3_{i},t)\ \right| \ \left. i \in \{1,\dots ,k\}\right\} ,\\ E_{k,i}&:= \left\{ (s,v^1_i),(v^1_i,v^2_{k+i}),(v^2_{k+i},v^3_{k+i}),(v^3_{k+i},v^4_i),(v^4_i,t) \right\} \quad \forall i\in \{1,\dots ,2k\},\\ E'_{k,i}&:= \left\{ (v^1_i,v^2_j),(v^3_{3k+j},v^4_{i})\ \right| \ \left. j=1,\dots ,k\right\} \quad \forall i\in \{1,\dots ,2k\}.\\ \end{aligned}$$

The edge capacities \(u_k:E_k \rightarrow {\mathbb {R}}_{\ge 0}\) are given by \(u_k(e)=(\frac{k}{k-1})^{2k+1-i}\) for \(e \in E_{k,i}\), by \(u_k(e)=\frac{1}{k}(\frac{k}{k-1})^{2k+1-i}\) for \(e \in E'_{k,i}\), by \(u_k(e)=1\) for \(e\in E^1_k\), and by \(u_k(e)=\infty \) for \(e\in E^\infty _k\).

For every \(G_k\), we choose a directed s-t-cut \(C_k := \left\{ (v^2_i,v^3_i)\ \right| \ \left. i=1,\dots ,4k\right\} \).
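As a sanity check, the construction of \(G_k\) can be written out programmatically. The following sketch is our own (the node encoding, e.g. "v2_5" for \(v^2_5\), and the helper names are not from the paper); it builds the capacity function \(u_k\) as a dictionary and verifies via breadth-first search that \(C_k\) is indeed a directed s-t-cut, i.e., that every directed s-t-path uses an edge of \(C_k\):

```python
from collections import defaultdict, deque

def build_capacities(k):
    """Capacity function u_k of G_k as a dict (tail, head) -> capacity,
    following the definitions of E^1_k, E^infty_k, E_{k,i}, E'_{k,i}.
    Requires k >= 2, since capacities involve the ratio k/(k-1)."""
    r = k / (k - 1)
    u = {}
    v = lambda level, i: f"v{level}_{i}"
    for i in range(1, k + 1):
        u[("s", v(2, i))] = 1.0                           # E^1_k
        u[(v(3, 3 * k + i), "t")] = 1.0
        u[("s", v(2, 3 * k + i))] = float("inf")          # E^infty_k
        u[(v(2, i), v(3, i))] = float("inf")
        u[(v(2, 3 * k + i), v(3, 3 * k + i))] = float("inf")
        u[(v(3, i), "t")] = float("inf")
    for i in range(1, 2 * k + 1):
        c = r ** (2 * k + 1 - i)
        for e in [("s", v(1, i)), (v(1, i), v(2, k + i)),  # E_{k,i}
                  (v(2, k + i), v(3, k + i)),
                  (v(3, k + i), v(4, i)), (v(4, i), "t")]:
            u[e] = c
        for j in range(1, k + 1):                          # E'_{k,i}
            u[(v(1, i), v(2, j))] = c / k
            u[(v(3, 3 * k + j), v(4, i))] = c / k
    return u

def t_reachable_avoiding(u, cut):
    """BFS from s using only edges outside `cut`; True iff t is reachable."""
    adj = defaultdict(list)
    for a, b in u:
        if (a, b) not in cut:
            adj[a].append(b)
    seen, queue = {"s"}, deque(["s"])
    while queue:
        for y in adj[queue.popleft()]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return "t" in seen
```

Removing the \(4k\) edges of \(C_k\) disconnects t from s, since every node \(v^3_i\) is reachable only through the edge \((v^2_i,v^3_i)\).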

Fig. 3 The lower bound construction \(G_k\) for the greedy algorithm

Without loss of generality, we will assume in the following that all ties in the greedy algorithm are resolved in our favor. Formally, this can be achieved by adding very small offsets to the edge capacities, but we omit this for clarity. Now consider how the greedy algorithm operates on the graph \(G_k\).

Lemma 5

In step \(j\!\in \!\{1,\dots ,2k\}\), the greedy algorithm picks edge \((v_{k+j}^2,\!v_{k+j}^3)\).

Proof

We prove the lemma by induction on j, showing along the way that every picked edge can be fully saturated. Consider first step \(j=1\). Choosing \((v^2_{k+1},v^3_{k+1}) \in C_k\) in the first step results in a possible s-t-flow of \((\frac{k}{k-1})^{2k}\) along the s-t-path in \(E_{k,1}\), thus fully saturating the edge. By construction, selecting \((v^2_{k+i},v^3_{k+i}) \in C_k\) with \(i \in \{2,\dots ,2k\}\) yields less s-t-flow, since these edges have lower capacity. Picking an edge \((v^2_{i},v^3_{i}) \in C_k\) with \(i \in \{1,\dots ,k,3k+1,\dots ,4k\}\) results in a flow value bounded by the sum of the incoming edge capacities at \(v^2_{i}\) if \(i\le k\), or by the sum of the outgoing edge capacities at \(v^3_{i}\) if \(i\ge 3k+1\). In either case, we get a flow of

$$\begin{aligned} 1+\frac{1}{k}\sum _{i=1}^{2k} \left( \frac{k}{k-1}\right) ^{i}&= 1+\frac{1}{k} \left( \frac{\left( \frac{k}{k-1}\right) ^{2k+1}-1}{\frac{k}{k-1}-1}-1\right) \\&= 1+\frac{1}{k} \left( (k-1)\left( \frac{k}{k-1}\right) ^{2k+1}-(k-1)-1\right) \\&= \left( \frac{k}{k-1}\right) ^{2k}. \end{aligned}$$

Thus, no other edge results in more s-t-flow than edge \((v^2_{k+1},v^3_{k+1})\), and with suitable tie-breaking we can ensure that the greedy algorithm picks this edge first, as claimed.

Now assume that the greedy algorithm picked the edges \((v^2_{k+1},v^3_{k+1}), \dots , (v^2_{k+j-1},v^3_{k+j-1})\) before step j, and that these edges can be fully saturated. Then, in step j, it can pick edge \((v^2_{k+j},v^3_{k+j})\) to increase the s-t-flow value by \((\frac{k}{k-1})^{2k+1-j}\) along the s-t-path in \(E_{k,j}\), thus saturating the edge. This is again better than selecting an edge \((v^2_{k+i},v^3_{k+i}) \in C_k\) for \(i\in \{j+1,\dots ,2k\}\), since these edges have lower capacity. For the remaining edges, we need to account for the fact that the edges \((s,v^1_i)\) and \((v^4_i,t)\) for \(i\in \{1,\dots ,j-1\}\) are already saturated, by induction. Therefore, the gain in flow value if we add any edge \((v^2_{i},v^3_{i}) \in C_k\) with \(i\in \{1,\dots ,k,3k+1,\dots ,4k\}\) in step j is

$$\begin{aligned} 1+\frac{1}{k}\sum _{i=1}^{2k+1-j} \left( \frac{k}{k-1}\right) ^{i}&= 1+\frac{1}{k} \left( \frac{\left( \frac{k}{k-1}\right) ^{2k+2-j}-1}{\frac{k}{k-1}-1}-1\right) \\&= 1+\frac{1}{k} \left( (k-1)\left( \frac{k}{k-1}\right) ^{2k+2-j}-(k-1)-1\right) \\&= \left( \frac{k}{k-1}\right) ^{2k+1-j}. \end{aligned}$$

This is again not better than picking edge \((v^2_{k+j},v^3_{k+j})\), and with suitable tie-breaking we can ensure that the greedy algorithm picks \((v^2_{k+j},v^3_{k+j})\), as claimed. \(\square \)
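The closed form of the geometric sum used in both the base case (\(j=1\)) and the inductive step can be checked numerically. The sketch below is our own (function names are not from the paper); it compares both sides of \(1+\frac{1}{k}\sum _{i=1}^{2k+1-j}(\frac{k}{k-1})^{i}=(\frac{k}{k-1})^{2k+1-j}\):

```python
def flow_gain(k, j):
    """Left-hand side: 1 + (1/k) * sum_{i=1}^{2k+1-j} (k/(k-1))^i,
    the flow gained in step j by an edge (v^2_i, v^3_i) outside the middle block."""
    r = k / (k - 1)
    return 1.0 + sum(r ** i for i in range(1, 2 * k + 2 - j)) / k

def closed_form(k, j):
    """Right-hand side: (k/(k-1))^(2k+1-j)."""
    return (k / (k - 1)) ** (2 * k + 1 - j)
```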

With this, we are ready to show the following result, which, together with Proposition 6, implies the second part of Theorem 2.

Theorem 7

The greedy algorithm has competitive ratio at least \(\frac{2e^2}{e^2-1}\approx 2.313\) for Maximum Bridge-Flow.

Proof

By Lemma 5, in the first 2k steps, the greedy algorithm picks the edges \((v^2_{k+1},v^3_{k+1}), \dots , (v^2_{3k},v^3_{3k})\). Thus, after step 2k, greedy can send an s-t-flow of value

$$\begin{aligned} \sum _{i=1}^{2k} \left( \frac{k}{k-1}\right) ^{i} = \left( \frac{\left( \frac{k}{k-1}\right) ^{2k+1}-1}{\frac{k}{k-1}-1}-1\right) = (k-1)\left( \frac{k}{k-1}\right) ^{2k+1}-k. \end{aligned}$$

On the other hand, the solution of size 2k consisting of the edges \((v^2_{1},v^3_{1})\), \(\dots \), \((v^2_{k},v^3_{k})\), and \((v^2_{3k+1},v^3_{3k+1}), \dots , (v^2_{4k},v^3_{4k})\) results in an (optimal) flow value of

$$\begin{aligned} 2k+2\sum _{i=1}^{2k} \left( \frac{k}{k-1}\right) ^{i} = 2(k-1)\left( \frac{k}{k-1}\right) ^{2k+1}. \end{aligned}$$

This corresponds to a competitive ratio of

$$\begin{aligned} \frac{2(k-1)\left( \frac{k}{k-1}\right) ^{2k+1}}{(k-1)\left( \frac{k}{k-1}\right) ^{2k+1}-k} = \frac{2\left( \frac{k}{k-1}\right) ^{2k}}{\left( \frac{k}{k-1}\right) ^{2k}-1} = \frac{2\left( \frac{k}{k-1}\right) ^{2(k-1)+2}}{\left( \frac{k}{k-1}\right) ^{2(k-1)+2}-1}. \end{aligned}$$

Substituting \(x := k-1\) and using the identity \(\lim _{x \rightarrow \infty } (1+1/x)^x = e\), we get the lower bound on the competitive ratio of the greedy algorithm claimed in Theorem 2 in the limit:

$$\begin{aligned} \lim _{x \rightarrow \infty } \frac{2\left( \frac{x+1}{x}\right) ^{2x}\left( \frac{x+1}{x}\right) ^2}{\left( \frac{x+1}{x}\right) ^{2x}\left( \frac{x+1}{x}\right) ^2-1} = \lim _{x \rightarrow \infty } \frac{2e^2\left( \frac{x+1}{x}\right) ^2}{e^2\left( \frac{x+1}{x}\right) ^2-1} = \frac{2e^2}{e^2 - 1}. \end{aligned}$$

\(\square \)
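The limit computed above is easy to check numerically. The sketch below is our own (names are not from the paper); it evaluates the ratio \(2r/(r-1)\) with \(r=(\frac{k}{k-1})^{2k}\) for growing k and compares it to \(\frac{2e^2}{e^2-1}\):

```python
import math

def greedy_ratio(k):
    """Competitive ratio of the greedy algorithm on G_k after step 2k:
    2r/(r-1) with r = (k/(k-1))^(2k). Requires k >= 2."""
    r = (k / (k - 1)) ** (2 * k)
    return 2 * r / (r - 1)

# The claimed limit 2e^2/(e^2 - 1) ~ 2.313.
LIMIT = 2 * math.e ** 2 / (math.e ** 2 - 1)
```

The ratio increases with k and approaches the limit from below; for \(k=100\) it already exceeds 2.3.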

5 Conclusion

We have defined a formal framework that captures a large class of incremental problems and allows for incremental solutions with bounded competitive ratios. We also defined a new and meaningful subclass consisting of problems with \(\alpha \)-augmentable objective functions for which the greedy algorithm has a bounded competitive ratio. We hope that our results inspire future work on incremental problems from the perspective of competitive analysis.

The following open problems are left for future research:

  1. Close the gap between our bounds of 2.618 and 2.18 for the best-possible competitive ratio of (deterministic) incremental algorithms.

  2. Extend our lower bound of 2.18 to randomized algorithms and/or show that randomized algorithms can perform strictly better than deterministic algorithms in terms of competitive ratios.

  3. For \(\alpha \)-augmentable objectives, we showed that our bound of \(\alpha \frac{e^\alpha }{e^\alpha -1}\) for the competitive ratio of the greedy algorithm is tight when \(\alpha \in \{1, 2\}\) and when \(\alpha \rightarrow \infty \). Prove or disprove that the bound is tight for all \(\alpha \in {\mathbb {N}}\).

  4. Determine whether or not the greedy algorithm is a best-possible incremental algorithm for \(\alpha \)-augmentable objectives.