
Robust monotone submodular function maximization

Full Length Paper · Mathematical Programming, Series B

Abstract

We consider a robust formulation, introduced by Krause et al. (J Artif Intell Res 42:427–486, 2011), of the classical cardinality constrained monotone submodular function maximization problem, and give the first constant factor approximation results. The robustness considered is w.r.t. adversarial removal of up to \(\tau \) elements from the chosen set. For the fundamental case of \(\tau =1\), we give a deterministic \((1-1/e)-1/\varTheta (m)\) approximation algorithm, where m is an input parameter and the number of queries scales as \(O(n^{m+1})\). In the process, we develop a deterministic \((1-1/e)-1/\varTheta (m)\) approximate greedy algorithm for bi-objective maximization of (two) monotone submodular functions. Generalizing the ideas and using a result from Chekuri et al. (in: FOCS ’10, IEEE, pp 575–584, 2010), we show a randomized \((1-1/e)-\epsilon \) approximation for constant \(\tau \) and \(\epsilon \le \frac{1}{\tilde{\varOmega }(\tau )}\), making \(O(n^{1/\epsilon ^3})\) queries. Further, for \(\tau \ll \sqrt{k}\), we give a fast and practical 0.387-approximation algorithm. Finally, we also give a black box result for the much more general setting of robust maximization subject to an Independence System.


References

  1. Badanidiyuru, A., Vondrák, J.: Fast algorithms for maximizing submodular functions. In: SODA ’14, pp. 1497–1514. SIAM (2014)

  2. Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust optimization. Princeton University Press, Princeton (2009)

  3. Bertsimas, D., Brown, D., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53(3), 464–501 (2011)

  4. Bertsimas, D., Sim, M.: Robust discrete optimization and network flows. Math. Program. 98(1–3), 49–71 (2003)

  5. Bertsimas, D., Sim, M.: The price of robustness. Oper. Res. 52(1), 35–53 (2004)

  6. Bogunovic, I., Mitrovic, S., Scarlett, J., Cevher, V.: Robust submodular maximization: a non-uniform partitioning approach. In: ICML (2017)

  7. Buchbinder, N., Feldman, M.: Deterministic algorithms for submodular maximization problems. CoRR. arXiv:1508.02157 (2015)

  8. Buchbinder, N., Feldman, M., Naor, J.S., Schwartz, R.: A tight linear time (1/2)-approximation for unconstrained submodular maximization. In: FOCS ’12, pp. 649–658 (2012)

  9. Calinescu, G., Chekuri, C., Pál, M., Vondrák, J.: Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput. 40(6), 1740–1766 (2011)

  10. Chekuri, C., Vondrák, J., Zenklusen, R.: Dependent randomized rounding via exchange properties of combinatorial structures. In: FOCS ’10, pp. 575–584. IEEE (2010)

  11. Dobzinski, S., Vondrák, J.: From query complexity to computational complexity. In: STOC ’12, pp. 1107–1116. ACM (2012)

  12. Feige, U.: A threshold of ln n for approximating set cover. J. ACM (JACM) 45(4), 634–652 (1998)

  13. Feige, U., Mirrokni, V.S., Vondrak, J.: Maximizing non-monotone submodular functions. SIAM J. Comput. 40(4), 1133–1153 (2011)

  14. Feldman, M., Naor, J.S., Schwartz, R.: A unified continuous greedy algorithm for submodular maximization. In: FOCS ’11, pp. 570–579. IEEE (2011)

  15. Feldman, M., Naor, J.S., Schwartz, R.: Nonmonotone submodular maximization via a structural continuous greedy algorithm. In: Automata, Languages and Programming, pp. 342–353. Springer (2011)

  16. Gharan, S.O., Vondrák, J.: Submodular maximization by simulated annealing. In: SODA ’11, pp. 1098–1116. SIAM (2011)

  17. Globerson, A., Roweis, S.: Nightmare at test time: robust learning by feature deletion. In: Proceedings of the 23rd International Conference on Machine learning, pp. 353–360. ACM (2006)

  18. Golovin, D., Krause, A.: Adaptive submodularity: Theory and applications in active learning and stochastic optimization. J. Artif. Intell. Res. 42, 427–486 (2011)

  19. Guestrin, C., Krause, A., Singh, A.P.: Near-optimal sensor placements in Gaussian processes. In: Proceedings of the 22nd International Conference on Machine learning, pp. 265–272. ACM (2005)

  20. Iwata, S., Fleischer, L., Fujishige, S.: A combinatorial strongly polynomial algorithm for minimizing submodular functions. J. ACM (JACM) 48(4), 761–777 (2001)

  21. Krause, A., Guestrin, C., Gupta, A., Kleinberg, J.: Near-optimal sensor placements: maximizing information while minimizing communication cost. In: Proceedings of the 5th International Conference On Information processing in Sensor Networks, pp. 2–10. ACM (2006)

  22. Krause, A., McMahan, H.B., Guestrin, C., Gupta, A.: Robust submodular observation selection. J. Mach. Learn. Res. 9, 2761–2801 (2008)

  23. Leskovec, J., Krause, A., Guestrin, C., Faloutsos, C., VanBriesen, J., Glance, N.: Cost-effective outbreak detection in networks. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge discovery and Data Mining, pp. 420–429. ACM (2007)

  24. Liu, Y., Wei, K., Kirchhoff, K., Song, Y., Bilmes, J.: Submodular feature selection for high-dimensional acoustic score spaces. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7184–7188. IEEE (2013)

  25. Nemhauser, G.L., Wolsey, L.A.: Best algorithms for approximating the maximum of a submodular set function. Math. Oper. Res. 3(3), 177–188 (1978)

  26. Nemhauser, G.L., Wolsey, L.A., Fisher, M.L.: An analysis of approximations for maximizing submodular set functions–I. Math. Program. 14(1), 265–294 (1978)

  27. Schrijver, A.: A combinatorial algorithm minimizing submodular functions in strongly polynomial time. J. Comb. Theory, Ser. B 80(2), 346–355 (2000)

  28. Sviridenko, M.: A note on maximizing a submodular set function subject to a knapsack constraint. Oper. Res. Lett. 32(1), 41–43 (2004)

  29. Thoma, M., Cheng, H., Gretton, A., Han, J., Kriegel, H.P., Smola, A.J., Song, L., Philip, S.Y., Yan, X., Borgwardt, K.M.: Near-optimal supervised feature selection among frequent subgraphs. In: SDM, pp. 1076–1087. SIAM (2009)

  30. Vondrák, J.: Optimal approximation for the submodular welfare problem in the value oracle model. In: STOC ’08, pp. 67–74. ACM (2008)

  31. Vondrák, J.: Symmetry and approximability of submodular maximization problems. SIAM J. Comput. 42(1), 265–304 (2013)

  32. Vondrák, J., Chekuri, C., Zenklusen, R.: Submodular function maximization via the multilinear relaxation and contention resolution schemes. In: STOC ’11, pp. 783–792. ACM (2011)

Acknowledgements

This work was partially supported by ONR Grant N00014-17-1-2194. The authors would like to thank all the anonymous reviewers for their useful suggestions and comments on all the versions of the paper so far. In addition, RU would also like to thank Jan Vondrák for a useful discussion and pointing out a relevant result in [10].

Author information

Corresponding author

Correspondence to Rajan Udwani.

A Appendix

1.1 A.1

Lemma 17

(Nemhauser, Wolsey [25, 26]) For all \(\alpha \ge 0\), the greedy algorithm terminated after \(\alpha k\) steps yields a set A with \(f(A)\ge \beta (0,\alpha )f({ OPT}(k,N,0))\).

Proof

Let \(A_i\) be the set at iteration i of the greedy algorithm. Then by monotonicity, we have,

$$\begin{aligned} f(A_i\cup { OPT}(k,N,0))\ge f({ OPT}(k,N,0)) \end{aligned}$$

and by submodularity,

$$\begin{aligned} \sum _{e\in { OPT}(k,N,0)-A_i} f(e|A_i)\ge f({ OPT}(k,N,0)|A_i)\ge f({ OPT}(k,N,0))-f(A_i) \end{aligned}$$

Hence, there exists an element e in \({ OPT}(k,N,0)-A_i\) such that,

$$\begin{aligned} f(e|A_i)\ge (f({ OPT}(k,N,0))-f(A_i))/k \end{aligned}$$

Hence, we get the recurring inequality,

$$\begin{aligned} f(A_{i+1})\ge f(A_{i}+e)\ge f(A_i)+(f({ OPT}(k,N,0))-f(A_i))/k \end{aligned}$$

The above implies that the difference between the value of the greedy set and optimal solution decreases by a factor of \((1-1/k)\) at each step, so after \(\alpha k\) steps,

$$\begin{aligned} f(A_{\alpha k})&\ge (1-(1-1/k)^{\alpha k})f({ OPT}(k,N,0))\\ \implies f(A_{\alpha k})&\ge \beta (0,\alpha ) f({ OPT}(k,N,0)) \end{aligned}$$

\(\square \)
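The greedy guarantee above is easy to check numerically. The following is a minimal sketch (not from the paper): `greedy` and the toy coverage function `cover` are illustrative names, and brute force over all size-k sets stands in for \(f({ OPT}(k,N,0))\); we verify \(f(A)\ge (1-(1-1/k)^{k})\,f({ OPT})\), i.e. Lemma 17 with \(\alpha =1\).

```python
from itertools import combinations

def greedy(f, ground, k):
    """Plain greedy: repeatedly add the element with the largest marginal gain."""
    A = []
    for _ in range(k):
        best = max((e for e in ground if e not in A),
                   key=lambda e: f(A + [e]) - f(A))
        A.append(best)
    return A

# Toy monotone submodular function: coverage of a small universe.
cover = {'a': {1, 2}, 'b': {2, 3}, 'c': {4}, 'd': {1, 4}, 'e': {5, 6}}
def f(S):
    return len(set().union(*(cover[e] for e in S))) if S else 0

k = 2
A = greedy(f, list(cover), k)
opt = max(f(list(S)) for S in combinations(cover, k))   # brute-force OPT(k, N, 0)
# Lemma 17 with alpha = 1: f(A) >= (1 - (1 - 1/k)^k) * f(OPT)
assert f(A) >= (1 - (1 - 1/k)**k) * opt
```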

1.2 A.2

Lemma 18

There is no polynomial-time algorithm with approximation ratio greater than \((1-1/e)\) for P2 unless \(P=NP\). In the value oracle model, the same threshold holds for algorithms that make only a polynomial number of queries.

Proof

We will give a strict reduction from the classical problem P1 (for which the above hardness result holds [12, 26]) to the robust problem P2. Consider an instance of P1, denoted by \((k,N,0)\). We intend to reduce this to an instance of P2 on an augmented ground set \(N\cup X\) i.e. \((k+\tau ,N\cup X,\tau )\).

The set \(X=\{x_1,\ldots ,x_\tau \}\) is such that \(f(x_i)=(k+1) f(a_1)\) (and recall that \(f(a_1)\ge f(a_i),\forall a_i\in N\)) and \(f(x_i|S)=f(x_i)\) for every i and \(S\subset N\cup X\) not containing \(x_i\). We will show that \(g({ OPT}(k+\tau ,N\cup X,\tau ))=f({ OPT}(k,N,0))\).

First, note that for an arbitrary set \(S=S_N \cup S_X\), such that \(|S|=k+\tau \) and \(S_X=S\cap X\), we have that every minimizer contains \(S_X\). This follows by definition of X, since for any two subsets P, Q of S with \(|P|=|Q|= k\) and P disjoint with X but \(Q\cap X \ne \emptyset \), we have by monotonicity \(f(Q)\ge f(x_i)= (k+1)f(a_1)> kf(a_1)\) and by submodularity \(kf(a_1) \ge f(P)\). This implies that X is the minimizer of \({ OPT}(k,N,0)\cup X\) and hence \(f({ OPT}(k,N,0))\le g({ OPT}(k+\tau ,N\cup X,\tau ))\).

For the other direction, consider the set \({ OPT}(k+\tau ,N\cup X,\tau )\) and define,

$$\begin{aligned} M={ OPT}(k+\tau ,N\cup X,\tau ) \cap X \end{aligned}$$

Next, observe that carving out an arbitrary set B of size \(\tau -|M|\) from \({ OPT}(k+\tau ,N\cup X,\tau )-M\) will give us the set

$$\begin{aligned} C={ OPT}(k+\tau ,N\cup X,\tau )-M-B \end{aligned}$$

of size \(k+\tau -(|M| +\tau -|M|)=k\). Also note that by design, \(C\subseteq N\) and hence \(f(C)\le f({ OPT}(k,N,0))\), but by definition, we have that \(g({ OPT}(k+\tau ,N\cup X,\tau ))\le f(C)\). This gives us the other direction and we have \(g({ OPT}(k+\tau ,N\cup X,\tau ))=f({ OPT}(k,N,0))\).

To complete the reduction we need to show how to obtain an \(\alpha \)-approximate solution to \((k,N,0)\) given an \(\alpha \)-approximate solution to \((k+\tau ,N\cup X,\tau )\). Let \(S=S_N \cup S_X\) be such a solution i.e. a set of size \(k+\tau \) with \(S_X=S\cap X\), such that \(g(S)\ge \alpha g({ OPT}(k+\tau ,N\cup X,\tau ))\). Now consider an arbitrary subset \(S'_N\) of \(S_N\) of size \(\tau -|S_X|\). Observe that \(|S_N-S'_N|=|S|-|S_X|-(\tau -|S_X|)=k\) and further \(f(S_N-S'_N)\ge g(S)\ge \alpha g({ OPT}(k+\tau ,N\cup X,\tau ))=\alpha f({ OPT}(k,N,0))\), by definition. Hence the set \(S_N-S'_N\subseteq N\) is an \(\alpha \)-approximate solution to \((k,N,0)\) that, given S, can be obtained in polynomial time/queries. \(\square \)
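As a sanity check, the reduction can be simulated on a tiny instance. This is an illustrative sketch, not the paper's code: a small weighted-coverage \(f\) on N, \(k=2\), \(\tau =1\), brute force for both optima, and a modular extension to X so that each \(x_i\) contributes \((k+1)f(a_1)\) independently of the rest.

```python
from itertools import combinations

# Base instance (k, N, 0): a small coverage function on N.
cover = {'a': {1, 2}, 'b': {2, 3}, 'c': {3, 4}}
def f_N(S):
    return len(set().union(*(cover[e] for e in S))) if S else 0

k, tau = 2, 1
N = list(cover)
f_a1 = max(f_N([e]) for e in N)          # f(a_1): value of the best singleton

# Augmented ground set N ∪ X: each x_i adds (k+1) f(a_1), independent of the rest.
X = [f'x{i}' for i in range(tau)]
def f(S):
    return f_N([e for e in S if e in cover]) + (k + 1) * f_a1 * sum(e in X for e in S)

def g(S, t):
    # Robust value: worst case over adversarial removal of t elements.
    return min(f([e for e in S if e not in Z]) for Z in combinations(S, t))

opt_P1 = max(f_N(list(S)) for S in combinations(N, k))              # f(OPT(k, N, 0))
opt_P2 = max(g(list(S), tau) for S in combinations(N + X, k + tau)) # g(OPT(k+tau, ...))
assert opt_P1 == opt_P2   # the two optima coincide, as the lemma shows
```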

1.3 A.3 Tight analysis of Algorithm 3

Theorem 19

The 0.387-algorithm is \(\frac{1}{2}\beta \big (0.5,\frac{k-2}{k-1}\big )\) (\(> 0.387\) asymptotically) approximate.

Proof

Let \({ OPT}=g({ OPT}(k,N,1))\), A be the output of the 0.387-algorithm and \(a'_1\) be the first element added to A apart from \(a_1\). The case \(z=a_1\) is straightforward since \(f(A-a_1)\ge \beta (0,1)f({ OPT}(k-1,N-a_1,0))\ge \beta (0,1){ OPT}\) where the last inequality follows from Lemma 2. So assume \(z\ne a_1\). Further, let \(f(z|A-a_1-z)=\eta f(A-a_1)\) which implies that \(f(a'_1)\ge f(z)\ge f(z|A-a_1-z)= \eta f(A-a_1)\) and now from Lemma 5 with N replaced by \(N-a_1\), A replaced by \(A-a_1\) and thus k replaced by \(k-1\), \(S=a'_1\) with \(s=1\) and \(l=k-1-|S|=k-2\), we get,

$$\begin{aligned} f(A-a_1)\ge \beta \left( \eta ,\frac{k-2}{k-1}\right) f({ OPT}(k-1,N-a_1,0)) \end{aligned}$$

This together with Lemma 2 implies, \(f(A-a_1)\ge \beta (\eta ,\frac{k-2}{k-1}){ OPT}\). Also, we have by definition,

$$\begin{aligned} f(A-a_1-z)=(1-\eta )f(A-a_1)\ge (1-\eta )\beta (\eta ,\frac{k-2}{k-1}){ OPT} \end{aligned}$$

Further, we have,

$$\begin{aligned} g(A)&\ge \max \{ f(a_1), f(A-a_1-z)\}\\&\ge \max \{ f(z|A-a_1-z), f(A-a_1-z)\}\\&\ge \max \left\{ \eta \beta \left( \eta ,\frac{k-2}{k-1}\right) , (1-\eta )\beta \left( \eta ,\frac{k-2}{k-1}\right) \right\} { OPT} \\&\ge 0.5\beta \left( 0.5,\frac{k-2}{k-1}\right) { OPT}\quad [\text {for } \eta =0.5]\\&\xrightarrow {k\rightarrow \infty } 0.387 { OPT} \end{aligned}$$

\(\square \)

We now give an instance where the above analysis is tight. Let the algorithm start with a maximum value element \(a_1\), then pick \(a_2\), and then add the set C, such that the output of the algorithm is \(a_1\cup a_2 \cup C\), with C being a set of size \(k-2\). Let \(f(a_1)=1, f(a_2)=1, f(C)=1\) with \(f(a_1+C)=1, f(a_1+a_2)=2, f(a_2+C)=2\) i.e. C copies \(a_1\). Hence \(f(a_1+a_2+C)=2\) and \(g(a_1+a_2+C)=f(a_1+C)=1\).

Let \({ OPT}(k,N,1)\) include \(a_2\), a copy \(a'_2\) of \(a_2\) (so \(f(a'_2)=1, f(a_2+a'_2)=1\)) and a set D of \(k-2\) elements of value \(\frac{1}{(k-2)\beta (0,1)}\) each, such that \(f({ OPT}(k,N,1))=1+(k-2)\frac{1}{(k-2)\beta (0,1)}=1+\frac{e}{e-1}=\frac{2}{\beta (0.5,1)}\). Observe that the small value elements are all minimizers and \(g({ OPT}(k,N,1))\approx \frac{2}{\beta (0.5,1)}\) as k becomes large. Note that \(f(D)=\frac{f(C)}{\beta (0,1)} \) and we can have sets C and D as above based on the worst-case example for the greedy algorithm given in [26]. This proves that the inequality in Lemma 5 is tight.
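The output-side construction above (C "copying" \(a_1\)) can be realized concretely. A minimal sketch under illustrative assumptions: coverage regions `u1`, `u2` are invented names, and we verify \(f(a_1+a_2+C)=2\) while \(g(a_1+a_2+C)=f(a_1+C)=1\).

```python
k = 6
# a1 covers region u1, a2 covers region u2, and each element of C also covers u1.
cover = {'a1': {'u1'}, 'a2': {'u2'}, **{f'c{i}': {'u1'} for i in range(k - 2)}}
def f(S):
    return len(set().union(*(cover[e] for e in S))) if S else 0

def g(S):
    # tau = 1: the adversary removes the single worst element
    return min(f([e for e in S if e != z]) for z in S)

A = list(cover)                     # the algorithm's output a1 + a2 + C
assert f(['a1', 'a2']) == 2         # f(a1 + a2) = 2
assert f(A) == 2 and g(A) == 1      # removing a2 is the adversary's worst case
```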

1.4 A.4 Analysis of Algorithm 4

Theorem 20

Algorithm 4 is \(0.5547-\varOmega (1/k)\) approximate.

Proof

Let A denote the output and \(A_0\subset A\) denote \(\{a_1,a_2\}\). Due to submodularity, there exist at most two distinct \(x\in A\) with \(f(x|A-x)>\frac{f(A)}{3}\). Additionally, for every \(x\not \in A_0\), we have that \(f(x|S)\le f(a_1)\) and \(f(x|S)\le f(a_2|a_1)\) for an arbitrary subset S of A containing \(A_0\) with \(x\not \in S\). This implies that \(2f(x|S)\le f(A_0)\le f(S)\), which gives us that \(f(x|S)\le \frac{f(S+x)}{3}\).

Note that due to the condition in Phase 1, the algorithm ignores \(a_1\) even if it is not a minimizer, as long as its marginal is more than a third of the value of the set at that iteration. At the end of Phase 1, if \(a_2\) has marginal more than a third of the set value, then it is ignored until its marginal decreases. Phase 3 adds greedily (without ignoring any element added). As argued above, no element other than \(a_1,a_2\) can have marginal more than a third of the set value at any iteration, so during Phase 2 we have that \(a_2\) is also a minimizer.

We will now proceed by splitting into cases. Denote \(g({ OPT}(k,N,1))\) as \({ OPT}\) and recall from Lemma 2, \({ OPT}\le f({ OPT}(k-1,N-a_i,0))\) for \(i\in \{1,2\}\). Also, let the set of elements added to \(A_0\) during Phase 1 be \(U=\{u_1,\ldots ,u_p\}\), similarly elements added during Phases 2 and 3 be \(V=\{v_1,\ldots ,v_q\},W=\{w_1,\ldots ,w_r\}\) respectively, with indexing in order of addition to the set. Finally, let \(\alpha _p=\frac{p-2}{k-1}\, ,\, \alpha _q=\frac{q-1}{k-1}\, ,\, \alpha _r=\frac{r}{k-1}\, ,\, \alpha =\alpha _p+\alpha _q+\alpha _r=\frac{k-5}{k-1}\), and assume \(k\ge 8\).

Case 1 Phases 2 and 3 do not occur, i.e. \(p=k-2,\ q=r=0\).

First, we have,

$$\begin{aligned} f(A-a_1)&\overset{(a)}{\ge } f(a_2) + \beta \left( 0,\frac{k-2}{k-1}\right) (f({ OPT}(k-1,N-a_1,0))-f(a_2)) \nonumber \\&\ge \beta \left( 0,\frac{k-2}{k-1}\right) f({ OPT}(k-1,N-a_1,0)) \nonumber \\&\overset{(b)}{\ge } \beta \left( 0,\frac{k-2}{k-1}\right) { OPT} , \end{aligned}$$
(16)

where (a) follows from Lemma 1 and (b) from Lemma 2. This deals with the case \(z=a_1\). If \(z=u_p\), we have,

$$\begin{aligned} f(A-u_p)\ge f(A-u_p-a_1)&\overset{(c)}{\ge } \beta \left( 0,\frac{k-3}{k-1}\right) f({ OPT}(k-1,N-a_1,0))\\&\ge \beta \left( 0,\frac{k-3}{k-1}\right) { OPT}, \end{aligned}$$

where (c) is like (16) above but with \(k-2\) replaced by \(k-3\) when using Lemma 1. Finally, let \(z\not \in \{a_1,u_p\}\), then due to the Phase 1 termination criteria, we have \(f(a_1|A-a_1-u_p)\ge f(A-u_p)/3\), which implies that,

$$\begin{aligned} 2 f(a_1|A-a_1-u_p)&\ge f(A-a_1-u_p) \end{aligned}$$
(17)

Now letting \(\eta =\frac{f(z|A-a_1-u_p-z)}{f(A-a_1-u_p)}\), we have by submodularity \(f(z|A-z)\le \eta f(A-a_1-u_p)\) and by definition \(f(A-a_1-u_p-z)=(1-\eta )f(A-a_1-u_p)\). Using the above we get,

$$\begin{aligned}&f(A-z)\overset{(d)}{\ge } \max \{f(a_1), f(A-u_p-z)\} \nonumber \\&\ge \max \{f(z|A-a_1-u_p-z), f(A-a_1-u_p-z)+ f(a_1|A-a_1-u_p-z)\} \nonumber \\&\ge \max \{\eta f(A-a_1-u_p), (1-\eta )f(A-a_1-u_p)+ f(a_1|A-a_1-u_p)\} \nonumber \\&\ge \max \left\{ \eta , (1-\eta )+ \frac{1}{2}\right\} f(A-a_1-u_p) \end{aligned}$$
(18)

where (d) follows from monotonicity and the fact that \(a_1\in A-z\) and \(A-u_p-z\subset A-z\). Now, from Lemma 5 with \(S=\{a_2,u_1\}\), \(l=p-2\), k replaced by \(k-1\), N by \(N-a_1\) and \(s=1\), we have, \(f(A-a_1-u_p)\ge \beta (\eta ,\alpha _p)f({ OPT}(k-1,N-a_1,0))\ge \beta (\eta ,\alpha _p){ OPT}\). Substituting this in (18) above we get,

$$\begin{aligned} f(A-z)&\ge \max \left\{ \eta , \frac{3}{2}-\eta \right\} \beta (\eta ,\alpha _p) { OPT} \nonumber \\&\ge \frac{3}{4}\beta \left( \frac{3}{4},\alpha _p\right) { OPT}\quad [\eta =3/4] \nonumber \\&> \beta (0,\alpha _p) { OPT}=\beta \left( 0,\frac{k-4}{k-1}\right) { OPT} \end{aligned}$$
(19)

Case 2 Phase 2 occurs but Phase 3 does not, i.e. \(p+q=k-2\) and \(q>0\).

As stated earlier, during Phase 2, \(a_2\) is the minimizer of \(A_0\cup U\cup (V-v_q)\). We have \(g(A)\ge g(A-v_q)= f(A-v_q-a_2)=f(a_1+U) + f(V-v_q|a_1+U)\). Further, since the addition rule in Phase 2 ignores \(a_2\), we have from Lemma 1, \(f(V-v_q|a_1+U)\ge \beta (0,\alpha _q)({ OPT}-f(a_1+U))\), and \(f(a_1+U)\ge \beta (0,\alpha _p){ OPT}\) follows from the previous case (to see this, suppose that the algorithm was terminated after Phase 1 and note that \(z=a_2\) falls under the scenario \(z\not \in \{a_1,u_p\}\)). Using this,

$$\begin{aligned} g(A-v_q)&\ge f(a_1+U)+\beta (0,\alpha _q)({ OPT}-f(a_1+U)) \nonumber \\ \frac{f(A-z)}{{ OPT}}&\ge (1-\beta (0,\alpha _q))\beta (0,\alpha _p) +\beta (0,\alpha _q) \nonumber \\&= \beta (0,\alpha )= \beta \left( 0,\frac{k-5}{k-1}\right) \end{aligned}$$
(20)

Case 3 Phase 3 occurs, i.e. \(r>0\).

We consider two sub-cases, \(z\in A-W\) and \(z\in W\). Suppose \(z\in A-W\). Due to \(f(z|A-W-z)\le f(A-W)/3\) we have, \(f(A-W)\le \frac{3}{2}f(A-W-z)\). Also, \(f(W|A-W-z)\ge f(W|A-W)\ge \beta (0,\alpha _r)({ OPT}-f(A-W))\). Then using this along with the previous cases,

$$\begin{aligned} f(A-z)&= f(A-W-z)+ f(W|A-W-z)\\&\ge f(A-W-z)+ \beta (0,\alpha _r)({ OPT}-f(A-W))\\&\ge f(A-W-z)+ \beta (0,\alpha _r)\left( { OPT}-\frac{3}{2}f(A-W-z)\right) \\&\ge \left( 1-\frac{3}{2}\beta (0,\alpha _r)\right) f(A-W-z) + \beta (0,\alpha _r){ OPT}\\&\ge \left( 1-\frac{3}{2}\beta (0,\alpha _r)\right) \beta (0,\alpha _p+\alpha _q) { OPT} + \beta (0,\alpha _r){ OPT}\quad [\text {from } (19),(20)]\\ \frac{f(A-z)}{{ OPT}}&\ge 0.5-\frac{3}{2e^{\alpha }} + \frac{1}{2} (e^{-(\alpha _p+\alpha _q)} +e^{-\alpha _r})\\&\ge 0.5-\frac{3}{2e^\alpha }+e^{-\alpha /2}\quad [\text {for } \alpha _r=\alpha _p+\alpha _q=\alpha /2]\\&= 0.5- \frac{3}{2e^\frac{k-4}{k-1}}+e^{-\frac{k-4}{2(k-1)}}\xrightarrow {k\rightarrow \infty } 0.5547 \end{aligned}$$

Now, suppose \(z\in W\), then note that for \(p+q\ge 6\) we have either \(p\ge 3\), and hence due to greedy additions \(f(z|A-z)\le f(\{u_1,u_2,u_3\})/3\le f(A-W)/3\), or \(q\ge 3\), and again due to greedy additions \(f(z|A-z)\le f(a_1+U\cup \{v_1,v_2\})/3\le f(A-W)/3\).

If \(q>0\), then note that \(f(A-W)\ge f(A-W-v_q)\ge \frac{3}{2} f(A-W-v_q-a_2)\) due to the Phase 2 termination conditions. Now we reduce the analysis to look like the previous sub-case through the following,

$$\begin{aligned} f(A-z)&= f(A-W)+f(W|A-W)-f(z|A-z)\\&\ge f(A-W)+ \beta (0,\alpha _r) ({ OPT}-f(A-W)) -f(z|A-z)\\&\ge (1-\beta (0,\alpha _r))f(A-W) +\beta (0,\alpha _r){ OPT} -f(A-W)/3\\&\ge \left( 1-\frac{3}{2}\beta (0,\alpha _r)\right) \frac{2}{3}f(A-W) +\beta (0,\alpha _r){ OPT}\\&\ge \left( 1-\frac{3}{2}\beta (0,\alpha _r)\right) f(A-W-v_q-a_2) +\beta (0,\alpha _r){ OPT} \end{aligned}$$

Since \(f(A-W-v_q-a_2)\ge \beta \Big (0,\frac{p+q-3}{k-1}\Big ) { OPT}\) from (20), this leads to the same ratio asymptotically as when \(z\in A-W\). The case \(q=0\) can be dealt with similarly by using \(f(A-W)\ge f(A-W-u_p)\ge \frac{3}{2} f(A-W-u_p-a_1)\).

If \(p+q<6\), then let \(f(z|A-z)=\eta f(A)\). Now we have, \(f(A-W)\ge f(A_0)=f(a_1)+f(a_2|a_1)\ge 2f(z|A-z)= 2\eta f(A)\), which further implies that \(f(A-W+w_1)\ge 3 \eta f(A)\) since \(z\in W\). Then proceeding as in Lemma 5 with k replaced by \(k-1\), \(S=A-W+w_1\) and hence \(s=3\) and finally \(l=k-|S|=k-(p+q+2+1)\ge k-8\) gives us,

$$\begin{aligned} f(A)\ge \beta (3\eta ,\frac{k-8}{k-1})f({ OPT}(k-1,N,0))\ge \beta (3\eta ,\frac{k-8}{k-1}) { OPT} \end{aligned}$$

Then using Lemma 6 we have, \(f(A-z)\ge (1-\eta )\beta (3\eta ,\frac{k-8}{k-1}){ OPT} \ge \beta (0,\frac{k-8}{k-1}){ OPT}\). \(\square \)
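The asymptotic constant in Case 3 can be checked numerically. A quick sketch (not from the paper) evaluating the document's explicit expression \(0.5-\frac{3}{2e^{\alpha }}+e^{-\alpha /2}\) with \(\alpha =\frac{k-4}{k-1}\):

```python
import math

def bound(k):
    """Case 3 ratio: 0.5 - 3/(2 e^alpha) + e^(-alpha/2) with alpha = (k-4)/(k-1)."""
    a = (k - 4) / (k - 1)
    return 0.5 - 1.5 * math.exp(-a) + math.exp(-a / 2)

limit = 0.5 - 1.5 / math.e + math.exp(-0.5)   # alpha -> 1 as k -> infinity
assert abs(bound(10**6) - limit) < 1e-4       # the bound approaches the limit
assert round(limit, 4) == 0.5547              # the 0.5547 constant of Theorem 20
```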


Cite this article

Orlin, J.B., Schulz, A.S. & Udwani, R. Robust monotone submodular function maximization. Math. Program. 172, 505–537 (2018). https://doi.org/10.1007/s10107-018-1320-2
