Scenario Submodular Cover

  • Nathaniel Grammel
  • Lisa Hellerstein (email author)
  • Devorah Kletenik
  • Patrick Lin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10138)


We introduce the Scenario Submodular Cover problem. In this problem, the goal is to produce a cover with minimum expected cost, with respect to an empirical joint probability distribution given as input by a weighted sample of realizations. The problem is a counterpart to the Stochastic Submodular Cover problem studied by Golovin and Krause [6], which assumes independent variables. We give two approximation algorithms for Scenario Submodular Cover. Assuming an integer-valued utility function and integer weights, the first achieves an approximation factor of \(O(\log Qm)\), where m is the sample size and Q is the goal utility. The second, simpler algorithm achieves an approximation factor of \(O(\log QW)\), where W is the sum of the weights. We achieve our bounds by building on previous related work (in [4, 6, 15]) and by exploiting a technique we call the Scenario-OR modification. We apply these algorithms to a new problem, Scenario Boolean Function Evaluation. Our results have applications to other problems involving distributions that are explicitly specified by their support.
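As background for readers unfamiliar with submodular cover: the scenario and stochastic variants above generalize the classical deterministic problem, for which the greedy rule of Wolsey [17] (pick the item with the best marginal utility per unit cost until the goal utility Q is reached) gives a logarithmic approximation. The sketch below is illustrative only and is not the paper's algorithm; the function names and the small set-cover instance are assumptions made for the example.

```python
def greedy_submodular_cover(items, cost, utility, Q):
    """Classical greedy for deterministic submodular cover (Wolsey-style).

    Repeatedly selects the unchosen item maximizing marginal utility per
    unit cost until utility(chosen) >= Q. For an integer-valued monotone
    submodular utility this is an O(log Q)-approximation.
    """
    chosen = []
    current = utility(chosen)
    while current < Q:
        best, best_ratio = None, 0.0
        for x in items:
            if x in chosen:
                continue
            gain = utility(chosen + [x]) - current  # marginal utility of x
            if gain > 0 and gain / cost[x] > best_ratio:
                best, best_ratio = x, gain / cost[x]
        if best is None:
            raise ValueError("goal utility Q is not reachable")
        chosen.append(best)
        current = utility(chosen)
    return chosen

# Example instance: weighted set cover, a special case of submodular cover.
sets = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}
cost = {"a": 1, "b": 2, "c": 2}

def coverage(chosen):
    # Number of ground elements covered: monotone and submodular.
    return len(set().union(set(), *(sets[s] for s in chosen)))

picked = greedy_submodular_cover(list(sets), cost, coverage, Q=5)
```

In the scenario setting the choice of the next item may additionally depend on observed outcomes and on the weighted sample of realizations, which is what the paper's Scenario-OR modification addresses; the deterministic greedy rule above is only the starting point.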



The work in this paper was supported by NSF Grant 1217968. L. Hellerstein thanks Andreas Krause for useful discussions at ETH, and for directing our attention to the bound of Streeter and Golovin for min-sum submodular cover. We thank an anonymous referee for suggesting the Kosaraju trick.


  1. Bellala, G., Bhavnani, S., Scott, C.: Group-based active query selection for rapid diagnosis in time-critical situations. IEEE Trans. Inf. Theor. 58, 459–478 (2012)
  2. Chen, Y., Javdani, S., Karbasi, A., Bagnell, J.A., Srinivasa, S.S., Krause, A.: Submodular surrogates for value of information. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, Texas, USA, 25–30 January 2015, pp. 3511–3518 (2015)
  3. Chen, Y., Javdani, S., Karbasi, A., Bagnell, J.A., Srinivasa, S.S., Krause, A.: Submodular surrogates for value of information (long version) (2015)
  4. Cicalese, F., Laber, E., Saettler, A.M.: Diagnosis determination: decision trees optimizing simultaneously worst and expected testing cost. In: Proceedings of the 31st International Conference on Machine Learning, pp. 414–422 (2014)
  5. Deshpande, A., Hellerstein, L., Kletenik, D.: Approximation algorithms for stochastic Boolean function evaluation and stochastic submodular set cover. In: Symposium on Discrete Algorithms (2014)
  6. Golovin, D., Krause, A.: Adaptive submodularity: theory and applications in active learning and stochastic optimization. J. Artif. Intell. Res. 42, 427–486 (2011)
  7. Golovin, D., Krause, A., Ray, D.: Near-optimal Bayesian active learning with noisy observations. In: 24th Annual Conference on Neural Information Processing Systems (NIPS), pp. 766–774 (2010)
  8. Grammel, N., Hellerstein, L., Kletenik, D., Lin, P.: Scenario submodular cover. CoRR abs/1603.03158 (2016)
  9. Guillory, A., Bilmes, J.A.: Simultaneous learning and covering with adversarial noise. In: Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, 28 June–2 July 2011, pp. 369–376 (2011)
  10. Gupta, A., Nagarajan, V., Ravi, R.: Approximation algorithms for optimal decision trees and adaptive TSP problems. In: Abramsky, S., Gavoille, C., Kirchner, C., Meyer auf der Heide, F., Spirakis, P.G. (eds.) ICALP 2010. LNCS, vol. 6198, pp. 690–701. Springer, Heidelberg (2010). doi:10.1007/978-3-642-14165-2_58
  11. Javdani, S., Chen, Y., Karbasi, A., Krause, A., Bagnell, D., Srinivasa, S.S.: Near optimal Bayesian active learning for decision making. In: Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, 22–25 April 2014, pp. 430–438 (2014)
  12. Kaplan, H., Kushilevitz, E., Mansour, Y.: Learning with attribute costs. In: Symposium on the Theory of Computing, pp. 356–365 (2005)
  13. Kosaraju, S.R., Przytycka, T.M., Borgstrom, R.: On an optimal split tree problem. In: Dehne, F., Sack, J.-R., Gupta, A., Tamassia, R. (eds.) WADS 1999. LNCS, vol. 1663, pp. 157–168. Springer, Heidelberg (1999). doi:10.1007/3-540-48447-7_17
  14. Navidi, F., Kambadur, P., Nagarajan, V.: Adaptive submodular ranking. CoRR abs/1606.01530 (2016)
  15. Streeter, M., Golovin, D.: An online algorithm for maximizing submodular functions. In: Advances in Neural Information Processing Systems, pp. 1577–1584 (2009)
  16. Ünlüyurt, T.: Sequential testing of complex systems: a review. Discrete Appl. Math. 142(1–3), 189–205 (2004)
  17. Wolsey, L.: Maximising real-valued submodular functions: primal and dual heuristics for location problems. Math. Oper. Res. 7(3), 410–425 (1982)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Nathaniel Grammel (1)
  • Lisa Hellerstein (1, email author)
  • Devorah Kletenik (2)
  • Patrick Lin (3)
  1. Department of Computer Science and Engineering, NYU Tandon School of Engineering, Brooklyn, USA
  2. Department of Computer and Information Science, Brooklyn College, City University of New York, New York, USA
  3. Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, USA