
Abstract

A large class of stochastic optimization problems can be modeled as minimizing an objective function f that depends on a decision vector x ∈ X as well as on a random external parameter ω ∈ Ω drawn from a probability distribution π. The value of the objective function is a random variable, and often the goal is to find an x ∈ X that minimizes the expected cost E_ω[f_ω(x)]. Each ω is referred to as a scenario. We consider the case when Ω is large or infinite and we are allowed to sample from π in a black-box fashion. A common method, known as the SAA method (sample average approximation), is to pick sufficiently many independent samples from π and use them to approximate π and, correspondingly, E_ω[f_ω(x)]. This is one of several scenario reduction methods used in practice.
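To make the SAA step concrete, the following is a minimal sketch (in Python, with hypothetical names; it assumes a finite candidate set X that can be searched by enumeration, which is not required by the general framework): the empirical average over N i.i.d. samples stands in for the true expectation E_ω[f_ω(x)].

    def saa_minimize(sample_scenario, f, candidates, num_samples):
        """Minimal SAA sketch: minimize an empirical average of scenario costs.

        sample_scenario(): draws one scenario omega i.i.d. from the black-box
            distribution pi (hypothetical sampling oracle).
        f(omega, x): cost of decision x under scenario omega.
        candidates: the feasible set X, assumed finite and small enough to
            enumerate for this illustration.
        """
        scenarios = [sample_scenario() for _ in range(num_samples)]

        def empirical_cost(x):
            # (1/N) * sum_i f(omega_i, x) approximates E_omega[f_omega(x)].
            return sum(f(w, x) for w in scenarios) / len(scenarios)

        # Solve the sampled (empirical) problem instead of the true one.
        return min(candidates, key=empirical_cost)

For instance, with candidates = [0.0, 0.1, ..., 1.0], f = lambda w, x: (x - w) ** 2, and sample_scenario drawing uniformly from [0, 1], the call returns a grid point close to the true minimizer x = 1/2 once num_samples is large.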

There has been substantial recent interest in two-stage stochastic versions of combinatorial optimization problems that can be modeled in the framework described above. In particular, we are interested in the model where a parameter λ bounds the relative factor by which costs increase if decisions are delayed to the second stage. Although the SAA method has been widely analyzed, the known bounds on the number of samples required for a (1+ε)-approximation depend on the variance of π even when λ is assumed to be a fixed constant. Shmoys and Swamy [13,14] proved, using modifications to the ellipsoid method, that a polynomial number of samples suffices when f can be modeled as a linear or convex program.
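Schematically (in notation of our own choosing, not taken verbatim from the paper), a two-stage problem of this kind can be written as follows, where x is the first-stage decision, y_ω is the recourse chosen after scenario ω is revealed, and second-stage costs are inflated by a factor of at most λ:

    % Schematic two-stage objective with inflation factor \lambda (illustrative notation).
    \min_{x \in X} \; c(x) \;+\; \mathbb{E}_{\omega \sim \pi}\Big[\,
        \min_{y_\omega \,:\, (x, y_\omega) \text{ feasible for } \omega} \lambda\, c(y_\omega) \,\Big]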

In this paper we give a different proof, based on earlier methods of Kleywegt, Shapiro, and Homem-de-Mello [6] and others, that a polynomial number of samples suffices for the SAA method. Our proof is not based on computational properties of f and hence also applies to integer programs. We further show that small variations of the SAA method suffice to obtain a bound on the sample size even when we have only an approximation algorithm to solve the sampled problem. We are thus able to extend a number of algorithms designed for the case when π is given explicitly to the case when π is given as a black-box sampling oracle.
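As a rough illustration only (a generic repeat-and-validate scheme, not necessarily the exact variation analyzed in the paper), an α-approximation oracle for the sampled problem could be combined with SAA as follows; all names are hypothetical:

    def saa_with_approx_solver(sample_scenario, f, approx_solve,
                               num_samples, num_repetitions, num_validation):
        """Generic repeat-and-validate use of an approximate solver with SAA.

        sample_scenario(): draws one scenario i.i.d. from the black-box
            distribution pi (hypothetical sampling oracle).
        f(omega, x): cost of decision x under scenario omega.
        approx_solve(scenarios): any alpha-approximation algorithm for the
            sampled problem min_x (1/N) * sum_i f(omega_i, x).
        """
        # Approximately solve several independently sampled problems.
        candidate_solutions = []
        for _ in range(num_repetitions):
            scenarios = [sample_scenario() for _ in range(num_samples)]
            candidate_solutions.append(approx_solve(scenarios))

        # Re-estimate the expected cost of each candidate on a fresh,
        # independent validation sample and keep the apparent best one.
        validation = [sample_scenario() for _ in range(num_validation)]

        def estimated_cost(x):
            return sum(f(w, x) for w in validation) / len(validation)

        return min(candidate_solutions, key=estimated_cost)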

Keywords

Approximation Algorithm · Stochastic Optimization · Sampling Bound · Stochastic Optimization Problem · Ellipsoid Method

References

1. Dhamdhere, K., Ravi, R., Singh, M.: On Two-Stage Stochastic Minimum Spanning Trees. In: Jünger, M., Kaibel, V. (eds.) IPCO 2005. LNCS, vol. 3509, pp. 321–334. Springer, Heidelberg (2005)
2. Flaxman, A., Frieze, A., Krivelevich, M.: On the random 2-stage minimum spanning tree. In: Proc. of SODA (2005)
3. Gupta, A., Pál, M., Ravi, R., Sinha, A.: Boosted sampling: Approximation algorithms for stochastic optimization. In: Proceedings of the 36th Annual ACM Symposium on Theory of Computing (2004)
4. Gupta, A., Ravi, R., Sinha, A.: An edge in time saves nine: LP rounding approximation algorithms. In: Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science (2004)
5. Immorlica, N., Karger, D., Minkoff, M., Mirrokni, V.: On the costs and benefits of procrastination: Approximation algorithms for stochastic combinatorial optimization problems. In: Proceedings of the 15th Annual ACM-SIAM Symposium on Discrete Algorithms (2004)
6. Kleywegt, A.J., Shapiro, A., Homem-de-Mello, T.: The sample average approximation method for stochastic discrete optimization. SIAM J. on Optimization 12, 479–502 (2001)
7. Mahdian, M.: Facility Location and the Analysis of Algorithms through Factor-Revealing Programs. Ph.D. Thesis, MIT (June 2004)
8. Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, Cambridge (1995)
9. Ruszczynski, A., Shapiro, A. (eds.): Stochastic Programming. Handbooks in Operations Research and Management Science, vol. 10. Elsevier, Amsterdam (2003)
10. Shapiro, A.: Monte Carlo sampling methods. In: Ruszczynski, A., Shapiro, A. (eds.) Stochastic Programming. Handbooks in Operations Research and Management Science, vol. 10. Elsevier, Amsterdam (2003)
11. Ravi, R., Sinha, A.: Hedging uncertainty: Approximation algorithms for stochastic optimization problems. In: Bienstock, D., Nemhauser, G.L. (eds.) IPCO 2004. LNCS, vol. 3064, pp. 101–115. Springer, Heidelberg (2004)
12. Ravi, R., Singh, M.: Personal communication (February 2005)
13. Shmoys, D., Swamy, C.: Stochastic optimization is (almost) as easy as deterministic optimization. In: Proc. of FOCS (2004)
14. Shmoys, D., Swamy, C.: The sample average approximation method for 2-stage stochastic optimization. Manuscript (November 2004)
15. Shmoys, D., Swamy, C.: Sampling-based approximation algorithms for multi-stage stochastic optimization. Manuscript (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Moses Charikar (1)
  • Chandra Chekuri (2)
  • Martin Pál (2)

  1. Computer Science Dept., Princeton University, Princeton, USA
  2. Lucent Bell Labs, Murray Hill, USA
