Optimization Letters, Volume 3, Issue 1, pp 49–62

A smoothing algorithm for finite min–max–min problems

Original Paper

Abstract

We generalize a smoothing algorithm for finite min–max problems to finite min–max–min problems. The smoothing technique is applied twice: once to eliminate the inner min operator and once to eliminate the max operator. In min–max problems, where only the max operator is eliminated, the approximating function is decreasing in the smoothing parameter. This property is convenient for establishing convergence of the algorithm, but it no longer holds when both operators are eliminated. To restore it, an additional term is added to the approximation. We establish convergence of a steepest descent algorithm and provide a numerical example.
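
The paper's full construction is not reproduced on this page, but the idea of smoothing both operators can be illustrated with the standard exponential (log-sum-exp) approximations, min_k g_k(x) ≈ -(1/p) ln Σ_k exp(-p g_k(x)) and max_j h_j(x) ≈ (1/p) ln Σ_j exp(p h_j(x)). The Python sketch below applies both approximations to a small, made-up min–max–min problem and runs plain steepest descent while gradually increasing the smoothing parameter; the problem data, step size, and parameter schedule are illustrative assumptions, and the additional correction term mentioned in the abstract is not included here.

    import numpy as np
    from scipy.special import logsumexp

    # Illustrative problem data (assumed, not from the paper):
    # F(x) = max_j min_k f_jk(x), with quadratic pieces f_jk(x) = 0.5 * ||x - c_jk||^2.
    centers = np.array([[[1.0, 0.0], [-1.0, 0.0]],
                        [[0.0, 1.0], [0.0, -1.0]]])   # shape (J=2, K=2, n=2)

    def pieces(x):
        # Evaluate all f_jk at x; returns an array of shape (J, K).
        return 0.5 * np.sum((x - centers) ** 2, axis=-1)

    def smoothed(x, p):
        # Log-sum-exp smoothing applied twice:
        #   inner min over k:  -(1/p) * log(sum_k exp(-p * f_jk(x)))
        #   outer max over j:   (1/p) * log(sum_j exp( p * g_j(x)))
        f = pieces(x)
        inner = -logsumexp(-p * f, axis=1) / p
        return logsumexp(p * inner) / p

    def grad(x, p, h=1e-6):
        # Central finite-difference gradient of the smoothed function
        # (kept simple for the sketch; an analytic gradient is straightforward).
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (smoothed(x + e, p) - smoothed(x - e, p)) / (2.0 * h)
        return g

    # Steepest descent with a fixed step and an increasing smoothing parameter
    # (the schedule below is chosen arbitrarily for the illustration).
    x, p = np.array([2.0, 2.0]), 1.0
    for _ in range(200):
        x = x - 0.1 * grad(x, p)
        p = min(1.05 * p, 1e3)
    print("approximate minimizer:", x, "smoothed value:", smoothed(x, p))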

Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  1. Department of Computing, Imperial College London, UK
