
Mathematical Programming, Volume 53, Issue 1–3, pp. 323–338

Pure adaptive search in global optimization

  • Zelda B. Zabinsky
  • Robert L. Smith

Abstract

Pure adaptive search iteratively constructs a sequence of interior points uniformly distributed within the corresponding sequence of nested improving regions of the feasible space. That is, at any iteration, the next point in the sequence is uniformly distributed over the region of the feasible space containing all points that are strictly superior in value to the previous points in the sequence. The complexity of this algorithm is measured by the expected number of iterations required to achieve a given accuracy of solution. We show that for global mathematical programs satisfying the Lipschitz condition, its complexity increases at most linearly in the dimension of the problem.
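To make the iteration concrete, the following is a minimal Python sketch of pure adaptive search on a box-constrained minimization problem. It realizes each step by rejection sampling from the bounding box, which does make every accepted point exactly uniform on the strictly improving region, but this realization, together with the function and parameter names (`pure_adaptive_search`, `iters`, `max_trials`), is an illustrative assumption of ours and not the paper's construction.

```python
import numpy as np

def pure_adaptive_search(f, lower, upper, iters=12, max_trials=10**6, seed=0):
    """Sketch of pure adaptive search on the box [lower, upper].

    Each iterate is uniformly distributed over the improving region
    {x : f(x) < best so far}, realized here by rejection sampling
    from the bounding box (illustrative only: the expected number of
    rejections grows as the improving region shrinks).
    """
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)

    # First point: uniform over the whole feasible box.
    x = rng.uniform(lower, upper)
    best = f(x)
    history = [best]

    for _ in range(iters - 1):
        for _ in range(max_trials):
            y = rng.uniform(lower, upper)
            if f(y) < best:          # strictly improving point found;
                x, best = y, f(y)    # it is uniform on the improving region
                history.append(best)
                break
        else:
            break                    # improving region too small to hit
    return x, best, history

if __name__ == "__main__":
    # Example: minimize a convex quadratic over [-1, 1]^5.
    x, val, hist = pure_adaptive_search(lambda z: float(np.sum(z ** 2)),
                                        lower=[-1] * 5, upper=[1] * 5)
    print(val)
    print(hist)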
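```

Note that the paper's linear-in-dimension result counts only the outer iterations (the accepted, record-setting points), not the rejected samples; generating uniformly distributed points in the improving region efficiently is the practical obstacle, and the rejection loop above is merely the simplest way to exhibit the distributional requirement.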

Key words

Random search, Monte Carlo optimization, global optimization, complexity



Copyright information

© The Mathematical Programming Society, Inc. 1992

Authors and Affiliations

  • Zelda B. Zabinsky (1)
  • Robert L. Smith (2)

  1. Industrial Engineering Program, FU-20, University of Washington, Seattle, USA
  2. Department of Industrial & Operations Engineering, The University of Michigan, Ann Arbor, USA
