Discrete Backtracking Adaptive Search for Global Optimization

  • Birna P. Kristinsdottir
  • Zelda B. Zabinsky
  • Graham R. Wood
Chapter
Part of the Nonconvex Optimization and Its Applications book series (NOIA, volume 59)

Abstract

This paper analyses a random search algorithm for global optimization, called discrete backtracking adaptive search, that accepts non-improving points with a certain probability. We derive upper and lower bounds on the expected number of iterations for the algorithm to first sample the global optimum. The bounds are obtained by modelling the algorithm as a series of absorbing Markov chains. Finally, these bounds are made explicit for specific forms of the algorithm.
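The two ingredients of the abstract can be sketched in code. The first function is a minimal, illustrative version of a random search that always accepts improving points and accepts non-improving ("backtracking") points with a fixed probability; the paper's actual acceptance rule, domain structure, and sampling distribution may differ, and the known optimum `f_star` is used here only to detect the first sample of the global optimum. The second function is the standard absorbing-chain computation underlying the bounds: with `Q` the transient-to-transient transition submatrix, the expected steps to absorption solve (I − Q)t = 1 (Kemeny and Snell's fundamental matrix applied to the all-ones vector).

```python
import random

def dbas(f, domain, accept_prob, rng, max_samples=10**6):
    """Illustrative discrete backtracking adaptive search (not the
    authors' exact formulation): sample uniformly from the finite
    domain, keep improvements, accept non-improving points with
    probability accept_prob, and count points sampled until the
    global optimum is first drawn."""
    f_star = min(f(z) for z in domain)   # known here only for the stopping test
    x = rng.choice(domain)
    k = 1                                # number of points sampled so far
    while f(x) != f_star and k < max_samples:
        y = rng.choice(domain)
        k += 1
        if f(y) <= f(x) or rng.random() < accept_prob:
            x = y                        # improving points always accepted
    return k

def expected_iterations(Q):
    """Expected steps to absorption from each transient state of an
    absorbing Markov chain: solve (I - Q) t = 1 by Gauss-Jordan
    elimination with partial pivoting (pure stdlib, no NumPy)."""
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                m = A[r][col] / A[col][col]
                A[r] = [a - m * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]
```

For example, a two-transient-state chain with `Q = [[0.5, 0.25], [0.25, 0.5]]` absorbs with probability 0.25 per step from either state, and `expected_iterations` returns 4.0 expected steps from each, matching the balance equation t = 1 + 0.75 t.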

Keywords

Global optimization, adaptive search, simulated annealing, random search



Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • Birna P. Kristinsdottir, Mechanical and Industrial Engineering Department, University of Iceland, Reykjavik, Iceland
  • Zelda B. Zabinsky, Industrial Engineering, University of Washington, Seattle, USA
  • Graham R. Wood, Institute of Information Sciences and Technology, Massey University, Palmerston North, New Zealand