Stochastic Adaptive Search Methods: Theory and Implementation

  • Zelda B. Zabinsky
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 216)


Random search algorithms are very useful for simulation optimization because they are relatively easy to implement and typically find a “good” solution quickly. One drawback is that convergence guarantees to a global optimum require strong assumptions on the structure of the problem.
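As a minimal illustration of the baseline idea (not an algorithm from this chapter), pure random search samples points uniformly over the feasible region and keeps the best one found. The sphere objective, box bounds, and sample budget below are hypothetical choices for the sketch:

```python
import random

def pure_random_search(f, lower, upper, n=2000, seed=0):
    """Pure random search sketch: draw n points uniformly over the box
    [lower, upper] and return the best point and objective value seen."""
    rng = random.Random(seed)
    dim = len(lower)
    best_x, best_f = None, float("inf")
    for _ in range(n):
        x = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
        fx = f(x)
        if fx < best_f:        # keep the incumbent best
            best_x, best_f = x, fx
    return best_x, best_f

# Hypothetical test problem: sphere function, global minimum 0 at the origin.
x_prs, f_prs = pure_random_search(lambda v: sum(t * t for t in v),
                                  [-5.0, -5.0], [5.0, 5.0])
```

The implementation effort is trivial, which is the appeal; the drawback is that progress slows dramatically as the improving region shrinks, motivating the adaptive variants summarized next.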

This chapter begins by discussing optimization formulations for simulation optimization that combine expected performance with a measure of variability, or risk. It then summarizes theoretical results for several adaptive random search algorithms (including pure adaptive search, hesitant adaptive search, backtracking adaptive search, and annealing adaptive search) that converge in probability to a global optimum even on ill-structured problems. More importantly, the complexity of these adaptive random search algorithms is, on average, linear in dimension.
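To make the pure adaptive search idea concrete: each iterate is drawn uniformly from the improving level set, i.e. the points strictly better than the current best. That conditioning is what yields average complexity linear in dimension, but sampling the level set directly is not implementable in general. The sketch below (not from the chapter) realizes it by naive rejection sampling on a box, with a hypothetical sphere objective and illustrative parameter values:

```python
import random

def pure_adaptive_search(f, lower, upper, iters=50, max_rejects=10000, seed=0):
    """Pure adaptive search sketch: each iterate is drawn uniformly from the
    improving level set {x : f(x) < current best}, realized here by naive
    rejection sampling over the box [lower, upper]."""
    rng = random.Random(seed)
    dim = len(lower)

    def draw():
        return [rng.uniform(lower[i], upper[i]) for i in range(dim)]

    x = draw()
    best = f(x)
    for _ in range(iters):
        for _ in range(max_rejects):
            y = draw()
            if f(y) < best:        # y lies in the improving level set: accept
                x, best = y, f(y)
                break
        else:                      # improving set too small to hit by rejection
            break
    return x, best

# Hypothetical test problem: sphere function, global minimum 0 at the origin.
x_pas, f_pas = pure_adaptive_search(lambda v: sum(t * t for t in v),
                                    [-5.0, -5.0], [5.0, 5.0])
```

The rejection step is exactly where the ideal breaks down in practice: as the level set shrinks, the acceptance probability collapses, which is why the chapter turns to hit-and-run samplers as practical approximations.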

While it is not possible to implement stochastic adaptive search exactly with its ideal linear performance, this chapter describes several algorithms that approximate stochastic adaptive search using a Markov chain Monte Carlo sampler known as hit-and-run. The first such optimization algorithm, improving hit-and-run, has polynomial complexity, on average, for a class of convex problems. A simulated annealing algorithm and a population-based algorithm, both using hit-and-run as the candidate-point generator, are then described. Finally, a variation of hit-and-run that can handle mixed continuous/integer feasible regions, called pattern hit-and-run, is described; it retains the same convergence to a target distribution that hit-and-run has on continuous domains. This broadly extends the class of optimization problems these algorithms can address to mixed continuous/integer feasible regions.
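A hit-and-run step picks a uniform random direction, intersects the line through the current point with the feasible region, and samples uniformly on that chord; improving hit-and-run additionally accepts the candidate only when it improves the objective. The sketch below (a simplified illustration, not the chapter's implementation) works on a box feasible region with a hypothetical sphere objective:

```python
import math
import random

def improving_hit_and_run(f, lower, upper, iters=500, seed=0):
    """Improving hit-and-run sketch on a box: choose a uniform random
    direction, intersect the line through the current point with the box,
    sample a candidate uniformly on that chord, and move only if it
    improves f."""
    rng = random.Random(seed)
    dim = len(lower)
    x = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
    best = f(x)
    for _ in range(iters):
        # Uniform direction on the sphere via normalized Gaussian components.
        d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(t * t for t in d))
        d = [t / norm for t in d]
        # Largest t-interval keeping x + t*d inside the box.
        t_lo, t_hi = -math.inf, math.inf
        for i in range(dim):
            if abs(d[i]) > 1e-12:
                a = (lower[i] - x[i]) / d[i]
                b = (upper[i] - x[i]) / d[i]
                t_lo, t_hi = max(t_lo, min(a, b)), min(t_hi, max(a, b))
        t = rng.uniform(t_lo, t_hi)          # uniform point on the chord
        y = [x[i] + t * d[i] for i in range(dim)]
        if f(y) < best:                      # "improving": accept only if better
            x, best = y, f(y)
    return x, best

# Hypothetical test problem: sphere function on a 3-dimensional box.
x_ihr, f_ihr = improving_hit_and_run(lambda v: sum(t * t for t in v),
                                     [-2.0, -2.0, -2.0], [2.0, 2.0, 2.0])
```

Each step touches the feasible region only through a one-dimensional chord computation, which is what keeps hit-and-run cheap per iteration even in high dimension.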


Keywords: Simulated annealing · Feasible region · Random search · Boltzmann distribution · Global optimization problem



This work was supported in part by the National Science Foundation under Grant CMMI-1235484.



Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. University of Washington, Seattle, USA
