Stochastic Adaptive Search Methods: Theory and Implementation
Random search algorithms are well suited to simulation optimization because they are relatively easy to implement and typically find a "good" solution quickly. One drawback is that strong convergence guarantees to a global optimum require strong assumptions on the structure of the problem.
This chapter begins by discussing optimization formulations for simulation optimization that combine expected performance with a measure of variability, or risk. It then summarizes theoretical results for several adaptive random search algorithms (including pure adaptive search, hesitant adaptive search, backtracking adaptive search, and annealing adaptive search) that converge in probability to a global optimum on ill-structured problems. More importantly, the complexity of these adaptive random search algorithms is linear in dimension, on average.
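To make the pure adaptive search dynamic concrete, the following is a minimal sketch, not from the chapter: the ideal algorithm samples each iterate uniformly from the strictly improving level set, and the sketch realizes that step by (inefficient) rejection sampling on an assumed box domain. The function name, box domain, and budget parameters are all illustrative; the rejection step is exactly what forfeits the linear-in-dimension complexity of the idealized method.

```python
import numpy as np

def pure_adaptive_search(f, lo, hi, iters=10, max_tries=20_000, seed=0):
    """Rejection-sampling sketch of pure adaptive search on the box [lo, hi]^n.

    Ideal pure adaptive search draws each iterate uniformly from the
    improving level set {x : f(x) < f(x_k)}. Here that draw is emulated by
    resampling uniformly from the box until an improving point appears,
    which is exact in distribution but exponentially expensive as the
    level set shrinks (hence the max_tries cap).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    fx = f(x)
    for _ in range(iters):
        for _ in range(max_tries):
            y = rng.uniform(lo, hi)
            fy = f(y)
            if fy < fx:          # uniform over the box, conditioned on improving,
                x, fx = y, fy    # is uniform over the improving level set
                break
        else:
            break                # rejection budget exhausted; stop early
    return x, fx
```

Each accepted point is a record value of the objective, so the sequence of level-set measures shrinks geometrically in expectation, which is the source of the ideal method's fast convergence and, simultaneously, of the rejection sampler's impracticality.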
While it is not possible to implement stochastic adaptive search exactly with its ideal linear performance, this chapter describes several algorithms that approximate stochastic adaptive search using a Markov chain Monte Carlo sampler known as hit-and-run. The first such optimization algorithm, improving hit-and-run, has polynomial complexity, on average, for a class of convex problems. A simulated annealing algorithm and a population-based algorithm, both using hit-and-run as the candidate point generator, are then described. Finally, a variation of hit-and-run that can handle mixed continuous/integer feasible regions, called pattern hit-and-run, is presented. Pattern hit-and-run retains the same convergence results to a target distribution as hit-and-run on continuous domains, which broadly extends the class of optimization problems these algorithms can address to mixed continuous/integer feasible regions.
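The core improving hit-and-run step can be sketched in a few lines. The following is an illustrative implementation, assuming a box-constrained feasible region (the chapter treats more general regions); the function name and parameters are invented for the example. At each iteration the sampler draws a uniform random direction, finds the chord of the feasible region through the current point along that direction, samples a candidate uniformly on the chord, and moves only if the objective improves.

```python
import numpy as np

def improving_hit_and_run(f, lo, hi, x0, iters=2000, seed=0):
    """Sketch of improving hit-and-run on the box [lo, hi]^n.

    Assumes x0 is strictly interior to the box, so the chord through the
    current point always has positive length in both directions.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(iters):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)              # uniform direction on the unit sphere
        # chord of the box: x + t*d is feasible for t in [t_min, t_max]
        t1, t2 = (lo - x) / d, (hi - x) / d
        t_min = np.max(np.minimum(t1, t2))
        t_max = np.min(np.maximum(t1, t2))
        y = x + rng.uniform(t_min, t_max) * d   # uniform point on the chord
        fy = f(y)
        if fy < fx:                         # accept only improving candidates
            x, fx = y, fy
    return x, fx
```

Dropping the improving filter and instead accepting candidates with a Boltzmann-type probability turns this sketch into the hit-and-run-based simulated annealing scheme the chapter describes.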
Keywords: Simulated Annealing · Feasible Region · Random Search · Boltzmann Distribution · Global Optimization Problem
This work was supported in part by the National Science Foundation under Grant CMMI-1235484.