An Introduction to Dynamical Search
A number of classical optimisation algorithms that appear to converge smoothly in fact behave in a haphazard fashion when examined at a local, or second-order, level. Using different renormalisation procedures we link these algorithms to dynamical systems, which we then study to obtain additional information about the rates of convergence of the original algorithms. These convergence rates are expressed in terms of the Lyapunov exponents and various entropies of the dynamical systems. Working in a dynamical-systems framework has suggested new types of algorithms and improvements over a number of classical ones. One consequence of the approach is that algorithms classically considered optimal are in fact optimal only in the worst case and, ergodically, the worst-case events have measure zero. We are thus often able to construct algorithms with improved ergodic performance. As the main application areas we consider line search, ellipsoidal algorithms for linear and nonlinear programming, and gradient algorithms.
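To illustrate the renormalisation viewpoint, here is a minimal sketch (not taken from the text) using the classical golden-section line search. After each iteration the remaining uncertainty interval is rescaled to unit length; for golden section the rescaled configuration is a fixed point of the induced map, so the per-step contraction rate is constant and the worst-case and ergodic convergence rates coincide at log of the golden ratio. The function `golden_section_step` and the test objective are illustrative choices, not the book's notation.

```python
import math

def golden_section_step(a, b, f):
    """One iteration of golden-section line search on [a, b] for a unimodal f."""
    r = (math.sqrt(5) - 1) / 2  # inverse golden ratio, ~0.618
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    if f(x1) < f(x2):
        return a, x2   # minimiser lies in [a, x2]
    return x1, b       # minimiser lies in [x1, b]

f = lambda x: (x - 0.3) ** 2  # arbitrary unimodal test objective
a, b = 0.0, 1.0
rates = []
for _ in range(30):
    a2, b2 = golden_section_step(a, b, f)
    # per-step convergence rate: -log of the interval contraction factor
    rates.append(-math.log((b2 - a2) / (b - a)))
    a, b = a2, b2

# Every step contracts the interval by exactly (sqrt(5)-1)/2, so the
# average rate equals log((1+sqrt(5))/2) ~ 0.4812 at every step:
# the "worst case" occurs with probability one for this algorithm.
print(sum(rates) / len(rates))
```

For algorithms whose renormalised dynamics are not at a fixed point but chaotic, the average of such per-step rates along a trajectory is precisely the kind of ergodic (Lyapunov-exponent) rate the text studies, and it can beat the worst-case rate.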