
On Spectral Invariance of Randomized Hessian and Covariance Matrix Adaptation Schemes

  • Sebastian U. Stich
  • Christian L. Müller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7491)

Abstract

We evaluate the performance of several gradient-free variable-metric continuous optimization schemes on a specific set of quadratic functions. We revisit a randomized Hessian approximation scheme (D. Leventhal and A. S. Lewis: Randomized Hessian estimation and directional search, 2011), discuss its theoretical underpinnings, and introduce a novel, numerically stable implementation of the scheme (RH). For comparison we also consider closely related Covariance Matrix Adaptation (CMA) schemes. A key goal of this study is to elucidate the influence of the eigenvalue distribution of quadratic functions on the convergence properties of the different variable-metric schemes. For this purpose we introduce a class of quadratic functions with parameterizable spectra. Our empirical study shows (i) that the performance of RH methods depends less on the spectral distribution than that of CMA schemes, (ii) that adaptive step size control is more efficient in the RH method than line search, and (iii) that the concept of the evolution path yields a substantial speed-up of CMA schemes on quadratic functions but does not alleviate their overall dependence on the eigenvalue spectrum. The present results may trigger research into the design of novel CMA update schemes with improved spectral invariance.
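To make the two ingredients of the study concrete, the following minimal Python sketch builds (a) a quadratic test function with a parameterizable eigenvalue spectrum and (b) a randomized Hessian estimate in the spirit of Leventhal and Lewis, where the curvature along random unit directions is measured by central finite differences and used for a rank-one correction. All function names, the geometric spectrum, and the parameter values are illustrative assumptions; the paper's RH scheme additionally involves a numerically stable implementation and step size control that are not reproduced here.

```python
import numpy as np

def quadratic_with_spectrum(eigenvalues, rng):
    """Build f(x) = 0.5 * x^T A x with a prescribed eigenvalue spectrum.

    A random orthogonal basis Q is drawn (QR of a Gaussian matrix), so that
    A = Q diag(eigenvalues) Q^T has exactly the given spectrum.
    """
    n = len(eigenvalues)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(eigenvalues) @ Q.T
    return (lambda x: 0.5 * x @ A @ x), A

def randomized_hessian_estimate(f, x, n, iters=2000, eps=1e-4, rng=None):
    """Sketch of Leventhal-Lewis style randomized Hessian estimation.

    Each step draws a uniform direction u on the unit sphere, measures the
    curvature of f along u by a central finite difference, and corrects the
    current estimate H along the rank-one direction u u^T. For quadratics
    the finite difference is exact up to rounding, and H converges to the
    true Hessian in expectation.
    """
    rng = np.random.default_rng() if rng is None else rng
    H = np.zeros((n, n))
    for _ in range(iters):
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)
        # Central finite difference estimate of u^T (nabla^2 f(x)) u.
        curv = (f(x + eps * u) - 2.0 * f(x) + f(x - eps * u)) / eps**2
        H += (curv - u @ H @ u) * np.outer(u, u)
    return H

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10
    # Illustrative parameterizable spectrum: eigenvalues spread
    # geometrically between 1 and the condition number kappa.
    kappa = 1e3
    eigs = kappa ** np.linspace(0.0, 1.0, n)
    f, A = quadratic_with_spectrum(eigs, rng)
    H = randomized_hessian_estimate(f, np.zeros(n), n, rng=rng)
    print("relative error:", np.linalg.norm(H - A) / np.linalg.norm(A))
```

Note that the rank-one update leaves the estimate unchanged along all directions orthogonal to u, which is why the scheme needs on the order of n^2 random directions; the geometric spectrum above is only one instance of the parameterizable spectra studied in the paper.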

Keywords

Gradient-free optimization · Variable metric · Randomized Hessian · Covariance Matrix Adaptation · Quadratic functions


References

  1. Schumer, M., Steiglitz, K.: Adaptive step size random search. IEEE Transactions on Automatic Control 13(3), 270–276 (1968)
  2. Rechenberg, I.: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog (1973)
  3. Betro, B., De Biase, L.: A Newton-like method for stochastic optimization. In: Towards Global Optimization, vol. 2, pp. 269–289. North-Holland (1978)
  4. Marti, K.: Controlled random search procedures for global optimization. In: Stochastic Optimization. Lecture Notes in Control and Information Sciences, vol. 81, pp. 457–474. Springer (1986)
  5. Leventhal, D., Lewis, A.S.: Randomized Hessian estimation and directional search. Optimization 60(3), 329–345 (2011)
  6. Kjellström, G., Taxén, L.: Stochastic optimization in system design. IEEE Transactions on Circuits and Systems 28(7) (July 1981)
  7. Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation 9(2), 159–195 (2001)
  8. Hansen, N., Müller, S.D., Koumoutsakos, P.: Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation 11(1), 1–18 (2003)
  9. Brockhoff, D., Auger, A., Hansen, N., Arnold, D.V., Hohm, T.: Mirrored sampling and sequential selection for evolution strategies. In: Schaefer, R., Cotta, C., Kołodziej, J., Rudolph, G. (eds.) PPSN XI. LNCS, vol. 6238, pp. 11–21. Springer, Heidelberg (2010)
  10. Stich, S.U., Müller, C.L., Gärtner, B.: Optimization of convex functions with Random Pursuit (2011), http://arxiv.org/abs/1111.0194
  11. Müller, C.L., Sbalzarini, I.F.: Gaussian Adaptation revisited – an entropic view on covariance matrix adaptation. In: Di Chio, C., Cagnoni, S., Cotta, C., Ebner, M., Ekárt, A., Esparcia-Alcazar, A.I., Goh, C.-K., Merelo, J.J., Neri, F., Preuß, M., Togelius, J., Yannakakis, G.N. (eds.) EvoApplications 2010, Part I. LNCS, vol. 6024, pp. 432–441. Springer, Heidelberg (2010)
  12. Müller, C.L., Sbalzarini, I.F.: Gaussian Adaptation as a unifying framework for continuous black-box optimization and adaptive Monte Carlo sampling. In: 2010 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8 (2010)
  13. Mutseniyeks, V.A., Rastrigin, L.A.: Extremal control of continuous multi-parameter systems by the method of random search. Engineering Cybernetics 1, 82–90 (1964)
  14. Jägersküpper, J.: Rigorous runtime analysis of the (1+1) ES: 1/5-rule and ellipsoidal fitness landscapes. In: Wright, A.H., Vose, M.D., De Jong, K.A., Schmitt, L.M. (eds.) FOGA 2005. LNCS, vol. 3469, pp. 260–281. Springer, Heidelberg (2005)
  15. Stich, S.U., Gärtner, B., Müller, C.L.: Variable Metric Random Pursuit. In preparation for Mathematical Programming (2012)
  16. Andrieu, C., Thoms, J.: A tutorial on adaptive MCMC. Statistics and Computing 18(4), 343–373 (2008)
  17. Polyak, B.: Introduction to Optimization. Optimization Software, Inc., Publications Division, New York (1987)
  18. Nesterov, Y.: Introductory Lectures on Convex Optimization. Kluwer, Boston (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Sebastian U. Stich (1)
  • Christian L. Müller (1)

  1. MOSAIC Group, Institute of Theoretical Computer Science, and Swiss Institute of Bioinformatics, ETH Zürich, Zürich, Switzerland
