Abstract
The CMA-ES is one of the most powerful stochastic numerical optimizers for difficult black-box problems. Its intrinsic time and space complexity is quadratic in the problem dimension, which limits its applicability as dimensionality grows. To circumvent this limitation, different large-scale variants of CMA-ES with subquadratic complexity have been proposed over the past ten years. To date, however, these variants have been tested and compared only in rather restrictive settings, due to the lack of a comprehensive large-scale testbed to assess their performance. In this context, we introduce a new large-scale testbed with dimension up to 640, implemented within the COCO benchmarking platform. We use this testbed to assess the performance of several promising variants of CMA-ES and of the standard limited-memory L-BFGS. In all tested dimensions, the best CMA-ES variant solves more problems than L-BFGS for larger budgets, while L-BFGS outperforms the best CMA-ES variant for smaller budgets. Over all functions, however, the cumulative runtime distributions of L-BFGS and the best CMA-ES variants remain close (within a factor of 4 in high dimension).
Our results illustrate the different scaling behaviors of the methods, expose a few defects of the algorithms, and reveal that for dimensions larger than 80, LM-CMA solves more problems than VkD-CMA, while in the cumulative runtime distribution over all functions, VkD-CMA dominates LM-CMA for budgets up to \(10^4\) times the dimension and for all budgets up to dimension 80.
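The cumulative runtime distributions discussed above aggregate, over all (function, target) pairs, the fraction of problems solved within a given evaluation budget. The following is a minimal sketch of this performance measure, a simplified stand-in for what COCO's postprocessing actually computes; the function name and toy data are illustrative, not part of the paper:

```python
def runtime_ecdf(runtimes, budgets):
    """Fraction of problems solved within each budget.

    `runtimes` holds one runtime (number of function evaluations
    needed to hit the target) per problem, with None for problems
    the algorithm never solved.  Unsolved problems stay in the
    denominator, so the curve cannot reach 1.0 unless all are solved.
    """
    solved = [r for r in runtimes if r is not None]
    return [sum(r <= b for r in solved) / len(runtimes) for b in budgets]

# Toy data: 4 of 5 problems solved, at budgets 80, 120, 500 and 2000.
ecdf = runtime_ecdf([120, 500, None, 80, 2000], budgets=[100, 1000, 10000])
# ecdf rises from 0.2 (one problem solved within 100 evaluations)
# toward 0.8 (the unsolved problem caps the curve below 1.0).
```

In COCO, budgets are typically reported as multiples of the dimension, which is what makes statements such as "budgets up to \(10^4\) times the dimension" comparable across dimensions.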
Notes
1. All raw datasets are available for download at http://coco.gforge.inria.fr/doku.php?id=algorithms, while already postprocessed results are available (without the need to install COCO) at http://coco.gforge.inria.fr/ppdata-archive.
2. The source code of the new test suite (including adaptations to COCO's postprocessing) can be found in the devel-LS-development branch of the COCO GitHub page.
3. Except for L-BFGS, where the factr parameter was set to 1.0 to request very high precision.
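Note 3 refers to the factr parameter of the L-BFGS-B implementation available in SciPy: it scales the relative-decrease threshold of the objective-based stopping test, so factr=1.0 makes that test about as strict as machine precision allows. A minimal sketch on a sphere function follows; the sphere is an illustrative problem here, not one of the paper's benchmark functions, and the benchmark setting is black-box, so the analytical gradient below is for demonstration only:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def sphere(x):
    """Simple convex test function f(x) = sum(x_i^2)."""
    return float(np.dot(x, x))

def sphere_grad(x):
    """Analytical gradient of the sphere function."""
    return 2.0 * x

x0 = 3.0 * np.ones(10)
# factr=1.0 effectively disables the relative-objective stopping test,
# so the run terminates on the gradient tolerance or iteration limit
# instead, yielding a very high-precision solution.
xopt, fopt, info = fmin_l_bfgs_b(sphere, x0, fprime=sphere_grad, factr=1.0)
```

With the default factr (around 1e7), L-BFGS-B would stop earlier at a correspondingly lower precision, which matters when benchmarking against tight target values.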
References
Ait ElHara, O., Auger, A., Hansen, N.: Permuted orthogonal block-diagonal transformation matrices for large scale optimization benchmarking. In: Genetic and Evolutionary Computation Conference (GECCO 2016), pp. 189–196. ACM (2016)
Akimoto, Y., Hansen, N.: Online model selection for restricted covariance matrix adaptation. In: Handl, J., Hart, E., Lewis, P.R., López-Ibáñez, M., Ochoa, G., Paechter, B. (eds.) PPSN 2016. LNCS, vol. 9921, pp. 3–13. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-45823-6_1
Akimoto, Y., Hansen, N.: Projection-based restricted covariance matrix adaptation for high dimension. In: Genetic and Evolutionary Computation Conference (GECCO 2016), pp. 197–204. ACM (2016)
Hansen, N., Auger, A., Mersmann, O., Tušar, T., Brockhoff, D.: COCO: A platform for comparing continuous optimizers in a black-box setting (2016). arXiv:1603.08785
Hansen, N., Finck, S., Ros, R., Auger, A.: Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions. Research Report RR-6829, INRIA (2009)
Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 9(2), 159–195 (2001)
Knight, J.N., Lunacek, M.: Reducing the space-time complexity of the CMA-ES. In: Genetic and Evolutionary Computation Conference (GECCO 2007), pp. 658–665. ACM (2007)
Krause, O., Arbonès, D.R., Igel, C.: CMA-ES with optimal covariance update and storage complexity. In: Advances in Neural Information Processing Systems (NIPS 2016) (2016)
Li, Z., Zhang, Q.: A simple yet efficient evolution strategy for large scale black-box optimization. IEEE Trans. Evol. Comput. (2017, accepted)
Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(3), 503–528 (1989)
Loshchilov, I.: LM-CMA: an alternative to L-BFGS for large scale black-box optimization. Evol. Comput. 25, 143–171 (2017)
Loshchilov, I.: A computationally efficient limited memory CMA-ES for large scale optimization. In: Genetic and Evolutionary Computation Conference (GECCO 2014), pp. 397–404. ACM (2014)
Loshchilov, I., Glasmachers, T., Beyer, H.: Limited-memory matrix adaptation for large scale black-box optimization. CoRR abs/1705.06693 (2017)
Ros, R., Hansen, N.: A simple modification in CMA-ES achieving linear time and space complexity. In: Rudolph, G., Jansen, T., Beume, N., Lucas, S., Poloni, C. (eds.) PPSN 2008. LNCS, vol. 5199, pp. 296–305. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-87700-4_30
Sun, Y., Gomez, F.J., Schaul, T., Schmidhuber, J.: A linear time natural evolution strategy for non-separable functions. CoRR abs/1106.1998 (2011)
Suttorp, T., Hansen, N., Igel, C.: Efficient covariance matrix update for variable metric evolution strategies. Mach. Learn. 75(2), 167–197 (2009)
Tang, K., et al.: Benchmark functions for the CEC 2008 special session and competition on large scale global optimization (2007)
Acknowledgement
The PhD thesis of Konstantinos Varelas is funded by the French MoD DGA/MRIS and Thales Land & Air Systems.
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Varelas, K. et al. (2018). A Comparative Study of Large-Scale Variants of CMA-ES. In: Auger, A., Fonseca, C., Lourenço, N., Machado, P., Paquete, L., Whitley, D. (eds) Parallel Problem Solving from Nature – PPSN XV. PPSN 2018. Lecture Notes in Computer Science(), vol 11101. Springer, Cham. https://doi.org/10.1007/978-3-319-99253-2_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-99252-5
Online ISBN: 978-3-319-99253-2
eBook Packages: Computer Science (R0)