A latent space-based estimation of distribution algorithm for large-scale global optimization
Large-scale global optimization problems (LSGOs) have received considerable attention in the field of meta-heuristic algorithms. Estimation of distribution algorithms (EDAs) are a major branch of meta-heuristic algorithms. However, effectively building the probabilistic model of an EDA in high dimensions is difficult, making EDAs less attractive for LSGOs due to their large computational requirements. To overcome this shortcoming, this paper proposes a latent space-based EDA (LS-EDA), which transforms the multivariate probabilistic model of a Gaussian-based EDA into its principal-component latent subspace of lower dimensionality. LS-EDA efficiently reduces the complexity of the EDA while preserving the key information in its probability model, scaling up its performance for LSGOs. When the original dimensions are projected onto the latent subspace, the dimensions with larger projected values contribute more to the optimization process; LS-EDA can therefore also help recognize and understand the problem structure, especially for black-box optimization problems. Thanks to the dimensionality reduction, its computational budget and population size can be effectively reduced, while its performance remains highly competitive with state-of-the-art meta-heuristic algorithms for LSGOs. To understand the strengths and weaknesses of LS-EDA, we carried out extensive computational studies. The results reveal that LS-EDA outperforms the compared algorithms on benchmark functions with overlapping and nonseparable variables.
Keywords: Probabilistic PCA · Estimation of distribution algorithm (EDA) · Maximum likelihood estimate (MLE)
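To make the core idea concrete, below is a minimal sketch of a latent space-based Gaussian EDA in the spirit of the abstract: the Gaussian search model is estimated and sampled in a q-dimensional principal-component subspace, with the discarded variance pooled into an isotropic noise term, as in probabilistic PCA with maximum likelihood estimates. Everything here (the `sphere` objective, the `ls_eda` function, the population and subspace sizes) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def sphere(x):
    # Toy separable objective: sum of squares per row.
    return np.sum(x ** 2, axis=1)

def ls_eda(f, n=1000, q=20, pop=200, elite=50, iters=200, seed=0):
    """Sketch of a latent-space Gaussian EDA (hypothetical parameters)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, size=(pop, n))        # initial population
    for _ in range(iters):
        fit = f(X)
        E = X[np.argsort(fit)[:elite]]               # truncation selection
        mu = E.mean(axis=0)
        C = E - mu
        # Leading principal axes of the elite set via SVD; in probabilistic
        # PCA the trailing variance is pooled into an isotropic noise term
        # sigma^2, which is its maximum likelihood estimate.
        _, s, Vt = np.linalg.svd(C, full_matrices=False)
        var = s ** 2 / (elite - 1)                   # per-component variances
        sigma2 = var[q:].mean() if q < var.size else 0.0
        # PPCA-style sampling: a q-dimensional latent Gaussian mapped back
        # through the leading principal axes, plus isotropic residual noise.
        Z = rng.standard_normal((pop, q)) * np.sqrt(np.maximum(var[:q] - sigma2, 1e-12))
        X = mu + Z @ Vt[:q] + rng.standard_normal((pop, n)) * np.sqrt(sigma2)
    return mu, f(mu[None, :])[0]

best_x, best_f = ls_eda(sphere)
print(f"objective at model mean: {best_f:.3e}")
```

In this sketch, model estimation and sampling scale with q·n instead of the n² cost of a full covariance matrix, which illustrates where the dimensionality reduction described in the abstract saves computation.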
This study was funded by the NSF of China (Grant Numbers 61170305, 61672024), the NSF of USA (Grant Number CMMI-1162482) and the Key Research Program in Higher Education of Henan (Grant Number 17A520046).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
This article does not contain any studies with human participants performed by any of the authors.