A latent space-based estimation of distribution algorithm for large-scale global optimization

  • Wenyong Dong
  • Yufeng Wang
  • Mengchu Zhou


Large-scale global optimization problems (LSGOs) have received considerable attention in the field of meta-heuristic algorithms. Estimation of distribution algorithms (EDAs) are a major branch of meta-heuristic algorithms. However, effectively building the probabilistic model of an EDA in high dimensions remains an obstacle, making EDAs less attractive for LSGOs due to their large computational requirements. To overcome these shortcomings, this paper proposes a latent space-based EDA (LS-EDA), which transforms the multivariate probabilistic model of a Gaussian-based EDA into its principal-component latent subspace of lower dimensionality. LS-EDA efficiently reduces the complexity of an EDA while preserving the key information in its probability model, thereby scaling up its performance on LSGOs. When the original dimensions are projected onto the latent subspace, those with larger projected values contribute more to the optimization process. LS-EDA can thus also help recognize and understand the problem structure, especially for black-box optimization problems. Thanks to dimensionality reduction, its computational budget and population size can be reduced effectively while its performance remains highly competitive with state-of-the-art meta-heuristic algorithms for LSGOs. To understand the strengths and weaknesses of LS-EDA, we carried out extensive computational studies. The results reveal that LS-EDA outperforms the compared algorithms on benchmark functions with overlapping and nonseparable variables.


Keywords: Probabilistic PCA · Estimation of distribution algorithm (EDA) · Maximum likelihood estimate (MLE)



This study was funded by the NSF of China (Grant Numbers 61170305, 61672024), the NSF of USA (Grant Number CMMI-1162482) and the Key Research Program in Higher Education of Henan (Grant Number 17A520046).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Informed consent was obtained from all individual participants included in the study.



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Computer School, Wuhan University, Wuhan, China
  2. Software School, Nanyang Institute of Technology, Nanyang, China
  3. Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, USA
