Soft Computing, Volume 15, Issue 2, pp 311–326

Environment identification-based memory scheme for estimation of distribution algorithms in dynamic environments

  • Xingguang Peng (corresponding author)
  • Xiaoguang Gao
  • Shengxiang Yang
Original Paper

Abstract

In estimation of distribution algorithms (EDAs), the joint probability distribution of high-performance solutions is represented by a probability model, so the model characterizes the promising regions of the solution space. From this point of view, an environment identification-based memory management scheme (EI-MMS) is proposed to adapt binary-coded EDAs to dynamic optimization problems (DOPs). Within this scheme, the probability models that characterize the search space of each environment are stored and retrieved so that the EDA can adapt whenever the environment changes. A diversity loss correction scheme and a boundary correction scheme are combined to counteract the diversity loss that occurs during the static evolutionary process within each environment. Experimental results show the validity of the EI-MMS and indicate that it can be applied to any binary-coded EDA. Compared with three state-of-the-art algorithms, the univariate marginal distribution algorithm (UMDA) using the EI-MMS performs better on three decomposable DOPs. To provide deeper insight into the EI-MMS, a sensitivity analysis of its parameters is also carried out.
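To make the kind of mechanism described above concrete, the sketch below illustrates a memory scheme for a univariate binary-coded EDA such as UMDA: probability vectors learned in past environments are stored, and after a change the stored model whose samples score best in the new environment is retrieved and its marginals are pulled toward 0.5 as a crude stand-in for diversity compensation. This is a minimal Python sketch of the general idea only; the class EnvironmentMemory, the probe-based identification, and the blending constants are illustrative assumptions, not the authors' actual EI-MMS, diversity loss correction, or boundary correction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(p, n):
    """Draw n binary individuals from a univariate probability vector p."""
    return (rng.random((n, p.size)) < p).astype(int)

def umda_step(p, fitness, pop_size=100, sel=0.5):
    """One UMDA generation: sample, select the best, re-estimate marginals."""
    pop = sample(p, pop_size)
    fit = np.array([fitness(x) for x in pop])
    elite = pop[np.argsort(fit)[-int(sel * pop_size):]]
    return elite.mean(axis=0)

class EnvironmentMemory:
    """Stores probability models learned in past environments (assumed name)."""
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.models = []

    def store(self, p):
        if len(self.models) >= self.capacity:
            self.models.pop(0)  # simplest replacement policy: drop the oldest
        self.models.append(p.copy())

    def retrieve(self, fitness, n_probe=20):
        """Identify the stored model whose samples score best under the
        current fitness function (a probe-based stand-in for the paper's
        environment identification)."""
        if not self.models:
            return None
        scores = [np.mean([fitness(x) for x in sample(p, n_probe)])
                  for p in self.models]
        return self.models[int(np.argmax(scores))]

def on_change(p, memory, fitness, diversity=0.3):
    """React to a detected environmental change: store the current model,
    retrieve the best-matching stored one, and pull its marginals toward
    0.5 as a crude diversity compensation (not the paper's correction)."""
    memory.store(p)
    best = memory.retrieve(fitness)
    p_new = best if best is not None else p
    return (1 - diversity) * p_new + diversity * 0.5
```

In a full loop one would call umda_step each generation, detect environmental changes (for example by re-evaluating previously stored solutions), and reseed the probability model via on_change when a change is detected.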

Keywords

Estimation of distribution algorithm · Dynamic optimization problem · Environment identification · Memory scheme · Diversity compensation

Acknowledgments

The authors would like to thank the anonymous associate editor and reviewers for their thoughtful suggestions and constructive comments. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 60774064 and by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01.

Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  • Xingguang Peng (1) (corresponding author)
  • Xiaoguang Gao (1)
  • Shengxiang Yang (2)

  1. School of Electronics and Information, Northwestern Polytechnical University, Xi’an, China
  2. Department of Computer Science, University of Leicester, Leicester, UK
