Computational Optimization and Applications, Volume 61, Issue 2, pp. 489–515

Terminating evolutionary algorithms at their steady state

  • Debora Gil
  • David Roche
  • Agnés Borràs
  • Jesús Giraldo


Assessing the reliability of termination conditions for evolutionary algorithms (EAs) is of prime importance. An erroneous or weak stop criterion can negatively affect both the computational effort and the final result. We introduce a statistical framework for assessing whether a termination condition is able to stop an EA at its steady state, so that its results cannot be improved any further. We use a regression model to determine the requirements ensuring that a measure derived from the EA's evolving population is related to the distance to the optimum in decision variable space. Our framework is analyzed across 24 benchmark test functions and two standard termination criteria for the differential evolution (DE) paradigm: one based on the function fitness value in objective function space and one based on the population distribution in decision variable space. The results validate our framework as a powerful tool for determining the capability of a measure for terminating an EA, and they identify the decision variable space distribution as the measure best suited for accurately terminating DE in real-world applications.
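The two families of termination measures compared in the abstract can be illustrated with a minimal sketch. The function names, the specific spread statistics (worst-minus-best fitness, mean distance to the population centroid), and the tolerance `tol` are illustrative assumptions, not the paper's exact formulation:

```python
import math

def fitness_spread(fitness):
    # Objective-space measure (assumed form): gap between the worst and
    # best fitness values in the current population.
    return max(fitness) - min(fitness)

def population_spread(population):
    # Decision-variable-space measure (assumed form): mean Euclidean
    # distance of individuals to the population centroid.
    dim = len(population[0])
    n = len(population)
    centroid = [sum(ind[d] for ind in population) / n for d in range(dim)]
    return sum(math.dist(ind, centroid) for ind in population) / n

def should_stop(population, fitness, tol=1e-8):
    # Terminate when either measure suggests the EA population has
    # reached a steady state (fitness converged or population collapsed).
    return fitness_spread(fitness) < tol or population_spread(population) < tol
```

A population whose individuals have collapsed onto one point triggers the stop, while a still-spread population does not; the paper's contribution is a regression-based test of whether such a measure actually tracks the distance to the optimum in decision variable space.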


Keywords: Evolutionary algorithms · Termination condition · Steady state · Differential evolution



Work supported by Spanish projects TIN2009-13618, TIN2012-33116, SAF2010-19257 and CSD2007-00018, Fundacio La Marato de TV3 (110230) and RecerCaixa 2010ACUP 00378.



Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Debora Gil (1)
  • David Roche (2)
  • Agnés Borràs (1)
  • Jesús Giraldo (2)

  1. Computer Vision Center, Department of Computer Science, Universitat Autònoma de Barcelona, Barcelona, Spain
  2. Laboratory of Systems Pharmacology and Bioinformatics, Institut de Neurociències and Unitat de Bioestadística, Universitat Autònoma de Barcelona, Barcelona, Spain
