
Iterated local search with Powell’s method: a memetic algorithm for continuous global optimization

  • Regular Research Paper
  • Published in: Memetic Computing

Abstract

Iterated Local Search (ILS) has proven exceptionally successful in combinatorial solution spaces. This raises the question: can ILS also improve the optimization process in continuous solution spaces? To demonstrate that hybridization yields powerful techniques in continuous domains, we introduce a hybrid meta-heuristic that integrates Powell’s direct search method, combining direct search with elements of population-based evolutionary optimization. The approach is analyzed experimentally on a set of well-known test problems and compared to a state-of-the-art technique, a restart variant of the Covariance Matrix Adaptation Evolution Strategy with increasing population sizes (G-CMA-ES). The population-based Powell-ILS turns out to be competitive with the CMA-ES, is in some cases even significantly faster, and behaves more robustly than Powell’s method alone in multimodal fitness landscapes. Further experiments on the perturbation mechanism, population sizes, and problems with noise complete the analysis of the hybrid methodology and lead to parameter recommendations.
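To make the basic scheme concrete, the following is a minimal Python sketch of an iterated local search that uses Powell's method (here, SciPy's implementation) as the embedded local search and a simple Gaussian perturbation of the incumbent solution. The function and parameter names (powell_ils, n_restarts, sigma) are illustrative assumptions; the paper's population-based extension, perturbation variants, and parameter recommendations are not reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    def powell_ils(f, x0, n_restarts=50, sigma=1.0, rng=None):
        # Minimal ILS sketch: Powell local search + Gaussian perturbation.
        # Hypothetical names/defaults, not the authors' reference implementation.
        rng = np.random.default_rng() if rng is None else rng
        res = minimize(f, x0, method="Powell")        # initial local search
        best_x, best_f = res.x, res.fun
        for _ in range(n_restarts):
            # Perturbation: Gaussian kick away from the current local optimum.
            x_pert = best_x + rng.normal(0.0, sigma, size=best_x.shape)
            cand = minimize(f, x_pert, method="Powell")  # local refinement
            if cand.fun < best_f:                        # accept only improvements
                best_x, best_f = cand.x, cand.fun
        return best_x, best_f

    # Usage example on a shifted sphere function, a standard unimodal test problem.
    if __name__ == "__main__":
        sphere = lambda x: float(np.sum((x - 1.0) ** 2))
        x_opt, f_opt = powell_ils(sphere, np.zeros(10))
        print(x_opt, f_opt)

The acceptance rule above keeps only strictly improving local optima; other acceptance criteria and adaptive perturbation strengths are possible within the same ILS skeleton.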



Author information


Corresponding author

Correspondence to Oliver Kramer.


About this article

Cite this article

Kramer, O. Iterated local search with Powell’s method: a memetic algorithm for continuous global optimization. Memetic Comp. 2, 69–83 (2010). https://doi.org/10.1007/s12293-010-0032-9
