Matheuristics, pp. 159–187

Convergence Analysis of Metaheuristics

  • Walter J. Gutjahr
Chapter
Part of the Annals of Information Systems book series (AOIS, volume 10)

Abstract

This tutorial gives an overview of the basic techniques for proving convergence of metaheuristics to optimal (or sufficiently good) solutions. The presentation is kept as independent of specific metaheuristic fields as possible by introducing a generic metaheuristic algorithm. Different types of convergence of random variables are discussed, and two features of the search process to which the notion of "convergence" may refer, the "best-so-far solution" and the "model", are distinguished. Some core proof ideas from the literature are outlined. We also deal with extensions of metaheuristic algorithms to stochastic combinatorial optimization, where convergence is a particularly relevant issue. Finally, the important aspect of convergence speed is addressed by recapitulating methods for analytically estimating the expected runtime until solutions of sufficient quality are found.
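The generic scheme the abstract alludes to can be sketched as a loop that samples a new candidate and tracks the best-so-far solution, whose objective value forms a monotone sequence; convergence results then concern whether this sequence reaches the optimum. The sketch below is illustrative only, not the chapter's formal algorithm: the `onemax` objective and the bit-flip mutation rule (a (1+1)-EA-style choice) are assumptions standing in for an arbitrary metaheuristic.

```python
import random

def onemax(x):
    """Toy objective: number of ones in a bit string (optimum = len(x))."""
    return sum(x)

def generic_metaheuristic(f, n, iters, seed=0):
    """Generic best-so-far search loop: sample, evaluate, keep the record.

    The 'sample' step here flips each bit of the incumbent with
    probability 1/n; any other metaheuristic fits the same skeleton.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]       # initial solution
    best_x, best_f = x[:], f(x)                     # best-so-far state
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in best_x]  # mutate incumbent
        fy = f(y)
        if fy >= best_f:                            # elitist acceptance:
            best_x, best_f = y, fy                  # best-so-far never worsens
    return best_x, best_f

best, value = generic_metaheuristic(onemax, n=20, iters=2000)
print(value)  # monotone best-so-far value; approaches 20 as iters grow
```

The elitist update is what makes the best-so-far objective values a non-decreasing (and here bounded) sequence; convergence proofs of the "best-so-far" type show that this sequence hits the optimal value with probability tending to one, while "model" convergence instead concerns the distribution from which new candidates are drawn.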

Keywords

Convergence analysis · Random search · Variable neighborhood search · Memory state · Metaheuristic algorithm



Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

Department of Statistics and Decision Support Systems, University of Vienna, Vienna, Austria
