Swarm Intelligence, Volume 6, Issue 1, pp. 49–75

Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms

  • Zhi Yuan
  • Marco A. Montes de Oca
  • Mauro Birattari
  • Thomas Stützle

Abstract

The performance of optimization algorithms, including those based on swarm intelligence, depends on the values assigned to their parameters. To obtain high performance, these parameters must be fine-tuned. Since many parameters can take real values or integer values from a large domain, it is often possible to treat the tuning problem as a continuous optimization problem. In this article, we study the performance of a number of prominent continuous optimization algorithms for parameter tuning using various case studies from the swarm intelligence literature. The continuous optimization algorithms that we study are enhanced to handle the stochastic nature of the tuning problem. In particular, we introduce a new post-selection mechanism that uses F-Race in the final phase of the tuning process to select the best among elite parameter configurations. We also examine the parameter space of the swarm intelligence algorithms that we consider in our study, and we show that by fine-tuning their parameters one can obtain substantial improvements over default configurations.
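To make the post-selection idea concrete, below is a minimal Python sketch of an F-Race-style elimination race over a set of elite parameter configurations. The names `elites`, `evaluate`, and `budget` are illustrative placeholders rather than the authors' implementation, and the elimination rule is deliberately simplified: when the rank-based test signals a significant difference, the sketch drops the survivor with the worst mean cost, whereas full F-Race applies pairwise rank-based post-tests against the current best.

```python
import random

from scipy.stats import friedmanchisquare, wilcoxon


def post_selection_race(elites, evaluate, budget, alpha=0.05, min_blocks=5):
    """Race elite configurations on common instances and eliminate the
    ones that a rank-based test identifies as significantly worse.

    Configurations are assumed hashable (e.g., tuples of parameter values);
    `evaluate(config, seed)` is one stochastic run of the tuned algorithm,
    returning a cost to be minimized.
    """
    survivors = list(elites)
    costs = {c: [] for c in survivors}  # observed cost per configuration per block
    while budget >= len(survivors) and len(survivors) > 1:
        seed = random.randrange(2**31)  # one block: same instance/seed for everyone
        for c in survivors:
            costs[c].append(evaluate(c, seed))
            budget -= 1
        if len(costs[survivors[0]]) < min_blocks:
            continue  # too few blocks for a meaningful test
        if len(survivors) > 2:
            # Friedman test over blocked observations (the test behind F-Race)
            _, p = friedmanchisquare(*(costs[c] for c in survivors))
        else:
            # with only two survivors, fall back to the paired Wilcoxon test
            _, p = wilcoxon(costs[survivors[0]], costs[survivors[1]])
        if p < alpha:
            # Simplified elimination: drop the survivor with the worst mean cost.
            worst = max(survivors, key=lambda c: sum(costs[c]) / len(costs[c]))
            survivors.remove(worst)
    # Return the surviving configuration with the best (lowest) mean cost.
    return min(survivors, key=lambda c: sum(costs[c]) / max(1, len(costs[c])))
```

A tuner would call such a routine after its continuous-optimization phase, passing the handful of best configurations it found. Because every survivor is evaluated on the same instance/seed pair in each block, the observations are paired, which is what makes the rank-based tests underlying F-Race applicable.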

Keywords

Automated algorithm configuration · Parameter tuning · Continuous optimization algorithm · Swarm intelligence · F-Race

Copyright information

© Springer Science + Business Media, LLC 2011

Authors and Affiliations

  • Zhi Yuan (1)
  • Marco A. Montes de Oca (1, 2)
  • Mauro Birattari (1)
  • Thomas Stützle (1)

  1. IRIDIA, CoDE, Université Libre de Bruxelles, Brussels, Belgium
  2. Department of Mathematical Sciences, University of Delaware, Newark, USA