
Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms


Abstract

The performance of optimization algorithms, including those based on swarm intelligence, depends on the values assigned to their parameters. To obtain high performance, these parameters must be fine-tuned. Since many parameters can take real values or integer values from a large domain, it is often possible to treat the tuning problem as a continuous optimization problem. In this article, we study the performance of a number of prominent continuous optimization algorithms for parameter tuning using various case studies from the swarm intelligence literature. The continuous optimization algorithms that we study are enhanced to handle the stochastic nature of the tuning problem. In particular, we introduce a new post-selection mechanism that uses F-Race in the final phase of the tuning process to select the best among elite parameter configurations. We also examine the parameter space of the swarm intelligence algorithms that we consider in our study, and we show that by fine-tuning their parameters one can obtain substantial improvements over default configurations.
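
To make the post-selection idea concrete, the sketch below illustrates, in Python and using scipy.stats.friedmanchisquare, how elite configurations returned by a continuous tuner could be raced on a sequence of tuning instances. It is only an illustration of the mechanism, not the implementation studied in the article: noisy_cost stands in for an actual run of the tuned swarm intelligence algorithm, the elite set is generated at random, and a significant Friedman test here simply discards the worse half of the survivors by mean cost, whereas F-Race eliminates the configurations that are significantly worse than the best one.

    import random

    from scipy.stats import friedmanchisquare


    def noisy_cost(config, instance_seed):
        """Placeholder for one run of the tuned algorithm with parameters
        `config` on one tuning instance; returns a stochastic cost."""
        x, y = config
        rng = random.Random(instance_seed)
        return (x - 0.7) ** 2 + (y - 3.0) ** 2 + rng.gauss(0.0, 0.05)


    def post_selection_race(elites, n_instances=50, block=5, alpha=0.05):
        """Race the elite configurations: evaluate every survivor on each new
        instance and, whenever a Friedman test over the results so far is
        significant, keep only the better half (by mean cost)."""
        costs = [[] for _ in elites]        # observed costs per elite configuration
        alive = list(range(len(elites)))    # indices of configurations still racing
        for instance in range(n_instances):
            for i in alive:
                costs[i].append(noisy_cost(elites[i], instance))
            if len(alive) > 2 and (instance + 1) % block == 0:
                _, p_value = friedmanchisquare(*(costs[i] for i in alive))
                if p_value < alpha:
                    alive.sort(key=lambda i: sum(costs[i]) / len(costs[i]))
                    alive = alive[: max(2, len(alive) // 2)]
        alive.sort(key=lambda i: sum(costs[i]) / len(costs[i]))
        return elites[alive[0]]


    # Elite configurations as a continuous tuner might return them
    # (random samples here, purely for illustration).
    random.seed(1)
    elites = [(random.uniform(0.0, 1.0), random.uniform(0.0, 5.0)) for _ in range(8)]
    print("selected configuration:", post_selection_race(elites))
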



Author information

Corresponding author

Correspondence to Zhi Yuan.

About this article

Cite this article

Yuan, Z., Montes de Oca, M.A., Birattari, M. et al. Continuous optimization algorithms for tuning real and integer parameters of swarm intelligence algorithms. Swarm Intell 6, 49–75 (2012). https://doi.org/10.1007/s11721-011-0065-9


DOI: https://doi.org/10.1007/s11721-011-0065-9
