
Soft Computing, Volume 15, Issue 11, pp 2233–2255

An incremental particle swarm for large-scale continuous optimization problems: an example of tuning-in-the-loop (re)design of optimization algorithms

  • Marco A. Montes de Oca
  • Doğan Aydın
  • Thomas Stützle

Abstract

The development cycle of high-performance optimization algorithms requires the algorithm designer to make several design decisions. These decisions range from implementation details to the setting of parameter values for testing intermediate designs. Proper parameter setting can be crucial for the effective assessment of algorithmic components because a bad parameter setting can make a good algorithmic component perform poorly. This situation may lead the designer to discard promising components that just happened to be tested with bad parameter settings. Automatic parameter tuning techniques are being used by practitioners to obtain peak performance from already designed algorithms. However, automatic parameter tuning also plays a crucial role during the development cycle of optimization algorithms. In this paper, we present a case study of a tuning-in-the-loop approach for redesigning a particle swarm-based optimization algorithm for tackling large-scale continuous optimization problems. Rather than just presenting the final algorithm, we describe the whole redesign process. Finally, we study the scalability behavior of the final algorithm in the context of this special issue.
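To make the idea of an incremental particle swarm concrete, the sketch below shows a minimal version in Python: the swarm starts with a single particle, and new particles are periodically added with a bias toward the best solution found so far. This is an illustration under stated assumptions, not the authors' algorithm: the constriction coefficients follow the standard Clerc-Kennedy setting, while the function names, the growth schedule (add_every), and the linear bias rule are placeholders chosen here. The published algorithm additionally interleaves a local search phase, omitted for brevity.

import random

def sphere(x):
    # Simple benchmark objective used only for illustration.
    return sum(xi * xi for xi in x)

def incremental_pso(f, dim, lo, hi, max_evals, add_every=10):
    # Clerc-Kennedy constriction coefficients (a standard PSO setting).
    chi, c1, c2 = 0.729, 2.05, 2.05

    def new_particle(attractor=None):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        if attractor is not None:
            # Bias the new particle toward the best-so-far solution
            # (an illustrative stand-in for the paper's initialization rule).
            x = [xi + random.random() * (ai - xi) for xi, ai in zip(x, attractor)]
        return {"x": x, "v": [0.0] * dim, "bx": list(x), "bf": f(x)}

    swarm = [new_particle()]
    evals, iteration = 1, 0
    gbest = min(swarm, key=lambda p: p["bf"])
    while evals < max_evals:
        iteration += 1
        if iteration % add_every == 0:
            # Incrementally grow the swarm instead of fixing its size upfront.
            swarm.append(new_particle(attractor=gbest["bx"]))
            evals += 1
        for p in swarm:
            for d in range(dim):
                # Constricted velocity update toward personal and global bests.
                p["v"][d] = chi * (p["v"][d]
                                   + c1 * random.random() * (p["bx"][d] - p["x"][d])
                                   + c2 * random.random() * (gbest["bx"][d] - p["x"][d]))
                # Move and clamp to the search range.
                p["x"][d] = min(max(p["x"][d] + p["v"][d], lo), hi)
            fx = f(p["x"])
            evals += 1
            if fx < p["bf"]:
                p["bx"], p["bf"] = list(p["x"]), fx
        gbest = min(swarm, key=lambda p: p["bf"])
    return gbest["bx"], gbest["bf"]

best_x, best_f = incremental_pso(sphere, dim=10, lo=-5.0, hi=5.0, max_evals=5000)

Starting from a single particle keeps early function evaluations cheap, and growing the swarm injects diversity gradually as the search proceeds; loosely, this is the motivation behind the incremental approach for large-scale problems.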

Keywords

Parameter tuning · Large-scale continuous optimization · Incremental particle swarms · Particle swarm optimization · Local search


Acknowledgments

The work described in this paper was supported by the META-X project, an Action de Recherche Concertée funded by the Scientific Research Directorate of the French Community of Belgium. Thomas Stützle acknowledges support from the F.R.S.-FNRS of the French Community of Belgium, of which he is a Research Associate. The authors thank Manuel López-Ibáñez for adapting the code of iterated F-race to deal with the tuning task studied in this paper.


Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  • Marco A. Montes de Oca (1)
  • Doğan Aydın (2)
  • Thomas Stützle (1)

  1. IRIDIA, CoDE, Université Libre de Bruxelles, Brussels, Belgium
  2. Department of Computer Engineering, Ege University, Izmir, Turkey
