Experimental Analysis of Optimization Algorithms: Tuning and Beyond

  • Thomas Bartz-Beielstein
  • Mike Preuss
Chapter
Part of the Natural Computing Series (NCS)

Abstract

This chapter comprises the essence of several years of tutorials the authors have given on experimental research in evolutionary computation. We highlight the renaissance of experimental techniques in other fields as well, in order to focus on the specific conditions of experimental research in computer science or, more concretely, in metaheuristic optimization. The experimental setup is discussed together with the pitfalls awaiting the inexperienced (and sometimes even the experienced). We present a severity criterion as a meta-statistical concept for evaluating statistical inferences; it can be used to avoid fallacies, i.e., misconceptions resulting from incorrect reasoning, such as the ones caused by floor or ceiling effects. Sequential parameter optimization is discussed as a meta-statistical framework that integrates concepts such as severity. Parameter tuning is considered as a relatively new tool in method design and analysis, and it leads to the question of the adaptability of optimization algorithms. Another branch of experimentation aims at attaining more concrete problem knowledge; we may term it “exploratory landscape analysis”. It comprises sampling and visualization techniques that are often applied but rarely regarded as methodological contributions in their own right. This chapter is, however, not merely a retelling of well-known facts: we also attempt to look into the future, estimating what the hot topics of methodological research will be in the coming years and what changes the community as a whole may expect.
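To make the severity idea concrete, here is a minimal sketch in Python of a post-test severity assessment in the spirit of Mayo and Spanos. The setting and all names are our illustrative assumptions, not code from the chapter: n paired performance differences between two optimizers, a one-sided test of H0 "mean difference <= 0", and a known standard deviation sigma.

```python
# Minimal severity sketch (our assumptions, not the chapter's code):
# after observing a sample mean x_bar of n performance differences,
# SEV("true mean > mu1") is the probability that a *less* extreme
# result would have occurred if the true mean were only mu1.
import numpy as np
from scipy.stats import norm

def severity(x_bar, mu1, sigma, n):
    return norm.cdf((x_bar - mu1) / (sigma / np.sqrt(n)))

# Hypothetical example: observed mean improvement 0.5 over n = 50 runs.
for mu1 in (0.0, 0.25, 0.5):
    print(mu1, round(severity(x_bar=0.5, mu1=mu1, sigma=2.0, n=50), 3))
# -> 0.962, 0.812, 0.5: the modest claim "some improvement" passes
#    severely, while severity decays toward 0.5 as mu1 approaches the
#    observed mean, warning against overstated effect sizes.
```

The sequential parameter optimization loop named above can be sketched just as briefly: an initial space-filling design, a surrogate model fitted to (parameter, performance) pairs, and a sequentially chosen next design point. This is a schematic stand-in under our own assumptions, not the SPOT implementation:

```python
# Schematic SPO-style tuning loop (a sketch, not the SPOT package).
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def run_algorithm(params, rng):
    # Stand-in for one run of a stochastic optimizer; a hypothetical
    # quadratic performance landscape plus noise (smaller is better).
    return float(np.sum((params - 0.3) ** 2) + 0.05 * rng.standard_normal())

rng = np.random.default_rng(1)
dim, budget = 2, 30
X = qmc.LatinHypercube(d=dim, seed=1).random(n=10)   # initial design
y = np.array([run_algorithm(x, rng) for x in X])

while len(X) < budget:
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.random((200, dim))                    # random candidates
    mu, sd = gp.predict(cand, return_std=True)
    x_new = cand[np.argmin(mu - sd)]                 # crude confidence bound
    X = np.vstack([X, x_new])
    y = np.append(y, run_algorithm(x_new, rng))

print("best parameters found:", X[np.argmin(y)])
```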

Keywords

Interactive Approach · Novelty Detection · Gaussian Process Model · Tuning Procedure · Problem Knowledge

Notes

Acknowledgements

This work was supported by the Bundesministerium für Bildung und Forschung (BMBF) under the grants FIWA (AIF FKZ 17N2309), MCIOP (AIF FKZ 17N0311), and by the Cologne University of Applied Sciences under the research focus grant COSA.


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Faculty of Computer Science and Engineering Science, Institute of Computer Science, Cologne University of Applied Sciences, Cologne, Germany
  2. Algorithm Engineering, Department of Computer Science, TU Dortmund, Dortmund, Germany
