The Future of Experimental Research
In the experimental analysis of metaheuristic methods, two issues are still not treated sufficiently. Firstly, the performance of algorithms depends on their parametrizations and on the parametrizations of the problem instances. These dependencies, however, can serve as a means for understanding an algorithm's behavior. Secondly, the nondeterminism of evolutionary and other metaheuristic methods yields result distributions rather than single numbers.
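To make the second point concrete, the following minimal sketch contrasts two configurations of a stochastic search by their result distributions. Everything here is illustrative: `run_optimizer` is a hypothetical stub (a simple (1+1)-style random search on the sphere function) standing in for a full metaheuristic run, and a rank-based Mann-Whitney test compares the two empirical distributions rather than two single best values.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

def run_optimizer(step_size: float) -> float:
    # Hypothetical stub: one (1+1)-style random-search run on the sphere
    # function; in practice this would be a full metaheuristic run.
    x = rng.normal(size=10)
    for _ in range(200):
        y = x + step_size * rng.normal(size=10)
        if np.sum(y**2) < np.sum(x**2):
            x = y
    return float(np.sum(x**2))

# Repeated runs per configuration give empirical result distributions.
results_a = [run_optimizer(step_size=0.1) for _ in range(30)]
results_b = [run_optimizer(step_size=0.5) for _ in range(30)]

# Compare the two distributions with a rank-based test instead of
# comparing two single numbers.
stat, p = mannwhitneyu(results_a, results_b, alternative="two-sided")
print(f"median A = {np.median(results_a):.3g}, "
      f"median B = {np.median(results_b):.3g}, p = {p:.3g}")
```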
Drawing on experience from several tutorials on this subject, we provide a comprehensive, effective, and very efficient methodology for the design and experimental analysis of metaheuristics such as evolutionary algorithms. We rely on modern statistical techniques for tuning and understanding algorithms from an experimental perspective. To this end, we employ the sequential parameter optimization (SPO) method, which has been applied successfully as a tuning procedure to numerous heuristics for practical and theoretical optimization problems.
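The sketch below illustrates the sequential model-based tuning loop that underlies approaches such as SPO: evaluate an initial design, fit a surrogate model, and repeatedly evaluate the most promising configuration. It is an illustrative sketch under stated assumptions, not the SPO implementation; `noisy_performance` is a hypothetical stand-in for the tuned algorithm's stochastic performance measure, and the Gaussian-process surrogate with a simple lower-confidence-bound criterion is only one of several possible modeling choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def noisy_performance(param: float) -> float:
    # Hypothetical tuning target: best setting near param = 0.3,
    # plus run-to-run noise from the stochastic algorithm.
    return (param - 0.3) ** 2 + 0.01 * rng.normal()

# Initial design: a small random sample over the parameter range [0, 1].
X = rng.uniform(0.0, 1.0, size=(8, 1))
y = np.array([noisy_performance(p) for p in X.ravel()])

for _ in range(10):
    # Fit a kriging-style surrogate model to all observations so far.
    model = GaussianProcessRegressor(alpha=1e-2, normalize_y=True).fit(X, y)
    # Score dense candidate settings by a lower confidence bound
    # (predicted mean minus predicted standard deviation).
    cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
    mean, std = model.predict(cand, return_std=True)
    best = cand[np.argmin(mean - std), 0]
    # Evaluate the proposed configuration and augment the design.
    X = np.vstack([X, [[best]]])
    y = np.append(y, noisy_performance(best))

print(f"best observed parameter: {X[np.argmin(y), 0]:.3f}")
```

Because each evaluation is noisy, practical tuners reevaluate promising configurations several times; the repeated-runs-as-distributions view from above applies to the tuning loop as well.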