The Future of Experimental Research

Abstract

In the experimental analysis of metaheuristic methods, two issues are still not treated sufficiently. First, the performance of an algorithm depends on its parametrization and on the parametrization of the problem instances it is applied to. These dependencies, however, can also be seen as a means for understanding an algorithm's behavior. Second, the nondeterminism of evolutionary and other metaheuristic methods yields result distributions rather than single numbers.
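
To make the second point concrete, the following minimal sketch (not taken from the chapter; the algorithm, benchmark, and all settings are illustrative assumptions) runs a basic (1+1) evolution strategy on the sphere function under 30 random seeds and summarizes the resulting distribution of final objective values instead of quoting a single run.

```python
# Minimal sketch: a stochastic metaheuristic produces a *distribution* of results,
# so report quantiles over seeds rather than one number. All settings are illustrative.
import random
import statistics


def sphere(x):
    """Sum of squares; a standard toy benchmark."""
    return sum(xi * xi for xi in x)


def one_plus_one_es(seed, dim=10, budget=2000, step_size=0.5):
    """Very basic (1+1)-ES with constant step size; returns the best value found."""
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    best = sphere(parent)
    for _ in range(budget):
        child = [xi + rng.gauss(0, step_size) for xi in parent]
        f_child = sphere(child)
        if f_child <= best:  # accept improvements (and ties)
            parent, best = child, f_child
    return best


if __name__ == "__main__":
    results = sorted(one_plus_one_es(seed) for seed in range(30))
    # Describe the result distribution: median and quartiles over the 30 runs.
    print("median:", statistics.median(results))
    print("quartiles:", statistics.quantiles(results, n=4))
```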

Based on the experience of several tutorials on the matter, we provide a comprehensive, effective, and efficient methodology for the design and experimental analysis of metaheuristics such as evolutionary algorithms. We rely on modern statistical techniques for tuning and understanding algorithms from an experimental perspective. To this end, we make use of the sequential parameter optimization (SPO) method, which has been successfully applied as a tuning procedure to numerous heuristics for practical and theoretical optimization problems.
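
The sketch below only illustrates the sequential flavour of such a tuning procedure; it is not the authors' SPO implementation, which relies on space-filling designs and Kriging surrogate models. Here, a small initial design of parameter values is evaluated with repeated runs, a cheap distance-weighted surrogate predicts the performance of new candidates, and the most promising candidate is evaluated next. The function run_algorithm and all numerical settings are hypothetical placeholders.

```python
# Minimal SPO-style sketch (an assumption, not the chapter's method): sequentially
# refine a single parameter p in [0, 1] using repeated runs and a cheap surrogate.
import random
import statistics

rng = random.Random(1)


def run_algorithm(p, seed):
    """Hypothetical noisy performance of an algorithm with parameter p (smaller is better)."""
    noise = random.Random(seed).gauss(0, 0.05)
    return (p - 0.3) ** 2 + noise  # stand-in; optimum near p = 0.3


def mean_performance(p, repeats, seed_offset):
    """Average performance over several seeded runs of one configuration."""
    return statistics.fmean([run_algorithm(p, seed_offset + i) for i in range(repeats)])


def predict(p, archive, k=3):
    """Distance-weighted k-nearest-neighbour surrogate over evaluated configurations."""
    nearest = sorted(archive, key=lambda entry: abs(entry[0] - p))[:k]
    weights = [1.0 / (abs(q - p) + 1e-9) for q, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)


if __name__ == "__main__":
    # Initial design: a few spread-out parameter values, each evaluated with 5 runs.
    archive = [(p, mean_performance(p, repeats=5, seed_offset=int(p * 1000)))
               for p in (0.1, 0.5, 0.9)]
    for step in range(10):
        # Propose random candidates and evaluate the one the surrogate rates best.
        candidates = [rng.random() for _ in range(50)]
        best_candidate = min(candidates, key=lambda c: predict(c, archive))
        y = mean_performance(best_candidate, repeats=5, seed_offset=1000 + step)
        archive.append((best_candidate, y))
    best_p, best_y = min(archive, key=lambda entry: entry[1])
    print(f"best parameter {best_p:.3f} with mean performance {best_y:.4f}")
```

In a real study, run_algorithm would wrap actual runs of the metaheuristic under test, and the design size, surrogate model, and number of repeats would be chosen far more carefully.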

Authors and Affiliations

  1. Institute of Computer Science, Cologne University of Applied Sciences, Gummersbach, Germany
  2. Algorithm Engineering, TU Dortmund, Dortmund, Germany
