Statistical Racing Techniques for Improved Empirical Evaluation of Evolutionary Algorithms

  • Bo Yuan
  • Marcus Gallagher
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3242)

Abstract

In empirical studies of evolutionary algorithms, it is usually desirable to evaluate and compare algorithms across as many parameter settings and test problems as possible, in order to obtain a clear and detailed picture of their performance. Unfortunately, the total number of experiments required can be very large, which often makes such studies computationally prohibitive. In this paper, a statistical method called racing is proposed as a general-purpose tool for reducing the computational requirements of large-scale experimental studies of evolutionary algorithms. Experimental results show that racing typically requires only a small fraction of the cost of an exhaustive experimental study.
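
To illustrate the racing idea in concrete terms, the sketch below follows the Hoeffding-race scheme of Maron and Moore: candidate configurations are evaluated on successive independent trials, and any candidate whose confidence interval no longer overlaps that of the current best is dropped from the race. This is a minimal sketch only; the function names (hoeffding_race, evaluate, run_ga), the parameter values, and the use of the Hoeffding bound rather than a non-parametric test are illustrative assumptions, not the exact procedure of the paper.

    import math

    def hoeffding_race(candidates, evaluate, max_trials=200, delta=0.05, score_range=1.0):
        """Race a set of candidate configurations, discarding those that are
        statistically unlikely to have the lowest mean score.

        `evaluate(candidate, trial)` is assumed to return a score bounded in a
        range of width `score_range`; lower scores are assumed to be better.
        """
        remaining = list(candidates)
        sums = {c: 0.0 for c in remaining}

        for n in range(1, max_trials + 1):
            # One more independent trial (e.g. one run on a fresh problem
            # instance) for every candidate still in the race.
            for c in remaining:
                sums[c] += evaluate(c, n)

            # Hoeffding confidence half-width after n samples.
            eps = score_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

            means = {c: sums[c] / n for c in remaining}
            best_upper = min(means.values()) + eps

            # Drop any candidate whose lower bound already exceeds the upper
            # bound of the current best: it cannot plausibly be the winner.
            remaining = [c for c in remaining if means[c] - eps <= best_upper]

            if len(remaining) == 1:
                break
        return remaining

    # Hypothetical use: race four mutation rates of a GA on a noisy benchmark,
    # where run_ga is assumed to return a normalised error in [0, 1].
    # survivors = hoeffding_race([0.001, 0.01, 0.05, 0.1],
    #                            evaluate=lambda rate, seed: run_ga(rate, seed))

The saving comes from the elimination step: poor configurations receive only a handful of trials before being discarded, so the full trial budget is spent only on the few candidates that remain statistically indistinguishable.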


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Bo Yuan (1)
  • Marcus Gallagher (1)
  1. School of Information Technology and Electrical Engineering, University of Queensland, Australia