SRCS: A Technique for Comparing Multiple Algorithms under Several Factors in Dynamic Optimization Problems

Part of the Studies in Computational Intelligence book series (SCI, volume 433)

Abstract

Performance comparison among several algorithms is an essential task. It is already a difficult process for stationary problems, where the researcher typically tests many algorithms, each with several parameter settings, on different problems. The situation becomes even more complex when dynamic optimization problems are considered, since additional dynamism-specific configurations must also be analyzed (e.g. the severity, frequency and type of the changes). In this work, we present a technique to compact those results in a visual way, improving their understanding and providing an easy means of detecting algorithms’ behavioral patterns. However, as with every form of compression, it implies the loss of part of the information. The pros and cons of this technique are explained, with special emphasis on some statistical issues that commonly arise when dealing with algorithms of a stochastic nature.
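To make the kind of comparison described above concrete, the sketch below illustrates how several algorithms could be ranked on a single problem configuration using pairwise non-parametric tests over independent repetitions. It is not taken from the chapter: the function name pairwise_ranks, the ±1 ranking scheme and the significance level are illustrative assumptions, intended only to show the sort of per-configuration ranking that a visual summary of this kind could be built on.

    # Illustrative sketch (not the chapter's exact SRCS procedure): rank several
    # algorithms on one problem configuration via pairwise non-parametric tests.
    import numpy as np
    from scipy.stats import mannwhitneyu

    def pairwise_ranks(results, alpha=0.05):
        """results maps an algorithm name to a 1-D array of errors obtained from
        independent repetitions on one problem configuration.  The returned rank
        of an algorithm is (#rivals it significantly beats) - (#rivals that beat it)."""
        algs = list(results)
        rank = {a: 0 for a in algs}
        for i, a in enumerate(algs):
            for b in algs[i + 1:]:
                # Two-sided Mann-Whitney U test between the two samples of errors.
                _, p = mannwhitneyu(results[a], results[b], alternative="two-sided")
                if p < alpha:
                    # Lower error is assumed to be better; adjust both ranks.
                    if np.median(results[a]) < np.median(results[b]):
                        rank[a] += 1; rank[b] -= 1
                    else:
                        rank[b] += 1; rank[a] -= 1
        return rank

    # Three hypothetical algorithms with 30 independent repetitions each.
    rng = np.random.default_rng(0)
    runs = {"A": rng.normal(1.0, 0.2, 30),
            "B": rng.normal(1.0, 0.2, 30),
            "C": rng.normal(1.6, 0.2, 30)}
    print(pairwise_ranks(runs))   # ranks for this single configuration

Repeating such a ranking over every combination of factors (e.g. severity, frequency and type of change) would yield a grid of ranks that can be rendered as a colour-coded image, which is the kind of compact visual summary the abstract refers to.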

Keywords

Dynamic Optimization Problem · Global Difference · Stochastic Local Search · Independent Repetition · Multiple Algorithm



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Models of Decision and Optimization Research Group (MODO), Dept. of Computer Sciences and Artificial Intelligence, University of Granada, I.C.T. Research Centre (CITIC-UGR), Granada, Spain
