Deep Statistical Comparison Applied on Quality Indicators to Compare Multi-objective Stochastic Optimization Algorithms

  • Tome Eftimov
  • Peter Korošec (corresponding author)
  • Barbara Koroušić Seljak
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10710)

Abstract

This paper presents a study of how the performance of multi-objective stochastic optimization algorithms can be compared using quality indicators and the Deep Statistical Comparison (DSC) approach. DSC is a recently proposed approach for the statistical comparison of meta-heuristic stochastic optimization algorithms on single-objective problems. Its main contribution is a ranking scheme based on the whole distribution of results, rather than on a single statistic such as the average or the median. Experiments with six multi-objective stochastic optimization algorithms on 16 test problems show that DSC gives more robust results than the standard statistical approaches recommended for comparing multi-objective stochastic optimization algorithms according to a quality indicator.
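
To make the distribution-based ranking idea concrete, the sketch below ranks several algorithms on a single test problem from samples of a quality indicator (e.g., hypervolume values over independent runs). It is a minimal illustration under stated assumptions, not the authors' exact procedure: the two-sample Kolmogorov-Smirnov test, the significance level, the median-based ordering, and the function name dsc_style_ranks are choices made here for the example, and the published DSC method additionally checks that the pairwise "statistically indistinguishable" relation is consistent before assigning tied ranks.

    # Minimal sketch (assumed, not the authors' code): algorithms whose
    # quality-indicator distributions a two-sample test cannot distinguish
    # share an averaged (tied) rank; otherwise their medians order them.
    # Assumes smaller indicator values are better (flip signs otherwise).
    import numpy as np
    from scipy import stats

    def dsc_style_ranks(samples, alpha=0.05):
        """samples: one 1-D array per algorithm (indicator values over
        runs on a single problem). Returns fractional ranks."""
        k = len(samples)
        same = np.eye(k, dtype=bool)  # i ~ j if distributions look equal
        for i in range(k):
            for j in range(i + 1, k):
                _, p = stats.ks_2samp(samples[i], samples[j])
                same[i, j] = same[j, i] = p >= alpha  # cannot reject equality
        # Base ranks follow the medians; tied groups get the group average.
        order = np.argsort([np.median(s) for s in samples])
        base = {alg: rank for rank, alg in enumerate(order, start=1)}
        return np.array([np.mean([base[j] for j in range(k) if same[i, j]])
                         for i in range(k)])

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1, 50)
    b = rng.normal(0.1, 1, 50)  # statistically indistinguishable from a
    c = rng.normal(2.0, 1, 50)  # clearly different distribution
    print(dsc_style_ranks([a, b, c]))  # e.g. [1.5, 1.5, 3.0]

The per-problem ranks produced this way can then be fed into a standard omnibus test across problems; the point of ranking on whole distributions is that two algorithms whose medians differ only by chance are not artificially separated.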

Keywords

Multi-objective optimization · Quality indicators · Deep statistical comparison · Single problem analysis · Multiple problem analysis

Acknowledgments

This work was supported by the ISO-FOOD project, which received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement No. 621329 (2014–2019), and by a project funded by the Slovenian Research Agency (research core funding No. L3-7538). We would like to thank Dr. Tea Tušar from the Department of Intelligent Systems at the Jožef Stefan Institute for providing the data used in the experiments, which is also available on her website.

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  • Tome Eftimov (1, 2)
  • Peter Korošec (1, 3), corresponding author
  • Barbara Koroušić Seljak (1)

  1. Computer Systems Department, Jožef Stefan Institute, Ljubljana, Slovenia
  2. Jožef Stefan International Postgraduate School, Ljubljana, Slovenia
  3. Faculty of Mathematics, Natural Sciences and Information Technologies, Koper, Slovenia
