Benchmarking

Part of the book series: Natural Computing Series (NCS)

Abstract

The evaluation and analysis of optimisation algorithms through benchmarks is an important aspect of research in evolutionary computation. This is especially true in the context of many-objective optimisation, where the complexity of the problems usually makes theoretical analysis difficult. However, suitable benchmarking problems are lacking in many research areas within evolutionary computation, for example optimisation under noise or with constraints. Common benchmarking practice also suffers from several open issues, for instance related to reproducibility and the interpretation of results. In this book chapter, we discuss these issues specifically for multi- and many-objective optimisation (MMO). We first provide an overview of existing MMO benchmarks and find that, besides being few in number and limited in diversity, they need improvement in terms of ease of use and in the ability to characterise and describe benchmarking functions. In addition, we provide a concise list of common pitfalls to look out for when using benchmarks, along with suggestions on how to avoid them. This part of the chapter is intended as a guide to help improve the usability of benchmarking results in the future.
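
To make the benchmarking workflow discussed in this chapter concrete, the sketch below runs a standard evolutionary multi-objective algorithm on a widely used MMO test problem and reports a quality indicator over several independent seeds. It is a minimal illustration only, assuming the open-source pymoo library (the chapter itself does not prescribe any particular toolkit); the problem, algorithm, budget, number of seeds, and reference point are all arbitrary choices made for demonstration.

```python
# Minimal MMO benchmarking sketch (assumes the third-party pymoo library).
# It illustrates three practices related to the pitfalls discussed in this
# chapter: a fixed evaluation budget, repeated runs with distinct seeds,
# and an explicitly stated reference point for the hypervolume indicator.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.indicators.hv import HV
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("dtlz2")  # 3-objective DTLZ2 test problem
indicator = HV(ref_point=np.array([1.2, 1.2, 1.2]))  # report the reference point

hypervolumes = []
for seed in range(11):  # repeated runs: report a distribution, not one value
    result = minimize(
        problem,
        NSGA2(pop_size=100),
        ("n_gen", 200),   # fixed budget so runs are comparable
        seed=seed,        # explicit seed for reproducibility
        verbose=False,
    )
    hypervolumes.append(indicator(result.F))

print(f"hypervolume: median={np.median(hypervolumes):.4f}, "
      f"IQR={np.subtract(*np.percentile(hypervolumes, [75, 25])):.4f}")
```

Reporting a median and spread over independent seeds, rather than a single best run, is in line with the kind of benchmarking practice the pitfall discussion in this chapter aims to encourage.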

Notes

  1. Since no future competition is planned, the problems used have since been made publicly available.

Author information

Correspondence to Vanessa Volz.

Copyright information

© 2023 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Volz, V., Irawan, D., van der Blom, K., Naujoks, B. (2023). Benchmarking. In: Brockhoff, D., Emmerich, M., Naujoks, B., Purshouse, R. (eds) Many-Criteria Optimization and Decision Analysis. Natural Computing Series. Springer, Cham. https://doi.org/10.1007/978-3-031-25263-1_6

  • DOI: https://doi.org/10.1007/978-3-031-25263-1_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25262-4

  • Online ISBN: 978-3-031-25263-1

  • eBook Packages: Computer Science (R0)
