Quantitative Performance Assessment of Multiobjective Optimizers: The Average Runtime Attainment Function

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10173)


Numerical benchmarking of multiobjective optimization algorithms is an important task, needed to understand and recommend algorithms. So far, two main approaches to assessing algorithm performance have been pursued: set quality indicators, and the (empirical) attainment function and its higher-order moments as a generalization of empirical cumulative distributions of function values. Both approaches have their advantages, but they either rely on the choice of a quality indicator or take into account only the location of the resulting solution sets, not when certain regions of the objective space are attained. In this paper, we propose the average runtime attainment function as a quantitative measure of the performance of a multiobjective algorithm. It estimates, for any point in the objective space, the expected runtime to find a solution that weakly dominates this point. After defining the average runtime attainment function and detailing its relation to the (empirical) attainment function, we illustrate how the average runtime attainment function plot displays algorithm performance (and differences in performance) for some algorithms that have been previously run on the biobjective bbob-biobj test suite of the COCO platform.
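The core quantity described above can be illustrated with a small sketch. The code below is a hypothetical, minimal implementation (not the paper's or COCO's actual code) of the average-runtime estimate at a single target point z: each run is assumed to be recorded as a list of `(evaluation_count, objective_vector)` pairs, and the average runtime is the total number of evaluations spent over all runs divided by the number of runs that found a solution weakly dominating z.

```python
def weakly_dominates(a, b):
    """True if objective vector a weakly dominates b (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b))

def average_runtime(runs, z):
    """Estimate the average runtime to attain target point z.

    runs: list of runs; each run is a list of (evaluations, objectives)
          pairs in the order the solutions were produced.
    Returns total evaluations over all runs divided by the number of
    successful runs (unsuccessful runs contribute their full budget),
    or float('inf') if no run attains z.
    """
    successes = 0
    total_evals = 0
    for run in runs:
        attained = None
        for evals, f in run:
            if weakly_dominates(f, z):
                attained = evals
                break
        if attained is not None:
            successes += 1
            total_evals += attained
        elif run:
            # unsuccessful run: count its full evaluation budget
            total_evals += run[-1][0]
    return total_evals / successes if successes else float('inf')

# Hypothetical data: two runs on a biobjective problem.
runs = [
    [(10, (3.0, 4.0)), (50, (1.0, 2.0))],
    [(20, (2.5, 2.5)), (80, (0.5, 3.0))],
]
print(average_runtime(runs, (1.5, 2.5)))  # only run 1 succeeds: (50 + 80) / 1 = 130.0
print(average_runtime(runs, (3.0, 4.0)))  # both succeed: (10 + 20) / 2 = 15.0
```

Evaluating this estimate on a grid of target points z and color-coding the values yields the kind of average runtime attainment function plot the paper describes.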



The authors acknowledge the support of the French National Research Agency (ANR) within the Modèles Numérique project “NumBBO – Analysis, Improvement and Evaluation of Numerical Blackbox Optimizers” (ANR-12-MONU-0009). In addition, this work is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 692286. This work was also partially funded by the Slovenian Research Agency under research program P2-0209. Finally, we thank the anonymous reviewers for their valuable comments.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Inria Saclay – Ile-de-France and CMAP, UMR CNRS 7641, Ecole Polytechnique, Palaiseau, France
  2. Department of Intelligent Systems, Jožef Stefan Institute, Ljubljana, Slovenia