Generic Postprocessing via Subset Selection for Hypervolume and Epsilon-Indicator

  • Karl Bringmann
  • Tobias Friedrich
  • Patrick Klitzke
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8672)


Most biobjective evolutionary algorithms maintain a population of fixed size μ and return the final population at termination. During the optimization process many solutions are evaluated, but most are discarded. We present two generic postprocessing algorithms which utilize the archive of all non-dominated solutions encountered during the search. We choose the best μ solutions from the archive such that the hypervolume is maximized or the ε-indicator is minimized. This postprocessing costs no additional fitness function evaluations and has negligible runtime compared to most evolutionary multiobjective optimization algorithms (EMOAs).
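The subset-selection step can be sketched as follows. This is a hedged illustration, not the authors' algorithm: the paper selects an *optimal* size-μ subset, whereas this sketch uses a simple greedy heuristic for the two-objective hypervolume case under minimization. The names `hypervolume_2d` and `greedy_subset` are illustrative, not from the paper.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by 2-D points (minimization) w.r.t. a
    reference point ref, computed by sweeping in ascending first objective."""
    hv = 0.0
    prev_y = ref[1]
    for x, y in sorted(points):
        if y < prev_y:                          # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)   # add the new slab of area
            prev_y = y
    return hv

def greedy_subset(archive, mu, ref):
    """Greedily pick mu archive points, each time adding the point with
    the largest hypervolume gain (a heuristic, not the optimal subset)."""
    chosen = []
    remaining = list(archive)
    while remaining and len(chosen) < mu:
        best = max(remaining, key=lambda p: hypervolume_2d(chosen + [p], ref))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because the whole archive is already evaluated, a step like this consumes no further fitness evaluations; only the subset-selection time is added.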

We experimentally examine our postprocessing for four standard algorithms (NSGA-II, SPEA2, SMS-EMOA, IBEA) on ten standard test functions (DTLZ 1–2,7, ZDT 1–3, WFG 3–6) and measure the average quality improvement. The median decrease of the distance to the optimal ε-indicator is 95%, the median decrease of the distance to the optimal hypervolume value is 86%. We observe similar performance on a real-world problem (wind turbine placement).
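For reference, the additive ε-indicator used in such evaluations measures the smallest ε by which an approximation set must be shifted so that every point of a reference front is weakly dominated (minimization; smaller is better). A minimal sketch, assuming plain tuples of objective values; the name `additive_epsilon` is illustrative:

```python
def additive_epsilon(approx, front):
    """Additive epsilon-indicator of approx w.r.t. a reference front
    (minimization): the largest, over reference points f, of the smallest
    uniform shift that lets some approximation point a weakly dominate f."""
    return max(
        min(max(a_i - f_i for a_i, f_i in zip(a, f)) for a in approx)
        for f in front
    )
```

An approximation set equal to the reference front yields an indicator value of 0.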


Keywords: Wind Turbine · Pareto Front · Evolutionary Computation · Multiobjective Optimization · Subset Selection




References

  1. Auger, A., Bader, J., Brockhoff, D., Zitzler, E.: Investigating and exploiting the bias of the weighted hypervolume to articulate user preferences. In: Genetic and Evolutionary Computation Conference, GECCO 2009, pp. 563–570 (2009)
  2. Bringmann, K., Friedrich, T.: Approximating the volume of unions and intersections of high-dimensional geometric objects. Computational Geometry: Theory and Applications 43, 601–610 (2010)
  3. Bringmann, K., Friedrich, T.: Convergence of hypervolume-based archiving algorithms II: Competitiveness. In: Genetic and Evolutionary Computation Conference, GECCO 2012, pp. 457–464 (2012)
  4. Bringmann, K., Friedrich, T.: Parameterized average-case complexity of the hypervolume indicator. In: Genetic and Evolutionary Computation Conference, GECCO 2013, pp. 575–582 (2013)
  5. Bringmann, K., Friedrich, T., Neumann, F., Wagner, M.: Approximation-guided evolutionary multi-objective optimization. In: 22nd International Joint Conference on Artificial Intelligence, IJCAI 2011, pp. 1198–1203. IJCAI/AAAI (2011)
  6. Bringmann, K., Friedrich, T., Klitzke, P.: Two-dimensional subset selection for hypervolume and epsilon-indicator. In: Genetic and Evolutionary Computation Conference, GECCO (2014)
  7. Brockhoff, D., Friedrich, T., Hebbinghaus, N., Klein, C., Neumann, F., Zitzler, E.: On the effects of adding objectives to plateau functions. IEEE Trans. Evolutionary Computation 13(3), 591–603 (2009)
  8. Deb, K., Pratap, A., Agrawal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evolutionary Computation 6(2), 182–197 (2002)
  9. Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable test problems for evolutionary multiobjective optimization. In: Evolutionary Multiobjective Optimization, Advanced Information and Knowledge Processing, pp. 105–145 (2005)
  10. Durillo, J.J., Nebro, A.J., Alba, E.: The jMetal framework for multi-objective optimization: Design and architecture. In: IEEE Congress on Evolutionary Computation, CEC 2010, pp. 4138–4325 (2010)
  11. Ehrgott, M.: Multicriteria Optimization, 2nd edn. Springer (2005)
  12. Emmerich, M.T.M., Beume, N., Naujoks, B.: An EMO algorithm using the hypervolume measure as selection criterion. In: 3rd International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, pp. 62–76 (2005)
  13. Friedrich, T., He, J., Hebbinghaus, N., Neumann, F., Witt, C.: Approximating covering problems by randomized search heuristics using multi-objective models. Evolutionary Computation 18(4), 617–633 (2010)
  14. Friedrich, T., Hebbinghaus, N., Neumann, F.: Plateaus can be harder in multi-objective optimization. Theoretical Computer Science 411(6), 854–864 (2010)
  15. Giel, O.: Expected runtimes of a simple multi-objective evolutionary algorithm. In: IEEE Congress on Evolutionary Computation, CEC 2003, pp. 1918–1925 (2003)
  16. Giel, O., Lehre, P.K.: On the effect of populations in evolutionary multi-objective optimisation. Evolutionary Computation 18(3), 335–356 (2010)
  17. Glasmachers, T.: Optimized approximation sets of low-dimensional benchmark Pareto fronts. In: Bartz-Beielstein, T., et al. (eds.) PPSN XIII 2014. LNCS, vol. 8672, pp. 569–578. Springer, Heidelberg (2014)
  18. Huband, S., Barone, L., While, R.L., Hingston, P.: A scalable multi-objective test problem toolkit. In: 3rd International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, pp. 280–295 (2005)
  19. Ponte, A., Paquete, L., Figueira, J.R.: On beam search for multicriteria combinatorial optimization problems. In: 9th International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, CPAIOR 2012, pp. 307–321 (2012)
  20. Tran, R., Wu, J., Denison, C., Ackling, T., Wagner, M., Neumann, F.: Fast and effective multi-objective optimisation of wind turbine placement. In: Genetic and Evolutionary Computation Conference, GECCO 2013, pp. 1381–1388 (2013)
  21. Vaz, D., Paquete, L., Ponte, A.: A note on the ε-indicator subset selection. Theoretical Computer Science 499, 113–116 (2013)
  22. Wagner, M., Day, J., Neumann, F.: A fast and effective local search algorithm for optimizing the placement of wind turbines. Renewable Energy 51, 64–70 (2013)
  23. Zitzler, E., Künzli, S.: Indicator-based selection in multiobjective search. In: Yao, X., et al. (eds.) PPSN VIII. LNCS, vol. 3242, pp. 832–842. Springer, Heidelberg (2004)
  24. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evolutionary Computation 3, 257–271 (1999)
  25. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation 8(2), 173–195 (2000)
  26. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization. In: Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems, EUROGEN 2001, pp. 95–100 (2002)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Karl Bringmann, Max Planck Institute for Informatics, Saarbrücken, Germany
  • Tobias Friedrich, Friedrich-Schiller-Universität Jena, Jena, Germany
  • Patrick Klitzke, Universität des Saarlandes, Saarbrücken, Germany
