Time Complexity and Zeros of the Hypervolume Indicator Gradient Field

  • Michael Emmerich
  • André Deutz
Part of the Studies in Computational Intelligence book series (SCI, volume 500)

Abstract

In multi-objective optimization the hypervolume indicator is a measure of the size of the space within a reference set that is dominated by a set of μ points. It is a common performance indicator for judging the quality of Pareto front approximations. As it does not require a priori knowledge of the Pareto front, it can also be used in a straightforward manner to guide the search for finite approximations to the Pareto front in multi-objective optimization algorithm design.
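
For concreteness, one standard way to formalize this indicator (a sketch consistent with the description above; the chapter's own notation may differ slightly) is, for a maximization problem with m objectives and a reference point \(\mathbf{r}\) dominated by all points,
\[
\mathrm{HV}(Y;\mathbf{r}) \;=\; \lambda_m\Bigl(\bigcup_{\mathbf{y}\in Y} [\mathbf{r},\mathbf{y}]\Bigr), \qquad [\mathbf{r},\mathbf{y}] = \{\mathbf{z}\in\mathbb{R}^m : r_k \le z_k \le y_k,\ k=1,\dots,m\},
\]
where \(Y=\{\mathbf{y}^{(1)},\dots,\mathbf{y}^{(\mu)}\}\) is the image of the approximation set in objective space and \(\lambda_m\) denotes the m-dimensional Lebesgue measure.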

In this paper we discuss properties of the gradient of the hypervolume indicator at vectors that represent approximation sets to the Pareto front. We describe an expression that relates this gradient to the objective function values of the solutions in the approximation set and their partial derivatives, valid for an arbitrary number of objectives m ≥ 2, together with an algorithm that computes the gradient field efficiently from this information. We show that in the bi-objective and tri-objective case these algorithms are asymptotically optimal, with time complexity in \(\Theta(\mu d + \mu \log \mu)\), where d is the dimension of the search space and μ is the number of points in the approximation set. For the case of four objective functions the time complexity is shown to be in \(\mathcal{O}(\mu d + \mu^2)\). The tight computation schemes reveal fundamental structural properties of the gradient field that can be used to identify its zeros. This paves the way for the formulation of stopping conditions and of candidates for optimal approximation sets in multi-objective optimization.
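
To illustrate the structure behind the \(\Theta(\mu d + \mu \log \mu)\) bound in the bi-objective case, the following minimal sketch combines the staircase geometry of the dominated region with the chain rule. It is our own illustration under stated assumptions (maximization, mutually non-dominated points that strictly dominate the reference point), not the algorithm from the chapter, and all names (hypervolume_gradient_2d, jac, and so on) are hypothetical.

```python
import numpy as np

def hypervolume_gradient_2d(X, F, jac, ref):
    """Sketch: gradient of the 2-D hypervolume indicator w.r.t. all decision
    vectors, for maximization of f = (f1, f2).

    X   : (mu, d) array of decision vectors
    F   : (mu, 2) array of objective vectors, F[i] = f(X[i])
    jac : callable, jac(x) -> (2, d) Jacobian of f at x
    ref : (2,) reference point, assumed dominated by every F[i]

    Assumes the points in F are mutually non-dominated.
    Returns a (mu, d) array G with G[i] = dHV/dX[i].
    """
    mu, d = X.shape
    order = np.argsort(F[:, 0])        # sort by f1: O(mu log mu)
    Fs = F[order]                      # f2 is then non-increasing

    # Partial derivatives of HV w.r.t. the objective values follow from the
    # staircase geometry of the dominated region:
    #   dHV/df1[i] = f2[i] - f2[next neighbour]   (or ref[1] for the last point)
    #   dHV/df2[i] = f1[i] - f1[previous neighbour] (or ref[0] for the first point)
    dHV_dF = np.empty((mu, 2))
    f2_next = np.append(Fs[1:, 1], ref[1])
    f1_prev = np.insert(Fs[:-1, 0], 0, ref[0])
    dHV_dF[:, 0] = Fs[:, 1] - f2_next
    dHV_dF[:, 1] = Fs[:, 0] - f1_prev

    # Chain rule: dHV/dx[i] = sum_k dHV/df_k[i] * grad f_k(x[i]):  O(mu d)
    G = np.zeros((mu, d))
    for rank, i in enumerate(order):
        G[i] = dHV_dF[rank] @ jac(X[i])
    return G
```

Sorting by the first objective accounts for the \(\mu \log \mu\) term and the chain-rule products for the \(\mu d\) term. A zero of the gradient field then requires, for every point, that this weighted combination of objective gradients vanishes simultaneously, which is what makes such zeros usable as stopping conditions and as candidates for optimal approximation sets.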

Keywords

Set Oriented Optimization · Multiobjective Gradient · Hypervolume Indicator · Computational Complexity · Optimality Conditions

References

  1. Emmerich, M.T.M., Deutz, A.H., Beume, N.: Gradient-Based/Evolutionary Relay Hybrid for Computing Pareto Front Approximations Maximizing the S-Metric. In: Bartz-Beielstein, T., Blesa Aguilera, M.J., Blum, C., Naujoks, B., Roli, A., Rudolph, G., Sampels, M. (eds.) HM 2007. LNCS, vol. 4771, pp. 140–156. Springer, Heidelberg (2007)
  2. Fliege, J., Svaiter, B.F.: Steepest Descent Methods for Multicriteria Optimization. Mathematical Methods of Operations Research 51(3), 479–494 (2000)
  3. Brown, M., Smith, R.E.: Effective Use of Directional Information in Multi-objective Evolutionary Computation. In: Cantú-Paz, E., et al. (eds.) GECCO 2003. LNCS, vol. 2723, pp. 778–789. Springer, Heidelberg (2003)
  4. Bosman, P.A., de Jong, E.D.: Exploiting Gradient Information in Numerical Multi-Objective Evolutionary Optimization. In: Beyer, H.G., et al. (eds.) GECCO 2005, vol. 1, pp. 755–762. ACM Press, New York (2005)
  5. Lara, A., Schütze, O., Coello, C.A.C.: On Gradient-Based Local Search to Hybridize Multi-objective Evolutionary Algorithms. In: Tantar, E., Tantar, A.-A., Bouvry, P., Del Moral, P., Legrand, P., Coello Coello, C.A., Schütze, O. (eds.) EVOLVE - A Bridge between Probability, Set Oriented Numerics and Evolutionary Computation. SCI, vol. 447, pp. 303–330. Springer, Heidelberg (2013)
  6. Timmel, G.: Ein stochastisches Suchverfahren zur Bestimmung der optimalen Kompromißlösungen bei statistischen polykriteriellen Optimierungsaufgaben. Journal TH Ilmenau 6, 139–148 (1980)
  7. Schäffler, S., Schultz, R., Weinzierl, K.: Stochastic Method for the Solution of Unconstrained Vector Optimization Problems. Journal of Optimization Theory and Applications 114(1), 209–222 (2002)
  8. Shukla, P.K., Deb, K., Tiwari, S.: Comparing Classical Generating Methods with an Evolutionary Multi-objective Optimization Method. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) EMO 2005. LNCS, vol. 3410, pp. 311–325. Springer, Heidelberg (2005)
  9. Hillermeier, C.: Generalized Homotopy Approach to Multiobjective Optimization. Journal of Optimization Theory and Applications 110(3), 557–583 (2001)
  10. Schütze, O., Dell’Aere, A., Dellnitz, M.: Continuation Methods for the Numerical Treatment of Multi-Objective Optimization Problems. In: Branke, J., Deb, K., Miettinen, K., Steuer, R. (eds.) Practical Approaches to Multi-Objective Optimization. Dagstuhl Seminar Proceedings, vol. 04461. IBFI, Schloss Dagstuhl, Germany (2005)
  11. Schütze, O., Lara, A., Coello Coello, C.A.: The Directed Search Method for Unconstrained Multi-Objective Optimization Problems. In: Proceedings of EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation (2011)
  12. Ehrgott, M.: Multicriteria Optimization. Springer (2005)
  13. Zitzler, E., Thiele, L.: Multiobjective Optimization Using Evolutionary Algorithms - A Comparative Case Study. In: Eiben, A.E., Bäck, T., Schoenauer, M., Schwefel, H.-P. (eds.) PPSN 1998. LNCS, vol. 1498, pp. 292–301. Springer, Heidelberg (1998)
  14. Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C.M., da Fonseca, V.G.: Performance Assessment of Multiobjective Optimizers: An Analysis and Review. IEEE Transactions on Evolutionary Computation 7(2), 117–132 (2003)
  15. Auger, A., Bader, J., Brockhoff, D., Zitzler, E.: Hypervolume-based Multiobjective Optimization: Theoretical Foundations and Practical Implications. Theoretical Computer Science 425, 75–103 (2012)
  16. Beume, N.: Hypervolume-Based Metaheuristics for Multiobjective Optimization. PhD Thesis, Eldorado (2011)
  17. Custódio, A.L., Emmerich, M., Madeira, J.F.A.: Recent Developments in Derivative-free Multiobjective Optimization. In: Topping, B. (ed.) Computational Technology Reviews, vol. 5, pp. 1–30. Saxe-Coburg Publications (2012)
  18. Bringmann, K.: Bringing Order to Special Cases of Klee’s Measure Problem. CoRR abs/1301.7154 (2013)
  19. Beume, N., Fonseca, C.M., López-Ibáñez, M., Paquete, L., Vahrenhold, J.: On the Complexity of Computing the Hypervolume Indicator. IEEE Transactions on Evolutionary Computation 13(5), 1075–1082 (2009)
  20. Yıldız, H., Suri, S.: On Klee’s Measure Problem for Grounded Boxes. In: Dey, T.K., Whitesides, S. (eds.) Symposium on Computational Geometry, pp. 111–120. ACM (2012)
  21. Fonseca, C.M., Guerreiro, A.P., López-Ibáñez, M., Paquete, L.: On the Computation of the Empirical Attainment Function. In: [29], pp. 106–120
  22. Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C.M., Grunert da Fonseca, V.: Performance Assessment of Multiobjective Optimizers: An Analysis and Review. IEEE Transactions on Evolutionary Computation 7(2), 117–132 (2003)
  23. Guerreiro, A.P., Fonseca, C.M., Emmerich, M.T.M.: A Fast Dimension-Sweep Algorithm for the Hypervolume Indicator in Four Dimensions. In: CCCG 2012, pp. 77–82 (2012)
  24. Kung, H.T., Luccio, F., Preparata, F.P.: On Finding the Maxima of a Set of Vectors. Journal of the ACM 22(4), 469–476 (1975)
  25. Baeza-Yates, R.: A Fast Set Intersection Algorithm for Sorted Sequences. In: Sahinalp, S.C., Muthukrishnan, S.M., Dogrusoz, U. (eds.) CPM 2004. LNCS, vol. 3109, pp. 400–408. Springer, Heidelberg (2004)
  26. Hupkens, I., Emmerich, M.: Logarithmic-time Updates in SMS-EMOA and Hypervolume-based Archiving. In: Emmerich, M., et al. (eds.) EVOLVE - A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation IV. AISC, vol. 227, pp. 155–169. Springer, Heidelberg (2013)
  27. Emmerich, M.T.M., Fonseca, C.M.: Computing Hypervolume Contributions in Low Dimensions: Asymptotically Optimal Algorithm and Complexity Results. In: [29], pp. 121–135
  28. Auger, A., Bader, J., Brockhoff, D., Zitzler, E.: Theory of the Hypervolume Indicator: Optimal μ-Distributions and the Choice of the Reference Point. In: Foundations of Genetic Algorithms (FOGA 2009), pp. 87–102. ACM, New York (2009)
  29. Takahashi, R.H.C., Deb, K., Wanner, E.F., Greco, S. (eds.): EMO 2011. LNCS, vol. 6576. Springer, Heidelberg (2011)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands
