Faster Hypervolume-Based Search Using Monte Carlo Sampling

  • Johannes Bader
  • Kalyanmoy Deb
  • Eckart Zitzler
Conference paper
Part of the Lecture Notes in Economics and Mathematical Systems book series (LNE, volume 634)


In recent years, the hypervolume indicator – a set quality measure considering the dominated portion of the objective space – has gained increasing attention in the context of multiobjective search. This is mainly due to the following feature: whenever one Pareto set approximation completely dominates another approximation, the hypervolume of the former will be greater than the hypervolume of the latter. Unfortunately, the calculation of the hypervolume measure is computationally highly demanding, and current algorithms are exponential in the number of objectives. This paper proposes a methodology based on Monte Carlo sampling to estimate the hypervolume contribution of single solutions with respect to a specific Pareto set approximation. It is therefore designed to be used in the environmental selection process of an evolutionary algorithm and, as the experimental results demonstrate, allows substantial speedups in hypervolume-based search.
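The paper's specific estimator is not reproduced here, but the underlying idea can be illustrated with a minimal Python sketch (function names and parameters are hypothetical): to estimate each solution's exclusive hypervolume contribution, sample points uniformly inside a box bounded by the reference point, and count the samples that are dominated by exactly one solution. Each count, scaled by the box volume, approximates that solution's contribution (minimization is assumed).

```python
import random

def mc_hypervolume_contributions(points, ref, n_samples=100_000, seed=0):
    """Monte Carlo estimate of each point's exclusive hypervolume
    contribution w.r.t. the set `points`, using reference point `ref`.

    A sample counts toward point j if it is weakly dominated by point j
    and by no other point, i.e. it lies in j's exclusive region.
    """
    rng = random.Random(seed)
    dim = len(ref)
    # Sampling box: from the componentwise minimum of the set to `ref`.
    lo = [min(p[i] for p in points) for i in range(dim)]
    vol_box = 1.0
    for i in range(dim):
        vol_box *= ref[i] - lo[i]
    hits = [0] * len(points)
    for _ in range(n_samples):
        s = tuple(rng.uniform(lo[i], ref[i]) for i in range(dim))
        # Indices of all points that weakly dominate the sample.
        dominators = [j for j, p in enumerate(points)
                      if all(p[i] <= s[i] for i in range(dim))]
        if len(dominators) == 1:  # sample lies in exactly one exclusive region
            hits[dominators[0]] += 1
    return [h / n_samples * vol_box for h in hits]
```

For example, with the front {(1, 3), (2, 2), (3, 1)} and reference point (4, 4), each solution's exclusive contribution is exactly 1.0, and the estimates converge to that value as the sample count grows. The estimation error shrinks with the square root of the number of samples, which is what makes the approach attractive when exact hypervolume computation is exponential in the number of objectives.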


Keywords: Hypervolume indicator · Monte Carlo sampling · Evolutionary multiobjective algorithms



Johannes Bader has been supported by the Indo-Swiss Joint Research Program IT14.



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  1. Computer Engineering and Networks Laboratory, ETH Zurich, Zurich, Switzerland
