Multi-Objective Equivalent Random Search

  • Evan J. Hughes
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4193)

Abstract

This paper introduces a new metric vector for assessing the performance of different multi-objective optimisation algorithms relative to the range of performance expected from a random search. The metric requires an ensemble of repeated trials to be performed, reducing the chance of overly favourable results. The random-search baseline for the function under test may be either analytic or generated by a Monte-Carlo process, so the metric is both repeatable and accurate.
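As an illustration only, and not the paper's exact formulation, the sketch below shows one way such a random-search baseline could be estimated by Monte-Carlo: repeated random searches with a fixed evaluation budget are scored by a user-supplied quality indicator, and an algorithm's observed score is then mapped to the fraction of random-search trials it beats. The function names, the box-bounded uniform sampling, and the generic `indicator` callable are assumptions introduced here for illustration; higher indicator values are assumed to be better.

```python
import numpy as np

def random_search_baseline(objective_fn, bounds, n_evals, n_trials, indicator, seed=None):
    """Monte-Carlo estimate of the distribution of indicator values reached by
    pure random search with a fixed budget of n_evals objective evaluations.
    (Illustrative sketch; not the paper's closed-form baseline.)"""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    scores = np.empty(n_trials)
    for t in range(n_trials):
        x = rng.uniform(lo, hi, size=(n_evals, lo.size))    # random decision vectors
        f = np.array([objective_fn(xi) for xi in x])        # objective vectors, one row per sample
        scores[t] = indicator(f)                            # e.g. hypervolume of the non-dominated subset
    return np.sort(scores)                                  # support of the empirical CDF

def fraction_of_random_trials_beaten(baseline_scores, observed_score):
    """Empirical CDF of the random-search baseline evaluated at the observed score."""
    return np.searchsorted(baseline_scores, observed_score, side="right") / baseline_scores.size
```

An algorithm run whose indicator value sits at, say, the 0.95 point of this empirical CDF has outperformed 95% of equal-cost random searches in the sampled ensemble.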

The metric allows both the median and the worst-case performance of different algorithms to be compared directly, and it scales well to high-dimensional, many-objective problems. It quantifies, and is sensitive to, the distance of the solutions from the Pareto set, the distribution of points across the set, and the repeatability of the trials. Both the Monte-Carlo and the closed-form analysis methods provide accurate analytic confidence intervals on the observed results.
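Again purely as a hedged illustration, confidence intervals of the kind mentioned above can be derived from binomial order statistics when the ensemble of trials is independent; the Clopper-Pearson interval below is a standard choice and is not claimed to be the paper's closed-form derivation. The function name and its interpretation are assumptions made here for the example.

```python
from scipy import stats

def beats_random_confidence_interval(n_trials, k_beats, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for the probability
    that a single algorithm run beats an equal-cost random search, estimated
    from k_beats successes observed in n_trials independent trials."""
    lower = 0.0 if k_beats == 0 else stats.beta.ppf(alpha / 2, k_beats, n_trials - k_beats + 1)
    upper = 1.0 if k_beats == n_trials else stats.beta.ppf(1 - alpha / 2, k_beats + 1, n_trials - k_beats)
    return lower, upper
```

The interval narrows as the number of trials in the ensemble grows, which is one reason repeated trials reduce the chance of reporting overly favourable results.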

Keywords

Weight Vector, Objective Space, Random Search, Cumulative Density Function, Horizontal Solid Line

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Evan J. Hughes
  1. Department of Aerospace, Power and Sensors, Cranfield University, Shrivenham, Swindon, Wiltshire, England