
Upper Confidence Bound (UCB) Algorithms for Adaptive Operator Selection in MOEA/D

  • Richard A. Gonçalves
  • Carolina P. Almeida
  • Aurora Pozo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9018)

Abstract

Adaptive Operator Selection (AOS) is a method used to dynamically determine which operator should be applied in an optimization algorithm based on its performance history. Recently, Upper Confidence Bound (UCB) algorithms have been successfully applied to this task. UCB algorithms have special features to tackle the Exploration versus Exploitation (EvE) dilemma present in the AOS problem. However, the use of UCB algorithms for AOS is still incipient in Multiobjective Evolutionary Algorithms (MOEAs), and many contributions remain to be made. The aim of this paper is to extend the study of UCB-based AOS methods. Two methods are proposed, MOEA/D-UCB-Tuned and MOEA/D-UCB-V, both of which use the variance of the operators' rewards to obtain a better EvE tradeoff. In these proposals, the UCB-Tuned and UCB-V algorithms from the multi-armed bandit (MAB) literature are combined with MOEA/D (MOEA based on decomposition), one of the most successful MOEAs. Experimental results demonstrate that MOEA/D-UCB-Tuned compares favorably with state-of-the-art adaptive operator selection MOEA/D variants based on probability (ENS-MOEA/D and ADEMO/D) and on multi-armed bandits (MOEA/D-FRRMAB).
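To make the bandit view of AOS concrete, the sketch below implements the generic UCB-Tuned selection rule over a set of operators, which is the rule the paper's first proposal builds on. It is an illustrative sketch only: the class name, the simple cumulative reward bookkeeping, and the assumption that rewards are normalized to [0, 1] are simplifications of my own and do not reproduce the credit-assignment scheme used in MOEA/D-UCB-Tuned.

```python
import math


class UCBTunedOperatorSelector:
    """Illustrative UCB-Tuned bandit for adaptive operator selection.

    Each variation operator is treated as one arm; rewards are assumed
    to be credit values in [0, 1] observed after applying the operator.
    This is a generic sketch, not the authors' implementation.
    """

    def __init__(self, n_operators):
        self.n_operators = n_operators
        self.counts = [0] * n_operators      # times each operator was applied
        self.sums = [0.0] * n_operators      # sum of rewards per operator
        self.sq_sums = [0.0] * n_operators   # sum of squared rewards per operator

    def select(self):
        # Apply every operator once before trusting the confidence bounds.
        for op, count in enumerate(self.counts):
            if count == 0:
                return op
        total = sum(self.counts)
        log_n = math.log(total)
        best_op, best_score = 0, float("-inf")
        for op in range(self.n_operators):
            n_j = self.counts[op]
            mean = self.sums[op] / n_j
            var = max(0.0, self.sq_sums[op] / n_j - mean ** 2)
            # UCB-Tuned variance term: empirical variance plus exploration slack.
            v_j = var + math.sqrt(2.0 * log_n / n_j)
            score = mean + math.sqrt((log_n / n_j) * min(0.25, v_j))
            if score > best_score:
                best_op, best_score = op, score
        return best_op

    def update(self, op, reward):
        # Feed back the credit obtained by the chosen operator.
        self.counts[op] += 1
        self.sums[op] += reward
        self.sq_sums[op] += reward ** 2
```

In a MOEA/D loop, select() would pick the operator used to generate the next offspring and update() would receive a credit derived from the improvement of the scalarized subproblems; UCB-V follows the same pattern but with a different variance-based exploration term.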

Keywords

Adaptive Operator Selection (AOS) · MOEA/D · Upper Confidence Bound (UCB) Algorithms · UCB1 · UCB-Tuned · UCB-V


References

  1. Auer, P.: Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res. 3, 397–422 (2003)
  2. Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Mach. Learn. 47(2–3), 235–256 (2002)
  3. Conover, W.J.: Practical Nonparametric Statistics, 3rd edn. Wiley (1999)
  4. Ehrgott, M.: A discussion of scalarization techniques for multiple objective integer programming. Ann. Oper. Res. 147, 343–360 (2006)
  5. Fialho, Á.: Adaptive operator selection for optimization. Ph.D. thesis, Computer Science Department, University of Paris-Sud XI (2010)
  6. Fialho, Á., Schoenauer, M., Sebag, M.: Analysis of adaptive operator selection techniques on the royal road and long k-path problems. In: Conference on Genetic and Evolutionary Computation, pp. 779–786 (2009)
  7. Goldberg, D.E.: Probability matching, the magnitude of reinforcement, and classifier system bidding. Mach. Learn. 5, 407–425 (1990)
  8. Gong, W., Fialho, Á., Cai, Z., Li, H.: Adaptive strategy selection in differential evolution for numerical optimization: an empirical study. Inform. Sciences 181(24), 5364–5386 (2011)
  9. Mashwani, W.K., Salhi, A.: A decomposition-based hybrid multiobjective evolutionary algorithm with dynamic resource allocation. Appl. Soft Comput. 12(9), 2765–2780 (2012)
  10. Knowles, J., Thiele, L., Zitzler, E.: A Tutorial on the Performance Assessment of Stochastic Multiobjective Optimizers. TIK Report 214, Computer Engineering and Networks Laboratory (TIK), ETH Zurich (February 2006)
  11. Li, K., Fialho, Á., Kwong, S., Zhang, Q.: Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 18(1), 114–130 (2014)
  12. Sato, H.: Inverted PBI in MOEA/D and its impact on the search performance on multi and many-objective optimization. In: Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, GECCO 2014, pp. 645–652. ACM, New York (2014). http://doi.acm.org/10.1145/2576768.2598297
  13. Thierens, D.: An adaptive pursuit strategy for allocating operator probabilities. In: Conference on Genetic and Evolutionary Computation, pp. 1539–1546 (2005)
  14. Venske, S.M., Gonçalves, R.A., Delgado, M.R.: ADEMO/D: multiobjective optimization by an adaptive differential evolution algorithm. Neurocomputing 127, 65–77 (2014). Advances in Intelligent Systems: selected papers from the 2012 Brazilian Symposium on Neural Networks
  15. Zhang, Q., Zhou, A., Zhao, S., Suganthan, P.N., Liu, W., Tiwari, S.: Multiobjective optimization test instances for the CEC 2009 special session and competition. Tech. Rep. CES-487, University of Essex and Nanyang Technological University (2008)
  16. Zhang, Q., Liu, W., Li, H.: The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances. In: IEEE Congress on Evolutionary Computation, CEC 2009, pp. 203–208 (May 2009)
  17. Zhao, S.Z., Suganthan, P.N., Zhang, Q.: Decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes. IEEE Trans. Evol. Comput. 16(3), 442–446 (2012)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Richard A. Gonçalves (1)
  • Carolina P. Almeida (1)
  • Aurora Pozo (2)

  1. Department of Computer Science, UNICENTRO, Guarapuava, Brazil
  2. Computer Science Department, Federal University of Paraná (UFPR), Curitiba, Brazil
