Runtime Analysis of \((1+1)\) Evolutionary Algorithm Controlled with Q-learning Using Greedy Exploration Strategy on OneMax+ZeroMax Problem

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9026)

Abstract

There exist optimization problems with a target objective, which is to be optimized, and several extra objectives. The extra objectives may or may not be helpful in the optimization process, in terms of the number of objective evaluations necessary to reach an optimum of the target objective.

OneMax+ZeroMax is a previously proposed benchmark optimization problem where the target objective is OneMax and the single extra objective is ZeroMax, which equals the number of zero bits in the bit vector. It is an example of a problem where the extra objective is not helpful, and objective selection methods should learn to ignore it. EA+RL is a method that selects objectives to be optimized by an evolutionary algorithm (EA) using reinforcement learning (RL). It was previously shown to run in \(\varTheta(N \log N)\) on OneMax+ZeroMax when configured to use the randomized local search algorithm and the Q-learning algorithm with the greedy exploration strategy.

We present a runtime analysis for the case when the \((1+1)\) evolutionary algorithm is used instead. It is shown that the expected running time is at most \(3.12 e N \log N\).
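The setting studied in the abstract can be sketched in code. The following is a minimal, illustrative implementation of the EA+RL scheme, not the authors' exact algorithm: it runs a \((1+1)\)-EA whose fitness function for each iteration is chosen by greedy Q-learning over the two objectives OneMax and ZeroMax. The state encoding (current OneMax value), the reward (change in the target objective), and the learning parameters `alpha` and `gamma` are assumptions chosen for illustration.

```python
import random

def one_max(x):
    # Target objective: the number of one bits.
    return sum(x)

def zero_max(x):
    # Extra (obstructive) objective: the number of zero bits.
    return len(x) - sum(x)

def ea_rl_one_plus_one(n, alpha=0.5, gamma=0.5, seed=None, max_iters=100_000):
    """Sketch of EA+RL: a (1+1)-EA controlled by greedy Q-learning.

    State  = current OneMax value of the individual (an assumption).
    Action = which of the two objectives to optimize this iteration.
    Reward = change in the target objective (a common EA+RL choice).
    Returns the final bit vector and the number of iterations used.
    """
    rng = random.Random(seed)
    objectives = [one_max, zero_max]
    q = {}  # Q-values, keyed by (state, action); missing entries default to 0
    x = [rng.randint(0, 1) for _ in range(n)]
    iterations = 0
    while one_max(x) < n and iterations < max_iters:
        iterations += 1
        state = one_max(x)
        # Greedy exploration strategy: pick an action with the highest
        # Q-value, breaking ties uniformly at random.
        best = max(q.get((state, a), 0.0) for a in range(2))
        action = rng.choice(
            [a for a in range(2) if q.get((state, a), 0.0) == best])
        f = objectives[action]
        # (1+1)-EA mutation: flip each bit independently with prob. 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if f(y) >= f(x):  # accept if not worse w.r.t. the chosen objective
            x = y
        new_state = one_max(x)
        # Standard Q-learning update.
        r = new_state - state
        old_q = q.get((state, action), 0.0)
        next_best = max(q.get((new_state, a), 0.0) for a in range(2))
        q[(state, action)] = old_q + alpha * (r + gamma * next_best - old_q)
    return x, iterations
```

Note that choosing ZeroMax can move the individual away from the target optimum; the negative reward then steers the greedy policy back to OneMax, which is the mechanism behind the \(O(N \log N)\) bounds discussed above.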


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. ITMO University, Saint Petersburg, Russia
  2. LIX, École Polytechnique, Palaiseau Cedex, France
