Distribution of Computational Effort in Parallel MOEA/D

  • Juan J. Durillo
  • Qingfu Zhang
  • Antonio J. Nebro
  • Enrique Alba
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6683)


MOEA/D is a multi-objective optimization algorithm based on decomposition, which consists of dividing a multi-objective problem into a number of single-objective sub-problems. This work presents two variants, called pMOEA/Dv1 and pMOEA/Dv2, of a new parallel model of MOEA/D, developed from the observation that different sub-problems may require different computational effort and thus demand different numbers of evaluations. Our interest in this paper is to analyze whether the proposed models are capable of outperforming MOEA/D in terms of the quality of the computed fronts. To this end, our proposals have been evaluated on a benchmark composed of eight problems, and the obtained results have been compared against MOEA/D-DE, an extension of the original MOEA/D in which new individuals are generated by an operator taken from differential evolution. Our experiments show that some configurations of pMOEA/Dv1 and pMOEA/Dv2 compute fronts of higher quality than MOEA/D-DE on many of the evaluated problems, leaving room for further research in this line.
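The decomposition idea behind MOEA/D can be illustrated with the Tchebycheff approach, one of the standard scalarizing functions used with the algorithm: each weight vector defines one single-objective sub-problem. The sketch below is a minimal illustration with hypothetical helper names, not the authors' implementation.

```python
def tchebycheff(objectives, weights, ideal):
    """Scalarize an objective vector f(x) into a single value for one
    sub-problem: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Two sub-problems defined by different weight vectors share the same
# ideal point z* but score the same candidate solution differently,
# which is what lets each sub-problem demand its own search effort.
ideal = [0.0, 0.0]
candidate = [0.2, 0.8]
print(tchebycheff(candidate, [0.9, 0.1], ideal))  # emphasizes objective 1
print(tchebycheff(candidate, [0.1, 0.9], ideal))  # emphasizes objective 2
```

Minimizing each scalarized value over the decision space yields one Pareto-optimal solution per weight vector; the set of sub-problems together approximates the Pareto front.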


Weight Vector · Pareto Front · Multiobjective Optimization · Pareto Optimal Solution · Parallel Model





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Juan J. Durillo (1)
  • Qingfu Zhang (2)
  • Antonio J. Nebro (1)
  • Enrique Alba (1)
  1. Departamento de Lenguajes y Ciencias de la Computación, University of Málaga, Spain
  2. School of Computer Science and Electronic Engineering, University of Essex, Colchester, U.K.
