Water cycle algorithm for solving multi-objective optimization problems

Abstract

In this paper, the water cycle algorithm (WCA), a recently developed metaheuristic method, is proposed for solving multi-objective optimization problems (MOPs). The fundamental concept of the WCA is inspired by the observation of the water cycle process and the movement of rivers and streams toward the sea in the real world. Several benchmark functions have been used to evaluate the performance of the WCA optimizer on MOPs. The optimization results obtained for the considered test functions, together with comparisons against other well-known methods, illustrate the robustness, efficiency, and exploratory capability of the WCA for solving MOPs.

References

  1. Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: IEEE Congress on Evolutionary Computation, Singapore, pp 4661–4667

  2. Blum C, Roli A (2003) Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput Surv 35(3):268–308

  3. Coello CAC, Lechuga MS (2002) MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the Congress on Evolutionary Computation (CEC’2002), Honolulu, vol 1, pp 1051–1056

  4. Coello CAC (2000) An updated survey of GA-based multi-objective optimization techniques. ACM Comput Surv 32(2):109–143

  5. Coello CAC, Veldhuizen DAV, Lamont G (2002) Evolutionary algorithms for solving multi-objective problems. Genetic Algorithms and Evolutionary Computation. Kluwer, Dordrecht

  6. Coello CAC (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evolut Comput 8(3):256–279

  7. Coello CAC, Cruz Cortés N (2005) Solving multiobjective optimization problems using an artificial immune system. Genet Program Evolvable Mach 6:163–190

  8. Deb K (2001) Multi-objective optimization using evolutionary algorithms. Wiley, New York

  9. Deb K, Pratap A, Agarwal S, Meyarivan T (2002a) A fast and elitist multi objective genetic algorithm: NSGA-II. IEEE Trans Evolut Comput 6(2):182–197

  10. Deb K (2002) Multi-objective genetic algorithms: problem difficulties and construction of test problems. Evol Comput 7:205–230

  11. Deb K, Thiele L, Laumanns M, Zitzler E (2002b) Scalable multi-objective optimization test problems. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp 825–830

  12. Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm—a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput Struct 110–111:151–166

  13. Fonseca CM, Fleming PJ (1993) Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In: Forrest S (ed) Proceedings of the fifth international conference on genetic algorithms. Morgan Kaufmann, San Mateo, pp 416–423

  14. Freschi F, Repetto M (2006) VIS: an artificial immune network for multi-objective optimization. Eng Optim 38(8):975–996

  15. Gao J, Wang J (2010) WBMOAIS: a novel artificial immune system for multiobjective optimization. Comput Oper Res 37:50–61

  16. Glover FW, Kochenberger GA (2003) Handbook of metaheuristics. Kluwer, Dordrecht

  17. Haupt RL, Haupt SE (2004) Practical genetic algorithms, 2nd edn. John Wiley, New York

  18. Holland J (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor

  19. Kaveh A, Laknejadi K (2011) A novel hybrid charge system search and particle swarm optimization method for multi-objective optimization. Expert Syst Appl 38(12):15475–15488

  20. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks. Perth, Australia, pp 1942–1948

  21. Knowles JD, Corne DW (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 8(2):149–172

  22. Kursawe F (1991) A variant of evolution strategies for vector optimization. In: Proceedings of Parallel Problem Solving from Nature (PPSN I), Lecture Notes in Computer Science, vol 496, pp 193–197

  23. Lin Q, Chen J (2013) A novel micro-population immune multiobjective optimization algorithm. Comput Oper Res 40(6):1590–1601

  24. Mahmoodabadi MJ, Adljooy Safaie A (2013) A novel combination of particle swarm optimization and genetic algorithm for Pareto optimal design of a five-degree-of-freedom vehicle vibration model. Appl Soft Comput 13:2577–2591

  25. Mostaghim S, Teich J (2003) Strategies for finding good local guides in multi objective particle swarm optimization (MOPSO). In: Proceedings of the IEEE swarm intelligence symposium, pp 26–33

  26. Osman IH, Laporte G (1996) Metaheuristics: a bibliography. Ann Oper Res 63:513–623

  27. Poloni C (1997) Hybrid GA for multiobjective aerodynamic shape optimization. In: Genetic algorithms in engineering and computer science. Wiley, New York

  28. Pradhan PM, Panda G (2012) Solving multiobjective problems using cat swarm optimization. Expert Syst Appl 39:2956–2964

  29. Sierra MR, Coello CAC (2005) Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In: Proceedings of the evolutionary multi-criterion optimization conference. Guanajuato, Mexico, pp 505–519

  30. Srinivas N, Deb K (1995) Multi objective function optimization using nondominated sorting genetic algorithms. Evol Comput 2(3):221–248

  31. Viennet R, Fonteix C, Marc I (1995) New multicriteria optimization method based on the use of a diploid genetic algorithm: example of an industrial problem. In: Proceedings of Artificial Evolution. Brest, France, pp 120–127

  32. Wang L, Zhong X, Liu M (2012) A novel group search optimizer for multi-objective optimization. Expert Syst Appl 39(3):2939–2946

  34. Wang Y, Zeng J (2013) A multi-objective artificial physics optimization algorithm based on ranks of individuals. Soft Comput 17:939–952

  35. Zhang B, Ren W, Zhao L, Deng X (2009) Immune system multiobjective optimization algorithm for DTLZ problems. In: Fifth international conference on natural computation, pp 603–609

  36. Zitzler E, Thiele L (1999) Multi objective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evolut Comput 3(4):257–271

  37. Zitzler E, Deb K, Thiele L (2000) Comparison of multi-objective evolutionary algorithms: empirical results. Evol Comput 8(2):173–195

  38. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. Swiss Federal Institute of Technology (ETH) Zurich, Switzerland, TIK Report 103, pp 1–21

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (NRF-2013R1A2A1A01013886).

Author information

Corresponding author

Correspondence to Ardeshir Bahreininejad.

Additional information

Communicated by V. Loia.

Appendix A: Mathematical formulation of studied MOPs

This appendix presents the MOPs used in this paper to conduct a qualitative assessment of the performance and efficiency of the MOWCA.

  1. Test problem 1–DEB: Deb’s function is a problem with two design variables. This problem is defined as follows (Deb 2002):

    $$\begin{aligned} \mathrm{DEB}:\min \left\{ {\begin{array}{l} f_1 (X)=x_1 \\ f_2 (X)=g(X)\times h(X) \\ \end{array}} \right. \!\!, \end{aligned}$$
    (22)

    where

    $$\begin{aligned} g(X)=11+x_2^2 -10\cos (2\pi x_2 ), \end{aligned}$$
    (23)
    $$\begin{aligned}&h(X)=\left\{ {{\begin{array}{*{20}c} {1-\sqrt{f_1 /g} } &{} \quad {\mathrm{if}\;\;f_1 (X)\le g(X)\;} \\ 0 &{} {\mathrm{Otherwise}} \\ \end{array} }} \right. \nonumber \\&\mathrm{where}\;\;0\le x_1 \le 1,\;-30\le x_2 \le 30. \end{aligned}$$
    (24)

    The Pareto optimal front for this problem is convex and defined as \(x_1 \in [0, 1]\), \(x_2 = 0\).

  2. Test problem 2–FON: Fonseca and Fleming’s function (FON) is an eight-design-variable function suggested as follows (Fonseca and Fleming 1993):

    $$\begin{aligned}&\mathrm{FON}:\min \left\{ {\begin{array}{l} f_1 (X)=1-\exp \left( {-\sum \limits _{i=1}^8 {\left( {x_i -\frac{1}{\sqrt{8} }}\right) ^2} }\right) \\ f_2 (X)=1-\exp \left( {-\sum \limits _{i=1}^8 {\left( {x_i +\frac{1}{\sqrt{8} }}\right) ^2} }\right) \\ \end{array}} \right. \nonumber \\&\quad \mathrm{where}\;-2<x_i <2,\;i=1,2,3,\ldots ,8, \end{aligned}$$
    (25)

    The optimal Pareto front for this bi-objective problem is \(x_i^* \in [-1/\sqrt{8}, 1/\sqrt{8}]\) for \(i = 1,2,3,\ldots,8\) (an illustrative Python sketch of this function is given at the end of this appendix).

  3. Test problem 3–POL: This function, first introduced by Poloni (1997), has been widely analyzed in the literature (Deb et al. 2002a; Kaveh and Laknejadi 2011). The mathematical formulation proposed by Poloni (1997) is given as follows:

    $$\begin{aligned}&\mathrm{POL}:\min \nonumber \\&\quad \left\{ {\begin{array}{l} f_1 (X)=[1+(A_1 -B_1 )^2+(A_2 -B_2 )^2] \\ f_2 (X)=[(x_1 +3)^2+(x_2 +1)^2] \\ A_1 =0.5\sin 1-2\cos 1+\sin 2-1.5\cos 2 \\ A_2 =1.5\sin 1-\cos 1+2\sin 2-0.5\cos 2 \\ B_1 =0.5\sin (x_1 )-2\cos (x_1 )+\sin (x_2 )-1.5\cos (x_2 ) \\ B_2 =1.5\sin (x_1 )-\cos (x_1 )+2\sin (x_2 )-0.5\cos (x_2 ) \\ \end{array}} \right. \nonumber \\&\quad \mathrm{where}\;-\pi <x_1 ,x_2 <\pi . \end{aligned}$$
    (26)

    The Pareto optimal front for the POL function is non-convex and discontinuous.

  4. Test problem 4–KUR: This problem, presented by Kursawe (1991), has three design variables and a non-convex, discontinuous Pareto optimal front. The KUR mathematical formulation is as follows (Kursawe 1991):

    $$\begin{aligned}&\mathrm{KUR}:\min \left\{ {\begin{array}{l} f_1 (X)=\sum \limits _{i=1}^{n-1} {\left( {-10\exp \left( -0.2\sqrt{x_i^2 +x_{i+1}^2 } \right) }\right) } \\ f_2 (X)=\sum \limits _{i=1}^n {\left( {\left| {x_i } \right| ^{0.8}+0.5\sin x_i^3 }\right) } \\ \end{array}} \right. \nonumber \\&\quad \mathrm{where}\;-5<x_i <5,\;i=1,2,3.\nonumber \\ \end{aligned}$$
    (27)
  5. Test problem 5–VNT: This problem has three objectives (i.e., it is three-dimensional in objective space) and was suggested by Viennet et al. (1995). It has previously been investigated by many researchers (Freschi and Repetto 2006; Gao and Wang 2010) and is formulated as follows (Viennet et al. 1995):

    $$\begin{aligned}&\mathrm{VNT}:\min \nonumber \\&\quad \left\{ {\begin{array}{l} f_1 (X)=0.5(x_1^2 +x_2^2 )+\sin (x_1^2 +x_2^2 ) \\ f_2 (X)=(3x_1 -2x_2 +4)^2/8+(x_1 -x_2 +1)^2/27+15 \\ f_3 (X)=(x_1^2 +x_2^2 +1)^{-1}-1.1\exp (-x_1^2 -x_2^2 ) \\ \end{array}} \right. \nonumber \\&\quad \mathrm{where}\;-3\le x_1 ,x_2 \le 3 . \end{aligned}$$
    (28)

    It is worth mentioning that the discontinuous Pareto optimal set and the presence of several local Pareto fronts are considered to be challenging features of the VNT problem (Gao and Wang 2010).

  6. Test problem 6–ZDT1: The ZDT1 function was suggested by Zitzler et al. (2000) and has been extensively investigated (Deb et al. 2002a; Kaveh and Laknejadi 2011). This problem is described as follows:

    $$\begin{aligned}&\mathrm{ZDT1}:\min \left\{ {\begin{array}{l} f_1 (X)=x_1 \\ f_2 (X)=g(X)[1-\sqrt{x_1 /g(X)} ] \\ g(X)=1+9\left( {\sum \limits _{i=2}^n {x_i }}\right) /(n-1) \\ \end{array}} \right. \nonumber \\&\quad \mathrm{where}\;0<x_i <1,\;i=1,2,3,\ldots ,30. \end{aligned}$$
    (29)

    The ZDT1 problem has 30 design variables, and its Pareto optimal front is convex, defined as \(x_1 \in [0, 1]\), \(x_i = 0\) for \(i = 2,\ldots, 30\) (an illustrative Python sketch of ZDT1 is also given at the end of this appendix).

  7. Test problem 7–ZDT3: The ZDT3 problem, suggested by Zitzler et al. (2000), has 30 design variables with a non-convex and discontinuous Pareto optimal front. The mathematical formulation of the ZDT3 problem is as follows (Zitzler et al. 2000):

    $$\begin{aligned}&\mathrm{ZDT3}:\min \nonumber \\&\quad \left\{ {\begin{array}{l} f_1 (X)=x_1 \\ f_2 (X)=g(X)[1-\sqrt{x_1 /g(X)} -\frac{x_1 }{g(X)}\sin (10\pi x_1 )] \\ g(X)=1+9\left( {\sum \limits _{i=2}^n {x_i } }\right) /(n-1) \\ \end{array}} \right. \nonumber \\&\;\mathrm{where}\;0<x_i <1,\;i=1,2,3,\ldots ,30. \end{aligned}$$
    (30)

    The Pareto optimal front is defined as \(x_1 \in [0, 1]\), \(x_i = 0\) for \(i = 2, 3, 4,\ldots, 30\).

  8. Test problem 8–ZDT4: The ZDT4 problem, proposed by Zitzler et al. (2000), has 10 design variables and several local Pareto fronts. This problem is given as follows:

    $$\begin{aligned}&\mathrm{ZDT4}:\min \nonumber \\&\quad \left\{ {\begin{array}{l} f_1 (X)=x_1 \\ f_2 (X)=g(X)[1-\sqrt{x_1 /g(X)} ] \\ g(X)=1+10(n-1)+\sum \limits _{i=2}^n {[x_i^2 -10\cos (4\pi x_i )]} \\ \end{array}}\right. \!,\quad \end{aligned}$$
    (31)

    where \(x_1 \in [0, 1]\) and \(x_i \in [-5, 5]\) for \(i = 2, 3,\ldots, 10\). In addition, the optimal Pareto front is defined as \(x_1 \in [0, 1]\), \(x_i = 0\) for \(i = 2, 3,\ldots, 10\).

  9. Test problem 9–ZDT6: The ZDT6 problem, introduced by Zitzler et al. (2000), has 10 design variables with a non-convex Pareto optimal front. The mathematical formulation of this problem is given as follows:

    $$\begin{aligned} \mathrm{ZDT6}:\min \left\{ {\begin{array}{l} f_1 (X)=1-\exp (-4x_1 )\sin ^6(6\pi x_1 ) \\ f_2 (X)=g(X)[1-( {f_1 (X)/g(X)})^2] \\ g(X)=1+9\left[ {( {\sum \limits _{i=2}^n {x_i } })/(n-1)} \right] ^{0.25} \\ \end{array}} \right. \nonumber \\ \;\mathrm{where}\;0<x_i <1,\;i=1,2,3,\ldots ,10.\nonumber \\ \end{aligned}$$
    (32)

    For test problem 9, the Pareto optimal front is characterized as \(x_1 \in [0, 1]\), \(x_i = 0\) for \(i = 2, 3,\ldots, 10\).

  10. Test problem 10–DTLZ series

The DTLZ series, proposed by Deb et al. (2002b) and known as scalable problems, is also considered in this paper. The mathematical formulations of the DTLZ problems are given in Table 9. The Pareto optimal front for all DTLZ problems in this paper is defined as \(x_i = 0.5\) for \(i \in x_M\). The design space for the considered DTLZ problems is between zero and one.
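To make the structure of these scalable problems concrete, the following minimal Python sketch (not part of the original paper; the function name and the use of NumPy are our own choices) evaluates DTLZ1 for \(M = 3\), assuming Table 9 follows the standard definition of Deb et al. (2002b). On the Pareto optimal front every variable in \(x_M\) equals 0.5, so \(g = 0\) and the three objectives sum to 0.5.

```python
# Minimal sketch (not from the paper): DTLZ1 evaluation for M = 3 objectives,
# assuming Table 9 follows the standard definition of Deb et al. (2002b).
import numpy as np

def dtlz1(x, m=3):
    """Return the M objective values of DTLZ1 for a design vector x in [0, 1]^n."""
    x = np.asarray(x, dtype=float)
    xm = x[m - 1:]                      # last k = |x_M| variables (distance variables)
    k = xm.size
    g = 100.0 * (k + np.sum((xm - 0.5) ** 2 - np.cos(20.0 * np.pi * (xm - 0.5))))
    f = np.empty(m)
    for i in range(m):                  # build f_1, ..., f_M
        fi = 0.5 * (1.0 + g) * np.prod(x[:m - 1 - i])
        if i > 0:
            fi *= 1.0 - x[m - 1 - i]
        f[i] = fi
    return f

# With k = |x_M| = 10 (the value used in this paper), n = 3 + 10 - 1 = 12 variables.
# Setting every variable in x_M to 0.5 gives g = 0, so the objectives sum to 0.5,
# i.e. the point lies on the linear Pareto optimal front of DTLZ1.
x_star = np.r_[0.3, 0.7, 0.5 * np.ones(10)]
print(dtlz1(x_star), dtlz1(x_star).sum())   # approximately (0.105, 0.045, 0.35); sum = 0.5
```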

Table 9 Mathematical formulations of the DTLZ series problems

Two and three objective functions (\(M = 2\) and \(M = 3\)) are considered for the DTLZ series given in Table 9. The DTLZ problems are minimization problems, and the number of design variables is calculated as follows:

$$\begin{aligned} n=M+\left| {x_M } \right| -1, \end{aligned}$$
(33)

where \(n\) and \(M\) are the number of design variables and the number of objective functions, respectively. Also, \(\left| {x_M } \right| \) is set to 10 for all considered problems in this paper; with \(M = 3\), for example, each DTLZ problem therefore has \(n = 3 + 10 - 1 = 12\) design variables.
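As a closing illustration, the following minimal Python sketch (not part of the original paper; NumPy and the function names are our own choices) shows how two of the bi-objective benchmarks above, FON (Eq. 25) and ZDT1 (Eq. 29), map a design vector to its objective vector.

```python
# Minimal sketch (not from the paper): the FON (Eq. 25) and ZDT1 (Eq. 29) objectives.
import numpy as np

def fon(x):
    """FON: x has 8 design variables in (-2, 2); returns (f1, f2)."""
    x = np.asarray(x, dtype=float)
    f1 = 1.0 - np.exp(-np.sum((x - 1.0 / np.sqrt(8.0)) ** 2))
    f2 = 1.0 - np.exp(-np.sum((x + 1.0 / np.sqrt(8.0)) ** 2))
    return f1, f2

def zdt1(x):
    """ZDT1: x has 30 design variables in [0, 1]; returns (f1, f2)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2

# A Pareto optimal FON point: all x_i equal and within [-1/sqrt(8), 1/sqrt(8)]
print(fon(np.full(8, 1.0 / np.sqrt(8.0))))    # approximately (0.0, 0.98)
# A Pareto optimal ZDT1 point: x1 in [0, 1] and x2 = ... = x30 = 0, so g(X) = 1
print(zdt1(np.r_[0.25, np.zeros(29)]))        # (0.25, 0.5)
```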

About this article

Cite this article

Sadollah, A., Eskandar, H., Bahreininejad, A. et al. Water cycle algorithm for solving multi-objective optimization problems. Soft Comput 19, 2587–2603 (2015). https://doi.org/10.1007/s00500-014-1424-4

Keywords

  • Multi-objective optimization
  • Water cycle algorithm
  • Pareto-optimal solutions
  • Benchmark function
  • Metaheuristics