
A hybrid optimization algorithm based on chaotic differential evolution and estimation of distribution

Published in Computational and Applied Mathematics

Abstract

Estimation of distribution algorithms (EDAs) and differential evolution (DE) are two types of evolutionary algorithms. The former has a fast convergence rate and strong global search capability, but is easily trapped in local optima; the latter has good local search capability but slower convergence. Therefore, a new hybrid optimization algorithm that combines the merits of both, a hybrid algorithm based on chaotic differential evolution and estimation of distribution (cDE/EDA), is proposed. By harmonizing the global search of EDA with the local search of DE, the proposed algorithm discovers the optimal solution in a fast and reliable manner. A chaotic strategy is used to strengthen the search ability of DE. Meanwhile, the global convergence of the algorithm is analyzed with the aid of the limit theorem for monotone bounded sequences. The proposed algorithm is tested on a set of typical benchmark problems, and the results demonstrate the effectiveness and efficiency of cDE/EDA.
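As a rough illustration of the hybrid scheme summarized above, the following sketch combines a DE/rand/1 mutation, whose scale factor is driven by a chaotic logistic map, with sampling from a Gaussian model fitted to the better half of the population. This is a minimal sketch under our own assumptions (the function names, parameter values, and the coordinate-wise mixing rule are illustrative choices), not the authors' exact cDE/EDA.

```python
import math
import random


def logistic_map(z, mu=4.0):
    # Chaotic logistic map; mu = 4.0 gives fully chaotic behaviour on (0, 1).
    return mu * z * (1.0 - z)


def sphere(x):
    # Simple benchmark: global minimum 0 at the origin.
    return sum(xi * xi for xi in x)


def cde_eda_sketch(f, dim=5, pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    z = 0.37  # chaotic state driving the DE scale factor F
    for _ in range(generations):
        # EDA step: fit a Gaussian to the better half of the population.
        pop.sort(key=f)
        elite = pop[: pop_size // 2]
        mean = [sum(ind[d] for ind in elite) / len(elite) for d in range(dim)]
        std = [
            max(1e-12, math.sqrt(sum((ind[d] - mean[d]) ** 2 for ind in elite) / len(elite)))
            for d in range(dim)
        ]
        for i in range(pop_size):
            z = logistic_map(z)
            F = 0.4 + 0.5 * z  # chaotic scale factor in [0.4, 0.9]
            a, b, c = rng.sample(pop, 3)
            # Mix a DE/rand/1 mutant with an EDA sample coordinate-wise.
            trial = [
                a[d] + F * (b[d] - c[d]) if rng.random() < 0.5 else rng.gauss(mean[d], std[d])
                for d in range(dim)
            ]
            # Greedy selection keeps the better of incumbent and trial.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)


best = cde_eda_sketch(sphere)
```

On this separable benchmark the sketch reliably approaches the origin; the greedy selection step is also what makes the best fitness non-increasing, which is the property exploited in the convergence proof of Appendix B.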



Acknowledgments

This work was financially supported by the National Natural Science Foundation of China under Grant number 51365030. It was also supported by scientific research funds from Gansu University, the General and Special Program of the Postdoctoral Science Foundation of China, the Science Foundation for Excellent Youth Scholars of Lanzhou University of Technology under Grant numbers 1114ZTC139, 2012M521802, 2013T60889 and 1014ZCX017, respectively.

Author information

Corresponding author

Correspondence to Fuqing Zhao.

Additional information

Communicated by Natasa Krejic.

Appendices

Appendix A

Table 7 lists the definitions of all test functions, and Fig. 3 shows their graphical models.

Table 7 Benchmark test functions

Appendix B

The detailed proofs of Theorems 1 and 2 are presented as follows.

Theorem 1

The evolutionary direction of the population is monotone, i.e., \(f(\varvec{X}(n+1))\le f(\varvec{X}(n))\); thus the sequence \(\{f(\varvec{x}^{(n)})\}\) is monotonically non-increasing and bounded below.

Proof

cDE/EDA uses the greedy selection operation of Eq. (14). The promising solutions of earlier generations are preserved, so the fitness of the population is non-increasing, i.e., \(f(\varvec{X}(n+1))\le f(\varvec{X}(n))\). Consequently, \(f(\varvec{x}^{(n+1)})\le f(\varvec{x}^{(n)})\), where \(\varvec{x}^{(n)}\) denotes the minimum of the \(n\)th \((n=1,2,\ldots ,k)\) iteration and \(k\), the maximum number of iterations, is sufficiently large. As \(n\rightarrow +\infty \), we have \(f(\varvec{x}^{(1)})\ge f(\varvec{x}^{(2)})\ge \cdots \ge f(\varvec{x}^{(n)})\ge \cdots \), so \(\{f(\varvec{x}^{(n)})\}\) is a monotonic sequence. By Definition 1, the optimization problem possesses a global optimum, so \(\{f(\varvec{x}^{(n)})\}\) is bounded below. Therefore, \(\{f(\varvec{x}^{(n)})\}\) is a monotonically non-increasing, bounded sequence. \(\square \)
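The monotonicity argument can be checked numerically: under greedy selection, the recorded best fitness can never increase, whatever the proposal mechanism. A minimal sketch (the quadratic objective and uniform proposal are illustrative assumptions, not the paper's benchmark set):

```python
import random


def f(x):
    # Illustrative one-dimensional objective with minimum 0 at x = 0.
    return x * x


def greedy_minimize(x0, steps=100, seed=0):
    # Greedy selection in the spirit of Eq. (14): a candidate replaces the
    # incumbent only if it is no worse, so the fitness trace cannot increase.
    rng = random.Random(seed)
    x = x0
    trace = [f(x)]
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)
        if f(cand) <= f(x):
            x = cand
        trace.append(f(x))
    return trace


trace = greedy_minimize(3.0)
```

The trace is non-increasing by construction and bounded below by the global minimum, exactly the two properties the limit theorem for monotone bounded sequences requires.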

Theorem 2

cDE/EDA converges to the global optimum with probability 1.

Proof

According to Theorem 1, \(\{f(\varvec{x}^{(n)})\}\) is a monotonically non-increasing, bounded sequence, so by Lemmas 1 and 2 it possesses a limit. If \(\lim \limits _{n\rightarrow +\infty } f(\varvec{x}^{(n)})=f(\varvec{x}^*)\) exists and \(f(\varvec{x}^*)\) is the global optimum, then \(\{f(\varvec{x}^{(n)})\}\) is globally convergent. Since \(\{f(\varvec{x}^{(n)})\}\) is a stochastic sequence, \(\lim \limits _{n\rightarrow +\infty } f(\varvec{x}^{(n)})=f(\varvec{x}^*)\) is a random event. Thus it suffices to prove that \(P(\lim \limits _{n\rightarrow +\infty } f(\varvec{x}^{(n)})=f(\varvec{x}^*))=1\), i.e., that the sequence converges with probability 1. This is proved as follows. \(\square \)

For any \(\varepsilon >0\), let \(T_\varepsilon =\{\varvec{x}\in D: f(\varvec{x})-f(\varvec{x}^*)<\varepsilon \}\). Since the monotonically non-increasing, bounded sequence \(\{f(\varvec{x}^{(n)})\}\) satisfies \(f(\varvec{x}^{(1)})\ge f(\varvec{x}^{(2)})\ge \cdots \ge f(\varvec{x}^{(n)})\ge \cdots \), we have \(f(\varvec{x}^{(1)})-f(\varvec{x}^*)\ge f(\varvec{x}^{(2)})-f(\varvec{x}^*)\ge \cdots \ge f(\varvec{x}^{(n)})-f(\varvec{x}^*)\ge \cdots \). Let the stochastic event \(C_i =\{\varvec{x}^{(i)}\in T_\varepsilon \},i\in \{1,2,\ldots ,k\}\), denote that the \(i\)th iterate falls into the neighborhood \(T_\varepsilon \). For this \(\varepsilon \) we have \(C_1 \subseteq C_2 \subseteq \cdots \subseteq C_i \subseteq \cdots \), so the inequality \(P(C_1 )\le P(C_2 )\le \cdots \le P(C_i )\le \cdots \) holds. Since \(0\le P(C_i )\le 1\), \(\lim \limits _{i\rightarrow +\infty } P(C_i )\) exists.

Let the stochastic sequence be defined by

$$\begin{aligned} \zeta _i =\left\{ \begin{array}{ll} 1, &{} \hbox {the } i\hbox {th iterate falls into } T_\varepsilon \\ 0, &{} \hbox {the } i\hbox {th iterate does not fall into } T_\varepsilon \\ \end{array} \right. ,\quad i=1,2,\ldots ,k \end{aligned}$$

and \(C_i =\{\zeta _i =1\}\). Let \(P\{\zeta _i =1\}=q_i \) and \(P\{\zeta _i =0\}=1-q_i \). Then, for \(B_i =\frac{1}{i}\sum \nolimits _{j=1}^i {\zeta _j } \), we have

$$\begin{aligned} E(B_i )=\frac{1}{i}\sum \nolimits _{j=1}^i {q_j } ,\quad i=1,2,\ldots ,k, \end{aligned}$$
(16)
$$\begin{aligned} D(B_i )=\frac{1}{i^2}\sum \nolimits _{j=1}^{i} {D(\zeta _j )}=\frac{1}{i^2}\sum \nolimits _{j=1}^{i} {q_j (1-q_j )} ,\quad i=1,2,\ldots , k, \end{aligned}$$
(17)

where \(E(B_i )\) and \(D(B_i )\) are the expected value and variance of the sequence \(B_i (i=1,2,\ldots ,k)\), respectively. By the Chebyshev inequality,

$$\begin{aligned} P\{\vert B_i -E(B_i )\vert <\varepsilon \}\ge 1-\frac{D(B_i )}{\varepsilon ^2} \end{aligned}$$
(18)

Since \(q_j (1-q_j )\le \frac{1}{4}\) holds in Eq. (17), we obtain

$$\begin{aligned} P\{\vert B_i -E(B_i )\vert <\varepsilon \}\ge 1-\frac{1}{4i\varepsilon ^2} \end{aligned}$$
(19)
$$\begin{aligned} \mathop {\lim }\limits _{i\rightarrow +\infty } P\{\vert B_i -E(B_i )\vert <\varepsilon \}=1 \end{aligned}$$
(20)

Since \(\zeta _i =iB_i -(i-1)B_{i-1} ,i=1,2,\ldots ,k,\) it follows that

$$\begin{aligned} \mathop {\lim }\limits _{i\rightarrow +\infty } P\{\vert \zeta _i -E(\zeta _i )\vert <\varepsilon \}=1 \end{aligned}$$
(21)

The sequence of stochastic variables \(\zeta _i (i=1,2,\ldots ,k)\) converges in probability, so the sequence of stochastic events \(C_i (i=1,2,\ldots ,k)\) also converges in probability, that is, \(\lim \limits _{i\rightarrow +\infty } P\{C_i \}=1\). Therefore,

$$\begin{aligned} \mathop {\lim }\limits _{n\rightarrow +\infty } P\{\vert f(\varvec{x}(n))-f(\varvec{x}^*)\vert \le \varepsilon \}=1 \end{aligned}$$
(22)

For any \(\varepsilon >0\), letting \(\varepsilon \) tend to zero yields

$$\begin{aligned} \mathop {\lim }\limits _{\varepsilon \rightarrow 0} \vert f(\varvec{x}^{(n)})-f(\varvec{x}^*)\vert =0 \end{aligned}$$
(23)
$$\begin{aligned} \mathop {\lim }\limits _{\varepsilon \rightarrow 0} f(\varvec{x}^{(n)})=f(\varvec{x}^*) \end{aligned}$$
(24)

Thus, we can get that

$$\begin{aligned} \mathop {\lim }\limits _{n\rightarrow +\infty } P\{f(\varvec{x}^{(n)})=f(\varvec{x}^*)\}=1 \end{aligned}$$
(25)

that is, the sequence \(\left\{ {f(\varvec{x}^{(n)})} \right\} \) converges to the global optimum with probability 1.

\(\square \)
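The Chebyshev step in Eq. (19) can also be checked by simulation: for i.i.d. Bernoulli\((q)\) indicators, the empirical frequency \(B_i\) stays within \(\varepsilon \) of its mean with probability at least \(1-\frac{1}{4i\varepsilon ^2}\). A hedged sketch (the values of \(q\), \(i\), \(\varepsilon \) and the trial count are arbitrary choices for illustration):

```python
import random


def empirical_concentration(q=0.3, i=400, eps=0.1, trials=2000, seed=7):
    # Estimate P{|B_i - E(B_i)| < eps}, where B_i is the mean of i
    # independent Bernoulli(q) indicators, so that E(B_i) = q.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        b = sum(1 for _ in range(i) if rng.random() < q) / i
        if abs(b - q) < eps:
            hits += 1
    return hits / trials


p_hat = empirical_concentration()
chebyshev_bound = 1 - 1 / (4 * 400 * 0.1 ** 2)  # Eq. (19) with i = 400, eps = 0.1
```

Here the bound evaluates to \(1-\frac{1}{16}=0.9375\), and the simulated probability comfortably exceeds it, consistent with the bound being loose but valid.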

Appendix C

See Table 8.

Table 8 Experimental results (continued)


About this article


Cite this article

Zhao, F., Shao, Z., Wang, J. et al. A hybrid optimization algorithm based on chaotic differential evolution and estimation of distribution. Comp. Appl. Math. 36, 433–458 (2017). https://doi.org/10.1007/s40314-015-0237-0
