Abstract
Estimation of distribution algorithms (EDAs) and differential evolution (DE) are two types of evolutionary algorithms. The former has a fast convergence rate and strong global search capability, but is easily trapped in local optima; the latter has good local search capability but slower convergence. Therefore, a new hybrid optimization algorithm combining the merits of both, based on chaotic differential evolution and estimation of distribution (cDE/EDA), was proposed. By harmonizing the global search of EDA with the local search of DE, the proposed algorithm can discover the optimal solution in a fast and reliable manner. A chaotic policy was used to strengthen the search ability of DE. Meanwhile, the global convergence of the algorithm was analyzed with the aid of the limit theorem for monotone bounded sequences. The proposed algorithm was tested on a set of typical benchmark problems, and the results demonstrate its effectiveness and efficiency.
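The interplay described above can be sketched in code. The following is a minimal, illustrative single-generation loop in the spirit of cDE/EDA; the concrete operators, parameter values, mixing rule, and the logistic chaotic map are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative cDE/EDA-style generation: chaotic DE mutation/crossover mixed
# with sampling from a Gaussian model of the elite (the EDA part), followed by
# greedy selection. All operator choices here are assumptions, not the paper's.
import numpy as np

def logistic_map(z, mu=4.0):
    """One step of the logistic chaotic map on (0, 1)."""
    return mu * z * (1.0 - z)

def hybrid_step(pop, fitness, f, rng, z, F=0.5, CR=0.9):
    """Advance the population by one generation; returns updated state."""
    n, d = pop.shape
    order = np.argsort(fitness)
    elite = pop[order[: n // 2]]              # better half of the population
    mu_e = elite.mean(axis=0)                 # Gaussian EDA model of the elite
    sd_e = elite.std(axis=0) + 1e-12          # avoid a degenerate model
    for i in range(n):
        z = logistic_map(z)                   # chaotic sequence scales F
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        mutant = pop[r1] + F * z * (pop[r2] - pop[r3])   # DE/rand/1 mutation
        mask = rng.random(d) < CR             # binomial crossover
        trial = np.where(mask, mutant, pop[i])
        if rng.random() < 0.5:                # with prob. 0.5, take an EDA sample
            trial = rng.normal(mu_e, sd_e)
        ft = f(trial)
        if ft <= fitness[i]:                  # greedy selection: keep the better
            pop[i], fitness[i] = trial, ft
    return pop, fitness, z

# Usage: minimize the 5-D sphere function.
rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x * x))
pop = rng.uniform(-5.0, 5.0, size=(20, 5))
fitness = np.array([sphere(x) for x in pop])
best0, z = fitness.min(), 0.7
for _ in range(200):
    pop, fitness, z = hybrid_step(pop, fitness, sphere, rng, z)
```

Because selection is greedy, the best fitness found can never worsen across generations; the EDA sampling concentrates the search around the elite while the chaotic factor keeps the DE step sizes varied.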
Acknowledgments
This work was financially supported by the National Natural Science Foundation of China under Grant number 51365030. It was also supported by scientific research funds from Gansu University, the General and Special Program of the Postdoctoral Science Foundation of China, the Science Foundation for Excellent Youth Scholars of Lanzhou University of Technology under Grant numbers 1114ZTC139, 2012M521802, 2013T60889 and 1014ZCX017, respectively.
Additional information
Communicated by Natasa Krejic.
Appendices
Appendix A
Table 7 gives the definitions of all test functions, and Fig. 3 shows their graphical models.
Appendix B
The detailed proofs of Theorems 1 and 2 are presented as follows.
Theorem 1
The evolution of the population is monotone, i.e., \(f(\varvec{X}(n+1))\le f(\varvec{X}(n))\); hence the sequence \(\{f(\varvec{x}^{(n)})\}\) is monotone non-increasing and bounded below.
Proof
cDE/EDA uses the greedy selection operation of Eq. (14). The promising solutions of prior generations are preserved, so the fitness of the population is non-increasing, i.e., \(f(\varvec{X}(n+1))\le f(\varvec{X}(n))\). Consequently, \(f(\varvec{x}^{(n+1)})\le f(\varvec{x}^{(n)})\), where \(\varvec{x}^{(n)}\) denotes the minimizer found at the \(n\)th \((n=1,2,\ldots ,k)\) iteration and \(k\), the maximum number of iterations, is sufficiently large. As \(n\rightarrow +\infty \), we have \(f(\varvec{x}^{(1)})\ge f(\varvec{x}^{(2)})\ge \cdots \ge f(\varvec{x}^{(n)})\ge \cdots \), so \(\{f(\varvec{x}^{(n)})\}\) is a monotone sequence. By Definition 1 the optimization problem possesses a global optimum, so \(\{f(\varvec{x}^{(n)})\}\) is bounded. Therefore, \(\{f(\varvec{x}^{(n)})\}\) is a monotone, non-increasing and bounded sequence. \(\square \)
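The monotone, bounded sequence produced by greedy selection can also be checked numerically; the following minimal sketch uses an illustrative objective and proposal scheme (not the authors' operators) and records the best-so-far fitness trace.

```python
# Numerical check of the monotonicity argument: under greedy selection the
# sequence {f(x^(n))} is non-increasing and bounded below by the optimum.
# The objective and the proposal scheme here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x * x))       # bounded below by its optimum, 0
x = rng.uniform(-5.0, 5.0, size=4)       # current solution
trace = [f(x)]
for _ in range(500):
    trial = x + rng.normal(0.0, 0.5, size=4)   # arbitrary candidate move
    if f(trial) <= f(x):                       # greedy selection (Eq. (14))
        x = trial
    trace.append(f(x))
```

Every accepted move weakly decreases the objective, so `trace` is non-increasing and bounded below, exactly the property Theorem 1 asserts.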
Theorem 2
cDE/EDA converges to the global optimum with probability 1.
Proof
By Theorem 1, \(\{f(\varvec{x}^{(n)})\}\) is a monotone, non-increasing and bounded sequence, so by Lemmas 1 and 2 it possesses a limit. If \(\lim \limits _{n\rightarrow +\infty } f(\varvec{x}^{(n)})=f(\varvec{x}^*)\) exists and \(f(\varvec{x}^*)\) is the global optimum, then \(\{f(\varvec{x}^{(n)})\}\) is globally convergent. Since \(\{f(\varvec{x}^{(n)})\}\) is a stochastic sequence, \(\lim \limits _{n\rightarrow +\infty } f(\varvec{x}^{(n)})=f(\varvec{x}^*)\) is a random event. Thus it suffices to prove that \(P(\lim \limits _{n\rightarrow +\infty } f(\varvec{x}^{(n)})=f(\varvec{x}^*))=1\), which is shown as follows.
For \(\forall \varepsilon >0\), define the neighborhood \(T_\varepsilon =\{\varvec{x}\in D: f(\varvec{x})-f(\varvec{x}^*)<\varepsilon \}\). Since the monotone, non-increasing and bounded sequence \(\{f(\varvec{x}^{(n)})\}\) satisfies \(f(\varvec{x}^{(1)})\ge f(\varvec{x}^{(2)})\ge \cdots \ge f(\varvec{x}^{(n)})\ge \cdots \), we have \(f(\varvec{x}^{(1)})-f(\varvec{x}^*)\ge f(\varvec{x}^{(2)})-f(\varvec{x}^*)\ge \cdots \ge f(\varvec{x}^{(n)})-f(\varvec{x}^*)\ge \cdots \). Let the stochastic event \(C_i =\{\varvec{x}^{(i)}\in T_\varepsilon \}\), \(i\in \{1,2,\ldots ,k\}\), denote that the iterate has dropped into the neighborhood \(T_\varepsilon \) by the \(i\)th iteration. Then \(C_1 \subseteq C_2 \subseteq \cdots \subseteq C_i \subseteq \cdots \) for this \(\varepsilon \), so the inequality \(P(C_1 )\le P(C_2 )\le \cdots \le P(C_i )\le \cdots \) holds. Since \(0\le P(C_i )\le 1\), \(\lim \limits _{i\rightarrow +\infty } P(C_i )\) exists.
Let the stochastic sequence be presented by

$$\zeta _i =\left\{ {\begin{array}{ll} 1,&{}\quad \varvec{x}^{(i)}\in T_\varepsilon \\ 0,&{}\quad \hbox {otherwise,} \\ \end{array}} \right. $$

so that \(C_i =\{\zeta _i =1\}\). Let \(P\{\zeta _i =1\}=q_i \) and \(P\{\zeta _i =0\}=1-q_i \). Then, with \(B_i =\frac{1}{i}\sum \nolimits _{j=1}^i {\zeta _j } \), there exists
where \(E(B_i )\) and \(D(B_i )\) are the expected value and variance of the sequence \(B_i (i=1,2,\ldots ,k)\), respectively. By the Chebyshev inequality, there exists
and since \(q_j (1-q_j )\le \frac{1}{4}\) holds in Eq. (17), we can get that
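The display equations omitted above did not survive extraction. A reconstruction consistent with the surrounding text (and assuming, as the bound \(q_j (1-q_j )\le \frac{1}{4}\) suggests, that the \(\zeta _j \) are treated as mutually independent) is:

$$E(B_i )=\frac{1}{i}\sum \limits _{j=1}^i {q_j } ,\qquad D(B_i )=\frac{1}{i^{2}}\sum \limits _{j=1}^i {q_j (1-q_j )} ,$$

$$P\left\{ {\left| {B_i -E(B_i )} \right| \ge \varepsilon } \right\} \le \frac{D(B_i )}{\varepsilon ^{2}}=\frac{1}{i^{2}\varepsilon ^{2}}\sum \limits _{j=1}^i {q_j (1-q_j )} \le \frac{1}{4i\varepsilon ^{2}}\rightarrow 0\quad \hbox {as } i\rightarrow +\infty .$$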
Because \(\zeta _i =iB_i -(i-1)B_{i-1} \), \(i=1,2,\ldots ,k\), there exists
The sequence of stochastic variables \(\zeta _i (i=1,2,\ldots ,k)\) converges in probability, so the sequence \(B_i (i=1,2,\ldots ,k)\) also converges in probability, that is, \(\lim \limits _{i\rightarrow +\infty } P\{B_i =1\}=1\). Therefore, there exists
For \(\forall \varepsilon >0\), as \(\varepsilon \) tends to be arbitrarily small, we have
Thus, we can get that
that is, the sequence \(\left\{ {f(\varvec{x}^{(n)})} \right\} \) converges to the global optimum with probability 1.
\(\square \)
Appendix C
See Table 8.
Cite this article
Zhao, F., Shao, Z., Wang, J. et al. A hybrid optimization algorithm based on chaotic differential evolution and estimation of distribution. Comp. Appl. Math. 36, 433–458 (2017). https://doi.org/10.1007/s40314-015-0237-0
Keywords
- Hybrid optimization
- Estimation of distribution algorithm
- Chaotic differential evolution algorithm
- Convergence
- Global optimization