Abstract
Differential evolution (DE) is one of the most widely acknowledged population-based optimization algorithms due to its simplicity, ease of use, robustness, and problem-solving capability. DE has grown steadily since its inception owing to its ability to solve diverse problems in academia and industry. Different mutation techniques and parameter choices influence DE's exploration and exploitation capabilities, motivating researchers to continue working on DE. This survey depicts DE's recent developments concerning parameter adaptations, parameter settings, mutation strategies, hybridizations, and multi-objective variants over the last twelve years. It also summarizes the image processing problems solved by DE and its variants.
1 Introduction
Optimization is the procedure of finding the best possible values of decision variables under a given set of constraints and a selected objective function. Applying the optimization procedure minimizes the total cost, maximizes the achievable reliability, or optimizes any other specific objective. Optimization problems are commonly found in science and engineering, industry, and business decision-making, and they are solved by implementing the proper optimization approach. Almost every real-world optimization problem is fundamentally and practically challenging. Therefore, continuing research in this field is required, as the best possible solution can be guaranteed only by using an appropriate optimization approach. Nature inspires researchers because it helps them model and solve complex computational problems, and researchers have drawn on it for many years. According to Darwin's theory, every species must adapt its physical structure to survive in changing natural conditions. The relationship between optimization and biological evolution paved the way for developing evolutionary computing techniques for performing complex searches and optimization. It was around the 1950s when the idea of using Darwin's theory for solving automated problems originated. Lawrence J. Fogel introduced Evolutionary Programming (EP) [1] in the USA; in Germany, Rechenberg and Schwefel introduced Evolution Strategies (ES) [2] in the 1960s. Almost a decade later, at the University of Michigan, John Henry Holland proposed an independent method replicating Darwin's theory of evolution to solve practical optimization problems, called the Genetic Algorithm (GA) [3]. In the early 1990s, the idea of Genetic Programming (GP) [4] emerged. Figure 1 depicts the classification of meta-heuristic algorithms.
In evolutionary computation, the population is updated through iterative progress: the desired result is reached by selecting the population via a guided random search and exploiting parallel processing. DE traces its origin to the genetic annealing algorithm developed by Kenneth Price and published in October 1994 in Dr. Dobb's Journal (DDJ), a renowned programmer's magazine. Genetic annealing is a population-based, combinatorial optimization algorithm that uses thresholds to implement annealing criteria. Later, the same author discovered the differential mutation operator, which is the basis of DE. At present, the area of nature-inspired metaheuristics comprises evolutionary algorithms (e.g., EP, ES, GA, GP, DE) and swarm intelligence algorithms (e.g., monkey search [5], bee colony optimization [6], cuckoo search [7], the firefly algorithm [8], wolf search [9], the whale optimization algorithm [10]). The broader field also includes the artificial immune system [11], memetic and cultural algorithms [12], harmony search [13], and others.
Differential evolution (DE) [14], first suggested by Storn and Price in 1995, is a straightforward and effective evolutionary technique used in continuous domains to address global optimization problems. Due to DE's adaptability and effectiveness, numerous modified variants have been developed, and DE and its many versions have been among the most effective and adaptable evolutionary computing strategies to date. The number of publications citing the original paper indicates rising interest in DE. It has effectively addressed various real-world problems from different scientific and technological fields. DE variants have consistently placed first in the EA competitions run by the IEEE Congress on Evolutionary Computation (CEC) conference series; no search algorithm other than DE has secured a place in all of the CEC competitions on single-objective, large-scale, multi-objective, constrained, dynamic, and multimodal optimization problems.
To present a broader perspective on the advances, relevance, and impact of the basic DE algorithm, Fig. 2 shows scatter plots of citations of the basic DE articles from 1995 to the present. It is evident from the graph that researchers' interest in DE has been growing since its inception. As illustrated in Fig. 2, the first publication using the term DE appeared in the literature in 1995. Considering the citation structure across the various research areas of DE-related journal publications, there has been exponential growth in the citation count over the years; the highest annual citation count of 17,370 was received by the publications of 1997. Figure 3 provides a network-based visualization of the publication and citation structure among authors covering the DE algorithm and its variants over the last 46 years, giving a better understanding of the exponential growth in research interest and the applicability of DE. There are eight clusters in the visualization. The red cluster has Storn and Price [14] as the prominent node, implying that other authors in this cluster, such as Noman and Iba [15], Mohanty et al. [16], Das et al. [17], Rahnamayan and Tizhoosh [18], and Tanabe and Fukunaga [19], have mostly cited DE papers by Storn and Price [14]. Another prominent cluster is the blue one containing Das et al., implying that other authors in it, such as Ter Braak [20], have mostly cited DE papers by that author.
Starting from a population sampled uniformly at random from the feasible search volume, every iteration (known as a generation in evolutionary-computing terminology) of DE executes the same computational steps as a standard Evolutionary Algorithm (EA). However, unlike the genetic algorithm, it never uses binary encoding, and unlike evolution strategies, it does not use a probability density function to self-adapt its parameters. DE bases its mutation on the distribution of the solutions in the current population. Therefore, the search direction and possible step sizes depend on the locations of the randomly selected individuals used to calculate the mutation values. DE has four phases: vector initialization, difference-vector-based mutation, crossover, and selection. Once the vector population is initialized, the other three steps are repeated iteratively until a termination condition is met. Figure 4 shows the main steps of the DE algorithm.
According to practical experience, the most appropriate mutation strategies and DE parameters generally differ across optimization problems. This happens because DE's exploitation and exploration capabilities vary with the mutation and parameter tuning techniques used. Configuring methods and parameters by a trial-and-error approach is time-consuming and computationally expensive. Researchers have devised several techniques for intelligent mutation strategy ensembles and automated parameter adaptation to overcome this challenge. For example, a self-adaptive DE algorithm (SaDE) was proposed by Qin et al. [21], whereas Mallipeddi et al. [22] proposed another DE variant called EPSDE using a self-adaptive parameter tuning approach. Though both performed excellently, SaDE could not find an optimal solution for the high-modality composite function due to poor local search ability, which made SaDE converge prematurely. Similarly, EPSDE provided only a slight improvement and was compared only to JADE.
Similarly, several efficient adaptive DE variants have been presented over the last few decades, each with its respective advantages in dealing with different optimization problems. The idea of mixing EAs with other EAs or with other techniques (hybridization) is to combine the benefits of two or more algorithms for better results. Multi-objective variants of DE were also developed to solve optimization problems with multiple objectives. Various survey articles [23,24,25,26] on DE were published during this period. Eltaein et al. [27] presented a survey of DE covering the variants developed with adaptive, self-adaptive, and hybrid techniques. Pant et al. [28] surveyed the developments of DE since its initiation, covering strategies such as changes in population initialization, modified mutation and crossover schemes, parameter alteration, hybridization, and discrete variants; the authors also reviewed the DE variants developed to solve various problems in science and engineering. Georgioudakis et al. [29] studied the performance of DE variants such as CODE, jDE, JADE, and SaDE in handling constrained structural optimization problems linked to truss structures.
Unlike the studies mentioned above, this article includes only the recent efficient variants of DE, including multi-criteria variants, from 2009 to 2020. To our knowledge, multi-objective variants of DE were not studied in any of the above literature. In the application survey part, we study only the DE variants developed for solving image processing problems, which may be useful to researchers working in this application domain. To achieve the study objective, the systematic literature review procedure was used as a guide, and carefully selected keywords were used to search and retrieve relevant pieces of literature. The following keywords or noun phrases were used to search the Web of Science (WoS), the Scopus repository, and the websites listed in Table 1 for articles published in reputable peer-reviewed journals, edited books, and conference proceedings: differential evolution, DE, variants of DE algorithm, a survey of DE. The choice of academic databases is influenced by the high-quality articles published in SCI-indexed journals and ranked international conferences.
The publication years range from 2009 to 2020. During each search, the retrieved articles were perused to collect more related articles from their citations. Inclusion and exclusion criteria were then applied to the collected articles to ensure that only the ones fitting the study objective were selected. The inclusion criteria are: recent efficient variants of DE, including multi-criteria variants from 2009 to 2020; multi-objective variants of DE; and DE variants developed for image processing problems. Everything else was excluded.
The paper is structured as follows: The basic DE and its popular mutation strategies are described in Sect. 2. DE variants proposed with modifications in parameter settings, mutation strategies, and hybridization techniques are presented in Sect. 3. Section 4 contains a collection of multi-objective DE variants with brief descriptions. Section 5 summarizes the DE variants developed to solve image processing problems. Section 6 presents potential future directions for DE, and finally, Sect. 7 concludes the study.
2 Basics of Differential Evolution Algorithm
Storn and Price [14] proposed the DE method, a subset of evolutionary computation, to address optimization problems in continuous domains. Every variable in DE is expressed as a real number. DE employs mutation for exploration, while selection focuses the search on the likely locations within the feasible region. Initialization, mutation, recombination or crossover, and selection are the four essential processes in the conventional DE algorithm. Until the termination requirement (such as the exhaustion of the maximum number of function evaluations) is met, the final three phases of DE run in a loop.
2.1 Decision Vector Initialization
DE looks for the best global point in a D-dimensional real decision-variable region. The search starts with a randomly initialized population of NP vectors whose variables lie within the feasible bounds. The multi-dimensional optimization problem has several candidate solutions, each of which is a vector known as a genome or chromosome. In DE, generations are indicated by \({G}_{n}=0, 1, 2,\dots ,{G}_{max}.\)
We may employ the following notation to denote the kth vector of the population in the current generation, since the vectors are likely to change across generations \({(G}_{n})\):
A defined range may be provided for each decision variable of a specific problem, and the decision variable's value should be limited to it. Decision variables are typically bounded by natural limits or tied to physical quantities (for example, if a decision variable is a length or a mass, we would want it not to be negative). By uniformly randomizing solutions inside the problem space bounded by the prescribed lower and upper bounds, the initial population (at \({G}_{n}\) = 0) should cover this region as broadly as feasible: \({X}_{low}=\left\{{x}_{1,low}, {x}_{2,low},\dots ,{x}_{D,low }\right\}\) and \({X}_{hi}=\left\{{x}_{1,hi}, {x}_{2,hi},\dots ,{x}_{D,hi}\right\}\). Hence, we may initialize the \(j\)th component of the \(k\)th decision vector as:
where \(rn{d}_{j}^{k}\left[0, 1\right]\) is a uniformly distributed random number lying between 0 and 1 (actually 0 ≤ \(rn{d}_{j}^{k}\left[0, 1\right]\)≤ 1) and is instantiated independently for each component of the \(k\)th vector.
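The initialization rule above can be sketched as follows (a minimal NumPy sketch; the function name, bounds, and population size are illustrative):

```python
import numpy as np

def initialize_population(np_size, dim, x_low, x_high, rng=None):
    """Uniformly randomize NP decision vectors within [x_low, x_high]."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Each jth component: x_low[j] + rnd[0,1] * (x_high[j] - x_low[j])
    return x_low + rng.random((np_size, dim)) * (x_high - x_low)

pop = initialize_population(20, 5, np.full(5, -10.0), np.full(5, 10.0))
```

Each component draws its own independent uniform random number, so the population spreads across the whole bounded region.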
2.2 Mutation with Difference Vectors
A "mutation" in biology is a sudden alteration to a chromosome's gene composition. The mutation is viewed as a modification or alteration with a randomized component in the evolutionary computing concept. A parent vector from the most recent generation is referred to as the target vector in the DE concept. Donor vectors are mutant vectors created during the mutation operation. The term "trial vector" refers to a descendant created by recombining the donor and the target vector. In a traditional DE-mutation, three more different parameter vectors, for instance, \({p}_{r{nd}_{1}^{\left(l\right)}}, {p}_{r{nd}_{2}^{\left(m\right)}}, {p}_{r{nd}_{3}^{\left(n\right)}}\) are used to construct the donor vector for each kth target vector from the existing population. The indices \(r{nd}_{1}^{\left(l\right)}, r{nd}_{2}^{\left(m\right)}, r{nd}_{3}^{\left(n\right)}\) are mutually exclusive integers randomly chosen from the range \([1, NP]\); all these three solutions are different from the target vector \(k\). For every mutant vector generation, these indices are produced at random once. The difference between second and third solution vectors is scaled by a scalar number \(F\) (its value generally lies within the interval \([0.4, 1]\)). The parameter \(F\) is described as the scaling factor or scale factor, and the scaled difference is added to the first solution vector to obtain the donor vector \({p{^{\prime}}}_{Gn}^{\left(k\right)}\). A conceptual representation of the DE mutation phase is shown in Figs. 5 . The procedure can be expressed mathematically as:
2.3 Crossover
Crossover mixes components of the donor vector with the target vector to bring diversity into the population. The donor vector tries to replace components of the target vector \({P}_{Gn}^{\left(k\right)}\) and thereby forms the trial vector. For trial vector generation, DE uses two crossover methods: (i) binomial crossover and (ii) exponential crossover. In exponential crossover, an integer \(n\) is chosen randomly from \([1, D]\). The value of this integer \(n\) acts as a starting point in the target vector; from that point, the crossover or exchange of components with the donor vector starts. Another integer \({L}_{n}\) is chosen within the interval \([1, D]\); \({L}_{n}\) denotes the number of components the donor vector will contribute to the target vector. With the values of \(n\) and \({L}_{n}\), the trial vector can be obtained as
where the angular brackets denote a modulo function with modulus \(D\). The integer \({L}_{n}\) is drawn from \([1, D]\) according to the following pseudo-code:
"\(Cros\)" is called the crossover rate, and it is also a control parameter of DE. A new set of \(n\) and \({L}_{n}\) must be chosen randomly for each donor vector. A conceptual representation of the DE crossover phase is shown in Fig. 6.
On the other hand, in binomial crossover, whenever a randomly generated number between 0 and 1 is less than or equal to the \(Cros\) value, the donor vector's component is selected for the trial vector; otherwise, the target vector's component is retained. This operation is performed for each of the \(D\) dimensions. In this method, the number of components the donor vector contributes to the trial vector follows (nearly) a binomial distribution. The process may be outlined as follows:
where \(r{nd}_{k,j}\left[0, 1\right]\) is a uniformly distributed random number, \({j}_{rnd}\) belonging to \([1, D]\) is a randomly chosen index; it ensures that \({v}_{j,Gn}^{\left(k\right)}\) gets at least one component from \({p}_{j,Gn}^{\left(k\right)}.\)
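The binomial crossover rule can be sketched as follows (a minimal NumPy sketch; names are illustrative):

```python
import numpy as np

def binomial_crossover(target, donor, cros, rng):
    """Take the donor component whenever rnd[0,1] <= Cros,
    and always at the forced index j_rnd."""
    D = len(target)
    j_rnd = rng.integers(D)              # guarantees >= 1 donor component
    mask = rng.random(D) <= cros
    mask[j_rnd] = True
    return np.where(mask, donor, target)

rng = np.random.default_rng(3)
trial = binomial_crossover(np.zeros(8), np.ones(8), 0.9, rng)
```

The forced index \(j_{rnd}\) ensures the trial vector differs from the target in at least one component, so crossover can never return an exact copy of the target.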
2.4 Selection
After the generation of trial vectors by the crossover process, it is determined by the selection process whether a trial vector or a target vector will act as a target vector for the next generation \(Gn+1\). The selection process can be written as follows:
where \(f(x)\) is the objective function to be minimized. Therefore, if the new trial vector produces an equal or lower objective function value, it replaces the corresponding target vector in the next iteration; otherwise, the target vector is retained in the next generation's population. Hence, each population vector's objective value either improves or stays the same in every iteration (considering the minimization problem); i.e., it never deteriorates (Fig. 7).
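The greedy selection step can be sketched as follows (a minimal sketch for a minimization problem; the sphere function is only an example objective):

```python
import numpy as np

def select(target, trial, f):
    """Keep the trial vector if it is not worse than the target (minimization)."""
    return trial if f(trial) <= f(target) else target

sphere = lambda x: float(np.sum(x ** 2))
survivor = select(np.array([2.0, 2.0]), np.array([1.0, 1.0]), sphere)
```

Accepting ties (the `<=`) lets the search drift across flat regions of the objective surface instead of stalling on them.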
2.5 DE Algorithm
Step 1: Fix the values of the crossover rate \((Cros)\), scale factor \((F)\), and population size \((NP)\) as input from the user.
Step 2: Initialize the generation counter \({(G}_{n})\) to 0 and initialize a random population of \(NP\) individuals \({P}_{Gn}=\{{p}_{1,Gn}, {p}_{2,Gn}, {p}_{3,Gn},\dots ,{p}_{NP,Gn}\}\). Every individual \({P}_{Gn}^{\left(k\right)}=\{{p}_{1,Gn}^{\left(k\right)}, {p}_{2,Gn}^{\left(k\right)},\dots ,{p}_{D,Gn}^{\left(k\right)}\}\) is uniformly dispersed within the range \([{P}_{low}, {P}_{hi}]\), where \({P}_{low}=\{{p}_{1,low}, {p}_{2,low},\dots ,{p}_{D,low}\}\) and \({P}_{hi}=\{{p}_{1,hi}, {p}_{2,hi},\dots ,{p}_{D,hi}\}\), and \(k\) lies in \([1, NP]\).
Step 3: While the termination condition is not met, repeat the mutation, crossover, and selection operations for each vector of the population.
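The three steps above can be assembled into a complete DE/rand/1/bin loop, sketched below (a minimal NumPy sketch; the sphere objective and all parameter values are illustrative, not prescriptive):

```python
import numpy as np

def differential_evolution(f, dim=5, np_size=30, F=0.8, cros=0.9,
                           low=-5.0, high=5.0, max_gen=200, seed=0):
    """Minimal DE/rand/1/bin following Steps 1-3 (sketch)."""
    rng = np.random.default_rng(seed)
    # Step 2: uniformly random initial population within [low, high]
    pop = low + rng.random((np_size, dim)) * (high - low)
    fit = np.array([f(p) for p in pop])
    for _ in range(max_gen):                       # Step 3 loop
        for k in range(np_size):
            r1, r2, r3 = rng.choice(
                [i for i in range(np_size) if i != k], size=3, replace=False)
            donor = pop[r1] + F * (pop[r2] - pop[r3])     # mutation
            mask = rng.random(dim) <= cros                # binomial crossover
            mask[rng.integers(dim)] = True
            trial = np.where(mask, donor, pop[k])
            f_trial = f(trial)
            if f_trial <= fit[k]:                         # greedy selection
                pop[k], fit[k] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

best_x, best_f = differential_evolution(lambda x: float(np.sum(x ** 2)))
```

On the 5-dimensional sphere function this sketch converges close to the origin within a few hundred generations, illustrating the loop's greedy, monotonically non-worsening behavior.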
2.6 Mutation Strategies of DE
DE is a simple but efficient evolutionary algorithm used in a continuous domain to solve global optimization problems. According to practical experience, the most appropriate mutation strategies and DE parameters are generally different when applied to different optimization problems. This happens because DE's exploitation and exploration capabilities vary with different mutation strategies and parameter tuning. Based on this experience and after continuous work, several mutation strategies have been proposed by researchers. At generation G, the kth vector can be written as:
The most popular mutation strategies of DE are:
In the above equations, \(r{nd}_{1}^{l}\), \(r{nd}_{2}^{m}\), \(r{nd}_{3}^{n}\), \(r{nd}_{4}^{o}\), \(r{nd}_{5}^{q}\) are mutually exclusive integers randomly selected from the range \([1, NP]\), all of them different from the index of the target vector \({p}^{\left(k\right)}\). "\(rnd\)" signifies random, and "\(bml\)" indicates that the trial vector is generated with binomial crossover. \(F\) and \({p}_{best,Gn}\) are the scaling factor and the best individual of the present population, respectively. \({p}_{pbest,Gn}\) is a randomly chosen individual from among the best \(p\) solutions.
2.7 Advantages and Disadvantages of DE
A trustworthy and beneficial global optimizer, DE is a population-based evolutionary algorithm. Unlike other evolutionary algorithms, DE produces offspring by perturbing solutions with the scaled difference of two randomly chosen individuals from the population. The parent solution is replaced by its offspring only if the offspring is better than the parent. Unlike many other evolutionary computation techniques, basic DE is a reasonably straightforward algorithm that can be implemented with just a few lines of code in any common programming language. Additionally, the scaling factor, crossover rate, and population size are the only control parameters needed for canonical DE, which makes it simple for practitioners to use [30]. No other search methodology has obtained a competitive position in essentially all the CEC contests on single-objective, constrained, dynamic, large-scale, multi-objective, and multimodal optimization problems. The implicit self-adaptation embedded in the algorithm's structure is the cause of DE's remarkable success [23]. An optimization algorithm must be explorative in its early stages, since solutions are dispersed throughout the search space; exploitation around the discovered potential solutions is also necessary for the optimization process. DE is heavily explorative early in the process before progressively transitioning to exploitation. Due to this, DE can typically escape local minima with a reasonable rate of convergence. Regarding the search features of an evolutionary algorithm, DE is a well-balanced algorithm.
DE has certain drawbacks despite these benefits. Liu et al. [31] observed that if the offspring generated over several iterations are worse in fitness than their parent solutions, it can become difficult for DE to escape that situation, leading the algorithm to stagnation. If promising solutions are not found after a few exploratory steps, the search process can be considered seriously compromised. The population size correlates with the algorithm's potential movements over time: a small population may move only a small amount, whereas a large population may perform numerous movements, and unproductive ones can waste increasing amounts of computational work as the population grows. Small populations may result in premature convergence [32]. The crossover rate and scaling factor are essential to the algorithm's efficiency, yet choosing their values is a laborious process; according to several research studies [33, 34], setting fair parameter values is problem-dependent. The probability of stalling in DE grows with greater dimensionality; according to Zamuda et al. [35], parameter setup can be challenging when handling real-life optimization problems with larger dimensions. Beyond the dimensionality problem, DE is also ineffective in noisy optimization tasks: according to Krink et al. [36], standard DE can struggle to handle noisy fitness functions.
2.8 DE Variants as the Champion of CEC Competitions
DE has proven to be one of the best methods, and many modifications have been made to it during the last two decades. Modified variants of DE have secured places in almost all CEC competitions held during this period. A list of DE methods with the year of competition and each method's position is given in Table 2. The share of Google Scholar citations of the DE variants that secured positions in CEC competitions and the performance count of DE variants in CEC competitions are shown in Figs. 8 and 9.
3 Variants of DE Algorithm
The exploration and exploitation capability of DE also varies with the complexity of the problem. Exploration is concerned with finding new promising solutions, while exploitation searches in the vicinity of those good solutions; the two are interrelated in the evolutionary search, and an equilibrium between them can give better results. DE has three control parameters, namely the scaling factor \(F\), population size \(NP\), and crossover rate \(CR\). The role of these control parameters is to keep exploration and exploitation in equilibrium. DE's efficiency, effectiveness, and robustness on a practical problem depend strongly on the appropriate choice of these parameter values. Choosing appropriate control parameter values is difficult when a balance between exploration and exploitation is required, and the selection of control parameter values for a problem is time-consuming and affects the algorithm's efficiency.
The performance of DE mainly depends on its parameter selection and trial vector generation strategy. During the last two decades, much work has been done on DE, and several policies of parameter adaptation, parameter adaptation with mutation strategy selection, and hybridization have been proposed. The framework shown in Fig. 10 summarizes the modifications made to DE so far.
3.1 Modified by Parameter Selection
Three essential parameters, namely the mutation factor, crossover rate, and population size, are used in DE. The crossover rate defines the length of the string passed down from one generation to the next, while the mutation factor determines the population's diversity. The population size affects how robust the process is. Parameter selection can be deterministic, adaptive, or self-adaptive. Deterministic parameter selection involves changing values according to a deterministic rule after a predetermined number of generations. Adaptive parameter selection involves altering parameters in response to feedback from the search process. In self-adaptive parameter selection, parameter values are encoded into the chromosomes; superior values generate better children and thus propagate to the following generation. Table 3 provides concise descriptions of various algorithms with illustrations.
3.1.1 jDE
The parameters of DE are problem-dependent, like those of other optimization algorithms, and tuning them is a tedious job. Considering this fact, the authors used \(F\) and \(Cr\) values adaptively during the optimization process. Four new parameters \({F}_{l}, {F}_{u}, {\tau }_{1},\) and \({\tau }_{2}\) were introduced. \({\tau }_{1}\) and \({\tau }_{2}\) represent probabilities, both set to 0.1; the values of \({F}_{l}\) and \({F}_{u}\) were fixed at 0.1 and 0.9, respectively. A new value of the scaling factor was evaluated by multiplying a random value by \({F}_{u}\) and adding the result to \({F}_{l}\). This newly evaluated value of \(F\) was accepted for the next generation only if a random number was smaller than the probability \({\tau }_{1}\); otherwise, the value of \(F\) from the last generation was retained. Similarly, the \(Cr\) value for the next generation becomes a new random number if another random number is smaller than \({\tau }_{2}\); otherwise, the current value is retained. Twenty classical benchmark functions were evaluated and compared with the basic and self-adaptive DE algorithms to establish the performance enhancement.
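The jDE self-adaptation rule described above can be sketched as follows (a minimal sketch; the function name is illustrative, and in jDE each individual carries its own \(F\) and \(Cr\) pair):

```python
import random

def jde_adapt(F_old, CR_old, rng, F_l=0.1, F_u=0.9, tau1=0.1, tau2=0.1):
    """jDE rule: with probability tau1, F_new = F_l + rnd * F_u;
    with probability tau2, CR_new = rnd; otherwise keep the old values."""
    F_new = F_l + rng.random() * F_u if rng.random() < tau1 else F_old
    CR_new = rng.random() if rng.random() < tau2 else CR_old
    return F_new, CR_new

rng = random.Random(5)
F, CR = 0.5, 0.9
for _ in range(100):                 # parameters drift slowly over generations
    F, CR = jde_adapt(F, CR, rng)
```

Because \(\tau_1 = \tau_2 = 0.1\), each parameter is resampled only about once every ten generations, so successful settings persist long enough to be exploited.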
3.1.2 DEGL
It described a family of improved variants of \(DE-target-to-best/1/bin\) utilizing a neighborhood scheme for each member of the population. The drawback of the \(DE-target-to-best/1/bin\) technique was identified: because the best vector is used to generate the donor vector, all vectors are attracted toward the same best position, promoting excessive exploitation. To overcome this, a two-neighborhood strategy with local and global neighborhoods was used. Each vector was mutated in a local neighborhood using the best value found in that local (small) neighborhood, whereas in the global neighborhood, every vector was transformed using the entire generation's best value. The local and global models were then combined using a weight factor. The results showed that DEGL is the best performer among \(DE/rand/1/bin\), \(DE-target-to-best/1/bin\), SADE, and others.
3.1.3 εDEag
The study proposed an ε-constrained DE with an archive and a gradient-based mutation strategy for improving the stability, usability, and efficacy of the previously proposed εDEg. A steady approach of individuals towards the optimal solution was attained by re-evaluating a child whenever it was not better than its parent. The parameter values in the algorithm were set automatically depending on the state of the initial archive; this scheme enabled automatic specification of the ε-level's control parameters and enhanced the usability of the algorithm. Gradient-based mutation with skip and long moves was introduced, and the ε level was increased to expand the search space. Eighteen problems of "single objective real parameter constrained optimization" were solved using the proposed algorithm.
3.1.4 FiADE
A simpler and more effective adaptation of \(F\) and \(CR\) without user intervention was introduced. The adaptation depends on the objective function values of the population members. If, in a particular generation, a vector's fitness value is close to the best objective value, the \(F\) value was reduced, allowing a more intensive local search; conversely, the \(F\) value was increased when the function value deviates far from the best objective value, obstructing premature convergence. Similarly, if the donor vector's fitness value deviates from the objective value in a negative direction, the \(CR\) value was increased, and vice versa. A study on unimodal and multimodal functions showed that the proposed scheme was simple and competitive.
3.1.5 SFcDE-PSR
Compact DE runs optimization on systems with limited memory and computational ability. Although the reduction in population size may lead to premature convergence, these algorithms are designed to keep the algorithm's reliability intact. SFcDE-PSR used a super-fit mechanism: an external algorithm was executed to improve a poorly performing solution before including it in the DE population. Consequently, this improved solution guided the search process, increasing the exploration ability of the search. The second strategy was a progressive reduction of the population size. Applying these strategies together made SFcDE-PSR more efficient than previous cDE versions.
3.1.6 OXDE
DE uses a binomial/exponential crossover, which confines the search to a hyper-rectangle defined by the mutant and target vectors; therefore, the search area of DE may be limited. In this scheme, the QOX operator improved the search ability of DE. Applying QOX to every pair of mutant and target vectors would increase the computational overhead; thus, QOX was applied only once per generation. The scheme switches flexibly between binomial/exponential crossover and QOX to generate trial vectors. The application of this scheme enhanced the efficiency of \(DE/rand/1/bin\).
3.1.7 JADE-APTS
According to the proposed APTS approach, the total population size of the algorithm can be dynamically adjusted based on the population distribution and the algorithm's search status. New promising solutions were discovered by placing new individuals in the proper area with an elite-based population increment strategy. Inferior individuals are eliminated according to a ranking process, and the freed places are reserved for better reproduction using an inferior-based population cut strategy. A status monitor regulated these dynamic population control strategies. JADE-APTS gave a competitive performance on 30-dimensional problems and the best performance on 100-dimensional problems among jDE, JADE, SaDE, EPSDE, and CoDE.
3.1.8 EDE
A new modified mutation scheme was proposed that considers the best and worst individuals of a particular generation. It was combined with the basic DE mutation scheme through a non-linear decreasing probability rule. The scheme enabled better local search ability and a faster convergence rate, but could lead to premature convergence. This issue was tackled with a random mutation scheme and a modified BGA mutation. The scaling factor \(F\) was set as a uniform random variable in [0.2, 0.8], so that lower values favor exploration and higher values exploitation. \(CR\) was set as a uniform random variable in [0.5, 0.9] to increase diversity and convergence rate.
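The two uniform ranges above can be sketched as follows; `sample_parameters` is a hypothetical helper name, not from the paper:

```python
import random

def sample_parameters(rng=random):
    # EDE-style control parameters (sketch): F is uniform in [0.2, 0.8],
    # CR is uniform in [0.5, 0.9], redrawn as needed during the run.
    F = rng.uniform(0.2, 0.8)
    CR = rng.uniform(0.5, 0.9)
    return F, CR
```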
3.1.9 DE with RBMO
DE with ranking-based mutation operators builds on the idea that if, instead of purely random parent selection, parents with better characteristics are chosen, the chance that their offspring survive and yield good results also increases. Vectors were first sorted according to their fitness, and then the selection probability of each vector was calculated. The base vector and the terminal point of the difference vector were selected according to these selection probabilities, while the other vectors were chosen according to the mutation strategy used. This process gave better results than jDE, a superior algorithm among others at that time.
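A minimal sketch of rank-proportional parent selection, assuming minimisation and rank-based probabilities (the function name and the exact probability formula are illustrative, not taken from the paper):

```python
import random

def ranking_based_select(fitness, rng=random):
    """Sort by fitness, assign each index a selection probability
    proportional to its rank (best gets probability 1), then accept a
    random candidate with that probability (rejection sampling)."""
    NP = len(fitness)
    # descending fitness order: worst individual first (minimisation)
    order = sorted(range(NP), key=lambda i: fitness[i], reverse=True)
    rank = {idx: r + 1 for r, idx in enumerate(order)}   # worst=1, best=NP
    prob = {i: rank[i] / NP for i in range(NP)}
    while True:
        i = rng.randrange(NP)
        if rng.random() < prob[i]:
            return i
```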
3.1.10 SHADE
This algorithm builds on JADE: it used the \(current-to-pbest/1\) mutation strategy, an external archive, and adaptive control of the \(F\) and \(CR\) parameter values. In JADE, \({\mu }_{CR}\) and \({\mu }_{F}\) are continuously updated to approach the mean of the previous generation's successful \(CR\) and \(F\) values, where \({S}_{CR}\) and \({S}_{F}\) store the \(CR\) and \(F\) values that were effective in the last generations. Due to the probabilistic nature of DE, poor values of \(CR\) and \(F\) may also be included in \({S}_{CR}\) and \({S}_{F}\), which could degrade performance. SHADE therefore used historical memories \({M}_{CR}\) and \({M}_{F}\) to store mean values of \({S}_{CR}\) and \({S}_{F}\) for each generation. SHADE thus used a diverse set of parameters, compared to JADE's single pair, to guide control parameter adaptation during the search process. The modified algorithm showed competitive results on several benchmark problems.
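The historical-memory idea can be sketched as a circular buffer of slots, each overwritten in turn by the mean of the generation's successful values. This is a simplified illustration: SHADE actually uses weighted means (a weighted Lehmer mean for \(F\)), while the sketch below uses a plain arithmetic mean.

```python
def update_memory(M, S, k):
    """Overwrite memory slot k with the mean of the successful parameter
    values S collected this generation, then advance the circular index.
    If no trial vector succeeded (S is empty), the memory is unchanged."""
    if S:
        M[k] = sum(S) / len(S)
        k = (k + 1) % len(M)
    return k
```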
3.1.11 PA-jDE
In lower-dimensional problems, low population diversity can lead to stagnation, which halts the improvement of the population. Population adaptation emerged to overcome this issue. The algorithm detects the onset of stagnation and diversifies the population by measuring the Euclidean distance among individuals. When this scheme was applied to jDE, performance improved significantly.
3.1.12 LSHADE
In LSHADE, the basic SHADE algorithm's search performance was enhanced with a linear population size reduction mechanism. The population reduction increases convergence speed and decreases the algorithm's complexity. The performance of LSHADE was verified by evaluating the IEEE CEC 2014 function set and comparing it with state-of-the-art DE and CMA-ES variants. The performance tests confirmed the supremacy or competitiveness of the algorithm.
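Linear population size reduction shrinks the target population size linearly over the evaluation budget, from an initial size down to a minimum. A sketch of that rule (function and parameter names assumed):

```python
def lpsr_size(nfe, max_nfe, n_init, n_min):
    """Target population size after nfe of max_nfe function evaluations:
    interpolates linearly from n_init (at nfe=0) to n_min (at nfe=max_nfe).
    Individuals beyond this size are removed, worst first."""
    return round(((n_min - n_init) / max_nfe) * nfe + n_init)
```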
3.1.13 DE-DPS
The aim was to find the most appropriate parameter values during evolution. Sets of values were defined for the parameters \(F\), \(CR\), and \(NP\). Random individuals were generated within the variable bounds, and each individual in the population was assigned random \(F\) and \(CR\) values. Each new offspring was generated using the mutation and crossover operators. If the offspring was better than its parent, it was selected for the next generation, and the success count of that parameter combination was increased by 1.
3.1.14 Repairing the Crossover Rate in Adaptive DE
The crossover rate is not directly reflected in the trial vector; rather, the trial vector is directly determined by its binary crossover string. Based on this observation, a crossover rate repair technique was proposed in which the average number of components taken from the mutant vector was calculated and the crossover rate repaired according to the specified formula. Afterward, the successful combinations of the repaired crossover rate and the scaling factor were updated. Combined with JADE, this process gave superior or competitive results in terms of robustness and convergence.
3.1.15 MS-DE
In the DE algorithm, the base and difference vectors are selected randomly, without fully utilizing fitness information, and diversity information also goes unnoticed; the algorithm may therefore get trapped in local optima while solving multimodal optimization problems. To address this, a multi-objective sorting-based mutation operator was introduced that uses both fitness and diversity information when selecting parents. Non-dominated sorting was used to sort the individuals of the current population according to their fitness and diversity, and parents were selected according to their ranking. The new operator enhanced performance satisfactorily.
3.1.16 ESADE
Enhanced self-adaptive DE was designed for global numerical optimization over continuous spaces. The population was initialized in two groups. The first set of control parameters was initialized directly, and the second set was generated by mutating the first. Trial vectors were generated from each population set after mutation and crossover. Finally, target vectors were selected from the trial vector set using a simulated-annealing-based selection operation. Control parameters that performed better were carried into the next generation. ESADE outperformed several state-of-the-art DE algorithms and PSO.
3.1.17 MDEVM
Micro algorithms work with a very small population to amplify convergence speed, at an increased risk of stagnation; raising population diversity is necessary to overcome this threat. A vectorized mutation process was used here with micro-DE: after generating the initial population, a randomized vector of mutation factors was computed instead of a constant mutation factor. The mutation and crossover strategies were applied as in DE. The process stopped when the difference between the best fitness value and the target fitness value fell below an error threshold, or when the number of function evaluations reached its maximum limit. MDEVM enhanced the convergence speed of its predecessors.
3.1.18 Enhancing DE Utilizing Eigenvector-Based Crossover Operator
In the proposed approach, eigenvector information from the population's covariance matrix was used to rotate the coordinate system. Trial vectors were generated stochastically from the target vectors in either the standard coordinate system or the rotated one, and the selection probability of each coordinate system was managed by a suitable control parameter. This strategy increased population diversity while reducing the risk of premature convergence. Including the proposed scheme in DE and its other variants gave satisfactory results.
3.1.19 mJADE
DE and most of its variants use large populations, which require considerable computational cost and memory. This memory requirement restricts the feasibility of running these variants on embedded systems. To tackle the issue, mJADE was proposed with a new mutation operator that preserves the algorithm's reliability despite the reduced population size, while retaining JADE's mutation strategy during the search process. The proposed algorithm was highly competitive and better than variants of its type.
3.1.20 DEGPA
The proposed approach was designed to lessen the effort of selecting effective DE parameters during the search process. Control parameters were selected adaptively: the parameter search space was discretized into a grid, and a local search was conducted on the grid based on DE's estimated performance under the corresponding parameters. The best parameter setting found was kept for several iterations, and the same process was repeated to select useful parameters throughout the run. In the same study, DEGPA was extended (eDEGPA) with selection of crossover types. Rigorous testing was done with high-dimensional problems of up to 500 dimensions and several composite functions. The results supported successful parameter adaptation without compromising population diversity.
3.1.21 iL-SHADE
In this algorithm, the mutation strategy, external archive, and population reduction mechanism remain unchanged from L-SHADE. However, the historical memory values of \(CR\) are initialized to 0.8 instead of 0.5. One historical memory entry holds the pair of values 0.9 for both \(F\) and \(CR\), so that high values of both variables can be used together. A restriction prevented the use of very high \(CR\) with low \(F\) values while the search process was in its initial stage. The next generation's historical memory values were computed by giving the same weight to the present generation's memory values as to the weighted Lehmer mean of successful values. After every generation, the value of \(p\), which controls the algorithm's greediness, was evaluated using a novel formula. Empirical tests with benchmark functions at dimensions 10, 30, 50, and 100 confirmed the algorithm's competitiveness.
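The weighted Lehmer mean mentioned above is \(\sum w_i v_i^2 / \sum w_i v_i\); the equal-weight memory update can then be sketched as averaging the old slot with that mean (helper names assumed):

```python
def weighted_lehmer_mean(values, weights):
    """Weighted Lehmer mean of order 2, as used by SHADE-family
    algorithms when updating the F memory: sum(w*v^2) / sum(w*v)."""
    num = sum(w * v * v for v, w in zip(values, weights))
    den = sum(w * v for v, w in zip(values, weights))
    return num / den

def ilshade_memory_update(old, values, weights):
    """iL-SHADE-style update sketch: the new memory value gives equal
    weight to the current memory entry and the weighted Lehmer mean of
    this generation's successful values."""
    return 0.5 * (old + weighted_lehmer_mean(values, weights))
```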
3.1.22 LSHADE-EpSin
The new algorithm automatically adapted the scaling factor using an ensemble sinusoidal strategy consisting of two methods: a non-adaptive sinusoidal decreasing adjustment and an adaptive history-based sinusoidal increasing adjustment. These two methods were combined to balance the algorithm's exploration and exploitation. A Gaussian-walk-based local search was used in later generations to amplify the exploitation capability. LSHADE-EpSin was evaluated on the CEC 2014 benchmark functions; comparison with LSHADE and other state-of-the-art algorithms confirmed the enhanced ability and robustness of the proposed algorithm.
3.1.23 ADE-ALC
An aging leader and challengers mechanism was used in this algorithm. The best individual in the population was selected as the leader and stayed active for a defined lifespan, which improved exploitation in the search process. When a leader's lifespan ended, a challenger was generated using two local search operators to challenge the leader's position. In this way, population diversity was maintained while the exploration ability of the process increased. Parameters in this scheme were selected adaptively. The proposed scheme was efficient and comparable to DE variants on unimodal, multimodal, and hybrid functions.
3.1.24 FDE
This approach dynamically altered DE's parameters using a fuzzy system that modified the parameter \(F\) in a decreasing manner based on the generation count. When the generation count was low, \(F\) was kept high; when it was in the medium range, \(F\) was chosen in the medium range; and when it was high, \(F\) was kept low. In this way, the proposed approach balanced exploration and exploitation.
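A crude stand-in for that controller can be written as a piecewise rule over the generation ratio. This is only an illustration of the low/medium/high behaviour described above; FDE's actual fuzzy membership functions and output values are not specified here, so the thresholds and \(F\) values below are assumed:

```python
def fuzzy_F(g, g_max):
    """Piecewise stand-in for the fuzzy controller: early generations get
    a high F (exploration), middle generations a medium F, and late
    generations a low F (exploitation). Thresholds are illustrative."""
    ratio = g / g_max
    if ratio < 1 / 3:
        return 0.9   # low generation count -> high F
    elif ratio < 2 / 3:
        return 0.6   # medium generation count -> medium F
    else:
        return 0.3   # high generation count -> low F
```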
3.1.25 LSHADE-cnEpsin
An ensemble of two sinusoidal waves was used to adapt the scaling factor \(F\) efficiently: an adaptive sinusoidal wave for increasing adjustment and a non-adaptive sinusoidal wave for decreasing adjustment. One of the two schemes was chosen based on past performance to evaluate the scaling factor. High correlation between variables was handled by introducing covariance matrix learning within the Euclidean neighborhood of the crossover operator. The performance enhancement was tested by computing the IEEE CEC 2017 functions and comparing with state-of-the-art algorithms.
3.1.26 SADE-FP
The values of DE's parameters are problem-dependent; therefore, dynamically selecting parameters during the search process is tedious. SADE-FP addressed this issue by adjusting the parameters self-adaptively and by using a perturbation strategy based on an individual's fitness performance. The scaling factor was evaluated using a cosine distribution: when an individual's fitness was poor, a large scaling factor gave it more exploration power; conversely, when the fitness was good enough, a smaller scaling factor gave the process more exploitation ability. This strategy enhanced the performance of DE and its other variants.
3.1.27 DbSHADE
This algorithm was designed to address the premature convergence of SHADE-family algorithms. A distance-based parameter adaptation technique was introduced to prolong the exploration phase in higher-dimensional spaces while keeping the computational complexity factor in view. The approach computes the Euclidean distance between the original vector and the trial vector; the individuals that moved farthest receive the highest mutation and crossover values. Empirical tests found that DbSHADE performed better than the SHADE-family algorithms.
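The distance-based weighting can be sketched as follows: successful trials contribute to the parameter-memory update in proportion to how far they moved from their original vectors, so large exploratory moves get the strongest influence (function name and normalisation are illustrative):

```python
import math

def distance_weights(pop, trials, successes):
    """For each successful index, compute the Euclidean distance between
    the original vector and its trial vector, then normalise the
    distances into weights that sum to 1."""
    dists = [math.dist(pop[i], trials[i]) for i in successes]
    total = sum(dists) or 1.0   # guard against all-zero movement
    return [d / total for d in dists]
```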
3.1.28 DEA-6
In this algorithm, the mutation and crossover operators were designed and applied in a new way. The mutation occurred in two phases: the first phase aimed at exploration, while the second considered the individual's best value, thereby promoting exploitation. The binomial crossover was also replaced by introducing a new vector whose components were random values generated by a uniform probability density function between 0 and 1.
3.1.29 ATBDE
A failure-member-driven self-adaptive top–bottom strategy was used in DE. The top–bottom approach used previous information from both successful and failed individuals, for which two archives were constructed. Individuals from the two archives were selected according to heuristic input. The mutation and crossover parameter values for every member of the population were updated based on its status; parameter adaptation came into force when a failure threshold was exceeded.
3.1.30 Hard-DE
A hierarchical archive-based mutation strategy was proposed here, together with a new crossover rate with a grouping strategy and a parabolic population size reduction scheme. Hard-DE was tested on the CEC 2017 and CEC 2013 benchmarks for real-parameter single-objective optimization and on two real-world optimization problems from CEC 2011. The results showed that Hard-DE was better than several state-of-the-art DE algorithms.
3.1.31 SALSHADE-cnEpSin
Self-adaptive LSHADE-cnEpSin used the mutation strategy \(DE/current-to-pBest/1\). The scaling factor of the new algorithm was generated using the Weibull distribution, a method borrowed from lifetime estimation, which helped the algorithm shift the scaling parameter from a global search towards a local one. \(CR\) values were used in exponentially decreasing order. An archive stored the infeasible solutions, and the \(F\) and \(CR\) values were also stored in the archive for the next generation. The linear population size reduction mechanism of the parent algorithm was retained.
3.1.32 Predicting Effective Control Parameters for DE Using Cluster Analysis of Objective Function
According to this strategy, cluster analysis was used to identify effective control parameters for the objective function. Three features of the objective function were used for parameter identification: the number of dimensions, the interquartile range of the normalized data, and the skewness of the normalized data. Collectively, these features form the \(\beta\) characteristics of the objective function. Training data were used to explore relationships between the control parameters, the objective function, and performance. \(\beta\) data points were identified by applying \(k\)-means++ to the training data set; the mean \(P\) was calculated from the set, and the top 10% of data points were recognized and used to optimize new functions.
3.1.33 DE-NPC
The algorithm used the \(DE/target-to-pbest/1/bin\) mutation strategy, considering its effectiveness. The total population was divided into m subgroups, each using the three DE control parameters separately. The scale factor for every individual in a subgroup was calculated using a Cauchy distribution with a readjustment policy, and the crossover rate of each individual was drawn from a Gaussian distribution and readjusted after generation. A linear-parabolic population reduction policy was employed to reduce the population and increase the convergence speed of the algorithm. DE-NPC was tested against several DE variants on the CEC 2013, CEC 2014, and CEC 2017 functions; the comparison results confirmed the efficacy of the proposed algorithm.
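Cauchy sampling with a readjustment policy can be sketched as below. The centre `mu` and scale 0.1 follow common JADE-family practice, and the readjustment rule (regenerate non-positive draws, truncate at 1) is an assumption; DE-NPC's exact policy may differ:

```python
import math
import random

def sample_F(mu, rng=random):
    """Draw F from Cauchy(mu, 0.1) via inverse-transform sampling;
    regenerate while the draw is non-positive and truncate values
    above 1 to 1, so the result always lies in (0, 1]."""
    while True:
        F = mu + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
        if F > 0.0:
            return min(F, 1.0)
```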
3.1.34 j2020
The algorithm was designed using the functionality of both jDE and jDE100. It uses parameters as in jDE and, like jDE100, divides the total population into two sub-populations, one big and one small. The \(DE/rand/1\) mutation strategy was used. After every generation, the best solution found in the bigger sub-population was used in both sub-populations to generate new solutions. Both sub-populations were reinitialized based on predefined criteria, and a Euclidean distance-based crowding mechanism identified the individual nearest to the trial vector. The algorithm's efficiency was measured against DE and jSO using the CEC 2020 function set.
3.1.35 ERG-DE
The algorithm used an elite regeneration technique: new individuals were generated around the best individuals using Gaussian sampling or Cauchy probability distribution methods. The number of elite individuals was selected using a linearly decreasing deterministic rule, so the process requires tuning of only two parameters. New solutions were generated in the surroundings of the parent solutions, increasing the exploitation capacity of the algorithm and helping it bypass local optima. Evaluation on the CEC 2014 function set and comparison with various DE variants confirmed the algorithm's effectiveness.
Among the several other studies, a few more interesting parameter-based improvements to DE are given below. In [76], Wang et al. proposed a modified binary DE algorithm (MBDE) with novel probability estimation operators to solve binary-coded optimization problems. Zhang, in [77], described a dynamic multi-group self-adaptive DE (DMSDE) strategy in which the population was divided into multiple groups that exchanged information dynamically, with parameters also made self-adaptive. Wang et al. [78] modified the binary DE algorithm with a new probability estimation operator to balance exploration and exploitation in cooperation with the selection operator. Intersect mutation DE (IMDE), proposed by Zhou et al. [79], divides the individuals into better and worse parts and uses new mutation and crossover operators to generate the next generation's population. An adaptive ranking mutation operator-based DE for constrained optimization problems is given in [80]. Zamuda and Brest [81] described a self-adaptive DE with a randomness-level parameter to regulate the randomness of the generated control parameters.
In summary, the volume of articles in this category and the successes recorded speak to the importance of parameter modification in DE's performance. Three essential parameters, namely the mutation factor, crossover rate, and population size, are considered in this category. Deterministic, adaptive, and self-adaptive parameter selection have all been used, as explained in this section. Performance also depends on the nature of the optimization problems. It can be concluded that modifying these parameters significantly enhances the performance of DE.
3.2 Modified by Parameter Tuning and Different Mutation Strategy
The parameters of DE play a very important role in the search process, and the mutation strategy used to generate donor vectors is equally important. The selection of good parameters together with the mutation strategy is problem-dependent, and population diversity depends on the strategy used along with the amplification factor. Researchers have put considerable effort into designing algorithms by modifying existing strategies, combining several techniques in one algorithm, coupling strategies with parameter settings, and using these policies in different ways. A list of these modifications to DE is given in Table 4.
3.2.1 JADE
JADE was proposed with a new mutation strategy, \(DE/current-to-pbest\), a modification of \(DE/current-to-best\). In this new mutation policy, instead of the global best solution, a randomly chosen solution among the \(100p\%\) best solutions is used. An external archive stored recently rejected inferior solutions; whenever the archive exceeded its limit, a few solutions were deleted randomly to keep within the limit. The second random solution was selected from the union of the present population and the archive. Scaling factor and crossover parameter values are generated using a truncated Cauchy distribution and a normal distribution, respectively. The process showed enhanced results compared to the adaptive DE variants jDE and SaDE, basic DE, and PSO methods.
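The mutation rule can be sketched on plain lists as below. This is a simplified illustration: the index-distinctness constraints of the full algorithm (r1 ≠ r2 ≠ i) are omitted for brevity, and the function name is assumed:

```python
import random

def current_to_pbest_1(pop, i, pbest_pool, archive, F, rng=random):
    """v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2), where x_pbest is
    drawn from the 100p% best solutions, x_r1 from the population, and
    x_r2 from the union of the population and the external archive."""
    x_pbest = rng.choice(pbest_pool)
    x_r1 = rng.choice(pop)
    x_r2 = rng.choice(pop + archive)
    return [xi + F * (pb - xi) + F * (r1 - r2)
            for xi, pb, r1, r2 in zip(pop[i], x_pbest, x_r1, x_r2)]
```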
3.2.2 SaJADE
The strategy adaptation mechanism used four mutation strategies: the non-archived and archived versions of \(DE/rand-to-pbest\) and \(DE/current-to-pbest\). Strategies without archives are suitable for low-dimensional problems and converge faster; archived strategies fit high-dimensional problems and bring diversity to the population. The appropriate strategy was selected based on a strategy parameter, which was updated after every generation. This proposed scheme can be combined with other DE variants and their parameter settings. When combined with JADE, the resulting SaJADE gave better performance than JADE.
3.2.3 EPSDE
Three mutation strategies, \(DE/best/2/bin\), \(DE/rand/1/bin\), and \(DE/current-to-rand/1/bin\), made up a pool of mutation strategies with different characteristics. Each population vector was assigned a randomly chosen mutation strategy, with associated parameters, for trial vector generation. If the generated trial vector was better than the target vector of the present generation, it was selected as the next generation's target vector, and the combination of mutation strategy and parameter values that generated the better offspring was also stored. Otherwise, either a new pair of mutation strategy and parameter values was allocated from the respective pools, or a stored successful pair was assigned, each with equal probability. In this way, the algorithm worked with a better probability of generating good offspring through its mutation strategies and control parameters.
3.2.4 CoDE
The proposed method used three trial vector generation strategies as the strategy candidate pool and three control parameter settings as the parameter candidate pool. The selected strategies were \("DE/rand/1/bin"\), \("DE/rand/2/bin"\), and \("DE/current-to-rand/1"\); here, \(rand\) in the last strategy denotes a uniformly distributed random number between 0 and 1. The three control parameter settings were \([F = 1.0, CR = 0.1]\), \([F = 1.0, CR = 0.9]\), and \([F = 0.8, CR = 0.2]\). First, the initial population was generated from the feasible solution space. Then, in every generation, a trial vector was generated for each target vector using every strategy from the strategy pool paired with a randomly selected setting from the parameter pool, producing three trial vectors per target vector. Only the best of the three replaced the target vector and entered the next generation. Empirical results showed that this strategy was better than the four DE variants jDE, JADE, SaDE, and EPSDE, and some other non-DE variants.
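The per-target procedure can be sketched as follows, with the strategies passed in as callables and minimisation assumed (`code_trial` is an illustrative name, not from the paper):

```python
import random

PARAM_POOL = [(1.0, 0.1), (1.0, 0.9), (0.8, 0.2)]  # CoDE's (F, CR) settings

def code_trial(target, strategies, f, rng=random):
    """Each strategy in the pool produces one trial vector with an
    (F, CR) setting drawn at random from the parameter pool; the best of
    the trials replaces the target only if it improves on it."""
    trials = []
    for strategy in strategies:
        F, CR = rng.choice(PARAM_POOL)
        trials.append(strategy(target, F, CR))
    best = min(trials, key=f)
    return best if f(best) < f(target) else target
```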
3.2.5 MDE_pBX
Modified DE with p-best crossover used a new mutation strategy: for every target vector, \(q\%\) of the total population was randomly selected as a group, and the best member of this group was used in the mutation process according to the \(DE/current-to-best/1\) strategy. A crossover policy named \(pbest\) crossover was also incorporated, allowing mutant vectors to exchange their components, via binomial crossover, with an individual from the \(p\) top-ranked individuals of the present generation. Values of \(F\) and \(CR\) were updated based on their previously successful values in generating trial vectors. The performance of DE, jDE, and JADE was amplified when combined with one or more of the proposed modifications.
3.2.6 MRL-DE
The modified random location-based DE algorithm worked to increase the explorative ability of the search process. The whole population was sorted by fitness and divided into three regions, with the user deciding what percentage of the total population resides in each region. Three random vectors were selected for mutation, one from each region; the other phases of the algorithm acted like basic DE. The new algorithm enhanced DE's performance and outperformed some of its variants.
3.2.7 GDE
Individuals of the whole population were divided into a superior group and an inferior group based on their fitness values. The superior group was used to increase the algorithm's search ability, so the \(DE/rand\) mutation strategy was applied to it, since the inferior group did not contain much useful information. The \(DE/best\) mutation strategy was used to amplify the algorithm's exploitation ability. A self-adaptive strategy automatically selected the group sizes; the other phases were like the basic DE algorithm. The new algorithm gave better performance than several DE mutation strategies.
3.2.8 SapsDE
Whereas DE and most of its variants use a fixed population size, this approach worked with a variable-size population, dynamically adjusted by the behavior of the search process: when the fitness value improves, the population size increases; when the fitness value is worse, the population size decreases; and if stagnation occurs, the population size increases to allow exploration. The resizing was driven by population-resizing triggers. Two different mutation strategies were used to balance the search, one capable of exploration and the other used for exploitation. The proposed strategy gave better results on several benchmark functions than DE, SaDE, jDE, JADE, etc.
3.2.9 SAS-DE
Self-adaptive combined strategies DE was composed of four mutation strategies, two crossover operators, and two constraint handling techniques, giving a total of 16 possible combinations. Each individual in the population was assigned a value between 1 and 16. In the first generation, the donor vector was generated using the allocated mutation strategy, crossover operator, and constraint handling technique. A tournament selection process was run for the first m individuals of the population to select the best individual and its combined strategies, which were then applied to that individual; the remaining individuals were assigned random combinations of approaches. The search process was more effective than DE and four of its other variants.
3.2.10 DE with Two-Level Parameter Adaptation
A new mutation strategy, \(DE/lbest/1\), was used here, different from \(DE/best/1\): instead of the global best value, multiple best values generated from different local sub-populations were used, maintaining both convergence speed and individual diversity during the process. The adaptive parameter control scheme was implemented in two levels. The first level controls the values of \(F\) and \(CR\) for the whole population in each generation according to the search status, whether exploration or exploitation; the second level generates each individual's parameters from the population-level parameters based on the individual's fitness value and its distance from the global best individual. The proposed scheme was superior to self-adaptive DE variants such as CoDE, jDE, JADE, and SaDE.
3.2.11 RSDE
The proposed scheme used two population replacement strategies to alleviate the problem of stagnation. The first was an individual replacement strategy that verifies an individual's improvement within a predefined number of iterations; if the individual has not improved, a new individual is generated by combining the best individual's value with a random parameter offset. The second strategy compares the fitness of the current best individual with that of the best individual found before a pre-specified iteration. If the value has improved, the previous best individual is replaced by the current best, and evolution continues; otherwise, the population is regenerated around the current generation's best individual. Once the population had been replaced a predefined number of times, it was regenerated randomly using a uniform distribution over the whole parameter space. Combining these strategies improved the exploration and exploitation abilities of DE.
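The second replacement trigger amounts to a stagnation check over a sliding window of best-fitness values; a sketch under assumed names, with minimisation:

```python
def should_restart(best_history, window):
    """Return True when the current best fitness is no better than the
    best fitness recorded `window` generations ago, signalling that the
    population should be regenerated around the current best."""
    if len(best_history) <= window:
        return False
    return best_history[-1] >= best_history[-1 - window]
```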
3.2.12 Cluster-Based DE with Self-adaptive Parameters
A multi-population approach was adopted to increase search ability in multimodal optimization. A clustering method was used to divide the whole population into sub-populations. Control parameters were selected adaptively based on the stage of the search process, which speeds up convergence and improves solution accuracy. This strategy was combined with crowding DE (CDE) and species-based DE (SDE) to form the new algorithms Self-CCDE and Self-CSDE, which gave far better results than the parent algorithms.
3.2.13 DMPSADE
A self-adaptive discrete parameter control strategy was proposed to control exploration and exploitation during the search process. Every vector of the population has its own mutation strategy and control parameters, which were updated in real time. The cumulative selection probabilities of the five mutation strategies were calculated, and based on them a strategy was selected for each individual using a roulette wheel. Comparison with several self-adaptive DE variants revealed that the average performance of the proposed algorithm was better.
3.2.14 mDE-bES
The population was divided into independent subgroups, increasing population diversity at each optimization step. Each subgroup is allocated a promising mutation and update strategy for a period of function evaluations, at the end of which individuals in different subgroups are exchanged. This exchange of information guided the algorithm to explore more effective regions of the search space. The algorithm also introduced a new mutation strategy using a convex linear combination of randomly selected individuals.
3.2.15 SWDE_Success
The proposed scheme switches the scaling factor and crossover value between the lowest and highest values of their ranges using a uniform random distribution for different individuals of the population. During the mutation process, each individual goes through one of the two strategies \(DE/rand/1\) and \(DE/best/1\). The strategy that generated a successful offspring in the last generation was selected to act in the next generation. SWDE_Success showed comparable or better results than similar types of algorithms.
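The switching mechanism can be sketched as below. The interpretation that F and CR jump between the two extremes of their ranges, and the bound values themselves, are assumptions for illustration.

```python
import random

def swde_parameters(f_bounds=(0.4, 0.9), cr_bounds=(0.1, 0.9)):
    """Per-individual parameter switching: F and CR jump uniformly at
    random between the low and high ends of their (assumed) ranges."""
    F = random.choice(f_bounds)
    CR = random.choice(cr_bounds)
    return F, CR

def pick_strategy(last_successful):
    """Keep whichever of DE/rand/1 and DE/best/1 produced a successful
    offspring in the previous generation; default to DE/rand/1."""
    if last_successful in ("DE/rand/1", "DE/best/1"):
        return last_successful
    return "DE/rand/1"
```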
3.2.16 jESDE
This approach modified the SaEPSDE model after identifying unnecessary or redundant parts. jESDE used only the mutation strategies \(DE/rand/1\) and \(DE/current-to-best\) and two crossover strategies. The SaEPSDE learning process identifies the successful mutation and crossover strategy and applies it to all individuals until the optimization process ends. jESDE used controlled randomization of \(F\) and \(CR\) values as in the jDE algorithm. It gave high performance on low-dimensional problems and good performance on high-dimensional problems.
3.2.17 IDEI
A combination of the mutation strategies \(DE/rand-to-guiding/1\) and \(DE/current-to-guiding/1\) with an arranged probability was used to balance exploration and exploitation. The parameter set was guided by the fitness values of both the original and guiding individuals. In the selection process, a diversity-based selection strategy was employed to complement the greedy selection strategy. It calculated a new weighted fitness value based on the vectors' fitness values and the positions of the trial and target vectors. Experimentally, the proposed algorithm was found to be very competitive.
3.2.18 SAKPDE
In this proposed strategy, a learning-forgetting mechanism was introduced to implement self-adaptive mutation and crossover strategies. Parameters were adjusted using prior and non-traditional knowledge. A set of parameter values, a crossover strategy, and a mutation strategy were allocated to each individual in the population during evolution. Initialization of the population was done using \(DE/rand/1\) to diversify the population. Subsequently, mutation strategies were selected based on cumulative selection probability using a roulette wheel, and one of the binomial, exponential, and eigenvector-based crossovers was selected self-adaptively. Opposition learning was used to generate control parameters. Eight well-known DE variants were compared with this algorithm, which showed competitive or better performance.
3.2.19 NRDE
The proposed approach was used to optimize single-objective noisy functions. It employs switching between values of the parameters \(F\) and \(CR\); these parameter values bounce randomly between their limits during each offspring generation. In the mutation phase, one of the two mutation strategies, \(DE/best/1\) or \(DE/rand/1\), is chosen randomly. A modified crossover operator and an adaptive threshold scheme are used to select individuals for the next generation. Competitive performance with robustness against state-of-the-art DE algorithms was obtained on noisy optimization.
3.2.20 AGDE
The adaptive guided DE algorithm introduced a new mutation strategy. Instead of choosing three random vectors as in \(DE/rand/1\), random vectors are selected from the top \(100Z\%\) and bottom \(100Z\%\) individuals and from the remaining \([PS-2*(100Z\%)]\) middle individuals. According to the proposed mutation strategy, the difference between the best and worst vectors is multiplied by the scaling factor and added to the individual chosen from the middle individuals. This process curtails the risk of premature convergence and enhances exploration ability. A new adaptation scheme updates \(CR\) values at each generation from a pool of known \(CR\) values, and the crossover probability is calculated at each generation following a uniform distribution within the specified range.
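The guided donor construction described above can be sketched as follows; the values of \(F\) and \(z\), and the exact tie-breaking, are illustrative assumptions rather than AGDE's reference settings.

```python
import random

def agde_donor(pop, fitness, F=0.5, z=0.1):
    """Pick one vector from the top 100z% individuals, one from the
    bottom 100z%, and a base vector from the middle [PS - 2*(100z%)]
    individuals; the donor is x_mid + F * (x_top - x_bottom)."""
    ps = len(pop)
    k = max(1, int(z * ps))
    order = sorted(range(ps), key=lambda i: fitness[i])  # minimization
    top = pop[random.choice(order[:k])]       # from the best group
    bottom = pop[random.choice(order[-k:])]   # from the worst group
    middle = pop[random.choice(order[k:ps - k])]
    return [m + F * (t - b) for m, t, b in zip(middle, top, bottom)]
```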
3.2.21 jSO
The jSO algorithm is a modified version of iLSHADE. \(DE/Current-to-pBest/1\) is the mutation strategy used by its two parent algorithms, LSHADE and iLSHADE, whereas jSO introduced a weighted mutation strategy, \(DE/Current-to-pBest-w/1\). The weighted scaling factor (\({F}_{\omega }\)) multiplies by a smaller value in the early stages and by a greater value in the later phase. In other respects, it is the same as the iLSHADE algorithm.
3.2.22 RNDE
The DE mutation strategies \(DE/rand/1\) and \(DE/best/1\) have good exploration ability and exploitation ability, respectively. A novel strategy, \(DE/neighbor/1\), is used in this algorithm to combine the advantages of both approaches. In \(DE/neighbor/1\), an individual's neighbors are chosen randomly, and the best neighbor is selected as the base vector for the calculation. The choice of the number of neighbors is vital in this algorithm, as it controls the exploration and exploitation of the search process. Comparison with \(DE/rand/1\) and \(DE/best/1\) individually and with six other state-of-the-art DE algorithms (DEGL/SAW, EPSDE, MGBDE, SaDE, ODE, and OXDE) depicted the superiority of the proposed approach.
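One hedged reading of \(DE/neighbor/1\) is sketched below; the neighbor count and the choice of a single scaled difference vector are assumptions made for illustration.

```python
import random

def de_neighbor_1(pop, fitness, i, F=0.5, n_neighbors=3):
    """Sample n_neighbors random individuals, use the fittest of them as
    the base vector, then add one scaled difference of two further random
    individuals. n_neighbors trades exploration for exploitation."""
    others = [j for j in range(len(pop)) if j != i]
    neighbors = random.sample(others, n_neighbors)
    base = min(neighbors, key=lambda j: fitness[j])  # best neighbor
    r1, r2 = random.sample([j for j in others if j != base], 2)
    dim = len(pop[i])
    return [pop[base][d] + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
```

A small `n_neighbors` behaves more like \(DE/rand/1\) (exploration); a large one approaches \(DE/best/1\) (exploitation).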
3.2.23 AMECoDEs
Adaptive multiple elites guided composite DE generates trial vectors for each individual under the guidance of two elite vectors, each selected by a different procedure. An elite vector with a diverse strategy selection produces two trial vectors, the better of which participates in the selection process. Thus, the probability of the search process being trapped in an unpromising area was reduced. Parameters for each trial-vector generation strategy were autonomously adapted from the successful experience of previous generations, and a shift mechanism moves the search process out of stagnation. Compared with other DE variants of its time, the proposed algorithm gave better or more competitive results.
3.2.24 PALM-DE
Parameters with an adaptive learning mechanism addressed parameter tuning and the choice of an appropriate mutation strategy with its associated parameter values. \(F\) and \(CR\) values are separated into different groups to control unwanted interaction between the control parameters. The value of \(CR\) was adapted based on its success probability, and the value of \(F\) was adjusted using the success and fitness values associated with it. A timestamp mechanism was introduced to trace and eliminate too-old inferior solutions from the archive, and a dynamic population size reduction approach ensures a dynamic change in population size. Enhanced or competitive performance was achieved over DE and some of its variants on single-objective functions.
3.2.25 HHDE
The algorithm used a historical experience-based mechanism (HEM) and heuristic information (HIM) based parameter setting to accomplish the search process. A mutation strategy is allocated to each individual considering the individual's preference in the search process, determined by the individual's heuristic information. Mutation strategies were also indexed according to their selection rate in the whole population. Allocation of parameters was done by first ranking the individuals according to their fitness values and ranking the parameters by their success.
3.2.26 LSHADE-RSP
It is an enhanced variant of the LSHADE algorithm. A new mutation strategy, \(current-to-pbest/r\), is introduced as a modification of the existing \(current-to-pbest/1\) strategy. According to the new strategy, individuals with the highest fitness get the largest rank, and the lowest rank is given to the individual with the least fitness. The scaling factor \(F\) is computed using a Cauchy distribution with location parameter \(\mu {F}_{r}\) and a scale parameter of 0.1. During the computation of \(Cr\), a normal distribution is used with mean \(\mu {Cr}_{r}\) and variance 0.1. Finally, the algorithm was evaluated on the IEEE CEC 2018 function set and a circular antenna array design problem.
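The parameter sampling described above can be sketched as below. Note one hedge: the text states variance 0.1 for \(Cr\), while common SHADE-family implementations use 0.1 as the standard deviation; the sketch follows the latter convention.

```python
import math
import random

def sample_F(mu_f):
    """F ~ Cauchy(location mu_f, scale 0.1), resampled while non-positive
    and truncated at 1, as is customary in the SHADE family."""
    while True:
        f = mu_f + 0.1 * math.tan(math.pi * (random.random() - 0.5))
        if f > 0.0:
            return min(f, 1.0)

def sample_CR(mu_cr):
    """CR drawn from a normal distribution around mu_cr (std 0.1 assumed)
    and clipped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(mu_cr, 0.1)))
```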
3.2.27 GPDE
The proposed approach used a Gaussian mutation operator and a modified common mutation operator in which the \(DE/rand/1\) strategy was changed to \(DE/rand-worst/1\). The two strategies work together, based on their combined score, to produce potential positions for each individual. Periodic adjustment of the scaling factor is achieved using a cosine function, and the diversity of the population is dynamically adjusted to the evolutionary process by employing a Gaussian function. Performance analysis showed that GPDE outperforms most well-known algorithms of its time.
3.2.28 PaDE
PaDE used a grouping strategy in which individuals of the whole population were clustered into \(k\) groups. Every group was associated with its own control parameter \({\mu }_{CR}\) and a selection probability, while the control parameter \(F\) was calculated using a Cauchy distribution. A parabolic population reduction mechanism was proposed to avoid the fast reduction of the population at the beginning that occurs when a linear reduction strategy is used, as in some variants. A timestamp strategy increased the effectiveness of mutation by eliminating too-old inferior solutions from the archive. This algorithm has shown better results than state-of-the-art DE variants on single-objective optimization, especially on higher-dimensional problems.
3.2.29 CIpBDE
CIpBDE modified the basic DE with a few new strategies. Mutation was performed using two modified strategies, \(DE/target-to-ci\_mbest/1\) and \(DE/target-to-pbest/1\); selection between them was achieved using a constant probability value of 0.5. The scale factor and crossover parameters were evaluated adaptively using a modified scheme based on successful values from previous generations: the scale factor for every individual at every iteration was generated using a Cauchy distribution with a location parameter, and the crossover value was evaluated using a Gaussian distribution. This process guaranteed the evolution of individuals with better fitness. Finally, CIpBDE was compared with several DE variants on the CEC 2013 functions and a feature selection problem.
3.2.30 Di-DE
In the Di-DE technique, external archives were maintained during the search. These archives are utilized in the mutation strategy, where they improve the diversity of the trial vectors. Besides, a novel grouping scheme was used in which the entire population was partitioned into several groups with information exchange within each group. With these improvements, the Di-DE technique achieved generally better performance than DE variants with fixed population size on several benchmark functions.
3.2.31 DMCDE
DMCDE introduced a new mechanism guided by the best individuals to propose two variations of the classic \(DE/rand/2\) and \(DE/best/2\) strategies, named \(DE/e-rand/2\) and \(DE/e-best/2\). They used a solution randomly chosen from among the superior best solutions as the base vector and as the first vector of the difference vectors, giving a more precise direction to individual mutation without losing diversity. The new strategies were used to balance the global and local search trade-off. Finally, the new method was evaluated on basic benchmark functions of different dimensions and on a flight scheduling problem. Comparison of the results with DE variants proved the efficacy of the new algorithm.
3.2.32 WMSDE
The population was divided into several subpopulations. The scale factor was calculated using a wavelet basis function. Five mutation strategies, \(DE/rand/1\), \(DE/rand/2\), \(DE/best/1\), \(DE/best/2\), and \(DE/rand-to-best/1\), were used to generate trial vectors; the strategy that developed the best solution was used for mutation in later iterations. The crossover value was evaluated using the normal distribution. Finally, standard benchmark functions and an airport gate assignment problem were solved using the method.
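For reference, the five classic strategies WMSDE draws from can be written compactly. The sketch uses scalar individuals for clarity; real implementations operate on vectors component-wise.

```python
import random

def donor(pop, best, F, strategy):
    """Compute a donor value under one of the five classic DE mutation
    strategies, given five mutually distinct random individuals."""
    r = random.sample(pop, 5)
    return {
        "DE/rand/1":         r[0] + F * (r[1] - r[2]),
        "DE/rand/2":         r[0] + F * (r[1] - r[2]) + F * (r[3] - r[4]),
        "DE/best/1":         best + F * (r[0] - r[1]),
        "DE/best/2":         best + F * (r[0] - r[1]) + F * (r[2] - r[3]),
        "DE/rand-to-best/1": r[0] + F * (best - r[0]) + F * (r[1] - r[2]),
    }[strategy]
```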
3.2.33 COLSHADE
COLSHADE emerged from the L-SHADE algorithm by introducing notable features such as adaptive Lévy flights and dynamic tolerance. Lévy flight was used to perform the global search in COLSHADE; its objective is to manage the selection pressure applied to the population so as to locate the feasible region while maintaining diversity in the solutions. Hence, a new adaptive Lévy flight mutation technique, called levy/1/bin, was presented. The local search stage was performed by the \(current-to-pbest\) strategy, and the overall scheme adaptively controlled the execution of these two mutation approaches at various rates and times.
Other than the above-discussed strategies, some other notable strategies are found in the literature; a few of them are given below. Ali [112] proposed MDE, where a failure counter is used to identify the individuals whose performance is worst over several generations; the Cauchy mutation was also used to improve performance. Alguliev et al. [113] developed a new variant, DESAMC, to remove redundant information from a document and summarize it. Cai et al. [114] presented NDi-DE, using neighborhood-guided parents for mutation and a direction-induced mutation strategy to enhance performance. For solving problems with rotated objective functions, the authors of [115] used a rotation-invariant current-to-pbest mutation scheme and developed a new DE variant. The authors of [116] proposed a modified DE variant by self-adapting parameters and mutation strategies. A micro-population-based version of JADE to solve unconstrained and constrained continuous optimization problems was developed by Brown [117]. A new variant of DE with a new mutation strategy, “\(DE/rand-to-best/2\)”, with a guiding-force parameter was developed by Zaheer et al. [118]. A self-adaptive DE named ZEPDE [119] used adaptive mutation strategies, and parameters were generated by comparing different combinations in various zones. IMSaDE, a modified variant, was developed by Wang et al. [120]; they improved the existing \(DE/rand/2\) mutation strategy, incorporating an archive of the best individuals and self-adaptation of control parameters. EFADE [121] employed a new mutation strategy and a triangular operator to solve global numerical optimization problems. Another variant, in which JADE was modified with a sorted crossover rate and named JADE_Sort [122], allocated a small crossover value to the best-performing individuals and used the successful mutation strategies involved in offspring generation.
Finally, the parameters of DE play a crucial role in the search process, and the selection of the mutation strategy for generating donor vectors is equally important. The selection of good parameters together with a mutation strategy is problem-dependent. The performance of the articles in this section confirms that diversity in the population, together with the strategy used and the amplification factor, can guarantee an effective and robust optimization process. Hence, researchers put significant effort into designing algorithms that modify existing strategies, combine several techniques in one algorithm, combine strategies and parameter settings, or use these policies differently.
3.3 Hybridized Differential Evolution Algorithms
Nowadays, hybridization is a common term in optimization. Exploration and exploitation are the two key factors that play a significant role in any optimization algorithm. An algorithm with more exploration ability converges slowly; on the contrary, an algorithm with high exploitation ability converges faster but cannot bypass local solutions and may converge prematurely. Researchers observed that merging an algorithm with high exploration ability with one with high exploitation ability, or vice versa, can produce a balanced algorithm; the resultant algorithm is more balanced than either algorithm used in the merger. The algorithm's efficiency may increase in convergence speed, computational complexity, ability to escape local optima, identifying stagnation, etc. Therefore, hybridization merges two or more algorithms to obtain an algorithm more efficient than its parents. A list of hybridized algorithms where DE is a component algorithm is given in Table 5:
3.3.1 DE/BBO
DE is helpful in exploration; therefore, it locates the region of the global minimum efficiently. However, it works lazily in the exploitation phase. The biogeography-based optimization (BBO) algorithm is helpful in exploitation. Considering these facts, DE and BBO were combined into a new algorithm, DE/BBO, with balanced exploration and exploitation. In the proposed hybrid algorithm, a new mutation operator was introduced to preserve good solutions while inadequate solutions can accept new structures from better solutions. DE/BBO was compared with DE and BBO independently using several unimodal and multimodal functions, on which DE/BBO gave superior results.
3.3.2 SaDE-MMTS
In this algorithm, the \(DE/current-to-pbest\) mutation strategy provided the \(pbest\) individual and the evolution path. Each individual in the current population was allocated a mutation strategy from the candidate pool based on the strategy's success-rate probability, which was updated after a pre-specified number of generations. The MMTS was applied periodically and adapted by the search process. The success rates of both SaDE and MMTS were calculated regularly, and function evaluations were assigned to the methods based on their success rates. The proposed algorithm showed superior performance against its basic algorithms.
3.3.3 DE-\(\wedge\) Cr
The algorithm was proposed with a sequential quadratic programming routine and an adaptive crossover value for the DE algorithm. Including the sequential quadratic programming routine enhanced the algorithm's solution-finding speed. It used triangular distributions for both the \(F\) and \(Cr\) values. Though the triangular distribution for \(F\) was kept fixed, that for \(Cr\) was adapted after a predefined number of successes in generating solutions. The new variant detected separable problems by choosing binomial crossover with a low \(Cr\) value, and it could detect a problem's strong dependency on decision variables. The IEEE CEC 2011 problem set was evaluated efficiently using the proposed method.
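The fixed-vs-adaptive triangular sampling can be sketched as follows; the low/high/mode values, the success threshold, and the mode-shifting rule are illustrative assumptions, not the paper's exact settings.

```python
import random

def sample_triangular_F(low=0.1, high=1.0, mode=0.5):
    """Fixed triangular distribution for F (bounds and mode assumed)."""
    return random.triangular(low, high, mode)

def adapt_cr_mode(cr_mode, successes, threshold=25, step=0.1):
    """Hypothetical sketch of the adaptive Cr part: after a predefined
    number of successful offspring, shift the mode of the Cr triangular
    distribution and reset the success counter."""
    if successes >= threshold:
        return min(0.9, cr_mode + step), 0
    return cr_mode, successes
```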
3.3.4 SGMDE
Population-based algorithms like DE suffer from long computational times because of their stochastic nature. Considering this problem, three DE mutation strategies were combined in the new algorithm. Simulated annealing was combined with DE to improve search ability and convergence speed, and proper annealing control parameters were selected to control the selection operation. After a pre-specified number of iterations, to improve the local optima, some optimum solutions were chosen from the current population, and the vectors of each solution were copied using an adaptive Gaussian immune algorithm. This policy decreased the probability of getting stuck in a local optimum and increased performance. Comparison of the proposed algorithm with algorithms like BDE, MDE, LDE, and basic DE yielded better results.
3.3.5 LDE
Enhanced DE (EDE) used the concepts of opposition-based learning and random localization, and its population was initialized using low-discrepancy sequences; these were integrated to form the hybridized algorithm LDE. The optimal parameters of a Gaussian mixture model were evaluated using LDE, and these parameters were used to estimate the image histogram. Consequently, optimal thresholds were calculated by minimizing the misclassification error criterion. Higher computational efficiency was achieved, allowing it to be used for multi-threshold image segmentation.
3.3.6 SaHDE
DE with extra migration and acceleration operations was used to form hybridized DE (HDE). HDE with self-adaptive control parameters and parameter tuning achieved fast convergence speed. The authors used this algorithm to balance phases in an unbalanced distribution system. The performance of SaHDE was found to be better than that of DE and HDE.
3.3.7 HPDE
DE is influential in global search, but its local search ability worsens with increased problem complexity. Considering this, permutation-based DE was combined with FCH, which has greater exploitation ability in local space. The developed method is known as hybrid permutation-based DE (HPDE). The proposed algorithm was applied to solve the batch plant's zero-wait scheduling problem. The experimental results showed that the performance of HPDE was better than that of PDE, FCH, and HPDEILS in different test environments.
3.3.8 HDRE
The new method was used to evaluate optimum machining parameters for milling operations. The proposed approach combined DE with the receptor editing property of the immune system. Receptor editing was applied to the whole population to escape local optima: it randomly generated 25% of the total population, which randomly replaced an equal number of existing individuals. If an individual chosen for replacement was the best, another individual was selected randomly, keeping the best individual in the population. When the proposed approach was applied to a case study, it yielded better results than PSO, hybrid PSO, the immune algorithm, the hybrid immune algorithm, etc.
3.3.9 DE–TS
The Tabu search algorithm was combined with DE to increase DE's local search ability; the resultant algorithm was called DE-TS. Promising areas in the search space were identified using DE, and the local optimum in a particular region was then determined by inserting local search steps inside DE's evolution loop. The proposed hybrid algorithm [130] was used to solve the job shop scheduling problem. DE-TS produced better results on mid-size problems; when applied to complex problems, it gave competitive performance.
3.3.10 DHDE
The discrete hybrid DE algorithm deals with integer programming problems. An integer coding technique was applied to optimize problems with discrete variables. The crossover operator of DE was combined with a two-level orthogonal array and factor analysis. A simplified quadratic interpolation method used as a local search operator improved the algorithm's local search ability, and the quality of the solutions prevented the algorithm from being trapped in local optima. Integer restrictions on decision variables were achieved by including a mixed truncation procedure in the DE mutation and local search. A comparison with similar algorithms depicted its efficiency.
3.3.11 DEEPSO
The proposed approach combined evolutionary particle swarm optimization (EPSO) and DE. EPSO is itself a hybrid of PSO with evolutionary programming and a self-adaptive recombination operator. In DEEPSO, the self-adaptive strategy of EPSO is used with the rough-gradient concept of DE. In an efficiency test on the IEEE 24-bus test system with eight possible PAR locations, this algorithm showed 96% efficiency in finding the optimal solution.
3.3.12 NSA–DE
Earlier approaches exploited the adaptive nature of unsolicited email to detect spam. In the proposed algorithm [134], a hybrid model combining a negative selection algorithm (NSA) and DE is used to detect email spam. DE is implemented in the random detector generation phase of NSA. The distance between generated detectors is maximized by using local fitness factors as the fitness function, and the distance between overlapping detectors is also used as a fitness function to overcome overlap between two detectors. The test results depicted that the proposed model superseded the NSA model.
3.3.13 TLBO–DE
Prediction of chaotic series is a non-linear, multivariable, and multi-modal optimization problem that requires global optimization techniques to avoid local optima. In the proposed hybrid model [134], DE is used to update individuals' previous best positions, which forces TLBO to jump out of stagnation, as DE has strong search ability. TLBO-DE was used as a learning algorithm in a single network model to train and determine weights and biases.
3.3.14 AH-DEa
The adaptive hybrid DE algorithm was designed to be effective by including the strengths of well-known DE variants. The proposed algorithm used adaptive crossover rates within the range 0.1 to 1.0. The mutation factor was selected adaptively, and during parent selection, two parents were chosen from the present population and the third from the archive. One of the binomial and exponential crossover strategies was selected based on the stage of the algorithm. A local search mechanism is incorporated to update the best solution at the end of the DE stages. A reinitialization mechanism was adopted to tackle premature convergence, and an epsilon constraint-handling technique effectively handled constraints. It gave better results than different self-adaptive models.
3.3.15 DE-FPA
The DE algorithm has good exploration ability but lacks power in respect of exploitation. Considering this weakness, DE was combined with the flower pollination algorithm (FPA) using a modified operator with two adaptive dynamic weights, which increased global and local search ability in the proposed algorithm. The convergence speed, search ability, and effectiveness of DE-FPA were better than those of its parent algorithms.
3.3.16 HACC-D
The concept of this algorithm [137] was to use different search biases in the evolution process. Two well-known DE variants, JADE and SaNSDE, were combined to form HACC-D. Initially, population initialization is done distinctly as subcomponents using both algorithms for a suitable number of iterations. Of the two subcomponents, the one with the better fitness value was used for the evolution process. If the algorithm became trapped in a local optimum or the fitness value did not improve for a pre-defined number of generations, the subcomponent optimization algorithm was switched to the other subcomponent. The proposed algorithm had a faster convergence rate than its predecessors.
3.3.17 BA-DE
The bat-inspired algorithm sometimes suffers from stagnation. Since DE has better exploration ability, merging DE with BA can pull the algorithm out of stagnation or local optima. During the whole iteration process, BA is used to update the individuals, while DE is used to search for and update the optimal local solution in the current iteration. BA-DE gave better results than SAPF, CRGA, CDE, PSO-DE, SMES, CPSO-GD, etc.
3.3.18 HADE
Parameter selection in DE during various stages of evolution is a tedious task. The proposed algorithm [139] employed an adaptive scaling factor motivated by a biological genetic strategy. A dynamic crossover probability is designed based on each individual's fitness value to retain the current population's best individuals. Two neighborhood search operations were developed based on bee colony search to increase the local search ability of DE. Comparison of the proposed algorithm with GA, PSO, DE, and adaptive DE showed improved performance.
3.3.19 DE with the Grey Wolf Optimizer Algorithm
In the proposed algorithm [140], the \(DE/current-to-best/1\) mutation strategy was updated using two mutation parameters: an extra parameter expanded the difference vector between the best and target vectors, while the scaling factor amplified the difference vector between the two random vectors. The crossover parameter is regenerated at the end of each generation to find appropriate parameters for the next generation. A newly proposed crossover strategy generated a pre-trial vector; if its fitness value was better than the mutant vector's, the trial vector generated from the pre-trial vector was accepted. Otherwise, trial vectors were developed by the Grey Wolf optimizer algorithm. Comparison of the proposed algorithm with DE, PSO, and self-adaptive jDE on different functions depicted better performance.
3.3.20 HMJCDE
JADE is suitable for solving unimodal and simple multimodal problems, while CoDE can handle complex multimodal problems. Therefore, JADE was modified to form MJADE and CoDE was modified to form MCoDE, and both were merged into the hybridized method HMJCDE to tackle different types of global optimization problems. MJADE improved JADE's adaptive parameter approach and applied Gaussian mutation to the best individual. MCoDE modified CoDE by utilizing an external archive to enhance population diversity and by incorporating Gaussian and Cauchy distributions for the generation of F and CR, respectively. Compared with JADE, CoDE, other state-of-the-art DE variants, and non-DE metaheuristics, the proposed hybrid algorithm superseded all of them on most functions.
3.3.21 E-DEBSA
DE and BSA were merged to form the new algorithm. A set of random individuals and their opposition population were initialized, and from them \(N\) individuals were selected based on fitness. The DE parameters \(F\) and \(CR\) and the BSA parameter \(F\) were calculated adaptively. The population was first updated using the mutation and crossover of the DE scheme, then using the mutation and crossover of the BSA scheme. Opposition vectors of the generated trial vectors were enumerated based on a jumping rate, and the next generation's population was selected based on the fitness values of the trial and opposition vectors. The new algorithm performed better than its basic algorithms and a few PSO variants.
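The opposition mechanism is well defined and can be sketched directly; only the jumping-rate value below is an assumption.

```python
import random

def opposite(vec, bounds):
    """Opposition-based learning: the opposite of x in [a, b] is a + b - x."""
    return [lo + hi - v for v, (lo, hi) in zip(vec, bounds)]

def generation_jump(trials, bounds, jumping_rate=0.3):
    """With probability `jumping_rate` (value assumed), produce the
    opposition counterparts of the trial vectors so that fitness-based
    selection can consider both sets."""
    if random.random() < jumping_rate:
        return [opposite(t, bounds) for t in trials]
    return trials
```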
3.3.22 hjDE
The jDE algorithm was merged with a reset strategy from Cuckoo search to form the new hybridized algorithm. The reset strategy was used in the primary evolution of the DE algorithm: a switching probability determines whether a trial vector will be generated using the basic DE strategy or by a Lévy-flight reset from the population's best individual. A comparison of the proposed approach with well-known algorithms like DE, PSO, SaDE, ABC, Cuckoo search, etc., showed superior image segmentation results with increasing numbers of thresholds.
3.3.23 ABC & DE
In the proposed hybrid algorithm, the ABC algorithm was first modified with a binary neighborhood search mechanism and an onlooker bee process. The ABC algorithm's neighborhood search phase with the new binary operators was used to solve feature selection without compromising computational time, and a new binary mutation phase for DE was also employed. The modified onlooker process was introduced in ABC to overcome the trapping in local optima that may occur due to DE's mutation phase. Besides the feature selection problem, the proposed hybrid algorithm can also be used for solving binary optimization problems.
3.3.24 DEGAH
The clustered color image segmentation problem was solved by clustering the color and texture of an image and later obtaining an accurate cluster center. In this model, color features were taken using the homogeneity model, and texture features were taken using a power-law descriptor. Firstly, population initialization is done using a soft rough fuzzy \(c-means\) algorithm, and the Davies–Bouldin index (DBI) is used as the fitness function. The top \(N/5\) individuals were selected as elites and allowed to participate directly in the next generation's crossover operation. DBI separated the initial population into a mixture of color and texture: the genetic algorithm handled the texture portion and DE handled the color portion. Application of the new hybrid algorithm to five natural images and comparison with benchmark algorithms in the field gave competitive results.
3.3.25 CADE
The cultural algorithm and the DE algorithm were combined to form the new hybrid algorithm CADE. The same population was shared by both algorithms, which executed simultaneously, following a high-level hybrid teamwork model. After population generation, the initial participation ratio was calculated in the first step of the algorithm. Each technique generates an auxiliary population by selecting random individuals from the shared population based on its participation ratio, and then generates a subpopulation by processing its auxiliary population. In the next step, a quality function was calculated for each technique, from which its participation ratio in the next generation was determined; the number of trial vectors that can be considered in the population is determined using a formula. The newly calculated quality function decided which approach would guide the greatest number of individuals in the next generation.
3.3.26 HABCDE
A modified Artificial Bee Colony (ABC) algorithm was merged with DE to form HABCDE. In the hybrid, the onlooker and scout bee phases of ABC are modified. A bee moves to other food sources by selecting a food source both randomly and based on the current best food source. In the onlooker phase, the position of a bee was updated using the DE strategy \(DE/best/1/bin\). In the scout phase, bees that had not been updated for a pre-specified number of iterations reinitialized themselves. HABCDE was compared on 20 test problems and four real-world optimization problems against state-of-the-art and basic algorithms, and proved its superiority in most cases.
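The \(DE/best/1/bin\) update used in HABCDE's onlooker phase can be sketched as below. This is a minimal illustration of the general strategy, not code from the paper; the names `pop`, `best`, `F`, and `CR` are our own.

```python
import random

def de_best_1_bin(pop, best, F=0.5, CR=0.9):
    """One DE/best/1/bin variation step over the whole population.

    Mutant:    v = best + F * (x_r1 - x_r2)
    Crossover: binomial, with one gene (j_rand) always taken from the mutant.
    """
    dim = len(best)
    trials = []
    for i, target in enumerate(pop):
        r1, r2 = random.sample([j for j in range(len(pop)) if j != i], 2)
        mutant = [best[d] + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
        j_rand = random.randrange(dim)
        trials.append([mutant[d] if (d == j_rand or random.random() < CR)
                       else target[d] for d in range(dim)])
    return trials
```

In HABCDE, each trial vector would then compete with the onlooker bee's current food source.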
3.3.27 hDEBSA
Components of the DE and BSA algorithms were used to form a new modified algorithm, with the parameters of both component algorithms made self-adaptive. Following a ranking-based mutation strategy, the worst individual was updated depending on a probability value. Results on the CEC 2013 functions, compared with other state-of-the-art algorithms, proved its efficiency.
3.3.28 LSHADE-SPACMA
LSHADE-SPA and CMA-ES were merged to form the new method LSHADE-SPACMA. A crossover operation was added to CMA-ES to increase the searching ability of the hybridized approach. Both methods worked on the same population. For each individual, the choice between the two updating strategies was determined by the value of a control parameter (FCP), selected randomly from a memory pool; the memory pool is updated at the end of every generation, and this policy assigned the larger share of individuals to the algorithm that performed better.
3.3.29 ADELI
A Lagrange-interpolation local search (LSLI) was integrated with DE, resulting in a new variant, adaptive DE with Lagrange interpolation (ADELI). Using Lagrange interpolation, a local search was performed in the neighborhood of the best solution of the current generation, enhancing DE's exploitation ability and convergence speed. An adaptive adjustment strategy determined when to apply LSLI based on the performance of the search process in the previous generation. The efficiency of ADELI was tested on the CEC 2014 benchmark functions and compared with other EAs; ADELI was also used to solve a path-synthesis problem. Performance analysis showed the enhanced performance of the proposed algorithm.
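Quadratic Lagrange interpolation through three sample points gives a cheap estimate of a local minimum, which is the core idea behind an LSLI-style local search. The sketch below uses the standard three-point formula; the function name and interface are illustrative, not taken from the ADELI paper.

```python
def lagrange_minimum(x, f):
    """Vertex of the quadratic Lagrange interpolant through the points
    (x[0], f[0]), (x[1], f[1]), (x[2], f[2]).

    Classical three-point quadratic interpolation; returns None when the
    points are collinear (zero denominator, no curvature to exploit).
    """
    x0, x1, x2 = x
    f0, f1, f2 = f
    num = (x1**2 - x2**2) * f0 + (x2**2 - x0**2) * f1 + (x0**2 - x1**2) * f2
    den = (x1 - x2) * f0 + (x2 - x0) * f1 + (x0 - x1) * f2
    if abs(den) < 1e-12:
        return None
    return 0.5 * num / den
```

For \(f(x) = (x-2)^2\) sampled at \(x = 0, 1, 3\), the formula recovers the exact minimizer \(x = 2\); in a local search, the three points would be drawn from the neighborhood of the generation's best solution.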
3.3.30 DEVNS
In the proposed algorithm [151], the DE selection and crossover operators were modified to make them more efficient, and the mutation operator is replaced in every iteration by an effective variable neighborhood search (VNS) mechanism. During selection, three individuals were chosen by roulette wheel according to fitness, favoring better individuals. One individual was then produced using the crossover operator; the VNS algorithm was applied to it, and it was accepted for the next iteration if it was better than its parent. Performance comparison with different algorithms gave better results in most cases.
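Fitness-proportionate (roulette wheel) selection of three distinct individuals, as used in the modified selection step, might look like the following sketch (maximization with positive fitness values assumed; the names are ours).

```python
import random

def roulette_select(fitness, k=3):
    """Select k distinct indices with probability proportional to fitness
    (maximization; all fitness values assumed positive)."""
    chosen, idx = [], list(range(len(fitness)))
    for _ in range(k):
        total = sum(fitness[i] for i in idx)
        r = random.uniform(0.0, total)
        acc = 0.0
        for i in idx:
            acc += fitness[i]
            if acc >= r:
                chosen.append(i)
                idx.remove(i)  # sample without replacement
                break
    return chosen
```

Removing each winner before the next spin keeps the three selected individuals distinct while still favoring the fitter ones.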
3.3.31 DADE
First, a memory component is incorporated into DA to track its best arrangement in every iteration. In the subsequent stage, the retained neighborhood-best individual and the global best of the population were utilized in an improved DE mutation procedure to discover the optimal arrangement in the search space. The algorithm thus ensured exploration in the early stages and exploitation in the later part of the search, and was confirmed to find the global optimal solution with improved accuracy. The algorithm was compared with PSO, DE, and DA using the CEC 2005, CEC 2015, and CEC 2017 functions.
Apart from the hybridized methods described above, some other notable techniques include SaDE-MMTS [124], in which a modified multiple trajectory search was merged with SaDE to solve large-scale continuous optimization problems. DE was combined with opposition-based learning to form a hybrid DE [153] in which an individual's value and its opposite value are evaluated with mutation strategies and parameter settings; new individuals were created from the parallel processing of both values. Piotrowski [154] developed a new method combining several policies, such as the adaptation of control parameters and of the probability of using different mutation strategies; the author used the Nelder-Mead algorithm to enhance local search and divided mutation into two models, global mutation and local mutation. Ahandani et al. [155] proposed a hybrid DE with a Nelder-Mead simplex variant for exploration and a bidirectional random optimization method for exploiting continuous functions. In [156], DE was merged with the sine cosine algorithm to pull it out of stagnation and solve feature selection problems. DECMSA [157] is a new method formed from DE and CMA-ES, combining the high local search ability of CMA-ES with the global exploration ability of DE. Wu et al. [158] proposed a new variant as an ensemble of three well-known variants, JADE, CoDE, and EPSDE; the total population is divided into small groups plus a larger reward population, and after a predefined number of generations the reward population is allocated to the best-performing variant, ensuring the generation of better individuals.
Conclusively, the resultant algorithms achieve a better balance between exploration and exploitation than the individual algorithms merged to create them. This is confirmed by their ability to return optimal solutions, their increased convergence speed, reduced computational complexity, and ability to escape local optima. Hybridization thus merges two or more algorithms to obtain an algorithm more efficient than its parent algorithms.
3.3.32 DE in Multi-objective Environment
The methods discussed above deal with only one objective function. In the real world, many problems have multiple objectives. Dealing with multi-objective problems requires handling all the objectives simultaneously and finding the best achievable trade-off among them. DE methods developed for handling multi-objective problems are known as multi-objective variants of DE. Some of these multi-objective DE variants are listed in Table 6.
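The notion of Pareto dominance that underlies all the multi-objective variants below can be stated in a few lines (minimization assumed; this is a generic definition, not any particular variant's code):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))
```

Solutions that no other solution dominates form the Pareto front that these algorithms try to approximate.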
3.3.33 DE for Multi-objective Crop Planning Model
In this study, the existing MEDA1 strategy with the other three newly developed strategies MEDA2, MEDA3, and MEDA4 were applied to the multi-objective crop model. \(DE/rand/1/bin\), \(DE/rand/2/bin\), \(DE/rand/1/exp\), \(DE/rand/2/exp\) were the strategies used in MEDA1, MEDA2, MEDA3, MEDA4, respectively, to find the non-dominated solutions. After analysis, it was found that the approach MEDA1 and MEDA2 with binomial crossover were better in terms of increasing the farmers' total net income by reducing irrigation water use.
3.3.34 MODE-DE
MODE-DE is an extended version of MODE with a diversity enhancement technique to reduce the basic model's probability of becoming trapped in a local Pareto-optimal solution. In 20% of the function evaluations, the current population was merged with an equal-size set of randomly generated parameter vectors; the current population and the basic MODE procedure were used during the remaining 80% of the function evaluations. Adopting this technique enhanced the diversity of the population, and MODE-DE gave improved or equal performance compared to its basic algorithm.
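The diversity step described for MODE-DE amounts to occasionally padding the parent pool with random vectors. A hedged sketch follows, with an illustrative `rate` parameter standing in for the 20% of function evaluations; the interface is our own.

```python
import random

def diversity_enhanced_parents(pop, bounds, rate=0.2):
    """With probability `rate`, return the population merged with an
    equal-size batch of uniformly random vectors (the diversity step
    described for MODE-DE); otherwise return the population unchanged."""
    if random.random() >= rate:
        return list(pop)
    randoms = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in pop]
    return list(pop) + randoms
```

The enlarged pool gives the mutation operator fresh, unbiased difference vectors in the generations where it fires.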
3.3.35 EMODE
In this method, an external archive of non-dominated solutions is maintained. During the mutation phase of DE, random individuals were selected from this non-dominated set. A local random search operator is employed to balance exploration and exploitation while solving environmental/economic dispatch (EED) problems. Before the selection operation, a constraint-handling method is executed to ensure that solutions satisfy all the constraints. During selection, if two individuals were non-dominated with respect to each other, both were incorporated into the population; non-dominated sorting and the crowding-distance metric were then used to select the next-generation population. The performance of EMODE was found better than that of the other compared methods.
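The crowding-distance metric mentioned here (and reused by several variants below) can be computed as in NSGA-II. The following is a generic sketch of that standard calculation, not EMODE's exact implementation.

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in a non-dominated front
    (NSGA-II style): boundary points get infinity; interior points sum, per
    objective, the normalized gap between their two neighbors."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for obj in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective: all values equal
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj] -
                               front[order[k - 1]][obj]) / (hi - lo)
    return dist
```

Individuals with larger crowding distance lie in sparser regions of the front and are preferred when truncating the next generation.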
3.3.36 MODEA
The multi-objective DE algorithm MODEA is an extension of the modified DE algorithm to the multi-objective environment. During population initialization, an opposition-based learning process is used to generate \(2NP\) solutions. The mutation phase first selects three random individuals; the best among them is then chosen as the base individual, increasing the exploitation ability. A modified selection process was adopted: the concepts of DEMO and PDEA were combined to generate \(2NP\) solutions, and NSGA-II was applied directly to select the non-dominated front. MODEA was assessed on bi-objective and tri-objective problems against other multi-objective algorithms, and analysis of the results showed it to be competitive or better.
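Opposition-based initialization, as used by MODEA to produce \(2NP\) starting solutions, pairs each random vector with its opposite \(x^{opp} = lo + hi - x\). A minimal sketch (interface names are ours):

```python
import random

def obl_init(np_, bounds):
    """Opposition-based initialization: NP uniformly random vectors plus
    their opposites x_opp = lo + hi - x, giving 2*NP candidates."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    opposites = [[lo + hi - x for x, (lo, hi) in zip(ind, bounds)]
                 for ind in pop]
    return pop + opposites
```

The better half of the \(2NP\) candidates would then typically be retained as the initial population.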
3.3.37 CMODE
The CW method, a constrained optimization process previously introduced by the authors, had drawbacks in determining values for problem-dependent parameters, which limited its application to real-world problems. In the proposed method, DE is used for the search. A new type of infeasible-solution replacement strategy based on multi-objective optimization is employed to alleviate the drawbacks of the earlier approach. The DE mutation and crossover phases are used to generate the trial vectors, and non-dominated trial vectors are allowed to replace target vectors in the population. Infeasible solutions are replaced using deterministic replacement and random replacement. The empirical results show that CMODE is highly competitive on constrained optimization problems.
3.3.38 I-MODE
Most evolutionary algorithms use a maximum number of generations as the termination criterion of the search. The proposed approach instead evaluated termination criteria based on the non-dominated solutions generated during the search. The basic model followed was DE with a tabu list, modified to solve multi-objective problems; parameters were adapted self-adaptively, following Zhang & Sanderson (2008). Different performance metrics were modified for termination-criterion selection: if the performance evaluations of two selected metrics over recent generations were statistically insignificant, the search process terminated. Applying the approach to the chemical engineering applications of alkylation, Williams-Otto, and fermentation process optimization for two objectives terminated the search at the appropriate generation.
3.3.39 Optimized Task Scheduling and Resource Allocation Using IDEA in a Cloud Computing Environment
DE has good exploration ability and requires very few parameters to be handled during the search; the Taguchi method, on the other hand, can exploit better individuals within a small space. The two methods were combined to balance exploration and exploitation, resulting in the improved DE algorithm (IDEA). IDEA incorporated two computational models for cloud computing and five original scheduling ideas; in addition, a time model and a cost model were proposed. A non-dominated sorting technique found the Pareto front of total cost and makespan. IDEA was effective in optimizing resource allocation and task scheduling compared with DEA and NSGA-II.
3.3.40 MODE-RMO
In the proposed approach, fast non-dominated sorting with crowding distance and a ranking-based mutation operator are employed in MODE to obtain the new variant. The ranking-based mutation operator accelerated the convergence speed and, consequently, the performance. MODE-RMO was merged with control vector parameterization (CVP) to solve multi-objective dynamic optimization problems (MDOPs). MODE-RMO was compared with other MOEAs using ten test problems; the results show that it generated Pareto-optimal fronts with fast convergence and good diversity, and its application to MDOPs demonstrated its effectiveness.
3.3.41 Fuzzy Control Optimized by a Multi-objective DE Algorithm
Elements of smart structures include active, passive, or hybrid control. A multi-objective DE (MODE) with three new mutation operators was proposed for the optimal design of the controllers. In MODE1, the base vector was selected randomly from the Pareto front and the other two vectors from the population; in MODE2, the base vector was selected from the population and the remaining two vectors randomly from the Pareto front; in MODE3, all vectors were chosen randomly from the Pareto front. The proposed approach was measured against multi-objective PSO and gave better or at least competitive results.
3.3.42 IM2ODE
Multi-objective optimization approaches are commonly applied to predict a material's structure and properties. Predicting a material's design requires exploring the energy surface, which has many energy minima. This method used MODE on an inverse design problem to predict metastable structures with desired properties and comparatively low energy. The process first initialized the structures; then local optimization, Pareto front ranking, selection, and mutation operations were performed iteratively up to a given generation, yielding the optimized structure. IM2ODE used a greedy selection process in which an offspring replaces its parent if it dominates it; 60% of the population was kept after ranking, and the remainder was newly generated to maintain diversity. The application of IM2ODE to Al2O3, C3N4, and carbon phases showed it to be an efficient approach for computerized material design.
3.3.43 MOABCDE
This approach optimizes the trade-off between time, cost, and quality (TCQT) during the planning, management, and execution of construction projects. The artificial bee colony (ABC) algorithm was combined with DE to solve TCQT in a multi-objective manner: ABC's policy is used with operators from DE to find a trial solution by combining the target vector with other vectors from the population. The hybridization of the two algorithms balanced the exploration-exploitation trade-off. MOABCDE performed efficiently in comparison with the non-dominated sorting-based genetic algorithm, multi-objective particle swarm optimization, multi-objective DE, and the multi-objective artificial bee colony algorithm.
3.3.44 OSADE
OSADE is a variant of DE with opposition-based self-adaptive non-dominated sorting merged with the multi-objective evolutionary gradient search (MO-EGS). It used the mutation and crossover scheme of \(DE/rand/1/bin\), and a self-adaptive parameter strategy alleviated the need for parameter tuning. The integration of MO-EGS amplified the local search ability, giving a balance between exploration and exploitation, and the NSGA-II technique was incorporated to find the Pareto-optimal solutions. Comparison with NSGA-II, non-dominated sorting DE (NSDE), MOEA/D-SBX, MOEA/D-DE, MO-EGS, etc., showed better or at least competitive performance.
3.3.45 ADE-MOIA
The multi-objective immune algorithm suffers from a lack of diversity in the population, whereas DE has good exploration ability. The two were combined via a new adaptive DE (ADE) operator, which used a preferable parent-selection strategy and a new parameter-control mechanism that changed \(CR\) slowly as the search progressed. The integration of ADE significantly improved the convergence speed and the population's diversity. Comparison of ADE-MOIA with various nature-inspired heuristic algorithms, such as NSGA-II, SPEA2, AbYSS, MOEA/D-DE, MIMO, and D2MOPSO, gave the best performance on most of the benchmark problems.
3.3.46 MoDE-CIQ
Color image quantization is the process of reducing the number of colors in a color image with minimal distortion. To solve the image quantization problem in the multi-objective domain, the self-adaptive algorithm SaDE-CIQ was merged with the multi-population DE algorithm VEDE. The hybrid algorithm maintained two populations, each updated with respect to a particular objective. Up to a preset iteration number, the best individuals of the two groups were exchanged, and non-dominated solutions were collected over a predefined number of iterations. From the non-dominated solution set, Pareto-optimal solutions were generated. The proposed approach quantized images effectively and competitively.
3.3.47 MODE-ESM
MODE-ESM was designed to solve the multi-objective extended dynamic economic emission dispatch (DEED) problem, a high-dimensional and highly constrained problem. The algorithm is a multi-objective DE with an ensemble of selection methods: between non-dominated sorting and summation-based sorting, one is selected based on a random value between 0 and 1. The non-dominated selection process is used if the generated random number is equal to or greater than 0.5; otherwise, summation-based sorting is followed. A heuristic constraint-handling method was adopted for the DEED problem, and the method showed favorable efficiency in solving the wind power problem.
3.3.48 DE-Based Efficient Multi-objective Optimal Power Flow
The proposed approach is applied to solve the MOO-based OPF problem using an incremental power flow model based on sensitivities and some heuristics. While executing the process, the optimum value of every objective function was evaluated separately. The worse values of other objective functions were also assessed. Then starting from one end of the Pareto front, two sets of optimization problems were solved. The process was executed multiple times to get the non-dominated Pareto optimal solutions. Test results showed that the proposed approach was much faster than the other few evolutionary-based multi-objective optimization algorithms.
3.3.49 AMODE
The algorithm was designed to improve both local and global search ability. The scaling factor and crossover rate were selected adaptively as the search process advanced. The incorporation of intelligent multi-objective optimization control helped solve the objectives simultaneously. Adaptive non-dominated sorting calculated the non-dominated Pareto set, which was used to find the optimal set of solutions. AMODE with IMOOP was used to find the optimal solution for the wastewater treatment process.
3.3.50 AIMA
The multi-objective immune algorithm uses a clonal operator that clones the best individuals during the search. Cloning only the best individuals makes the process greedy, and it may be trapped in local optima, resulting in premature convergence on multi-objective optimization problems. The proposed algorithm adapted the DE strategies based on the current position of the search process, thereby increasing its exploration ability. The experimental results showed that the new approach outperformed some state-of-the-art multi-objective optimization algorithms and multi-objective immune algorithms.
3.3.51 MONNDE
The new approach combined a multi-objective neural network with DE to solve dynamic economic emission dispatch (DEED) problems. The aim was to produce a Pareto front from a set of network weights depending on the present environment state and the current objective weight. MONNDE applied the optimization process to the neural network only once, and the network then regulated the DEED problem variables. MONNDE incorporated a new fitness function using a Pareto penalty function to help the network produce a Pareto front successfully at each time step. Online and offline learning, and running the optimization algorithm only once per change of environment, also made the algorithm unique.
3.3.52 HMOBBPSO
In the proposed approach, hybrid multi-objective bare-bones particle swarm optimization was merged with DE to plan mobile robot paths. Path length, path smoothness, and path safety were formulated as a tri-objective mathematical model. Feasible paths were then generated by combining infeasible and feasible paths using the improved hybrid multi-objective bare-bones particle swarm optimization and an improved DE mutation strategy. The personal best position of a particle is selected by defining a new Pareto domination relation with a collision constraint.
3.3.53 Guided DEary MO-MFEA
In the proposed method, to strengthen the local search, the genetic operators were enhanced using memetic procedures. Instead of simulated binary crossover, a guided DE crossover was used to move individuals slowly toward the global optimal solution, and the Powell search method was modified for use as a mutation operator. The new approach showed remarkable effectiveness on multi-objective continuous optimization problems.
3.3.54 Ensemble-MODE
The approach was proposed for solving economic emission dispatch problems. First, the equality constraints of the problem were converted into inequality constraints. The DE algorithm was then modified to use two mutation strategies, viz. \(DE/rand/1\) and \(DE/current-to-rand/1\); using two different strategies increased the convergence speed while preserving diversity in the population. The proposed approach gave superior results compared with MODE and SPEA-2, and recorded equal performance in comparison with NSGA-II and PDE.
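The two mutation strategies named above can be sketched as a random ensemble. In the sketch, `K` is the random combination coefficient commonly used in \(DE/current-to-rand/1\), and the function interface is illustrative rather than taken from the paper.

```python
import random

def mutate_ensemble(pop, i, F=0.5):
    """Produce a mutant for individual i with one of two strategies,
    picked uniformly at random:
        DE/rand/1:            v = x_r1 + F*(x_r2 - x_r3)
        DE/current-to-rand/1: v = x_i + K*(x_r1 - x_i) + F*(x_r2 - x_r3)
    """
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    xi, a, b, c = pop[i], pop[r1], pop[r2], pop[r3]
    if random.random() < 0.5:          # DE/rand/1: explorative
        return [a[d] + F * (b[d] - c[d]) for d in range(len(a))]
    K = random.random()                # DE/current-to-rand/1: rotation-invariant
    return [xi[d] + K * (a[d] - xi[d]) + F * (b[d] - c[d])
            for d in range(len(a))]
```

Mixing an explorative strategy with a rotation-invariant one is a common way to speed convergence without collapsing diversity.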
3.3.55 GDE3-APM
In the proposed study, the GDE3 algorithm was used with the adaptive penalty method (APM) to handle the constraints in structural optimization problems, and the performance of GDE3 and APM was also measured individually. In APM, the penalty value is tuned adaptively according to feedback from the search process. Using APM with GDE3 allowed the method to find extreme Pareto-front solutions, and the APM technique was found efficient at identifying boundary solutions between the feasible and infeasible regions compared with GDE3 and NSGA-II. The GDE3 + APM strategy was better than GDE3 and NSGA-II alone in most of the metrics.
3.3.56 ESDSSMODE
A self-organizing map (SOM) and DE are used for automatic extractive single-document summarization. Operators based on SOM and a polynomial mutation concept are incorporated into the DE structure to generate new high-quality solutions.
3.3.57 GDE4
Generalized DE (GDE) is a multi-objective algorithm based on DE. The proposed approach modified the basic GDE with a ranking-based mutation operator: the candidate solutions were ordered using popular multi-objective ranking procedures and then used in the mutation operator. Among three randomly selected solutions, the best was chosen as the base vector, and the other two served as the second and third candidates during mutation. Performance was further improved during the selection process by modifying the genetic operator. The test results showed that the proposed algorithm outperformed GDE3 and the basic GDE algorithm.
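The ranking-based choice of base vector described for GDE4 can be sketched as follows: pick three solutions at random and use the best-ranked one as the base (lower rank assumed better; the names are ours, not the paper's).

```python
import random

def gde4_mutant(pop, ranks, F=0.5):
    """Ranking-based mutation sketch: sample three distinct solutions,
    use the best-ranked as base and the other two as the difference pair."""
    r = random.sample(range(len(pop)), 3)
    r.sort(key=lambda i: ranks[i])     # best-ranked sampled solution first
    base, b, c = pop[r[0]], pop[r[1]], pop[r[2]]
    return [base[d] + F * (b[d] - c[d]) for d in range(len(base))]
```

Biasing the base vector toward better-ranked solutions pulls the search toward the current front without fixing it on a single best individual.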
3.3.58 MODE-FM
In this study, a multi-objective DE with a dynamically adaptive scale factor, using a combination of non-dominated sorting and crowding distance, was proposed to handle multi-objective optimization problems. A fuzzy inference system was used to dynamically adjust the scale factor and thereby balance the method's global and local search capabilities; the generation count and the diversity of the population in the decision space were given as inputs to the fuzzy system. Comparison of the proposed algorithm with other algorithms of the same type on the CEC 2009 functions and a five-degree-of-freedom vehicle vibration model confirmed its efficacy.
3.3.59 IMDE
In IMDE, infeasibility-directing transformation operators and a multi-strategy mechanism were incorporated. An infeasible solution with lower fitness was maintained for each individual in the main population; this infeasible solution was then combined into some common DE mutation operators to steer the search toward regions with promising values. Additionally, multiple mutation strategies and control parameters are involved in generating trial vectors to speed up convergence and maintain diversity in the population. Validation of IMDE against state-of-the-art multi-criteria algorithms showed the supremacy of the new algorithm.
A few other DE-based works in the multi-objective environment are as follows. Gujarathi et al. [186] developed H-MODE (hybrid multi-objective DE) to optimize the industrial variables of a polyethylene terephthalate wiped-film reactor. In [187], Basu proposed a multi-objective DE to solve environmental/economic dispatch problems. The authors of [188] proposed ADEMO/D, a hybridized variant using MOEA/D and SaDE, to solve continuous optimization problems with a multi-objective approach. In [189], jDE and NSGA-II were merged to present a multi-objective self-adaptive DE algorithm, which solved the form-finding problem of high-rise building design in the conceptual phase. Rashidi et al. [190] used DE with a multi-objective approach to design a multi-generation energy system. Baraldi et al. [191] utilized DE with a multi-objective approach to identify the degradation state of equipment and predict its required maintenance schedule.
4 Application of DE in Image Processing Problems
DE has proven its efficiency in solving optimization problems in different engineering areas. It has been extensively used to solve specific problems in artificial intelligence, pattern recognition, image processing, electrical engineering, robotics, civil engineering, bioinformatics and biomedical engineering, and electronics and communication engineering (see Fig. 11). Among all these fields, we have chosen to summarize the applications in image processing. The applications of DE in image processing, with their specific purposes and application processes, are given in Table 7.
5 Potential Future Direction
Although significant research efforts have been made in the area of optimization using DE and its variants, there are still areas of interest that can be explored. The same can be said of the application areas, as new application areas are continually emerging for the algorithm. The following are suggested future directions:
-
It is observed from the study that the implementation of DE in the multi-objective domain is limited. Many-objective optimization typically deals with more than three objective functions, and many applications of EAs in this area use Pareto optimality as a ranking metric, which tends to perform poorly as the number of objective functions increases. Improving DE variants to solve many-objective problems remains an exciting research direction.
-
Other interesting multi-objective areas that could be explored:
-
o
In the area of combustion engine optimization, particularly spark-ignition engine knock, an in-depth study is needed of the effect of early-opening and late-closing angles of intake and exhaust valves on engine performance, followed by multiparameter optimization research under the knock limitation on an experimental bench.
-
o
Most studies applying DE to COVID-19-related optimization use mainly the basic DE variants. The newer variants exhibit superior performance compared to the basic DE and should be used in COVID-19 research. Researchers should also explore this area since the codes of various successful versions are widely available from different authors and from the web pages of competitions.
-
Exploration in DE and other algorithms enables the algorithm to advance from one solution to another to find the optimum. It would be fascinating to see how researchers can explore the possibility of integrating the mutation operator with the non-elitist, fitness proportionate selection without crossover that can result in good saddle crossing abilities.
-
The ensemble approach already used by DE variants can be investigated further with enhanced mutation strategies and with different crossover approaches to solve multi-objective problems.
-
Concepts like a transformation matrix could be borrowed to rotate the mutant vectors at different angles, and different self-adaptation schemes could be used to improve the exploration of DE.
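As a toy illustration of the transformation-matrix idea in the last bullet, a 2-D difference vector could be rotated before being added to the base vector. This is our own sketch of the suggestion, not an existing DE operator.

```python
import math

def rotate2d(vec, theta):
    """Rotate a 2-D difference vector by angle theta (radians), i.e.
    multiply by the rotation matrix [[cos, -sin], [sin, cos]]."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * vec[0] - s * vec[1], s * vec[0] + c * vec[1]]
```

A self-adaptation scheme could then evolve `theta` per individual, perturbing the direction of the scaled difference \(F \cdot (x_{r1} - x_{r2})\) rather than only its magnitude.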
6 Conclusion
Researchers have studied the DE algorithm extensively during the last two decades, and as a result numerous variants of DE are available in the literature. This paper is divided into three phases. The first phase describes the basic DE, its advantages and disadvantages, and various proposed mutation strategies. The second phase discusses modified DE variants based on new parameter settings, parameter adaptation policies, and mutation strategies, variants hybridized with other algorithms, and multi-objective DE; the DE variants proposed from 2009 to 2020 are incorporated. These two phases will help readers identify the present state of the algorithm, the policies used for its improvement, and future work. The third phase discusses DE variants developed to solve particular problems in the image processing field, which will guide researchers toward new problems based on those already solved. For each variant discussed, the emphasis given to its updating mechanism will let readers understand how the particular algorithm works. However, this study is limited to multi-criteria-type variants from 2009 to 2020 and to DE variants developed for solving image processing problems; also, a comparative study of the performance of the different variants was not carried out. It is observed from the study that the implementation of DE in the multi-objective domain is limited, and there is scope for future research in this direction.
References
Fogel LJ, Owens AJ, Walsh MJ (1966) Artificial intelligence through simulated evolution. Wiley, New York
Rechenberg I (1971) Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. PhD thesis. Reprinted by Frommann-Holzboog, 1973
Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
Koza JR (1992) Genetic programming: on the programming of computers by means of natural evolution. MIT Press, Massachusetts
Mucherino A, Seref O (2007) Monkey search: a novel metaheuristic search for global optimization. AIP Conf Proc 953:162–173. https://doi.org/10.1063/1.2817338
Teodorović D (2009) Bee colony optimization (BCO). Stud Comput Intell 248:39–60. https://doi.org/10.1007/978-3-642-04225-6_3
Yang XS, Deb S (2010) Cuckoo search via Levy flights. In: Nature & biologically inspired computing, pp 210–214
Yang XS (2010) Firefly algorithm, stochastic test functions and design optimization. Int J Bio-Inspired Comput 2(2):78–84. https://doi.org/10.1504/IJBIC.2010.032124
Tang R, Fong S, Yang X, Deb S (2012). Wolf search algorithm with ephemeral memory. In: Seventh international conference on digital information management (ICDIM), pp 165–172. https://doi.org/10.1109/ICDIM.2012.6360147
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Dasgupta D (1999) Artificial immune systems and their applications. Springer, Berlin
Kobti Z, Reynolds R, Kohler T (2003) A multi-agent simulation using cultural algorithms: the effect of culture on the resilience of social systems. In: IEEE congress on evolutionary computation (CEC 2003), Canberra, pp 5–12
Lee KS, Geem ZW (2005) A new metaheuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput Methods Appl Mech Eng 194(36–38):3902–3933
Storn R, Price K (1997) Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341–359
Sui X, Chu S-C, Pan J-S, Luo H (2020) Parallel compact differential evolution for optimization applied to image segmentation. Appl Sci 10(6):2195. https://doi.org/10.3390/app10062195
Singh D, Kumar V, Kaur M (2020) Classification of COVID-19 patients from chest CT images using multi-objective differential evolution–based convolutional neural networks. Eur J Clin Microbiol Infect Dis 39(7):1379–1389
Das S, Abraham A, Chakraborty UK, Konar A (2009) Differential evolution using a neighborhood-based mutation operator. IEEE Trans Evol Comput 13(3):526–553. https://doi.org/10.1109/tevc.2008.2009457
Rahnamayan S, Tizhoosh HR (2008) Image thresholding using micro opposition-based differential evolution (Micro-ODE). In: 2008 IEEE congress on evolutionary computation (IEEE world congress on computational intelligence). https://doi.org/10.1109/cec.2008.4630979
Tanabe R, Fukunaga AS (2014) Improving the search performance of SHADE using linear population size reduction. In: 2014 IEEE congress on evolutionary computation (CEC), 1658–1665. IEEE
Ter Braak CJ (2006) A Markov Chain Monte Carlo version of the genetic algorithm differential evolution: easy Bayesian computing for real parameter spaces. Stat Comput 16(3):239–249
Qin AK, Huang VL, Suganthan PN (2009) Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans Evol Comput 13(2):398–417
Mallipeddi R, Suganthan PN, Pan QK, Tasgetiren MF (2011) Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl Soft Comput 11(2):1679–1696
Neri F, Tirronen V (2010) Recent advances in differential evolution: a survey and experimental analysis. Artif Intell Rev 33:61–106. https://doi.org/10.1007/s10462-009-9137-2
Das S, Suganthan PN (2011) Differential evolution: a survey of the state-of-the-art. IEEE Trans Evol Comput 15(1):4–31. https://doi.org/10.1109/TEVC.2010.2059031
Das S, Mullick SS, Suganthan PN (2016) Recent advances in differential evolution—an updated survey. Swarm Evol Comput. https://doi.org/10.1016/j.swevo.2016.01.004
Opara KR, Arabas J (2018) Differential evolution: a survey of theoretical analyses. Swarm Evol Comput. https://doi.org/10.1016/j.swevo.2018.06.010
Eltaeib T, Mahmood A (2018) Differential evolution: a survey and analysis. Appl Sci 8(10):1945
Pant M, Zaheer H, Garcia-Hernandez L, Abraham A (2020) Differential evolution: a review of more than two decades of research. Eng Appl Artif Intell 90:103479
Georgioudakis M, Plevris V (2020) A comparative study of differential evolution variants in constrained structural optimization. Front Built Environ 6:102
Das S, Mullick SS, Suganthan PN (2016) Recent advances in differential evolution—an updated survey. Swarm Evol Comput 27:1–30. https://doi.org/10.1016/j.swevo.2016.01.004
Liu J, Lampinen J (2002) On setting the control parameter of the differential evolution algorithm. In: Proceedings of the 8th international mendel conference on soft computing, pp 11–18
Eiben AE, Smith JE (2003) Introduction to evolutionary computation. Springer, Berlin
Gämperle R, Müller SD, Koumoutsakos P (2002). A parameter study for differential evolution. In: Proceedings of the conference in neural networks and applications (NNA), fuzzy sets and fuzzy systems (FSFS) and evolutionary computation (EC), WSEAS, pp 293–298
Mallipeddi R, Suganthan PN (2008) Empirical study on the effect of population size on differential evolution algorithm. In: Proceedings of the IEEE congress on evolutionary computation, pp 3663–3670
Zamuda A, Brest J, Bošković B, Žumer V (2008) Large scale global optimization using differential evolution with self-adaptation and cooperative co-evolution. In: Proceedings of the IEEE world congress on computational intelligence, pp 3719–3726
Krink T, Filipic B, Fogel GB (2004). Noisy optimization problems—a particular challenge for differential evolution? In: Proceedings of the IEEE congress on evolutionary computation, pp 332–339
Brest J, Greiner S, Boskovic B, Mernik M, Zumer V (2006) Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Trans Evol Comput 10(6):646–657
Takahama T, Sakai S (2010) Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation. In: IEEE congress on evolutionary computation, pp 1–9. IEEE
Reynoso-Meza G, Sanchis J, Blasco X, Herrero JM (2011) Hybrid DE algorithm with adaptive crossover operator for solving real-world numerical optimization problems. In: 2011 IEEE congress of evolutionary computation (CEC), pp 1551–1556. IEEE.
Tanabe R, Fukunaga A (2013) Success-history based parameter adaptation for differential evolution. In: 2013 IEEE congress on evolutionary computation, pp 71–78. IEEE.
Tanabe R, Fukunaga AS (2014) Improving the search performance of SHADE using linear population size reduction. In: 2014 IEEE congress on evolutionary computation (CEC), pp 1658–1665. IEEE
Guo SM, Tsai JSH, Yang CC, Hsu PH (2015) A self-optimization approach for L-SHADE incorporated with eigenvector-based crossover and successful-parent-selecting framework on CEC 2015 benchmark set. In: 2015 IEEE congress on evolutionary computation (CEC), pp 1003–1010. IEEE.
Awad NH, Ali MZ, Suganthan PN, Reynolds RG (2016) An ensemble sinusoidal parameter adaptation incorporated with L-SHADE for solving CEC2014 benchmark problems. In: 2016 IEEE congress on evolutionary computation (CEC), pp 2958–2965
Awad NH, Ali MZ, Suganthan PN (2017) Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In: 2017 IEEE congress on evolutionary computation (CEC), pp 372–379. IEEE.
Akhmedova S, Stanovov V, Semenkin E (2018) LSHADE algorithm with a rank-based selective pressure strategy for the circular antenna array design problem. ICINCO 1:159–165
Brest J, Maučec MS, Bošković B (2019) The 100-digit challenge: algorithm jDE100. In: 2019 IEEE congress on evolutionary computation (CEC), pp 19–26. IEEE
Gurrola-Ramos J, Hernàndez-Aguirre A, Dalmau-Cedeño O (2020) COLSHADE for real-world single-objective constrained optimization problems. In: 2020 IEEE congress on evolutionary computation (CEC), pp 1–8
Ghosh A, Das S, Chowdhury A, Giri R (2011) An improved differential evolution algorithm with fitness-based adaptation of the control parameters. Inf Sci 181(18):3749–3765. https://doi.org/10.1016/j.ins.2011.03.010
Iacca G, Mallipeddi R, Mininno E, Neri F, Suganthan PN (2011) Super-fit and population size reduction in compact differential evolution. In: 2011 IEEE workshop on memetic computing (MC). https://doi.org/10.1109/mc.2011.5953633
Wang Y, Cai Z, Zhang Q (2012) Enhancing the search ability of differential evolution through orthogonal crossover. Inf Sci 185(1):153–177. https://doi.org/10.1016/j.ins.2011.09.001
Zhu W, Tang Y, Fang J, Zhang W (2013) Adaptive population tuning scheme for differential evolution. Inf Sci 223:164–191. https://doi.org/10.1016/j.ins.2012.09.019
Mohamed AW, Sabry HZ, Abd-Elaziz T (2013) Real parameter optimization by an effective differential evolution algorithm. Egypt Inf J 14(1):37–53. https://doi.org/10.1016/j.eij.2013.01.001
Gong W, Cai Z (2013) Differential evolution with ranking-based mutation operators. IEEE Trans Cybern 43(6):2066–2081. https://doi.org/10.1109/tcyb.2013.2239988
Yang M, Cai Z, Li C, Guan J (2013) An improved adaptive differential evolution algorithm with population adaptation. In: Proceeding of the fifteenth annual conference on genetic and evolutionary computation conference—GECCO ’13. https://doi.org/10.1145/2463372.2463374
Sarker RA, Elsayed SM, Ray T (2014) Differential evolution with dynamic parameters selection for optimization problems. IEEE Trans Evol Comput 18(5):689–707. https://doi.org/10.1109/tevc.2013.2281528
Gong W, Cai Z, Wang Y (2014) Repairing the crossover rate in adaptive differential evolution. Appl Soft Comput 15:149–168. https://doi.org/10.1016/j.asoc.2013.11.005
Wang J, Liao J, Zhou Y, Cai Y (2014) Differential evolution enhanced with multi-objective sorting-based mutation operators. IEEE Trans Cybern 44(12):2792–2805. https://doi.org/10.1109/tcyb.2014.2316552
Guo H, Li Y, Li J, Sun H, Wang D (2014) Differential evolution improved with self-adaptive control parameters based on simulated annealing. Swarm Evol Comput. https://doi.org/10.1016/j.swevo.2014.07.001
Salehinejad H, Rahnamayan S, Tizhoosh HR, Chen SY (2014) Micro-differential evolution with vectorized random mutation factor. In: 2014 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2014.6900606
Guo SM, Yang CC (2014) Enhancing differential evolution utilizing eigenvector-based crossover operator. IEEE Trans Evol Comput 19(1):31–49
Brown C, Jin Y, Leach M, Hodgson M (2015) mJADE: adaptive differential evolution with a small population. Soft Comput. https://doi.org/10.1007/s00500-015-1746-x
Tatsis VA, Parsopoulos KE (2015) Differential evolution with grid-based parameter adaptation. Soft Comput 21(8):2105–2127. https://doi.org/10.1007/s00500-015-1911-2
Brest J, Maucec MS, Boskovic B (2016) iL-SHADE: improved L-SHADE algorithm for single objective real-parameter optimization. In: 2016 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2016.7743922
Fu CM, Jiang C, Chen GS, Liu QM (2017) An adaptive differential evolution algorithm with an aging leader and challenger’s mechanism. Appl Soft Comput 57:60–73. https://doi.org/10.1016/j.asoc.2017.03.032
Ochoa P, Castillo O, Soria J (2017) Differential evolution using fuzzy logic and a comparative study with other metaheuristics. Stud Comput Intell 667:257–268. https://doi.org/10.1007/978-3-319-47054-2_17
Cheng C-Y, Li S-F, Lin Y-C (2017) Self-adaptive parameters in differential evolution based on fitness performance with a perturbation strategy. Soft Comput. https://doi.org/10.1007/s00500-017-2958-z
Viktorin A, Senkerik R, Pluhacek M, Kadavy T, Zamuda A (2018) Distance based parameter adaptation for success-history based differential evolution. Swarm Evol Comput. https://doi.org/10.1016/j.swevo.2018.10.013
Greco R, Vanzi I (2018) New few parameters differential evolution algorithm with application to structural identification. J Traffic Transport Eng (Engl Ed). https://doi.org/10.1016/j.jtte.2018.09.002
Zhao X, Xu G, Rui L, Liu D, Liu H, Yuan J (2019) A failure remember-driven self-adaptive differential evolution with top-bottom strategy. Swarm Evol Comput 45:1–14. https://doi.org/10.1016/j.swevo.2018.12.006
Meng Z, Pan J-S (2019) HARD-DE: Hierarchical ARchive based mutation strategy with Depth information of evolution for the enhancement of Differential Evolution on numerical optimization. IEEE Access. https://doi.org/10.1109/access.2019.2893292
Salgotra R, Singh U, Saha S, Nagar A (2019) New improved SALSHADE-cnEpSin algorithm with adaptive parameters. In: 2019 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2019.8789983
Walton SP, Brown MR (2019) Predicting effective control parameters for differential evolution using cluster analysis of objective function features. J Heurist. https://doi.org/10.1007/s10732-019-09419-8
Meng Z, Chen Y, Li X (2020) Enhancing differential evolution with novel parameter control. IEEE Access 8:51145–51167. https://doi.org/10.1109/access.2020.2979738
Brest J, Maučec MS, Bošković B (2020) Differential evolution algorithm for single objective bound-constrained optimization: algorithm j2020. In: 2020 IEEE congress on evolutionary computation (CEC), pp 1–8
Deng LB, Zhang LL, Fu N, Sun HL, Qiao LY (2020) ERG-DE: an elites regeneration framework for differential evolution. Inf Sci 539:81–103
Wang L, Fu X, Menhas MI, Fei M (2010) A modified binary differential evolution algorithm. Life Syst Model Intell Comput. https://doi.org/10.1007/978-3-642-15597-0_6
Zhang X, Chen W, Dai C, Cai W (2010) Dynamic multi-group self-adaptive differential evolution algorithm for reactive power optimization. Int J Electr Power Energy Syst 32(5):351–357. https://doi.org/10.1016/j.ijepes.2009.11.009
Wang L, Fu X, Mao Y, Menhas M, Fei M (2012) A novel modified binary differential evolution algorithm and its applications. Neurocomputing. https://doi.org/10.1016/j.neucom.2011.11.033
Zhou Y, Li X, Gao L (2013) A differential evolution algorithm with intersect mutation operator. Appl Soft Comput 13(1):390–401. https://doi.org/10.1016/j.asoc.2012.08.014
Gong W, Cai Z, Liang D (2015) Adaptive ranking mutation operator based differential evolution for constrained optimization. IEEE Trans Cybern 45(4):716–727. https://doi.org/10.1109/tcyb.2014.2334692
Zamuda A, Brest J (2015) Self-adaptive control parameters' randomization frequency and propagations in differential evolution. Swarm Evol Comput 25:72–99. https://doi.org/10.1016/j.swevo.2015.10.007
Zhang J, Sanderson AC (2009) JADE: adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958
Gong W, Cai Z, Ling CX, Li H (2010) Enhanced differential evolution with adaptive strategies for numerical optimization. IEEE Trans Syst Man Cybern Part B 41(2):397–413. https://doi.org/10.1109/tsmcb.2010.2056367
Wang Y, Cai Z, Zhang Q (2011) Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans Evol Comput 15(1):55–66. https://doi.org/10.1109/tevc.2010.2087271
Islam SM, Das S, Ghosh S, Roy S, Suganthan PN (2012) An adaptive differential evolution algorithm with novel mutation and crossover strategies for global numerical optimization. IEEE Trans Syst Man Cybern Part B 42(2):482–500. https://doi.org/10.1109/tsmcb.2011.2167966
Kumar P, Pant M (2012) Enhanced mutation strategy for differential evolution. In: 2012 IEEE congress on evolutionary computation. https://doi.org/10.1109/cec.2012.6252914
Han M-F, Liao S-H, Chang J-Y, Lin C-T (2012) Dynamic group-based differential evolution using a self-adaptive strategy for global optimization problems. Appl Intell 39(1):41–56. https://doi.org/10.1007/s10489-012-0393-5
Wang X, Zhao S (2013) Differential evolution algorithm with self-adaptive population resizing mechanism. Math Probl Eng. https://doi.org/10.1155/2013/419372
Elsayed SM, Sarker RA, Essam DL (2014) A self-adaptive combined strategies algorithm for constrained optimization using differential evolution. Appl Math Comput 241:267–282. https://doi.org/10.1016/j.amc.2014.05.018
Yu W-J, Shen M, Chen W-N, Zhan Z-H, Gong Y-J, Lin Y, et al (2014) Differential evolution with two-level parameter adaptation. IEEE Trans Cybern 44(7):1080–1099. https://doi.org/10.1109/tcyb.2013.2279211
Xu C, Huang H, Ye S (2014) A differential evolution with replacement strategy for real-parameter numerical optimization. In: 2014 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2014.6900468
Gao W, Yen GG, Liu S (2014) A cluster-based differential evolution with self-adaptive strategy for multimodal optimization. IEEE Trans Cybern 44(8):1314–1327. https://doi.org/10.1109/tcyb.2013.2282491
Fan Q, Yan X (2015) Self-adaptive differential evolution algorithm with discrete mutation control parameters. Expert Syst Appl 42(3):1551–1572. https://doi.org/10.1016/j.eswa.2014.09.046
Ali M, Awad N, Suganthan P (2015) Multi-population differential evolution with balanced ensemble of mutation strategies for large-scale global optimization. Appl Soft Comput. https://doi.org/10.1016/j.asoc.2015.04.019
Das S, Ghosh A, Mullick SS (2015) A switched parameter differential evolution for large scale global optimization—simpler may be better. Mendel 2015:103–125. https://doi.org/10.1007/978-3-319-19824-8_9
Iacca G, Caraffini F, Neri F (2015) Continuous parameter pools in ensemble self-adaptive differential evolution. In: 2015 IEEE symposium series on computational intelligence (SSCI). https://doi.org/10.1109/SSCI.2015.216
Tian M, Gao X, Dai C (2017) Differential evolution with improved individual-based parameter setting and selection strategy. Appl Soft Comput. https://doi.org/10.1016/j.asoc.2017.03.010
Fan Q, Wang W, Yan X (2017) Differential evolution algorithm with strategy adaptation and knowledge-based control parameters. Artif Intell Rev. https://doi.org/10.1007/s10462-017-9562-6
Ghosh A, Das S, Panigrahi BK, Das AK (2017) A noise resilient Differential Evolution with improved parameter and strategy control. In: 2017 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2017.7969620
Mohamed AW, Mohamed AK (2017) Adaptive guided differential evolution algorithm with novel mutation for numerical optimization. Int J Mach Learn Cybern. https://doi.org/10.1007/s13042-017-0711-7
Brest J, Maucec MS, Boskovic B (2017) Single objective real-parameter optimization: algorithm jSO. In: 2017 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2017.7969456
Peng H, Guo Z, Deng C, Wu Z (2018) Enhancing differential evolution with random neighbors-based strategy. J Comput Sci 26:501–511. https://doi.org/10.1016/j.jocs.2017.07.010
Cui L, Li G, Zhu Z, Lin Q, Wong K-C, Chen J, et al (2018) Adaptive multiple-elites-guided composite differential evolution algorithm with a shift mechanism. Inf Sci 422:122–143. https://doi.org/10.1016/j.ins.2017.09.002
Meng Z, Pan JS, Kong L (2018) Parameters with adaptive learning mechanism (PALM) for the enhancement of differential evolution. Knowl-Based Syst 141:92–112. https://doi.org/10.1016/j.knosys.2017.11.015
Liu X-F, Zhan Z-H, Lin Y, Chen W-N, Gong Y-J, Gu T-L, Zhang J (2018) Historical and heuristic-based adaptive differential evolution. IEEE Trans Syst Man Cybern Syst. https://doi.org/10.1109/tsmc.2018.285515
Sun G, Lan Y, Zhao R (2019) Differential evolution with Gaussian mutation and dynamic parameter adjustment. Soft Comput 23(5):1615–1642
Meng Z, Pan J-S, Tseng K-K (2019) PaDE: an enhanced Differential Evolution algorithm with novel control parameter adaptation schemes for numerical optimization. Knowl-Based Syst. https://doi.org/10.1016/j.knosys.2019.01.006
Liu N, Pan JS, Lai J, Chu SC (2020) An efficient differential evolution via both top collective and p-best information. J Internet Technol 21(3):629–643
Meng Z, Yang C, Li X, Chen Y (2020) Di-DE: depth information-based differential evolution with adaptive parameter control for numerical optimization. IEEE Access 8:40809–40827
Li Y, Wang S, Yang B (2020) An improved differential evolution algorithm with dual mutation strategies collaboration. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2020.113451
Deng W, Xu J, Song Y, Zhao H (2020) Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem. Appl Soft Comput. https://doi.org/10.1016/j.asoc.2020.106724
Ali M, Pant M (2010) Improving the performance of differential evolution algorithm using Cauchy mutation. Soft Comput 15(5):991–1007. https://doi.org/10.1007/s00500-010-0655-2
Alguliev RM, Aliguliyev RM, Isazade NR (2012) DESAMC+DocSum: differential evolution with self-adaptive mutation and crossover parameters for multi-document summarization. Knowl-Based Syst 36:21–38. https://doi.org/10.1016/j.knosys.2012.05.017
Cai Y, Wang J (2013) Differential evolution with neighborhood and direction information for numerical optimization. IEEE Trans Cybern 43(6):2202–2215. https://doi.org/10.1109/tcyb.2013.2245501
Bujok P, Tvrdik J, Polakova R (2014) Differential evolution with rotation-invariant mutation and competing-strategies adaptation. In: 2014 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2014.6900626
Li X, Yin M (2014) Modified differential evolution with self-adaptive parameters method. J Comb Optim 31(2):546–576. https://doi.org/10.1007/s10878-014-9773-6
Brown C, Jin Y, Leach M, Hodgson M (2015) μJADE: adaptive differential evolution with a small population. Soft Comput 20(10):4111–4120. https://doi.org/10.1007/s00500-015-1746-x
Zaheer H, Pant M, Kumar S, Monakhov O, Monakhova E, Deep K (2015) A new guiding force strategy for differential evolution. Int J Syst Assur Eng Manag 8(S4):2170–2183. https://doi.org/10.1007/s13198-014-0322-6
Fan Q, Yan X (2016) Self-adaptive differential evolution algorithm with zoning evolution of control parameters and adaptive mutation strategies. IEEE Trans Cybern 46(1):219–232. https://doi.org/10.1109/tcyb.2015.2399478
Wang S, Li Y, Yang H, Liu H (2017) Self-adaptive differential evolution algorithm with improved mutation strategy. Soft Comput 22(10):3433–3447. https://doi.org/10.1007/s00500-017-2588-5
Mohamed AW, Suganthan PN (2017) Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Comput 22(10):3215–3235. https://doi.org/10.1007/s00500-017-2777-2
Zhou Y-Z, Yi W-C, Gao L, Li X-Y (2017) Adaptive differential evolution with sorting crossover rate for continuous optimization problems. IEEE Trans Cybern 47(9):2742–2753. https://doi.org/10.1109/tcyb.2017.2676882
Gong W, Cai Z, Ling CX (2010) DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization. Soft Comput 15(4):645–665. https://doi.org/10.1007/s00500-010-0591-1
Zhao S-Z, Suganthan PN, Das S (2010) Self-adaptive differential evolution with multi-trajectory search for large-scale optimization. Soft Comput 15(11):2175–2185. https://doi.org/10.1007/s00500-010-0645-4
Yu C, Chen J, Huang Q, Wang S, Zhao X (2012) A new hybrid differential evolution algorithm with simulated annealing and adaptive Gaussian immune. In: 2012 8th international conference on natural computation. https://doi.org/10.1109/icnc.2012.6234554
Nakib A, Daachi B, Siarry P (2012) Hybrid differential evolution using low-discrepancy sequences for image segmentation. In: 2012 IEEE 26th international parallel and distributed processing symposium workshops & PhD Forum. https://doi.org/10.1109/ipdpsw.2012.79
Sathiskumar M, Nirmalkumar A, Lakshminarasimman L, Thiruvenkadam S (2012) A self-adaptive hybrid differential evolution algorithm for phase balancing of unbalanced distribution system. Int J Electr Power Energy Syst 42(1):91–97. https://doi.org/10.1016/j.ijepes.2012.03.029
Dong M-G, Wang N (2012) A novel hybrid differential evolution approach to scheduling of large-scale zero-wait batch processes with setup times. Comput Chem Eng 45:72–83. https://doi.org/10.1016/j.compchemeng.2012.05.008
Yildiz AR (2013) A new hybrid differential evolution algorithm for the selection of optimal machining parameters in milling operations. Appl Soft Comput 13(3):1561–1566. https://doi.org/10.1016/j.asoc.2011.12.016
Ponsich A, Coello Coello CA (2013) A hybrid Differential Evolution—Tabu Search algorithm for the solution of Job-Shop Scheduling Problems. Appl Soft Comput 13(1):462–474. https://doi.org/10.1016/j.asoc.2012.07.034
Li H, Zhang L (2013) A discrete hybrid differential evolution algorithm for solving integer programming problems. Eng Optim 46(9):1238–1268. https://doi.org/10.1080/0305215x.2013.836637
Miranda V, Alves R (2013) Differential evolutionary particle swarm optimization (DEEPSO): a successful hybrid. In: 2013 BRICS congress on computational intelligence and 11th Brazilian congress on computational intelligence. https://doi.org/10.1109/brics-cci-cbic.2013.6
Idris I, Selamat A, Omatu S (2013) Hybrid email spam detection model with negative selection algorithm and differential evolution. Eng Appl Artif Intell. https://doi.org/10.1016/j.engappai.2013.12.001
Wang L, Zou F, Hei X, Yang D, Chen D, Jiang Q, Cao Z (2014) A hybridization of teaching–learning-based optimization and differential evolution for chaotic time series prediction. Neural Comput Appl 25(6):1407–1422. https://doi.org/10.1007/s00521-014-1627-8
Asafuddoula M, Ray T, Sarker R (2014) An adaptive hybrid differential evolution algorithm for single objective optimization. Appl Math Comput 231:601–618. https://doi.org/10.1016/j.amc.2014.01.041
Chakraborty D, Saha S, Dutta O (2014) DE-FPA: a hybrid differential evolution-flower pollination algorithm for function minimization. In: 2014 international conference on high performance computing and applications (ICHPCA). https://doi.org/10.1109/ichpca.2014.7045350
Ye S, Dai G, Peng L, Wang M (2014) A hybrid adaptive coevolutionary differential evolution algorithm for large-scale optimization. In: 2014 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2014.6900259
Pei S, Ouyang A, Tong L (2015) A hybrid algorithm based on bat-inspired algorithm and differential evolution for constrained optimization problems. Int J Pattern Recognit Artif Intell. https://doi.org/10.1142/S0218001415590077
Sun Z, Wang N, Bi Y, Srinivasan D (2015) Parameter identification of PEMFC model based on hybrid adaptive differential evolution algorithm. Energy 90:1334–1341. https://doi.org/10.1016/j.energy.2015.06.081
Jitkongchuen D (2015) A hybrid differential evolution with grey wolf optimizer for continuous global optimization. In: 2015 7th international conference on information technology and electrical engineering (ICITEE). https://doi.org/10.1109/iciteed.2015.7408911
Li G, Lin Q, Cui L, Du Z, Liang Z, Chen J, Lu N, Ming Z (2016) A novel hybrid differential evolution algorithm with modified CoDE and JADE. Appl Soft Comput. https://doi.org/10.1016/j.asoc.2016.06.011
Nama S, Saha AK, Ghosh S (2016) Int J Ind Eng Comput 7:323–338. https://doi.org/10.5267/j.ijiec.2015.9.003
Mlakar U, Potočnik B, Brest J (2016) A hybrid differential evolution for optimal multilevel image thresholding. Expert Syst Appl 65:221–232. https://doi.org/10.1016/j.eswa.2016.08.046
Zorarpacı E, Özel SA (2016) A hybrid approach of differential evolution and artificial bee colony for feature selection. Expert Syst Appl 62:91–103. https://doi.org/10.1016/j.eswa.2016.06.004
Krishna R, Kumar S (2016) Hybridizing differential evolution with a genetic algorithm for color image segmentation. https://doi.org/10.5281/zenodo.162592
Awad NH, Ali MZ, Suganthan PN, Reynolds RG (2017) CADE: a hybridization of cultural algorithm and differential evolution for numerical optimization. Inf Sci 378:215–241. https://doi.org/10.1016/j.ins.2016.10.039
Jadon SS, Tiwari R, Sharma H, Bansal JC (2017) Hybrid artificial bee colony algorithm with differential evolution. Appl Soft Comput 58:11–24. https://doi.org/10.1016/j.asoc.2017.04.018
Nama S, Saha AK (2017) A new hybrid differential evolution algorithm with self-adaptation for function optimization. Appl Intell 48(7):1657–1671. https://doi.org/10.1007/s10489-017-1016-y
Mohamed AW, Hadi AA, Fattouh AM, Jambi KM (2017) LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In: 2017 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2017.7969307
Huang Q, Zhang K, Song J, Zhang Y, Shi J (2019) Adaptive differential evolution with a Lagrange interpolation argument algorithm. Inf Sci 472:180–202
Lotfi N (2019) Data allocation in distributed database systems: a novel hybrid method based on differential evolution and variable neighborhood search. SN Appl Sci. https://doi.org/10.1007/s42452-019-1787-3
Debnath S, Baishya S, Sen D, Arif W (2020) A hybrid memory-based dragonfly algorithm with differential evolution for engineering application. Eng Comput. https://doi.org/10.1007/s00366-020-00958-4
Li J (2012) A hybrid differential evolution algorithm with opposition-based learning. In: 2012 4th international conference on intelligent human-machine systems and cybernetics. https://doi.org/10.1109/ihmsc.2012.27
Piotrowski AP (2013) Adaptive memetic differential evolution with global and local neighborhood-based mutation operators. Inf Sci 241:164–194. https://doi.org/10.1016/j.ins.2013.03.060
Ahandani MA, Vakil-Baghmisheh M-T, Talebi M (2014) Hybridizing local search algorithms for global optimization. Comput Optim Appl 59(3):725–748. https://doi.org/10.1007/s10589-014-9652-1
AbdElaziz ME, Ewees AA, Oliva D, Duan P, Xiong S (2017) A hybrid method of sine cosine algorithm and differential evolution for feature selection. Lect Notes Comput Sci. https://doi.org/10.1007/978-3-319-70139-4_15
He X, Zhou Y (2017) Enhancing the performance of differential evolution with covariance matrix self-adaptation. Appl Soft Comput 64:227. https://doi.org/10.1016/j.asoc.2017.11.050
Wu G, Shen X, Li H, Chen H, Lin A, Suganthan PN (2018) Ensemble of differential evolution variants. Inf Sci 423:172–186. https://doi.org/10.1016/j.ins.2017.09.053
Adeyemo J, Otieno F (2010) Differential evolution algorithm for solving multi-objective crop planning model. Agric Water Manag 97(6):848–856. https://doi.org/10.1016/j.agwat.2010.01.013
Qu B, Suganthan P-N (2010) Multi-objective differential evolution with diversity enhancement. J Zhej Univ Sci C 11(7):538–543. https://doi.org/10.1631/jzus.c0910481
Lu Y, Zhou J, Qin H, Wang Y, Zhang Y (2011) Environmental/economic dispatch problem of power system by using an enhanced multi-objective differential evolution algorithm. Energy Convers Manage 52(2):1175–1183. https://doi.org/10.1016/j.enconman.2010.09.012
Ali M, Siarry P, Pant M (2011) An efficient Differential Evolution based algorithm for solving multi-objective optimization problems. Eur J Oper Res. https://doi.org/10.1016/j.ejor.2011.09.025
Wang Y, Cai Z (2012) Combining multi objective optimization with differential evolution to solve constrained optimization problems. IEEE Trans Evol Comput 16(1):117–134. https://doi.org/10.1109/tevc.2010.2093582
Sharma S, Rangaiah G (2013) An improved multi-objective differential evolution with a termination criterion for optimizing chemical processes. Comput Chem Eng. https://doi.org/10.1016/j.compchemeng.2013.05.004
Tsai J-T, Fang J-C, Chou J-H (2013) Optimized task scheduling and resource allocation on cloud computing environment using improved differential evolution algorithm. Comput Oper Res 40(12):3045–3055. https://doi.org/10.1016/j.cor.2013.06.012
Chen X, Du W, Qian F (2014) Multi-objective differential evolution with ranking-based mutation operator and its application in chemical process optimization. Chemom Intell Lab Syst 136:85–96. https://doi.org/10.1016/j.chemolab.2014.05.007
Marinaki M, Marinakis Y, Stavroulakis GE (2015) Fuzzy control optimized by a Multi-Objective Differential Evolution algorithm for vibration suppression of smart structures. Comput Struct 147:126–137. https://doi.org/10.1016/j.compstruc.2014.09.018
Zhang Y-Y, Gao W, Chen S, Xiang H, Gong X-G (2015) Inverse design of materials by multi-objective differential evolution. Comput Mater Sci 98:51–55. https://doi.org/10.1016/j.commatsci.2014.10.054
Tran D-H, Cheng M-Y, Cao M-T (2015) Hybrid multiple objective artificial bee colony with differential evolution for the time–cost–quality tradeoff problem. Knowl-Based Syst 74:176–186. https://doi.org/10.1016/j.knosys.2014.11.018
Chong JK, Tan KC (2015) An opposition-based self-adaptive hybridized differential evolution algorithm for multi-objective optimization (OSADE). In: Proceedings of the 18th Asia Pacific symposium on intelligent and evolutionary systems, pp 447–461. https://doi.org/10.1007/978-3-319-13359-1_35
Lin Q, Zhu Q, Huang P, Chen J, Ming Z, Yu J (2015) A novel hybrid multi-objective immune algorithm with adaptive differential evolution. Comput Oper Res 62:95–111. https://doi.org/10.1016/j.cor.2015.04.003
Hu Z, Su Q, Xia X (2016) Multi-objective image color quantization algorithm based on self-adaptive hybrid differential evolution. Comput Intell Neurosci 2016:1–12. https://doi.org/10.1155/2016/2450431
Qu BY, Liang JJ, Zhu YS, Suganthan PN (2017) Solving dynamic economic emission dispatch problem considering wind power by multi-objective differential evolution with ensemble of selection method. Nat Comput. https://doi.org/10.1007/s11047-016-9598-6
Reddy SS, Bijwe PR (2019) Differential evolution-based efficient multi-objective optimal power flow. Neural Comput Appl 31(1):509–522. https://doi.org/10.1007/s00521-017-3009-5
Qiao J-F, Hou Y, Han H-G (2017) Optimal control for wastewater treatment process based on an adaptive multi-objective differential evolution algorithm. Neural Comput Appl. https://doi.org/10.1007/s00521-017-3212-4
Lin Q, Ma Y, Chen J, Zhu Q, Coello C, Wong K-C, Chen F (2017) An adaptive immune-inspired multi-objective algorithm with multiple differential evolution strategies. Inf Sci. https://doi.org/10.1016/j.ins.2017.11.030
Mason K, Duggan J, Howley E (2018) A multi-objective neural network trained with differential evolution for dynamic economic emission dispatch. Int J Electr Power Energy Syst 100:201–221. https://doi.org/10.1016/j.ijepes.2018.02.021
Zhang J-H, Zhang Y, Zhou Y (2018) Path planning of mobile robot based on hybrid multi objective bare bones particle swarm optimization with differential evolution. IEEE Access 6:44542–44555. https://doi.org/10.1109/access.2018.2864188
Tuan NQ, Hoang TD, Thanh Binh HT (2018) A guided differential evolutionary multi-tasking with Powell search method for solving multi-objective continuous optimization. In: 2018 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2018.8477860
Yu X, Yu X, Lu Y, Sheng J (2018) Economic and emission dispatch using ensemble multi-objective differential evolution algorithm. Sustainability 10(2):418. https://doi.org/10.3390/su10020418
Vargas DEC, Lemonge ACC, Barbosa HJC, Bernardino HS (2018) Differential evolution with the adaptive penalty method for structural multi-objective optimization. Optim Eng. https://doi.org/10.1007/s11081-018-9395-4
Saini N, Saha S, Jangra A, Bhattacharyya P (2018) Extractive single document summarization using multi-objective optimization: exploring self-organized differential evolution, grey wolf optimizer and water cycle algorithm. Knowl-Based Syst. https://doi.org/10.1016/j.knosys.2018.10.021
Bidgoli AA, Mahdavi S, Rahnamayan S, Ebrahimpour-Komleh H (2019) GDE4: the generalized differential evolution with ordered mutation. In: International conference on evolutionary multi-criterion optimization, pp 101–113. Springer, Cham. https://doi.org/10.1007/978-3-030-12598-1_9
Jamali A, Mallipeddi R, Salehpour M, Bagheri A (2020) Multi-objective differential evolution algorithm with fuzzy inference-based adaptive mutation factor for Pareto optimum design of suspension system. Swarm Evol Comput. https://doi.org/10.1016/j.swevo.2020.100666
Xu B, Duan W, Zhang H, Li Z (2020) Differential evolution with infeasible-guiding mutation operators for constrained multi-objective optimization. Appl Intell 50(12):4459–4481
Gujarathi AM, Babu BV (2010) Hybrid multi-objective differential evolution (H-MODE) for optimisation of polyethylene terephthalate (PET) reactor. Int J Bio-Inspired Comput 2(3/4):213. https://doi.org/10.1504/ijbic.2010.033089
Basu M (2011) Economic environmental dispatch using multi-objective differential evolution. Appl Soft Comput 11(2):2845–2853. https://doi.org/10.1016/j.asoc.2010.11.014
Venske SMS, Goncalves RA, Delgado MR (2012) ADEMO/D: adaptive differential evolution for multi-objective problems. In: 2012 Brazilian symposium on neural networks. https://doi.org/10.1109/sbrn.2012.29
Ekici B, Chatzikonstantinou I, Sariyildiz S, Tasgetiren M, Pan Q-K (2016) A multi-objective self-adaptive differential evolution algorithm for conceptual high-rise building design. In: 2016 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/CEC.2016.7744069
Rashidi H, Khorshidi J (2018) Exergoeconomic analysis and optimization of a solar based multigeneration system using multi-objective differential evolution algorithm. J Clean Prod 170:978–990. https://doi.org/10.1016/j.jclepro.2017.09.201
Baraldi P, Bonfanti G, Zio E (2018) Differential evolution-based multi-objective optimization for the definition of a health indicator for fault diagnostics and prognostics. Mech Syst Signal Process 102:382–400. https://doi.org/10.1016/j.ymssp.2017.09.013
Omran MGH, Engelbrecht AP, Salman A (2005) Differential evolution methods for unsupervised image classification. In: 2005 IEEE congress on evolutionary computation. https://doi.org/10.1109/cec.2005.1554795
Du J-X, Huang D-S, Wang X-F, Gu X (2007) Shape recognition based on neural networks trained by differential evolution algorithm. Neurocomputing 70(4–6):896–903. https://doi.org/10.1016/j.neucom.2006.10.026
De Falco I, Della Cioppa A, Maisto D, Tarantino E (2008) Differential Evolution as a viable tool for satellite image registration. Appl Soft Comput 8(4):1453–1462. https://doi.org/10.1016/j.asoc.2007.10.013
Das S, Konar A (2009) Automatic image pixel clustering with an improved differential evolution. Appl Soft Comput 9(1):226–236. https://doi.org/10.1016/j.asoc.2007.12.008
Baştürk A, Günay E (2009) Efficient edge detection in digital images using a cellular neural network optimized by differential evolution algorithm. Expert Syst Appl 36(2):2645–2650. https://doi.org/10.1016/j.eswa.2008.01.082
Maulik U, Saha I (2009) Modified differential evolution based fuzzy clustering for pixel classification in remote sensing imagery. Pattern Recogn. https://doi.org/10.1016/j.patcog.2009.01.011
Cuevas E, Zaldivar D, Pérez-Cisneros M (2010) A novel multi-threshold segmentation approach based on differential evolution optimization. Expert Syst Appl 37(7):5265–5271. https://doi.org/10.1016/j.eswa.2010.01.013
Wang X, Long H, Su X (2010) Method of image enhancement based on differential evolution algorithm. In: 2010 international conference on measuring technology and mechatronics automation. https://doi.org/10.1109/icmtma.2010.142
Aslantas V, Kurban R (2010) Fusion of multi-focus images using differential evolution algorithm. Expert Syst Appl 37(12):8861–8870. https://doi.org/10.1016/j.eswa.2010.06.011
Fan S, Yang S (2011) Infrared electric image segmentation using fuzzy Renyi entropy and chaos differential evolution algorithm. In: 2011 international conference on future computer sciences and application. https://doi.org/10.1109/icfcsa.2011.57
Sarkar S, Patra GR, Das S (2011) A differential evolution based approach for multilevel image segmentation using minimum cross entropy thresholding. Lect Notes Comput Sci. https://doi.org/10.1007/978-3-642-27172-4_7
Kumar P, Kumar S, Pant M (2012) Gray level image enhancement by improved differential evolution algorithm. In: Proceedings of seventh international conference on bio-inspired computing: theories and applications (BIC-TA 2012), pp 443–453. https://doi.org/10.1007/978-81-322-1041-2_38
Sarkar S, Das S, Chaudhuri SS (2012) Multilevel image thresholding based on Tsallis entropy and differential evolution. Lect Notes Comput Sci. https://doi.org/10.1007/978-3-642-35380-2_3
Zhong Y, Zhang L (2012) Remote sensing image subpixel mapping based on adaptive differential evolution. IEEE Trans Syst Man Cybern B 42(5):1306–1329. https://doi.org/10.1109/tsmcb.2012.2189561
Bhattacharyya S, Sengupta A, Chakraborti T, Konar A, Tibarewala DN (2013) Automatic feature selection of motor imagery EEG signals using differential evolution and learning automata. Med Biol Eng Compu 52(2):131–139. https://doi.org/10.1007/s11517-013-1123-9
Burman R, Paul S, Das S (2013) A differential evolution approach to multi-level image thresholding using type II fuzzy sets. Lect Notes Comput Sci. https://doi.org/10.1007/978-3-319-03753-0_25
De Falco I, Della Cioppa A, Maisto D, Scafuri U, Tarantino E (2013) Adding chaos to differential evolution for range image registration. In: European conference on the applications of evolutionary computation, pp 344–353. Springer, Berlin. https://doi.org/10.1007/978-3-642-37192-9_35
Kang L, Wu L, Chen X, Yang YH (2013) Practical structure and motion recovery from two uncalibrated images using ε Constrained Adaptive Differential Evolution. Pattern Recogn 46(5):1466–1484. https://doi.org/10.1016/j.patcog.2012.10.028
Mesejo P, Ugolotti R, Di Cunto F, Giacobini M, Cagnoni S (2013) Automatic hippocampus localization in histological images using Differential Evolution-based deformable models. Pattern Recogn Lett 34(3):299–307. https://doi.org/10.1016/j.patrec.2012.10.012
Saraswat M, Arya KV, Sharma H (2013) Leukocyte segmentation in tissue images using differential evolution algorithm. Swarm Evol Comput 11:46–54. https://doi.org/10.1016/j.swevo.2013.02.003
Sarkar S, Das S, Paul S, Polley S, Burman R, Chaudhuri SS (2013) Multi-level image segmentation based on fuzzy-Tsallis entropy and differential evolution. In: 2013 IEEE international conference on fuzzy systems (FUZZ-IEEE). https://doi.org/10.1109/fuzz-ieee.2013.6622406
Dong C, Yeung D, Wang X-Z (2013) An improved differential evolution and its application to determining feature weights in similarity-based clustering. Proc Int Conf Mach Learn Cybern 2:831–838. https://doi.org/10.1109/ICMLC.2013.6890399
Ali M, Ahn CW, Pant M (2014) Multi-level image thresholding by synergetic differential evolution. Appl Soft Comput 17:1–11. https://doi.org/10.1016/j.asoc.2013.11.018
Chandra A, Chattopadhyay S (2014) A new strategy of image denoising using multiplier-less FIR filter designed with the aid of differential evolution algorithm. Multimed Tools Appl 75(2):1079–1098. https://doi.org/10.1007/s11042-014-2358-7
Duan X, Zimei X (2014) Blind separation of permuted alias image base on four-phase-difference and differential evolution. Sens Transducers 163:90–95
Khan A, Jaffar MA, Shao L (2014) A modified adaptive differential evolution algorithm for color image segmentation. Knowl Inf Syst 43(3):583–597. https://doi.org/10.1007/s10115-014-0741-3
Priya RL, Belji T, Sadasivam V (2014) Security of health imagery via reversible watermarking based on differential evolution. In: 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom). https://doi.org/10.1109/medcom.2014.7005570
Ayala HVH, dos Santos FM, Mariani VC, Coelho L dos S (2015) Image thresholding segmentation based on a novel beta differential evolution approach. Expert Syst Appl 42(4):2136–2142. https://doi.org/10.1016/j.eswa.2014.09.043
Dhal KG, Quraishi MI, Das S (2015) Performance enhancement of differential evolution by incorporating Lévy flight and chaotic sequence for the cases of satellite images. Int J Appl Metaheuristic Comput 6(3):69–81. https://doi.org/10.4018/ijamc.2015070104
Sanchez-Ferreira C, Ayala HVH, Coelho L dos S, Munoz D, Farias MCQ, Llanos CH (2015) Multi-objective differential evolution algorithm for underwater image restoration. In: 2015 IEEE congress on evolutionary computation (CEC). https://doi.org/10.1109/cec.2015.7256898
Sarkar S, Das S, Chaudhuri SS (2015) A multilevel color image thresholding scheme based on minimum cross entropy and differential evolution. Pattern Recogn Lett 54:27–35. https://doi.org/10.1016/j.patrec.2014.11.009
Shi Y, Gao H, Wu D (2015) Multi-level image segmentation based on an improved differential evolution with adaptive parameter controlling strategy. In: The 27th Chinese control and decision conference (2015 CCDC). https://doi.org/10.1109/ccdc.2015.7162447
Bhandari AK, Kumar A, Chaudhary S, Singh GK (2015) A new beta differential evolution algorithm for edge preserved colored satellite image enhancement. Multidimension Syst Signal Process 28(2):495–527. https://doi.org/10.1007/s11045-015-0353-4
Sarkar S, Das S, Chaudhuri S (2015) Hyper-spectral image segmentation using Rényi entropy based multi-level thresholding aided with differential evolution. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2015.11.016
Deng L, Lu G, Shao Y, Fei M, Hu H (2016) A novel camera calibration technique based on differential evolution particle swarm optimization algorithm. Neurocomputing 174:456–465. https://doi.org/10.1016/j.neucom.2015.03.119
Kar SS, Maity SP (2016) Differential evolution based optimal clustering for retinal blood vessel extraction. In: 2016 5th international conference on informatics, electronics and vision (ICIEV). https://doi.org/10.1109/iciev.2016.7760087
Xu F, Hu H, Gao H, Wang B (2016) Multi-temporal image registration utilizing a differential evolution algorithm with replacement strategy. In: 2016 Chinese control and decision conference (CCDC). https://doi.org/10.1109/ccdc.2016.7531085
Ahmadipour Z, Afrasiabi M, Khotanlou H (2016) Multiple human detection in images based on differential evolution and HOG-LBP. In: 2016 eighth international conference on information and knowledge technology (IKT). https://doi.org/10.1109/ikt.2016.7777779
De Falco I, Della Cioppa A, Scafuri U, Tarantino E (2016) Fast range image registration by an asynchronous adaptive distributed differential evolution. In: 2016 12th international conference on signal-image technology & internet-based systems (SITIS). https://doi.org/10.1109/sitis.2016.107
Lopez-Franco C, Hernandez-Barragan J, Lopez-Franco M, Reynoso M, Nuno E, Lopez-Franco A (2016) Real-time image template matching algorithm based on differential evolution. In: 2016 IEEE-RAS 16th international conference on humanoid robots (humanoids). https://doi.org/10.1109/humanoids.2016.7803332
Choudhary R, Gupta R (2017) Gray level image enhancement using dual mutation differential evolution. In: 2017 8th international conference on computing, communication and networking technologies (ICCCNT). https://doi.org/10.1109/ICCCNT.2017.8204113
Chen F, Shi J, Ma Y, Lei Y, Gong M (2017) Differential evolution algorithm with learning selection strategy for SAR image change detection. In: 2017 IEEE congress on evolutionary computation (CEC), 450–457. https://doi.org/10.1109/cec.2017.7969346
Seema GB, Bansal G (2017) Image contrast enhancement approach using differential evolution and particle swarm optimization. Int Res J Eng Technol 4(8):1134–1138
Mlakar U, Fister I, Brest J, Potočnik B (2017) Multi-objective differential evolution for feature selection in facial expression recognition systems. Expert Syst Appl 89:129–137. https://doi.org/10.1016/j.eswa.2017.07.037
Suresh S, Lal S (2017) Modified differential evolution algorithm for contrast and brightness enhancement of satellite images. Appl Soft Comput 61:622–641. https://doi.org/10.1016/j.asoc.2017.08.019
Muangkote N, Sunat K, Chiewchanwattana S (2017) Rr-cr-IJADE: an efficient differential evolution algorithm for multilevel image thresholding. Expert Syst Appl 90:272–289. https://doi.org/10.1016/j.eswa.2017.08.029
Hancer E, Xue B, Zhang M (2018) Differential evolution for filter feature selection based on information theory and feature ranking. Knowl-Based Syst 140:103–119. https://doi.org/10.1016/j.knosys.2017.10.028
Chakraborty R, Sushil R, Garg M (2018) An integral image based text extraction technique from document images by multilevel thresholding using differential evolution. Methodol Appl Issues Contemp Comput Framework. https://doi.org/10.1007/978-981-13-2345-4_4
Casella A, De Falco I, Della Cioppa A, Scafuri U, Tarantino E (2018) Exploiting multi-core and GPU hardware to speed up the registration of range images by means of Differential Evolution. J Parallel Distrib Comput. https://doi.org/10.1016/j.jpdc.2018.07.002
Cui X, Niu Y, Zheng X, Han Y (2018) An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image. PLoS ONE. https://doi.org/10.1371/journal.pone.0196306
Bhandari AK (2018) A novel beta differential evolution algorithm-based fast multilevel thresholding for color image segmentation. Neural Comput Appl. https://doi.org/10.1007/s00521-018-3771-z
Vali MH, Aghagolzadeh A, Baleghi Y (2018) Optimized watermarking technique using self-adaptive differential evolution based on redundant discrete wavelet transform and singular value decomposition. Expert Syst Appl 114:296–312. https://doi.org/10.1016/j.eswa.2018.07.004
Mistry K, Issac B, Jacob S, Jasekar J, Zhang L (2018) Multi-population differential evolution for retinal blood vessel segmentation. In: 2018 15th international conference on control, automation, robotics and vision (ICARCV), pp 424–429. https://doi.org/10.1109/ICARCV.2018.8581322
Bidgoli AA, Rahnamayan S, Ebrahimpour-Komleh H (2019) Opposition-based multi-objective binary differential evolution for multi-label feature selection. Int Conf Evol Multi-Criterion Optim. https://doi.org/10.1007/978-3-030-12598-1_44
Guraksin GE, Deperlioglu O, Kose U (2019) A novel underwater image enhancement approach with wavelet transform supported by differential evolution algorithm. Nature Inspired Optim Tech Image Process Appl. https://doi.org/10.1007/978-3-319-96002-9_11
Jia H, Lang C, Oliva D, Song W, Peng X (2019) Hybrid grasshopper optimization algorithm and differential evolution for multilevel satellite image segmentation. Remote Sensing 11(9):1134. https://doi.org/10.3390/rs11091134
Rezaei K, Agahi H, Mahmoodzadeh A (2019) Multi-objective differential evolution-based ensemble method for brain tumour diagnosis. IET Image Proc 13(9):1421–1430. https://doi.org/10.1049/iet-ipr.2018.6377
Tarkhaneh O, Shen H (2019) An adaptive differential evolution algorithm to optimal multi-level thresholding for MRI brain image segmentation. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2019.07.037
Song Y, Ma B, Gao W (2019) Medical image edge detection based on improved differential evolution algorithm and Prewitt operator. Acta Microscopica 28
Hosny KM, Khalid AM, Mohamed ER (2020) Efficient compression of volumetric medical images using Legendre moments and differential evolution. Soft Comput 24(1):409–427. https://doi.org/10.1007/s00500-019-03922-7
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical Approval
This article does not contain any studies with human participants or animals performed by the authors.
Cite this article
Chakraborty, S., Saha, A.K., Ezugwu, A.E. et al. Differential Evolution and Its Applications in Image Processing Problems: A Comprehensive Review. Arch Computat Methods Eng 30, 985–1040 (2023). https://doi.org/10.1007/s11831-022-09825-5