Abstract
In this paper, an adaptive Fluctuant Population size Slime Mould Algorithm (FPSMA) is proposed. Unlike the original SMA, where the population size is fixed in every epoch, FPSMA adaptively changes the population size to effectively balance the exploitation and exploration characteristics of SMA’s different phases. Experimental results on 13 standard and 30 IEEE CEC2014 benchmark functions show that FPSMA achieves a significant reduction in run time while maintaining good solution quality compared to the original SMA. The typical saving in function evaluations across all benchmarks was between 20 and 30% on average, reaching as high as 60% in some cases. With its higher computational efficiency, FPSMA is therefore a much more favorable choice than SMA in time-stringent applications.
1 Introduction
Population-based metaheuristics have been the dominant methods for finding optimal or good solutions to many complex optimization problems in reasonable time [22]. The popularity of metaheuristics has increased considerably due to their key role in learning and knowledge discovery within the emerging fields of big data and machine learning. These metaheuristics derive their inspiration from mimicking intelligent processes arising in nature, and are commonly classified into two categories: evolutionary algorithms (EA) and swarm intelligence algorithms [13]. The most popular EAs are genetic algorithms (GA) [17] and differential evolution (DE) [47]. The swarm intelligence category includes particle swarm optimization (PSO) [47], cuckoo search (CS) [58], whale optimization [32], monarch butterfly optimization (MBO) [53], moth search (MSA) [52], and Harris hawks optimization (HHO) [19], among others.
Although the development of metaheuristics has witnessed tremendous progress in recent years, there is still much room for improvement, as no single algorithm can solve all problems efficiently as per the “No Free Lunch” theorem [55]. Recently, a new population-based metaheuristic called the slime mould algorithm (SMA) was proposed by Li et al. [27]. SMA is inspired by a unique slime mould, i.e., Physarum polycephalum, an organism that can live freely as a single cell but can also form communicating aggregates when foraging for food sources. To find food, the slime mould starts the search process with a randomly distributed population. Once it has identified a food concentration during the random search, the slime mould approaches and wraps the food and secretes enzymes to digest it, while retaining a certain exploration capability to search for better food sources. To imitate the slime mould’s exploration and exploitation behaviors, the authors in [27] proposed a bio-oscillating policy that separates the population into two groups according to their fitness. The first group, called the positive group, consists of the individuals with the best fitness, whereas the other one, labeled the negative group, consists of those with the lowest fitness. The better-fitness group is exploited to find the best possible solution, whereas the lower-fitness one is used to explore outward regions. In addition, a vibration parameter based on the sigmoid function is introduced to simulate the food-grabbing behavior of slime mould.
Despite SMA being a recent algorithm, it has demonstrated excellent performance compared to state-of-the-art metaheuristics in many fields. However, one of its key disadvantages lies in its relatively long run time and high computational cost. Given its successful application in multiple fields, in this work we investigate enhancing the algorithm by augmenting it with a population size adaptation method that reduces its prohibitively long run time. Population size plays a very important role in both the run-time efficiency and the optimization effectiveness of metaheuristics, and thus by balancing the exploration and exploitation characteristics of the SMA algorithm, its performance and computational cost can be improved [27]. To balance the exploration and exploitation phases of an algorithm, population size adaptation schemes can automatically adjust the population size according to population diversity during the search process, thus enhancing performance and reducing run time. Population size adaptation has been widely studied in genetic algorithms [5, 25], differential evolution [40, 50], artificial bee colony optimization [9], swarm intelligence [7, 12, 41], and recently the sine cosine algorithm [3]. However, to the best of the authors’ knowledge, no such work has been reported for SMA.
To fill this research gap, we propose herein an adaptive fluctuant population size SMA algorithm called FPSMA. Unlike the original SMA, where the population size is fixed in every epoch, FPSMA adaptively changes the population size to enhance the run-time characteristics of both the exploitation and exploration phases of the algorithm. The population adaptation concept used in the proposed algorithm is a cluster-based approach borrowed from the K-means clustering algorithm. The threshold used to start the adaptation process is a scaled sigmoid function that decreases smoothly at first, then dramatically midway, and again smoothly near the end of algorithm execution. Once population diversity moves outside the threshold band, the population size increases or decreases following a sine-wave pattern (for randomization). The key contributions of this study can be summarized as follows:

Propose an adaptive fluctuant population size slime mould algorithm (FPSMA) that automatically adjusts the population size during the search process according to population diversity, to effectively balance the exploitation and exploration characteristics of the conventional SMA algorithm.

FPSMA performance is analyzed over several benchmark functions, including 13 standard and 30 IEEE CEC2014 benchmark functions.

Simulation results revealed that FPSMA can achieve significant reductions in the number of function evaluations as compared to conventional SMA without impacting solution quality.
The remainder of the paper is organized as follows. Section 2 highlights SMA algorithm, literature review, and population diversity adaptation techniques. Section 3 provides details of the proposed algorithm. Experimental results are reported and analyzed in Sect. 4. Finally, conclusions and some future directions are presented in Sect. 5.
2 Background
In this section, we introduce the SMA algorithm’s mathematical models, followed by a short literature review of SMA, and finally a brief discussion of the population diversity-based adaptation techniques [56] that motivated this work.
2.1 SMA introduction
In [27], a new stochastic optimizer called the slime mould algorithm (SMA) was proposed. The algorithm models the morphological changes of slime mould, namely Physarum polycephalum, when foraging. Slime mould is a eukaryote whose vegetative stage, the plasmodium, is the main phase in which it seeks food. Once a food source is identified, the slime mould surrounds it and secretes enzymes to digest it. The foraging process of the slime mould is divided into three phases: finding a food source, wrapping it, and grabbing the food. The mathematical model for the various processes of the slime mould is described as follows [27]:
where rand and r are random values \(\in [0,1]\), UB/LB represent the upper/lower bounds of the search space, and the value z is used to balance the exploration and exploitation characteristics. \(\overrightarrow{X}\) and \(\overrightarrow{X_{b}}\) represent the locations of the current and best-fitness individuals at iteration t, respectively, where best fitness here corresponds to the individual with the highest odor concentration. \(\overrightarrow{X_{A}}\) and \(\overrightarrow{X_{B}}\) are two randomly selected individuals from the slime mould population. Parameter \(\overrightarrow{vc}\) decreases linearly from 1 to 0, whereas \(\overrightarrow{vb}\) is a variable \(\in [-a,a]\) where a is calculated using [27]:
where T represents maximum number of iterations. Parameter \(\overrightarrow{W}\) is a vector representing the slime mould weight and is described mathematically as [27]:
where \(F_{b}\)/\(F_{w}\) represent the best/worst fitness values at the current iteration and r is a random number \(\in [0,1]\). Moreover, S(i) represents the individual’s fitness, and condition indicates that S(i) ranks in the first half of the population (i.e., the positive group). In Eq. (4), the term SmellIndex denotes the result of sorting S in ascending order. Parameter p in Eq. (1) is calculated using [27]:
where DF is the overall global best fitness and S(i) represents the individual’s fitness.
A flowchart for the SMA algorithm is depicted in Fig. 1. It starts with initializing the parameters D, P, T, LB, UB, z, and a random population \(\overrightarrow{X_i}(t=0)\). In each iteration, SMA calculates the individuals’ fitness and finds the best one in the current iteration. The next population is then updated according to Eq. (1) and the conditional weighting parameter W. This iterative process is repeated until the maximum number of iterations is reached, at which point the best fitness and its solution are stored as the final result.
In Eq. (1), exploration is guaranteed with a probability of at least z, while exploitation is performed with a probability of at least \(1-p\). When rand is less than z, SMA takes a random walk within the boundaries defined by LB and UB. When another random number r is larger than p, SMA performs exploitation and searches in the neighbourhood of the current location. When r is less than p, SMA wraps around the current best position, \(\overrightarrow{X_b}(t)\), with the wrapping direction and radius depending on the current position’s fitness. This behavior becomes much more evident when plugging the definition of W in Eq. (3) back into Eq. (1) for \(r<p\), resulting in [27]:
The SMA algorithm wraps the food in two directions depending on the fitness of the current position, with the radius depending on the amplitude of \(\overrightarrow{vb}\) and the population variance. In Eq. (1), \(\overrightarrow{vb}\) and \(\overrightarrow{vc}\) are two tuning parameters oscillating towards 0 with iterations, imitating the food-grabbing behaviour and hence exploitation around the best food source.
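Although Eq. (1) itself is not reproduced in this excerpt, the three branches described above can be illustrated with a minimal sketch following [27]. Several details are simplified here and should be treated as assumptions: DF is approximated by the current best fitness, and \(\overrightarrow{vc}\) is drawn uniformly from a shrinking interval rather than stated exactly as in the paper.

```python
import numpy as np

def sma_position_update(X, Xb, W, fitness, a, vc_t, z, lb, ub, rng):
    """One-epoch sketch of the three branches of Eq. (1): random restart,
    wrapping around the best position, and local exploitation."""
    P, D = X.shape
    X_new = np.empty_like(X)
    for i in range(P):
        if rng.random() < z:
            # Exploration branch: random walk anywhere inside [LB, UB].
            X_new[i] = lb + rng.random(D) * (ub - lb)
            continue
        # p grows with the gap between this fitness and the best one
        # (DF is approximated here by the current best fitness).
        p = np.tanh(abs(fitness[i] - fitness.min()))
        vb = rng.uniform(-a, a, D)        # oscillation in [-a, a]
        vc = rng.uniform(-vc_t, vc_t, D)  # assumed shrinking oscillation
        A, B = rng.choice(P, 2, replace=False)
        if rng.random() < p:
            # Wrap around the current best position Xb.
            X_new[i] = Xb + vb * (W[i] * X[A] - X[B])
        else:
            # Exploit the neighbourhood of the current location.
            X_new[i] = vc * X[i]
    return np.clip(X_new, lb, ub)
```

With `a` and `vc_t` decreasing over iterations as described above, the last two branches gradually contract toward the best food source.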
2.2 Literature review
SMA and its variants have been successfully applied to many problems such as image segmentation [34, 61], breast cancer detection [29, 36], COVID-19 early detection [4, 46], parameter estimation of photovoltaic cells [14, 31, 33], medical data classification [54], feature selection [23], proportional-integral-derivative (PID) motor speed controllers [11, 43], power systems [38, 45], bearing defect identification [51], the travelling salesman problem [30], and machine learning model parameter tuning for support vector machines [8] and artificial neural networks [63], to name a few.
However, SMA may suffer from some shortcomings, such as a slow convergence rate due to being trapped in local minima and unbalanced exploitation and exploration phases. Therefore, to further improve SMA performance, researchers have reported a variety of specific stochastic operators such as the Lévy distribution [4, 10], a cosine function for controlling parameters [16, 18], the quantum rotation gate [59], opposition-based learning [35, 45, 54], and chaotic operators [31]. In addition, SMA has been hybridized with other metaheuristics such as Harris hawks optimization [60], whale optimization [1], particle swarm optimization [15], differential evolution [20, 29], and the arithmetic optimization algorithm [62] to efficiently solve specific optimization problems. Furthermore, variants of SMA for solving discrete binary [2, 26] and multi-objective optimization problems [21, 24, 44] have been proposed. However, none of these works have considered population size adaptation to enhance the performance of SMA.
2.3 Population adaptation
Population size adaptation has become prevalent in many population-based metaheuristic algorithms (MAs). The choice of a proper population size can substantially enhance the efficiency of many metaheuristic algorithms, including SMA. In a typical linear population size reduction algorithm, the population initially contains a large number of individuals to enhance exploration capability. During population evolution, the population size is decreased linearly until reaching its smallest value at the end of algorithm execution, to allow for proper exploitation. However, for more complex objective functions, it is also possible to increase the population size towards the end of the search process to avoid premature convergence or stagnation. A common criterion for controlling the direction of population size change is to use population diversity as a metric (i.e., population distribution). For example, in [6, 39, 42, 56, 57], the authors proposed using population diversity to start and stop the population adaptation process in differential evolution. The following is a review of the population diversity adaptation technique presented in [56]. The parameters and variables associated with this technique are listed in Table 1.
Population diversity is measured by mean of the Euclidean distances in each feature described as follows:
where value \({\bar{x}}_j\) is calculated as:
During the evolution process, a relative measure of population diversity is calculated using:
where the lower and upper bound for \(RD_t\) \(\in \) \([0.9 \times \gamma R_{FES,t}, 1.1 \times \gamma R_{FES,t}]\) where \(\gamma \) is a scaling factor and \(R_{FES,t}\) is the relative number of depleted function evaluations given by:
When \(RD_t\) drops below the lower bound (i.e., \(0.9 \times \gamma R_{FES,t}\)), \(P_t\) will increase by 1, whereas when it exceeds the upper bound (i.e., \(1.1 \times \gamma R_{FES,t}\)) it will decrease by 1. This technique results in a linearly fluctuant population size driven by Euclidean-distance-based population diversity.
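To make the mechanism concrete, the following sketch implements one plausible reading of the omitted Eqs. (6)–(9): DI is the mean Euclidean distance of individuals to the population centroid, and the population grows or shrinks by one whenever the relative diversity \(RD_t\) leaves the band. The exact form of \(R_{FES,t}\) is not shown in this excerpt, so it is assumed here to be the fraction of the evaluation budget still remaining; the bounds `P_min`/`P_max` are illustrative.

```python
import numpy as np

def diversity(X):
    """Mean Euclidean distance to the population centroid (Eqs. (6)-(7))."""
    xbar = X.mean(axis=0)  # per-feature mean, xbar_j
    return np.sqrt(((X - xbar) ** 2).sum(axis=1)).mean()

def adapt_step(P_t, RD_t, fes, max_fes, gamma=0.3, P_min=10, P_max=200):
    """One adaptation step in the style of [56].  R_FES is assumed to be
    the fraction of the evaluation budget still remaining."""
    R_fes = 1.0 - fes / max_fes
    if RD_t < 0.9 * gamma * R_fes:
        P_t += 1   # diversity below the band: add an individual
    elif RD_t > 1.1 * gamma * R_fes:
        P_t -= 1   # diversity above the band: remove an individual
    return min(max(P_t, P_min), P_max)
```

With `RD_t = diversity(X) / DI_init` recomputed each generation, this yields the linearly fluctuant population described above.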
In [48], the authors proposed using a pseudo-randomization technique to change the population size, where population size \(P_t\) at the t-th function evaluation is calculated using:
where “\([~] \)” is a rounding operator, \(P_{min}\) is minimum population size, D is the dimension of the function to be optimized, and \(M_t\) is a linear reduction parameter defined as:
with T being the maximum number of function evaluations and t the index of the current function evaluation. Parameter M is a function of the initial population size and problem dimension, calculated using:
In this paper, we propose to use population diversity to start and stop population adaptation similarly to Poláková et al. [39], but cluster the population using the K-means algorithm. As pointed out in [39], as the best food sources are gradually grabbed during the SMA process, the population is expected to gradually contract around the food sources with the best fitness, and hence \(DI_t\) gradually decreases toward 0. By tracking the value of \(DI_t\), the SMA convergence rate can be estimated and the population size adapted accordingly. If \(DI_t\) is high, the population is too diverse and the population size can be reduced. If \(DI_t\) is low, the population has contracted; if that happens during the initial phase of SMA, more individuals should be added to enhance exploration capability. However, if SMA is approaching its last few iterations, the population size should be reduced to save computation time. If \(DI_t\) decreases dramatically to a small value during the evolution process, resetting the population can also be considered to avoid getting stuck at a local optimum. The proposed algorithm is based on this concept, and its details are described in the next section.
3 FPSMA and analysis
In this section, the proposed algorithm, called FPSMA, is described in detail. Suppose that at the t-th epoch there are \(x_{i},i=1,...,P_t\) individuals in the population. By applying the K-means algorithm, each individual \(x_i\) is associated with a group center \(x_{c_i}\), resulting in the population diversity at iteration t:
The initial population diversity, \(DI_{init}=DI_{t=0}\), is stored as a reference to define the relative diversity (\(RD_t\)) during the evolution process, as defined in Eq. (9), with the relative expected population diversity \(R_{EP}\) calculated as:
where t is the current iteration index, T is the total number of iterations, and \(\beta \) is a scaling factor with a default value of 10. \(\gamma \in [0,1]\) is the fraction of relative population diversity expected to be consumed during the SMA process, and hence \(R_{EP}\) can be treated as the expected relative population diversity at the current iteration. Initially, \(R_{EP}\approx 1\), but towards the end of the evolution process \(R_{EP}\approx 1-\gamma \). \(R_{EP}\) is expected to decrease smoothly while \(R_{EP}\approx 1\), then abruptly halfway through the process, and then smoothly again until \(R_{EP}\approx 1-\gamma \).
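The exact expression for \(R_{EP}\) is not reproduced in this excerpt, but a scaled sigmoid with the stated end-point and midpoint behaviour could take the following form (an assumed reconstruction, not necessarily the authors' exact formula):

```python
import math

def expected_relative_diversity(t, T, gamma=1.0, beta=10.0):
    """Assumed scaled-sigmoid threshold R_EP: ~1 at t=0, ~1-gamma at t=T,
    dropping fastest halfway through the run."""
    return 1.0 - gamma / (1.0 + math.exp(-beta * (t / T - 0.5)))
```

With the defaults \(\beta=10\) and \(\gamma=1\), this gives \(R_{EP}\approx 0.993\) at t = 0, exactly 0.5 at t = T/2, and \(\approx 0.007\) at t = T, matching the smooth-abrupt-smooth profile described above.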
Value \(R_{EP}\) is used as a reference to trigger population adaptation. During the optimization, if the current relative population diversity \(RD_t\) is less than \(\upsilon _{low}= 0.9\times R_{EP}\) or larger than \(\upsilon _{high}= 1.1\times R_{EP}\), then population adaptation will start. Optionally, if \(RD_t < \epsilon \) and \(P_t \le P_{min}\), then the population is reset to its initial size. The term \(P_{min}\) is the minimum population size required to guarantee a minimum amount of exploration. Typical values are \(P_{min}=\frac{P_{init}}{2}\) and \(\epsilon =0.1\).
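The cluster-based diversity of Eq. (14) can be sketched with a few Lloyd iterations of K-means; the number of clusters `K` and the iteration count below are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def kmeans_diversity(X, K=3, iters=10, seed=0):
    """Cluster-based diversity of Eq. (14): each individual x_i is assigned
    to a K-means centre x_{c_i}, and DI_t is the mean distance to that centre."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):  # a few Lloyd iterations suffice for this purpose
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            if (labels == k).any():
                centres[k] = X[labels == k].mean(axis=0)
    return np.linalg.norm(X - centres[labels], axis=1).mean()
```

As the population contracts around the food sources, this quantity decreases toward 0, which is exactly the signal \(DI_t\) that FPSMA tracks.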
Once population adaptation is started, \(\Delta _t\) points are added to or removed from the current population, with \(\Delta _t\) defined as
The definition of \(\Delta _t\) is similar to that of \(P_t\) in Eq. (11) except for parameter \(\Lambda \), a problem-specific parameter that controls the fluctuation period. In the original definition of \(P_t\), the fluctuation period is \(2\times M_t\times D^2\), which may fluctuate too slowly for practical engineering design problems. Note that if \(\Lambda = \infty \), then \(\Delta _t\) is fixed at one, rolling back to the approach proposed in [48]. Therefore, the population size change can be summarized as:
As depicted in Fig. 2, the solid green line shows the expected \(R_{EP}\) as a function of t. When the actual \(RD_t\) moves outside the region covered by the dashed blue lines, the population size will change by \(\Delta _t\). Furthermore, if \(RD_t\) keeps dropping below \(\epsilon \) while the population size is already at its minimum level (i.e., \(P_t=P_{min}\)), the population size can be reset.
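Putting the pieces together, the fluctuation strategy of Eq. (17) can be sketched as follows. Since the sine-based \(\Delta _t\) of Eq. (16) is not reproduced in this excerpt, its amplitude here is an illustrative stand-in; only the period parameter \(\Lambda \) and the band/reset logic are taken from the text.

```python
import math

def next_population_size(P_t, RD_t, R_EP, t, P_init, P_min, Lam=100, eps=0.1):
    """Sketch of Eq. (17): grow when diversity falls below the lower band,
    shrink when it exceeds the upper band, and reset when the population has
    both collapsed (RD_t < eps) and bottomed out (P_t at P_min)."""
    if RD_t < eps and P_t <= P_min:
        return P_init                        # restart exploration
    # Illustrative sine-wave step with period Lam (stand-in for Eq. (16)).
    delta = max(1, round(abs(math.sin(2 * math.pi * t / Lam))
                         * (P_init - P_min) / 2))
    if RD_t < 0.9 * R_EP:
        P_t += delta                         # under-diverse: add individuals
    elif RD_t > 1.1 * R_EP:
        P_t -= delta                         # over-diverse: remove individuals
    return min(max(P_t, P_min), P_init)
```

For example, with \(\Lambda =100\) the step size sweeps through a full sine period every 100 epochs, producing the fluctuant population sizes visible in Fig. 4a.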
The FPSMA algorithm is illustrated in Algorithm 1. The input to the algorithm is the fitness function \({\mathcal {F}}\) to evaluate. In lines 2–6, the parameters and the population are initialized and the population diversity is calculated using Eq. (14) before starting the iterations. In lines 8–13, the fitness of each individual in the population is evaluated and sorted, the population is divided into the positive and negative groups, and each individual's position is updated according to Eq. (1). Lines 14–15 are the newly proposed steps, where \(DI_t\), \(RD_t\), and \(R_{EP}\) are updated accordingly. Finally, in line 16, the population size for the next epoch is updated according to the fluctuation strategy described in Eq. (17).
The time complexity of FPSMA depends on the number of iterations (T), population size (P), and function dimension (D), and is bounded by the computation performed within the while loop (lines 7–17). Therefore, based on a simple analysis of the main compute-intensive processes executed during the while loop, one can derive FPSMA's time complexity. For each iteration, the computational complexity depends on fitness evaluation and sorting (line 8), which can be performed in \({\mathcal {O}}(P\log {}P)\), the weight update (line 9), and the position update (lines 11–13), both of which can be performed in \({\mathcal {O}}(P*D)\). K-means clustering (lines 14–16) for population size adaptation can be performed in \({\mathcal {O}}(P*K)\) for a fixed number of iterations and attributes, where K is the number of clusters. Therefore, the final time complexity of FPSMA is \({\mathcal {O}}(T*((P\log {}P)+(P*D)+(P*K)))\), which is comparable to that of the original SMA. However, in our case, the average value of P is smaller due to the adaptation at each epoch, whereas SMA uses a fixed value for all iterations.
4 Experiment results
In this section, we apply FPSMA to the Ackley benchmark function to validate the relationship between relative population diversity and population size, in addition to its convergence characteristics. Moreover, the performance of FPSMA compared to the original SMA is discussed using a set of 13 standard benchmarks and the CEC2014 functions [27, 28, 37]. The performance metric used to compare solution quality is the fitness value, whereas for run time, the number of function evaluations is used. All results were obtained using Python 3.7 running on an Intel^{®} Core™2 Quad CPU Q8400 @ 2.66 GHz with 8 GB of RAM and a 64-bit OS.
4.1 Convergence analysis
In the convergence analysis experiment, FPSMA was applied to the Ackley benchmark function, which is widely used as a multivariate test function for optimization problems [49]. It is described mathematically by Eq. (18) and plotted in Fig. 3.
The Ackley function is characterized by its nearly flat outer region and a global optimum at the center (\(X^*=0\)) with many local optima close by. The recommended parameter values for the Ackley benchmark are \(a=20,b=0.2,c=2\pi \). The parameters used in the FPSMA implementation are: D=100, \(P_{init}\)=200, T=1000, z=0.03, \(\Lambda \)=100, \(\epsilon \)=0.1, \(\gamma \)=1. These parameters are used throughout the remainder of this discussion.
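For reference, the Ackley function of Eq. (18) with the recommended constants can be written as:

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """Ackley benchmark (Eq. (18)); the global optimum is f(0) = 0."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return (-a * np.exp(-b * np.sqrt((x ** 2).sum() / d))
            - np.exp(np.cos(c * x).sum() / d) + a + np.e)
```

The nearly flat outer region comes from the first exponential term, while the cosine term produces the many local optima surrounding \(X^*=0\).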
Figure 4a shows the population size \(P_t\) and relative population diversity \(RD_t\) during FPSMA execution, whereas Fig. 4b shows the best fitness evolution. In the first tens of epochs, \(RD_t\) decreases dramatically from 1 to below 0.5, with the best fitness improving to around \(10^0\), indicating convergence toward a local optimum. However, to continue the exploration process, the population size remains fixed at its initial value of \(P_{init}=200\). During the first 500 epochs, the population size changes slightly in response to \(RD_t\) fluctuations, maintaining the balance between exploration and exploitation. After 500 iterations, \(R_{EP}\) drops considerably, as seen in Fig. 4a, leading to a sharp decrease in population size down to \(P_{min}=100\). Keeping this minimum population size is sufficient to reach the global minimum at around the 540th iteration.
Figure 5 shows the population distribution in the first two dimensions only during the optimization process. At the beginning, when \(t=0\), the population is randomly distributed between the lower and upper bounds; however, as execution continues, the population gradually concentrates around multiple centers. As a result, the population size can be decreased gradually to save computation without affecting exploitation characteristics. At \(t=750\), the algorithm converges toward two centers, indicating that the minimum population size is sufficient for exploitation.
4.2 Benchmarks comparisons
FPSMA was also evaluated using 13 standard benchmark functions (see Table 5 in the appendix) commonly used to evaluate optimization algorithms, and an additional 30 benchmarks from CEC2014 [28]. The results are the average values over 30 independent runs of the algorithms on each benchmark. Tables 2 and 3 show the best fitness on the standard and CEC2014 benchmarks, respectively. In the tables, column \(f_{min_{x}}\) specifies the fitness value followed by the standard deviation (\(\sigma \)). Column \(\Upsilon \) indicates whether the fitness of SMA is better than, equal to, or worse than that of FPSMA, using the symbols "−", "\(=\)", or "\(+\)," respectively. Moreover, a comparison between SMA and FPSMA performance in terms of function evaluations is given in column \(\delta (\%)\), which represents the percentage decrease in the number of function evaluations calculated using:
Tables 2 and 3 compare the attained best fitness for the SMA and FPSMA algorithms on the standard and CEC2014 benchmarks, respectively. The tables show that FPSMA was able to reduce the number of function evaluations on average by approximately 28% for the standard benchmarks and 24% for the CEC2014 ones. In 8 out of 13 standard benchmarks and more than half of the CEC2014 benchmarks, FPSMA was able to achieve equivalent or better fitness than the original SMA. FPSMA achieved the same fitness in 5 CEC2014 benchmarks while having worse fitness in 10 benchmarks, with an average fitness loss of about 9%, which is much less than the 26% fitness improvement found in the other benchmarks. In fact, if \(f_{8}\) and \(f_{19}\) are excluded from the fitness results, the overall fitness loss becomes negligible (i.e., 0.8%) for the benchmarks with worse fitness.
To relate the run time enhancement to the reduction in the number of function evaluations, Table 4 shows run-time characteristics for both the SMA and FPSMA algorithms on the standard benchmarks, where the values are given in seconds. Column \(\delta \) gives the percentage reduction in run time when using FPSMA compared to SMA. On average, FPSMA provides a 25% reduction in run time. The same characteristics are observed for the CEC2014 benchmarks (i.e., a 22% reduction in run time) but are not shown to keep the discussion concise.
Surprisingly, FPSMA was able to save 64.3% of the computational cost associated with \(f_{26}\) compared to the original SMA. By plotting the population size (\(P_t\)) and relative diversity (\(RD_t\)) in Fig. 6a and the best fitness in Fig. 6b, it can be observed that the relative diversity decreased dramatically after about 100 epochs, indicating algorithm stagnation. Population size fluctuation can no longer help the algorithm escape its stagnation, and after 250 epochs, the relative diversity \(RD_t\) drops to zero and the population size is set to its minimum value. This results in a considerable reduction in the number of function evaluations, as depicted in Table 3. Since the algorithm was able to identify this condition, it is possible to use it to trigger early algorithm termination, resulting in a further reduction in function evaluations. Note that in these tables, we only presented the saving in the number of function evaluations without changing the termination condition (i.e., the maximum number of iterations). A possible further enhancement to the proposed algorithm is to detect this case and terminate the algorithm to boost the reduction in overall execution time.
In summary, the following observations can be made from the experimental results:

Population size adaptation based on population diversity played an important role in both runtime efficiency and optimization effectiveness of FPSMA as compared with SMA.

Experimental results on 13 standard and 30 CEC2014 benchmark functions in Tables 2 and 3 have revealed that FPSMA can achieve 20–30% savings in function evaluations on average while maintaining good solution quality when compared to SMA.

As depicted in Table 4, FPSMA showed an average 25% reduction in run time on the standard benchmarks when compared to SMA, demonstrating its balanced exploration and exploitation capabilities.
5 Conclusion
This paper proposed an acceleration strategy for the SMA algorithm, named FPSMA, that adaptively changes the population size during algorithm execution. A cluster-based population diversity measure from K-means is used as an indicator to change the population size and balance the exploration and exploitation phases of the algorithm. The population diversity thresholds that trigger population size changes are dynamically fine-tuned to appropriate levels during the different stages of the algorithm. Once population diversity crosses the threshold, the population size is modified following a sine-wave pattern. Simulation results on 13 standard and 30 IEEE CEC2014 benchmark functions revealed that the FPSMA algorithm can achieve a 20–30% reduction in computation cost on average while maintaining good solution quality. The performance gain can be attributed to the flexibility offered by FPSMA to switch between exploration and exploitation phases and the adaptable population size. The proposed algorithm can be found on GitHub at https://github.com/e6la3banoh/FPSMA. As future work, we would like to study the parallelization of FPSMA and its extension to multi-objective SMA [21, 24, 44].
References
Abdel-Basset M, Chang V, Mohamed R (2020) HSMA\_WOA: a hybrid novel slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Appl Soft Comput 95:106642. https://doi.org/10.1016/j.asoc.2020.106642
Abdel-Basset M, Mohamed R, Chakrabortty RK, Ryan MJ, Mirjalili S (2021) An efficient binary slime mould algorithm integrated with a novel attacking-feeding strategy for feature selection. Comput Ind Eng 153:107078
Al-Faisal HR, Ahmad I, Salman AA, Alfailakawi MG (2021) Adaptation of population size in sine cosine algorithm. IEEE Access 9:25258–25277. https://doi.org/10.1109/ACCESS.2021.3056520
Anter AM, Oliva D, Thakare A, Zhang Z (2021) AFCM-LSMA: new intelligent model based on Lévy slime mould algorithm and adaptive fuzzy c-means for identification of COVID-19 infection from chest X-ray images. Adv Eng Inform 49:101317
Arabas J, Michalewicz Z, Mulawka J (1994) GAVaPS - a genetic algorithm with varying population size. In: Proceedings of the 1st IEEE conference on evolutionary computation. IEEE world congress on computational intelligence. IEEE, pp 73–78
Brest J, Greiner S, Bošković B, Mernik M, Žumer V (2006) Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Trans Evol Comput 10:646–657
Chen D, Zhao C (2009) Particle swarm optimization with adaptive population size and its application. Appl Soft Comput 9(1):39–48
Chen Z, Liu W (2020) An efficient parameter adaptive support vector regression using k-means clustering and chaotic slime mould algorithm. IEEE Access 8:156851–156862
Cui L, Li G, Zhu Z, Lin Q, Wen Z, Lu N, Wong KC, Chen J (2017) A novel artificial bee colony algorithm with an adaptive population size for numerical function optimization. Inf Sci 414:53–67
Cui Z, Hou X, Zhou H, Lian W, Wu J (2020) Modified slime mould algorithm via Lévy flight. In: 2020 13th international congress on image and signal processing, biomedical engineering and informatics (CISP-BMEI). IEEE, pp 1109–1113
İzci D, Ekinci S (2021) Comparative performance analysis of slime mould algorithm for efficient design of proportional–integral–derivative controller. Electrica 21:151–159
Dhal KG, Das A, Sahoo S, Das R, Das S (2019) Measuring the curse of population size over swarm intelligence based algorithms. Evol Syst 12:1–48
Dokeroglu T, Sevinc E, Kucukyilmaz T, Cosar A (2019) A survey on new generation metaheuristic algorithms. Comput Ind Eng 137:106040
El-Fergany AA (2021) Parameters identification of PV model using improved slime mould optimizer and Lambert W-function. Energy Rep 7:875–887
Gao Z, Zhao J, Li S (2020) The hybridized slime mould and particle swarm optimization algorithms. In: 2020 IEEE 3rd international conference on automation, electronics and electrical engineering (AUTEEE). IEEE, pp 304–308
Gao ZM, Zhao J, Li SR (2020) The improved slime mould algorithm with cosine controlling parameters. J Phys: Conf Ser 1631:012083. https://doi.org/10.1088/17426596/1631/1/012083
Goldberg DE, Holland JH (1988) Genetic algorithms and machine learning. Mach Learn 3:95–99
Hassan MH, Kamel S, Abualigah L, Eid A (2021) Development and application of slime mould algorithm for optimal economic emission dispatch. Expert Syst Appl 182:115205
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872. https://doi.org/10.1016/j.future.2019.02.028
Houssein EH, Mahdy MA, Blondin MJ, Shebl D, Mohamed WM (2021) Hybrid slime mould algorithm with adaptive guided differential evolution algorithm for combinatorial and global optimization problems. Expert Syst Appl 174:114689
Houssein EH, Mahdy MA, Shebl D, Manzoor A, Sarkar R, Mohamed WM (2022) An efficient slime mould algorithm for solving multi-objective optimization problems. Expert Syst Appl 187:115870
Hussain K, Salleh MNM, Cheng S, Shi Y (2019) Metaheuristic research: a comprehensive survey. Artif Intell Rev 52(4):2191–2233
Ibrahim RA, Yousri D, Abd Elaziz M, Alshathri S, Attiya I (2021) Fractional calculusbased slime mould algorithm for feature selection using rough set. IEEE Access 9:131625–131636
Khunkitti S, Siritaratiwat A, Premrudeepreechacharn S (2021) Multi-objective optimal power flow problems based on slime mould algorithm. Sustainability 13(13):7448
Koumousis VK, Katsaras CP (2006) A sawtooth genetic algorithm combining the effects of variable population size and reinitialization to enhance performance. IEEE Trans Evol Comput 10(1):19–28
Li L, Pan TS, Sun XX, Chu SC, Pan JS (2021) A novel binary slime mould algorithm with AU strategy for cognitive radio spectrum allocation. Int J Comput Intell Syst 14(1):1–18
Li S, Chen H, Wang M, Heidari AA, Mirjalili S (2020) Slime mould algorithm: a new method for stochastic optimization. Future Gener Comput Syst. https://doi.org/10.1016/j.future.2020.03.055
Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization, vol 635. Technical report, Zhengzhou, China
Liu L, Zhao D, Yu F, Heidari AA, Ru J, Chen H, Mafarja M, Turabieh H, Pan Z (2021) Performance optimization of differential evolution with slime mould algorithm for multilevel breast cancer image segmentation. Comput Biol Med 138:104910
Liu M, Li Y, Huo Q, Li A, Zhu M, Qu N, Chen L, Xia M (2020) A twoway parallel slime mold algorithm by flow and distance for the travelling salesman problem. Appl Sci 10(18):6180
Liu Y, Heidari AA, Ye X, Liang G, Chen H, He C (2021) Boosting slime mould algorithm for parameter identification of photovoltaic models. Energy 234:121164
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67. https://www.sciencedirect.com/science/article/pii/S0965997816300163
Mostafa M, Rezk H, Aly M, Ahmed EM (2020) A new strategy based on slime mould algorithm to extract the optimal model parameters of solar PV panel. Sustain Energy Technol Assess 42:100849
Naik MK, Panda R, Abraham A (2020) Normalized square difference based multilevel thresholding technique for multispectral images using leader slime mould algorithm. J King Saud Univ Comput Inf Sci. https://doi.org/10.1016/j.jksuci.2020.10.030
Naik MK, Panda R, Abraham A (2021) Adaptive opposition slime mould algorithm. Soft Comput 25(22):14297–14313
Naik MK, Panda R, Abraham A (2021) An entropy minimization based multilevel colour thresholding technique for analysis of breast thermograms using equilibrium slime mould algorithm. Appl Soft Comput 113:107955
Nguyen T (2020) A framework of optimization functions using numpy (opfunu) for optimization problems. https://doi.org/10.5281/zenodo.3620960
Nguyen TT, Wang HJ, Dao TK, Pan JS, Liu JH, Weng S (2020) An improved slime mold algorithm and its application for optimal operation of cascade hydropower stations. IEEE Access 8:226754–226772
Poláková R, Tvrdík J, Bujok P (2019) Differential evolution with adaptive mechanism of population size according to current population diversity. Swarm Evolut Comput 50:100519
Piotrowski A (2017) Review of differential evolution population size. Swarm Evol Comput 32:1–24
Piotrowski AP, Napiorkowski JJ, Piotrowska AE (2020) Population size in particle swarm optimization. Swarm Evol Comput 58:100718
Poláková R, Tvrdík J, Bujok P (2014) Controlled restart in differential evolution applied to CEC2014 benchmark functions. In: IEEE congress on evolutionary computation, pp 2230–2236
Precup RE, David RC, Roman RC, Petriu EM, Szedlak-Stinean AI (2021) Slime mould algorithm-based tuning of cost-effective fuzzy controllers for servo systems. Int J Comput Intell Syst 14(1):1042–1052
Premkumar M, Jangir P, Sowmya R, Alhelou HH, Heidari AA, Chen H (2021) MOSMA: multi-objective slime mould algorithm based on elitist non-dominated sorting. IEEE Access 9:3229–3248
Rizk-Allah RM, Hassanien AE, Song D (2021) Chaos-opposition-enhanced slime mould algorithm for minimizing the cost of energy for the wind turbines on high-altitude sites. ISA Trans 121:191–205
Shi B, Ye H, Zheng J, Zhu Y, Heidari AA, Zheng L, Chen H, Wang L, Wu P (2021) Early recognition and discrimination of COVID-19 severity using slime mould support vector machine for medical decision-making. IEEE Access 9:121996–122015
Storn R, Price K (1997) Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359
Sun G, Xu G, Gao R, Liu J (2019) A fluctuant population strategy for differential evolution. Evol Intell. https://doi.org/10.1007/s12065-019-00287-6
Bäck T (1996) Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press on Demand
Teo J (2006) Exploring dynamic self-adaptive populations in differential evolution. Soft Comput 10:673–686
Vashishtha G, Chauhan S, Singh M, Kumar R (2021) Bearing defect identification by swarm decomposition considering permutation entropy measure and opposition-based slime mould algorithm. Measurement 178:109389
Wang GG (2018) Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization problems. Memet Comput. https://doi.org/10.1007/s12293-016-0212-3
Wang GG, Deb S, Cui Z (2015) Monarch butterfly optimization. Neural Comput Appl. https://doi.org/10.1007/s00521-015-1923-y
Wazery YM, Saber E, Houssein EH, Ali AA, Amer E (2021) An efficient slime mould algorithm combined with k-nearest neighbor for medical classification tasks. IEEE Access 9:113666–113682
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Yang M, Cai Z, Li C, Guan J (2013) An improved adaptive differential evolution algorithm with population adaptation. In: GECCO ’13 proceedings of the 15th annual conference on genetic and evolutionary computation, pp 145–152
Yang M, Li C, Cai Z, Guan J (2014) Differential evolution with auto-enhanced population diversity. IEEE Trans Cybern 45:302–315
Yang XS, Deb S (2014) Cuckoo search: recent advances and applications. Neural Comput Appl 24(1):169–174
Yu C, Heidari AA, Xue X, Zhang L, Chen H, Chen W (2021) Boosting quantum rotation gate embedded slime mould algorithm. Expert Syst Appl 181:115082
Zhao J, Gao ZM (2020) The hybridized Harris hawk optimization and slime mould algorithm. In: Journal of physics: conference series, vol 1682. IOP Publishing, p 012029
Zhao S, Wang P, Heidari AA, Chen H, Turabieh H, Mafarja M, Li C (2021) Multilevel threshold image segmentation with diffusion association slime mould algorithm and Renyi’s entropy for chronic obstructive pulmonary disease. Comput Biol Med 134:104427
Zheng R, Jia H, Abualigah L, Liu Q, Wang S (2021) Deep ensemble of slime mold algorithm and arithmetic optimization algorithm for global optimization. Processes 9(10):1774
Zubaidi SL, Abdulkareem IH, Hashim KS, Al-Bugharbee H, Ridha HM, Gharghan SK, Al-Qaim FF, Muradov M, Kot P, Al-Khaddar R (2020) Hybridised artificial neural network model with slime mould algorithm: a novel methodology for prediction of urban stochastic water demand. Water 12(10):2692
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A: Standard benchmarks
See Appendix Table 5.
Cite this article
Alfadhli, J., Jaragh, A., Alfailakawi, M.G. et al. FPSMA: an adaptive, fluctuant population strategy for slime mould algorithm. Neural Comput & Applic 34, 11163–11175 (2022). https://doi.org/10.1007/s00521-022-07034-6