Abstract
Metaheuristic algorithms are optimization algorithms often inspired by nature. In recent years, scholars have proposed various metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony, particle swarm optimization (PSO), crow search algorithm, and whale optimization algorithm (WOA), to solve optimization problems. Among these, PSO is the most commonly used. However, different algorithms have different limitations; for example, PSO is prone to premature convergence and falling into local optima, whereas GA encoding is difficult and uncertain. An algorithm that increases computing power and particle diversity can therefore address the limitations of existing algorithms. This paper proposes such a hybrid algorithm, called whale particle optimization (WPO), that combines the advantages of WOA and PSO to increase particle diversity and escape local optima. The performance of WPO was evaluated on four optimization problems: function evaluation, image clustering, permutation flow shop scheduling, and data clustering, with test data selected from real-life situations. The results demonstrate that the proposed algorithm competes well against existing algorithms.
1 Introduction
An algorithm [1] is a problem-solving strategy: a procedure for performing a task or solving a problem under limited resources that must execute within a finite number of instructions and a finite amount of time, and satisfy five conditions: accurate input description, clarity and validity, correctness, finiteness, and output. Metaheuristic algorithms [2] are optimization algorithms conceptualized decades ago, when software and hardware were immature and it was challenging for a computer to perform numerous calculations simultaneously. Recently, owing to software and hardware improvements, the field of optimization has received increasing attention. Experts and scholars have proposed different metaheuristic algorithms for solving combinatorial optimization problems such as non-deterministic polynomial hard (NP-hard) problems [3]. A metaheuristic algorithm is a type of optimization algorithm that effectively explores the search space to find a solution closest to the optimal solution. Among optimization algorithms, swarm intelligence algorithms [4], which simulate biological social behavior, have been commonly used in real-life applications, such as data clustering [5, 6], image recognition [7, 8], data mining [9, 10], power systems [11, 12], image processing [13, 14], permutation flow shop scheduling problems (PFSPs) [15, 16], bioinformatics [17], DNA recombination [18], and robot control [19, 20]. The genetic algorithm (GA) [21], particle swarm optimization (PSO) [22], ant colony optimization [23], artificial bee colony (ABC) [24], firefly algorithm [25], cuckoo search [26], grey wolf optimizer [27], moth-flame optimization [28], crow search algorithm [29], whale optimization algorithm (WOA) [30], and butterfly optimization [31] are commonly used swarm intelligence algorithms. Among them, PSO is the most widely used because it can solve various problems and is easy to implement.
However, PSO suffers from premature convergence: when a particle reaches a better position in the search space, it stops moving and the other particles approach its position. When a particle is near a local optimum, it may fall into the local optimum, causing a rapid decrease in population diversity. To improve algorithm diversity, scholars have proposed memetic algorithms [32,33,34,35], which simulate cultural evolution using a metaheuristic algorithm with a random global search. The main concept is to combine several metaheuristic and local search algorithms [36] to solve NP-hard optimization problems, reduce computation time, and obtain an approximate optimal solution. Therefore, an algorithm that combines different algorithms can provide a better solution. This study proposes a memetic algorithm, called whale particle optimization (WPO), that combines the advantages of WOA and PSO. WPO was evaluated and compared with existing algorithms using four optimization problems (function evaluation, image clustering, PFSP, and data clustering) based on the following four criteria: the final optimal value for function evaluation, the peak signal-to-noise ratio (PSNR) for image clustering, the average percentage relative deviation for the PFSP, and the accuracy rate for data clustering. The main contributions of this study are as follows.
- First, we developed a new hybrid WPO algorithm that combines the advantages of the WOA and PSO algorithms.
- Second, by optimizing the hybrid operator and increasing algorithm diversity, we exchange particles between the two systems and avoid local optima, thereby improving the global search capability. This hybrid operator is the highlight of this study and its main contribution to the literature.
- Third, we demonstrated the effectiveness and performance of the algorithm through experiments and applied it to practical problems, showing that it has significant advantages in terms of efficiency and performance.
The rest of this paper is organized as follows. Preliminaries and related work are introduced in Sect. 2. Section 3 describes the proposed algorithm and its application to the four optimization problems. Section 4 presents the results and analysis, and finally Sect. 5 concludes the paper and presents future research directions.
2 Background Knowledge
2.1 Metaheuristic Algorithms
Metaheuristic algorithms, most of which are developed by observing creatures and phenomena in nature, aim to determine an acceptable approximate optimal solution within an effective time limit. Recently, owing to hardware improvements, these algorithms have received increasing attention and have demonstrated the following advantages: good problem-solving performance that is not limited by the initial value setting, the ability to jump out of local optimal solutions, and a wide application range. The following sections introduce the metaheuristic algorithms employed in this study. Both algorithms represent candidate solutions as particles that move through the search space. Assuming that we have n particles, the position of particle i at time t is expressed as

\({x}_{i}^{t}=\left({x}_{i,1}^{t},{x}_{i,2}^{t},\dots ,{x}_{i,d}^{t}\right)\) (1)
where \({x}_{i,j}^{t}\) represents the position of particle i in dimension j at time t.
2.1.1 PSO
PSO is an evolutionary computing technique developed by Kennedy and Eberhart [22] in 1995. Inspired by the foraging behavior of bird flocks, the algorithm uses particles to simulate birds, each with its own position and velocity at a given time t. The position of each particle corresponds to a candidate solution, whose fitness value each particle uses to judge the quality of its solution. To determine its movement at each step, a particle uses its current position and direction, its own best experience (\({P}_{best}\)), and the best position found by the group (\({P}_{Gbest}\)). In 1998, Shi and Eberhart [37] added an inertia weight to the PSO update to increase the search diversity of the particles; this is now the standard PSO formulation. Equations (2) and (3) show the update methods for the velocity and position of particle i at time t + 1, respectively:

\({v}_{i,j}^{t+1}=\omega {v}_{i,j}^{t}+{c}_{1}{r}_{1}\left({p}_{i,j}^{t}-{x}_{i,j}^{t}\right)+{c}_{2}{r}_{2}\left({p}_{Gbest,j}^{t}-{x}_{i,j}^{t}\right)\) (2)

\({x}_{i,j}^{t+1}={x}_{i,j}^{t}+{v}_{i,j}^{t+1}\) (3)
where \({v}_{i,j}^{t}\) and \({x}_{i,j}^{t}\) denote the velocity and position of particle i, respectively, in dimension j at time t; \(\omega\) denotes the inertia weight; \({p}_{i,j}^{t}\) denotes the best position particle i has visited; \({p}_{Gbest,j}^{t}\) denotes the best position any particle has visited; \({c}_{1}\) denotes the particle experience weight; \({c}_{2}\) denotes the group experience weight; and \({r}_{1}\) and \({r}_{2}\) are random numbers between 0 and 1.
The PSO algorithm operates as follows:
- Step 1: Initialize the particle position and velocity.
- Step 2: Calculate the fitness value of each particle.
- Step 3: Find the individual optimal solution for each particle and the optimal solution for all particles.
- Step 4: Update the velocity and position of each particle at the next time step.
- Step 5: Check if the end condition is satisfied. If not, return to Step 2 and repeat.
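As a rough illustration (not the authors' implementation), the update steps above can be sketched in Python. The sphere objective, bounds, and parameter values (w = 0.7, c1 = c2 = 1.5, seed) are illustrative assumptions:

```python
import random

def pso(objective, dim, n_particles=10, iters=200, w=0.7, c1=1.5, c2=1.5,
        x_min=-5.0, x_max=5.0, seed=1):
    """Minimal PSO sketch following the velocity/position updates of Eqs. (2)-(3)."""
    rng = random.Random(seed)
    # Step 1: initialize positions and velocities
    x = [[rng.uniform(x_min, x_max) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [row[:] for row in x]                       # personal bests
    pbest_f = [objective(p) for p in pbest]             # Step 2: fitness values
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]            # Step 3: global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):                        # Step 4: Eqs. (2) and (3)
                r1, r2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                x[i][j] = min(max(x[i][j] + v[i][j], x_min), x_max)
            f = objective(x[i])
            if f < pbest_f[i]:                          # update personal/global bests
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f                               # Step 5: stop after `iters`

sphere = lambda p: sum(t * t for t in p)
best, best_f = pso(sphere, dim=3)
```

On the sphere function, this sketch converges quickly toward the origin, which is the behavior the fast-convergence claim refers to.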
2.1.2 WOA
Mirjalili and Lewis [30] proposed WOA in 2016. It is inspired by the hunting behavior of humpback whales and models three behaviors: encircling prey, bubble-net attacking, and searching for prey.
The first behavior, encircling prey, involves humpback whales identifying the position of their prey and surrounding it. When the best whale is found, the other whales attempt to approach its position. This behavior is expressed as follows:

\(D=\left|C\cdot {\omega }^{t}-{x}_{i}^{t}\right|\)

\({x}_{i}^{t+1}={\omega }^{t}-A\cdot D\)

where \({\omega }^{t}\) represents the best whale position at time t, and D denotes the distance between whale i and the best whale \(\upomega\). A and C are coefficients that are updated at every iteration and calculated as

\(A=2a\cdot {r}_{1}-a\)

\(C=2\cdot {r}_{2}\)

where a is a number that decreases linearly from 2 to 0 with the number of iterations, and \({r}_{1}\) and \({r}_{2}\) are random numbers between 0 and 1.
The second search method is bubble-net attacking. To mimic this behavior of humpback whales, the authors proposed two mathematical models. The first is the shrinking encircling mechanism, which is similar to encircling prey, except that A has a random value between −a and a, where a decreases linearly from 2 to 0 with the number of iterations. The second is the spiral updating position model, which imitates the spiral motion of humpback whales: they create a spiral-shaped bubble around their prey, follow the bubble, approach the ocean surface, and capture their prey. This spiral motion is expressed as

\({x}_{i}^{t+1}={D}^{\prime}\cdot {e}^{bl}\cdot \mathrm{cos}\left(2\pi l\right)+{\omega }^{t}\), with \({D}^{\prime}=\left|{\omega }^{t}-{x}_{i}^{t}\right|\)

where b is the shape constant that defines the logarithmic spiral, and l is a random number between −1 and 1. Humpback whales use these two methods to create spiral bubbles for surrounding their prey, and each method is selected with equal probability:

\({x}_{i}^{t+1}=\left\{\begin{array}{ll}\text{shrinking encircling}, & {rand}_{p}<0.5\\ \text{spiral updating}, & {rand}_{p}\ge 0.5\end{array}\right.\)

where \({rand}_{p}\) is a random number between 0 and 1.
Finally, humpback whales also search for prey based on the locations of other whales. Random searching is a common exploration mechanism in metaheuristic algorithms. The mathematical model is similar to that for encircling prey, except that the whale moves based on the position of a randomly chosen whale. Whether random searching is used depends on the value of A: when |A| > 1, random searching is employed as follows:

\(D=\left|C\cdot {x}_{rand}^{t}-{x}_{i}^{t}\right|\)

\({x}_{i}^{t+1}={x}_{rand}^{t}-A\cdot D\)

where \({x}_{rand}^{t}\) is the position of a random whale at time t.
The WOA operates as follows:
- Step 1: Initialize the whale positions and optimal point.
- Step 2: Calculate the fitness value of each whale.
- Step 3: Update the position of each whale at the next time step.
- Step 4: Update the optimal point.
- Step 5: Check if the end condition is met. If not, return to Step 2 and repeat.
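A minimal Python sketch of these steps, assuming the standard WOA update rules (encircling, spiral bubble-net, random search) and an illustrative sphere objective; it is not the authors' implementation, and the bounds, seed, and spiral constant b are assumptions:

```python
import math, random

def woa(objective, dim, n_whales=10, iters=200, x_min=-5.0, x_max=5.0, b=1.0, seed=1):
    """Minimal WOA sketch: shrinking encircling, spiral updating, random search."""
    rng = random.Random(seed)
    # Steps 1-2: initialize whale positions and the optimal point
    x = [[rng.uniform(x_min, x_max) for _ in range(dim)] for _ in range(n_whales)]
    best = min(x, key=objective)[:]
    best_f = objective(best)
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)            # a decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a     # coefficient A
            C = 2.0 * rng.random()             # coefficient C
            if rng.random() < 0.5:
                # |A| < 1: encircle the best whale; |A| >= 1: search near a random whale
                ref = best if abs(A) < 1 else x[rng.randrange(n_whales)]
                x[i] = [ref[j] - A * abs(C * ref[j] - x[i][j]) for j in range(dim)]
            else:                              # spiral updating position
                l = rng.uniform(-1.0, 1.0)
                x[i] = [abs(best[j] - x[i][j]) * math.exp(b * l)
                        * math.cos(2 * math.pi * l) + best[j] for j in range(dim)]
            x[i] = [min(max(xi, x_min), x_max) for xi in x[i]]
            f = objective(x[i])
            if f < best_f:                     # Step 4: update the optimal point
                best, best_f = x[i][:], f
    return best, best_f

best, best_f = woa(lambda p: sum(t * t for t in p), dim=3)
```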
2.2 Functional Optimization
Function evaluation is a branch of applied mathematics that primarily studies the optimization of a specific function in a specific situation. The problem is defined as follows: given a function f: A \(\to {\mathbb{R}}\), find an element \({x}^{0}\in A\) such that \(\Vert \mathrm{f}({x}^{0})\Vert \le \Vert \mathrm{f}(x)\Vert\) for all x in A, where A is a subset of the Euclidean space \({\mathbb{R}}^{n}\), usually specified by constraint equations or inequalities that its members must satisfy. The elements of A are called feasible solutions, and f is called the objective function. The goal is to find the feasible solution that, when substituted into the objective function, is closest to the optimal solution.
2.3 Image Clustering
Image clustering is an important component of computer vision [38] and a proven NP-hard problem. It is the process of dividing an image into multiple sub-regions according to each pixel (Fig. 1). Image clustering simplifies and changes the image representation to facilitate its understanding and analysis. It is often used to accentuate objects and boundaries in images. Particularly, each pixel in an image is labeled such that pixels with the same label share some visual properties. Examples of well-known image clustering algorithms are Otsu [39] and k-means [34]. Its practical applications include medical image processing [40], tumor localization, and face recognition.
2.3.1 PSNR
After image compression, the output image usually differs from the original. To measure image quality after processing, the PSNR value is typically used to determine whether a processing program is satisfactory. The PSNR is calculated as follows:

\(PSNR=10\cdot {\mathrm{log}}_{10}\left(\frac{{255}^{2}}{MSE}\right)\), \(MSE=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}{\left(I\left(i,j\right)-K\left(i,j\right)\right)}^{2}\)

where I and K are the original and processed images of size m × n, respectively.
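A small Python sketch of the PSNR computation for 8-bit grayscale images; the two 2 × 2 example images are hypothetical:

```python
import math

def psnr(original, processed, peak=255.0):
    """PSNR in dB between two equally sized grayscale images (lists of rows)."""
    m, n = len(original), len(original[0])
    mse = sum((original[i][j] - processed[i][j]) ** 2
              for i in range(m) for j in range(n)) / (m * n)
    # Identical images have zero MSE, so the PSNR is infinite
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

img = [[10, 20], [30, 40]]
noisy = [[11, 19], [31, 39]]   # every pixel off by one gray level -> MSE = 1
```

With MSE = 1, the result is 10 · log10(255²) ≈ 48.13 dB; higher values indicate lower distortion.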
2.4 PFSPs
A PFSP is a scheduling problem wherein each job must be properly sequenced and processed on a set of machines. Its goal is to discover how to maintain the fluency of job processing with the least amount of idle and waiting time. Assume that there are n jobs and m machines, and each job must be processed by all m machines. Time \({p}_{xy}\) is defined as the execution time of job x on machine y. The order of the jobs on the m machines is \(\pi =\left\{{\pi }_{1},{\pi }_{2},\dots ,{\pi }_{n}\right\}\), where n is the total number of jobs. The time at which machine y completes job \({\pi }_{x}\) is defined as \(\mathrm{C}({\pi }_{x},\mathrm{y})\) and is calculated recursively as follows:

\(C\left({\pi }_{1},1\right)={p}_{{\pi }_{1},1}\)

\(C\left({\pi }_{x},1\right)=C\left({\pi }_{x-1},1\right)+{p}_{{\pi }_{x},1},\quad x=2,\dots ,n\)

\(C\left({\pi }_{1},y\right)=C\left({\pi }_{1},y-1\right)+{p}_{{\pi }_{1},y},\quad y=2,\dots ,m\)

\(C\left({\pi }_{x},y\right)=\mathrm{max}\left(C\left({\pi }_{x-1},y\right),C\left({\pi }_{x},y-1\right)\right)+{p}_{{\pi }_{x},y}\)

The makespan is the total time required to complete the final job on the final machine:

\(makespan=C\left({\pi }_{n},m\right)\)
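The completion-time recursion can be sketched in Python as follows; the 3 × 2 processing-time matrix is a made-up example:

```python
def makespan(p, order):
    """Completion-time recursion C(pi_x, y) for a permutation flow shop.
    p[job][machine] is the processing time; `order` is the job permutation."""
    m = len(p[0])
    C = [[0] * m for _ in order]
    for x, job in enumerate(order):
        for y in range(m):
            prev_job = C[x - 1][y] if x > 0 else 0      # same machine, previous job
            prev_machine = C[x][y - 1] if y > 0 else 0  # same job, previous machine
            C[x][y] = max(prev_job, prev_machine) + p[job][y]
    return C[-1][-1]                                    # makespan = C(pi_n, m)

times = [[3, 2], [1, 4], [2, 1]]   # 3 jobs on 2 machines
```

For this instance, ordering the jobs as [0, 1, 2] gives a makespan of 10, while [1, 0, 2] gives 8, illustrating why the sequence matters.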
2.4.1 NEH Algorithm
The NEH algorithm [41] is a scheduling algorithm proposed by Nawaz, Enscore, and Ham, wherein the task with the highest total time consumption across all machines has the highest priority. It operates as follows:
- Step 1: Arrange the n jobs in descending order of their total processing time on all machines.
- Step 2: Schedule the first two jobs in the order that minimizes the makespan.
- Step 3: Insert each subsequent job into the position in the existing schedule that minimizes the makespan.
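A compact Python sketch of the NEH steps above, with its own makespan helper; the processing times are a toy example rather than data from the paper:

```python
def makespan(p, order):
    """Makespan of the (partial or full) job sequence `order`."""
    m = len(p[0])
    C = [[0] * m for _ in order]
    for x, job in enumerate(order):
        for y in range(m):
            up = C[x - 1][y] if x > 0 else 0
            left = C[x][y - 1] if y > 0 else 0
            C[x][y] = max(up, left) + p[job][y]
    return C[-1][-1]

def neh(p):
    """NEH sketch: order jobs by decreasing total processing time, then insert
    each job at the position that minimizes the partial makespan."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # Step 1
    seq = [jobs[0]]
    for job in jobs[1:]:                                      # Steps 2-3
        candidates = [seq[:pos] + [job] + seq[pos:] for pos in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(p, s))
    return seq

times = [[3, 2], [1, 4], [2, 1]]
schedule = neh(times)
```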
2.4.2 Local Search for Scheduling Problems
This study adopts the variable neighborhood search (VNS) algorithm [42] for local search, which uses two moves: pair-swap and insertion. As shown in Fig. 2, the search begins by randomly selecting two positions in the schedule. Pair-swap exchanges the jobs at the two selected positions, whereas insertion removes the job at the latter position and inserts it before the former.
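The two neighborhood moves can be sketched in Python as follows (the example schedule and position indices are illustrative):

```python
def pair_swap(seq, i, j):
    """Exchange the jobs at positions i and j."""
    s = seq[:]
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, i, j):
    """Remove the job at the later of the two positions and insert it
    before the earlier one."""
    i, j = min(i, j), max(i, j)
    s = seq[:]
    job = s.pop(j)
    s.insert(i, job)
    return s
```

For example, on the schedule [0, 1, 2, 3] with positions 1 and 3, pair-swap yields [0, 3, 2, 1] and insertion yields [0, 3, 1, 2].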
2.4.3 Standard Deviation
The quality of permutation flow shop scheduling is evaluated using the makespan value. In the final table presentation, this study uses the average percentage relative deviation (\({\Delta }_{avg}\)), which is calculated as

\({\Delta }_{avg}=\frac{1}{R}\sum_{i=1}^{R}\frac{{H}_{i}-{U}_{i}}{{U}_{i}}\times 100\)
where R denotes the total number of problem executions, \({H}_{i}\) denotes the solution calculated by the algorithm, and \({U}_{i}\) denotes the currently known best solution.
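A one-function Python sketch of this deviation measure, with made-up makespans:

```python
def avg_percent_relative_deviation(H, U):
    """Average percentage relative deviation over R runs: H_i is the makespan
    found by the algorithm, U_i the currently known best solution."""
    R = len(H)
    return sum((h - u) / u * 100.0 for h, u in zip(H, U)) / R

# Two runs: 5% and 10% above the best-known solutions -> average 7.5%
delta = avg_percent_relative_deviation([105, 110], [100, 100])
```

Values closer to zero indicate makespans closer to the best-known solutions.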
2.5 Data Clustering
Data clustering is a type of unsupervised learning for identifying the cluster to which the data belong, where the data within a cluster are similar and the data in different clusters are dissimilar. Suppose that n pieces of data exist; they can be expressed as follows:

\(O=\left\{{O}_{1},{O}_{2},\dots ,{O}_{n}\right\}\)
Each data piece has d dimensions, which implies that each data piece has d different attributes and features. The ith data piece can be expressed as

\({O}_{i}=\left({O}_{i,1},{O}_{i,2},\dots ,{O}_{i,d}\right)\)
where \({\mathrm{O}}_{i,j}\) represents the jth dimension of the ith data.
In clustering, each data piece is assigned to one of the clusters, and the set of clusters can be expressed as

\(C=\left\{{C}_{1},{C}_{2},\dots ,{C}_{k}\right\}\)

where k is the total number of clusters, and \({C}_{j}\) is the jth cluster. Clustering must satisfy several rules: no cluster is an empty set, any two clusters have no intersection, and the data in all clusters together equal the total data. These rules are expressed as follows:

\({C}_{j}\ne \varnothing ,\quad j=1,\dots ,k\)

\({C}_{i}\cap {C}_{j}=\varnothing ,\quad i\ne j\)

\(\bigcup_{j=1}^{k}{C}_{j}=O\)
The sum of the Euclidean distances from each data piece to its cluster center is calculated; the smaller the value, the better the clustering result. It is expressed as

\(SSE=\sum_{i=1}^{n}\sum_{j=1}^{k}{\omega }_{ij}\Vert {O}_{i}-{Z}_{j}\Vert\)

where n is the total number of data pieces; k is the total number of clusters; \({\omega }_{ij}\) is a value between 0 and 1 denoting the membership weight of data piece \({O}_{i}\) with respect to center \({Z}_{j}\); and \(\Vert {O}_{i}-{Z}_{j}\Vert\) is the Euclidean distance from data piece \({O}_{i}\) to center point \({Z}_{j}\).
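Assuming hard cluster assignments (each \({\omega }_{ij}\) is 1 for the nearest center and 0 otherwise, one common choice), the objective can be sketched as:

```python
import math

def clustering_sse(data, centers):
    """Sum over all data pieces of the Euclidean distance to the nearest
    center, i.e., hard assignment: w_ij = 1 for the closest center, else 0."""
    return sum(min(math.dist(o, z) for z in centers) for o in data)

# Three 2-D points, two centers: only (1, 0) is off its nearest center, by 1
value = clustering_sse([(0, 0), (1, 0), (10, 0)], [(0, 0), (10, 0)])
```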
2.5.1 Accuracy Rate
To evaluate data clustering, the field representing the true cluster is first removed from the dataset. If the cluster assigned by the algorithm matches that of the original dataset, the clustering of that data piece is correct. The total number of correctly clustered data pieces is denoted as fc, and the total amount of data is denoted as n. The accuracy is calculated as follows:

\(accuracy=\frac{fc}{n}\times 100\%\)
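A minimal Python sketch of the accuracy computation, assuming the predicted cluster labels have already been aligned with the original labels:

```python
def accuracy_rate(predicted, actual):
    """Accuracy = fc / n: the fraction of data pieces whose assigned cluster
    matches the original label (label alignment is assumed)."""
    fc = sum(1 for p, a in zip(predicted, actual) if p == a)
    return fc / len(actual)

# Three of four assignments match the original labels
rate = accuracy_rate([0, 0, 1, 1], [0, 1, 1, 1])
```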
3 Methodology
3.1 Particle Initialization
This study uses random generation as the initialization method to randomly place particles in the search space. To allow WPO to operate on the particles, all particles are initially encoded as

\({x}_{i,j}={x}_{min}+rand\times \left({x}_{max}-{x}_{min}\right)\)

where \({x}_{min}\) and \({x}_{max}\) are the lower and upper limits of the particles, and rand is a random number between 0 and 1.
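This random initialization can be sketched in Python; the swarm size, dimension, and bounds below are illustrative:

```python
import random

def init_particles(n, dim, x_min, x_max, seed=None):
    """Random initialization: x_ij = x_min + rand * (x_max - x_min)."""
    rng = random.Random(seed)
    return [[x_min + rng.random() * (x_max - x_min) for _ in range(dim)]
            for _ in range(n)]

swarm = init_particles(5, 3, -10.0, 10.0, seed=42)
```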
3.2 Simultaneous Execution
WPO combines WOA and PSO: WOA has good search capability, whereas PSO has fast convergence. After initialization, the particles are executed simultaneously in the two systems so that WPO obtains the characteristics of both WOA and PSO. The fitness value is recalculated after each iteration of particle movement before proceeding to the next step.
3.3 Particle Update
In each iteration, the particles move to new positions. However, the fitness value obtained after moving is sometimes worse than that before moving. In this case, WPO reverts the particle to its position before the move, as shown in Eq. (30); that is, if the new fitness value is worse, the particle remains at \({x}_{i,j}^{t}\) in the next iteration:

\({x}_{i,j}^{t+1}=\left\{\begin{array}{ll}{x}_{i,j}^{t+1}, & f\left({x}_{i,j}^{t+1}\right)<f\left({x}_{i,j}^{t}\right)\\ {x}_{i,j}^{t}, & \text{otherwise}\end{array}\right.\) (30)
where \(f\left({x}_{i,j}^{t}\right)\) and \(f({x}_{i,j}^{t+1})\) denote the fitness values of particle \({x}_{i,j}\) at times t and t + 1, respectively, which can also be considered as the fitness values calculated before and after the particle moves, respectively.
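A minimal sketch of this greedy acceptance rule, assuming a minimization objective (the sphere function and positions below are illustrative):

```python
def accept_move(x_old, x_new, fitness):
    """Keep the new position only if it improves the fitness (minimization);
    otherwise revert to the pre-move position."""
    return x_new if fitness(x_new) < fitness(x_old) else x_old

sphere = lambda p: sum(t * t for t in p)
improved = accept_move([1.0, 1.0], [0.5, 0.5], sphere)   # better -> keep the move
reverted = accept_move([1.0, 1.0], [2.0, 2.0], sphere)   # worse -> revert
```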
3.4 Hybrid Operator
When the algorithm reaches a specific iteration, the hybrid operator [43] is executed. The hybrid operator is based on the roulette-wheel method of the GA: in WOA and PSO, some particles are selected based on their fitness values and swapped. This prevents rapid convergence to a local optimal solution and maintains the diversity of the two systems. In the example shown in Fig. 3, six particles exist in each of WOA and PSO. When the hybrid operator executes, the selection probabilities of the particles are calculated from their fitness values. WOA draws a random number of 0.72, selecting the fifth particle A5, whereas PSO draws a random number of 0.48, selecting the third particle P3. Therefore, after the hybrid operator executes, particles A5 and P3 swap positions.
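A Python sketch of the hybrid operator under one explicit assumption: selection weights of 1/(1 + f) so that fitter (smaller, nonnegative f) particles are more likely to be chosen; the paper does not specify the exact probability mapping:

```python
import random

def roulette_pick(fitnesses, rng):
    """Roulette-wheel selection. Weights 1/(1 + f) favor smaller (better)
    fitness values -- an assumed mapping for nonnegative minimization fitness."""
    weights = [1.0 / (1.0 + f) for f in fitnesses]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def hybrid_swap(woa_swarm, pso_swarm, fitness, rng):
    """Select one particle from each swarm by roulette wheel and swap them."""
    a = roulette_pick([fitness(p) for p in woa_swarm], rng)
    b = roulette_pick([fitness(p) for p in pso_swarm], rng)
    woa_swarm[a], pso_swarm[b] = pso_swarm[b], woa_swarm[a]
    return a, b

sphere = lambda p: sum(t * t for t in p)
woa_swarm = [[1.0], [2.0], [3.0]]
pso_swarm = [[4.0], [5.0], [6.0]]
hybrid_swap(woa_swarm, pso_swarm, sphere, random.Random(0))
```

Whichever particles are selected, the swap conserves the combined population, which is the property that keeps both systems diverse without losing solutions.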
3.5 Summary of the Proposed Algorithm
The hybrid WPO algorithm proposed in this study combines WOA and PSO. WOA has excellent search capability: its three search modes effectively expand the search range and avoid entrapment in local optima. PSO has the advantage of fast convergence, quickly converging toward the optimal solution. WPO executes searches in both systems, and after a specific number of iterations, the hybrid operator swaps particles between the two systems to increase particle diversity and avoid local optimal solutions. This study thus combines the advantages of the two algorithms, yielding fast convergence, excellent search capability, and additional search power from the hybrid operator.
Figure 4 details the steps of using WPO. First, all particles are randomly generated and the two systems then execute simultaneously. After each movement iteration, the fitness values of the particles are recalculated, and their next positions are determined by comparing their fitness values with those before the movement. The hybrid operator executes at certain iterations to improve particle diversity. Finally, the optimal particle is output as the solution. The pseudocode of WPO is shown in Fig. 5.
4 Experimental Results
4.1 Benchmarks
The function evaluation adopted ten benchmark functions, where d is the dimension of the function, as shown in Table 1. This study used d = 10 for the evaluation, and the optimal value of each of Functions 1–10 is 0. In the image clustering tests, six images of 256 × 256 pixels were used as the evaluation criteria: Lena, Baboon, Airplane, Pepper, Goldhill, and Sailboat (Table 2). For the permutation flow shop scheduling tests, the popular Taillard dataset [44] was used; the number of jobs ranged from 20 to 500 and the number of machines from 5 to 20 (Table 3). Data clustering employed eight real-world datasets to evaluate the clustering ability of the algorithm. The number of data pieces ranged from 150 to 6435, the dimension from 3 to 36, and the number of clusters from 2 to 10. The details are listed in Table 4.
4.2 Parameter Settings
The WOA and PSO used 10 particles to search, the number of search iterations was 1000, the hybrid operator executed every 30 iterations to swap 5 particles, and the final data were averaged after executing each algorithm 20 times. Table 5 presents the parameter settings of the algorithm.
4.3 Function Evaluation Results
To evaluate the performance of the proposed algorithm, we compared it with a well-known hybrid algorithm, crow particle optimization (CPO) [34], as well as with PSO, WOA, and the crow search algorithm (CSA). The results of the function evaluation, image clustering, permutation flow shop scheduling, and data clustering are discussed in the following subsections.
Figures 6, 7, 8, 9, 10, 11, 12, 13, 14 and 15 show the convergence graphs of the function evaluations on 10 well-known functions. The figures show that WPO inherits the fast convergence of PSO and the search capability of WOA, which allow it to quickly determine the approximate optimal solution and escape local optima. WPO was compared with the existing algorithms CPO, PSO, WOA, and CSA. The results presented in Table 6 indicate that the closer the result is to the optimal value of the function, the better the algorithm performance; WPO performs better than the other algorithms for most functions, especially Sphere, Schwefel, Rastrigin, Cigar, Quartic, and Alpine. Table 7 presents the results of the Wilcoxon signed-rank test between WPO and the other algorithms [45]. In Table 7, the (+) symbol indicates that WPO performs significantly better than the compared algorithm, (−) indicates that it performs significantly worse, and (≈) indicates an insignificant performance difference.
To evaluate the convergence ability of the proposed algorithm, a goal of \({10}^{-3}\) was set, and the number of iterations and the time required by each algorithm to reach this goal were measured. Six functions for which the algorithms reached \({10}^{-3}\) within 2000 iterations were selected, and the results (Tables 8 and 9) were averaged over 20 runs. Table 8 lists the number of iterations required by each algorithm to reach the goal; evidently, WPO achieved the goal in the fewest iterations. Table 9 presents the time required to reach the goal in milliseconds (ms). From Tables 8 and 9, we conclude that although the proposed algorithm cannot always outperform WOA in terms of time, owing to the powerful search strategy of WOA, combining it with PSO reduced the number of iterations compared with the existing algorithms, demonstrating that the proposed algorithm has a strong and fast convergence capability.
4.4 Image Clustering Results
Six well-known images were used for the comparisons, and the results are presented using PSNR values, with higher values indicating lower distortion rates. Table 10 presents the results, wherein WPO outperforms other algorithms.
4.5 Permutation Flow Shop Scheduling Results
Here, the well-known Taillard dataset [44] was used for the evaluation, and the results are presented as standard deviations, with those closer to zero indicating better performances. Table 11 presents the results, wherein the proposed algorithm outperforms the other algorithms. For datasets TA001, TA091, and TA111, all algorithms obtained the same result, whereas for TA061, TA071, and TA081, the WOA performed better. The WPO, which combines WOA and PSO, inherits the search capability of the WOA, and with the addition of PSO, the overall particle diversity is increased and a better solution can be obtained.
4.6 Data Clustering Results
This section presents the data clustering results of WPO and compares them with those of CPO [34], K-means, FCM [46], PSO, WOA, and CSA. The proposed algorithm was evaluated using eight well-known UCI datasets (Iris, Wine, Breast Cancer, Car Evaluation, Statlog, Yeast, Glass, and CMC). The results (Table 12) are presented as accuracy rates, with higher values indicating better performance. Except for the Glass dataset, on which CSA performed better, the proposed algorithm outperformed the other algorithms.
4.7 Comparison with the SOTA Algorithms
We also compared the performance of WPO with that of state-of-the-art (SOTA) algorithms: PSO-GSA [47], ARO [48], and COA [49]. The results presented in Table 13 indicate that the closer the result is to the optimal value of the function, the better the algorithm performance; WPO performs better than the other algorithms for most functions, especially Sphere, Schwefel, Rastrigin, Cigar, Quartic, and Alpine. Table 14 presents the results of the Wilcoxon signed-rank test between WPO and the other algorithms [45]. In Table 14, the (+) symbol indicates that WPO performs significantly better than the compared algorithm, (−) indicates that it performs significantly worse, and (≈) indicates an insignificant performance difference.
The results (Tables 15 and 16) again cover six functions that reached \({10}^{-3}\) within 2000 iterations, averaged over 20 runs. Table 15 lists the number of iterations required by each algorithm to reach the goal; evidently, WPO again achieved the goal in the fewest iterations. Table 16 presents the time required to reach the goal in milliseconds (ms). From Tables 15 and 16, we conclude that the proposed algorithm retains its strong and fast convergence capability compared with these SOTA algorithms.
5 Conclusions and Future Work
Owing to the rise of metaheuristic algorithms, researchers have proposed different algorithms for solving various optimization problems. However, different algorithms have different disadvantages; for example, PSO converges rapidly but is easily trapped in local optima. Therefore, this paper proposed a hybrid algorithm, called WPO, that combines the powerful search capability of WOA with the fast convergence of PSO.
WPO first initializes particles randomly and then simultaneously executes WOA and PSO. The next positions of the particles are determined according to their fitness values after moving. Furthermore, after some specific iterations, a hybrid operator is executed to increase the particle diversity. Finally, the optimal particle is output as the solution. Four optimization problems were used to evaluate the proposed algorithm: function evaluation, image clustering, PFSP, and data clustering. The performance was compared with those of existing algorithms, and WPO achieved some better results than other algorithms. In addition, it can outperform existing algorithms in terms of convergence capability.
In the future, we aim to further improve the performance, speed, and optimization capability of WPO. In particular, the algorithm could be applied to medical image processing. Finally, we expect that a complete system and time-saving, easy-to-use software will be developed based on the results of this study.
Data Availability
Not applicable.
References
Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to algorithms. The MIT Press, Cambridge, Massachusetts (2009)
Glover, F.W., Kochenberger, G.A.: Handbook of Metaheuristics. Springer Science & Business Media, New York (2006)
Lee, R.C.T., Tseng, S.S., Chang, R.C., Tsai, Y.T.: Introduction to the Design and Analysis of Algorithms. Tata McGraw Hill, McGraw-Hill College (1977)
Kennedy, J.: Swarm Intelligence. In: Zomaya, A.Y. (ed.) Handbook of nature-inspired and innovative computing: integrating classical models with emerging technologies, pp. 187–219. Springer, US, Boston, MA (2006). https://doi.org/10.1007/0-387-27705-6_6
van der Merwe, D.W., Engelbrecht, A.P.: Data clustering using particle swarm optimization. In: The 2003 Congress on Evolutionary Computation, 2003. CEC ’03. Vol. 1, pp. 215–220 (2003). https://doi.org/10.1109/CEC.2003.1299577
Ahmad, A., Khan, S.S.: Survey of state-of-the-art mixed data clustering algorithms. IEEE Access. 7, 31883–31902 (2019). https://doi.org/10.1109/ACCESS.2019.2903568
Mistry, K., Zhang, L., Neoh, S.C., Lim, C.P., Fielding, B.: A micro-GA embedded PSO feature selection approach to intelligent facial emotion recognition. IEEE Trans. Cybern. 47, 1496–1509 (2017). https://doi.org/10.1109/TCYB.2016.2549639
Han, F., Jiang, J., Ling, Q.-H., Su, B.-Y.: A survey on metaheuristic optimization for random single-hidden layer feedforward neural network. Neurocomputing 335, 261–273 (2019). https://doi.org/10.1016/j.neucom.2018.07.080
Sousa, T., Silva, A., Neves, A.: Particle swarm based data mining algorithms for classification tasks. Parallel Comput. 30, 767–783 (2004). https://doi.org/10.1016/j.parco.2003.12.015
Niu, W., Zhuo, Z., Zhang, X., Du, X., Yang, G., Guizani, M.: A heuristic statistical testing based approach for encrypted network traffic identification. IEEE Trans. Veh. Technol. 68, 3843–3853 (2019). https://doi.org/10.1109/TVT.2019.2894290
Abdel-Magid, Y.L., Abido, M.A., Al-Baiyat, S., Mantawy, A.H.: Simultaneous stabilization of multimachine power systems via genetic algorithms. IEEE Trans. Power Syst. 14, 1428–1439 (1999). https://doi.org/10.1109/59.801907
Dolatabadi, M., Damchi, Y.: Graph theory based heuristic approach for minimum break point set determination in large scale power systems. IEEE Trans. Power Deliv. 34, 963–970 (2019). https://doi.org/10.1109/TPWRD.2019.2901028
Srinidhi, C.L., Aparna, P., Rajan, J.: Automated method for retinal artery/vein separation via graph search metaheuristic approach. IEEE Trans. Image Process. 28, 2705–2718 (2019). https://doi.org/10.1109/TIP.2018.2889534
Sun, Y., Xue, B., Zhang, M., Yen, G.G.: Evolving deep convolutional neural networks for image classification. IEEE Trans. Evol. Comput. 24, 394–407 (2020). https://doi.org/10.1109/TEVC.2019.2916183
Tasgetiren, M.F., Liang, Y.-C., Sevkli, M., Gencyilmaz, G.: A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem. Eur. J. Oper. Res. 177, 1930–1947 (2007). https://doi.org/10.1016/j.ejor.2005.12.024
Zhu, G.-Y., Ding, C., Zhang, W.-B.: Optimal foraging algorithm that incorporates fuzzy relative entropy for solving many-objective permutation flow shop scheduling problems. IEEE Trans. Fuzzy Syst. 28, 2738–2746 (2020). https://doi.org/10.1109/TFUZZ.2020.2986673
Liu, H., Zhao, R., Fang, H., Cheng, F., Fu, Y., Liu, Y.-Y.: Entropy-based consensus clustering for patient stratification. Bioinformatics 33, 2691–2698 (2017). https://doi.org/10.1093/bioinformatics/btx167
Enayatifar, R., Abdullah, A.H., Isnin, I.F.: Chaos-based image encryption using a hybrid genetic algorithm and a DNA sequence. Opt. Lasers Eng. 56, 83–93 (2014). https://doi.org/10.1016/j.optlaseng.2013.12.003
Hu, Y., Yang, S.X.: A knowledge based genetic algorithm for path planning of a mobile robot. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '04), vol. 5, pp. 4350–4355 (2004). https://doi.org/10.1109/ROBOT.2004.1302402
Khan, A.H., Li, S., Luo, X.: Obstacle avoidance and tracking control of redundant robotic manipulator: an RNN-based metaheuristic approach. IEEE Trans. Ind. Inform. 16, 4670–4680 (2020). https://doi.org/10.1109/TII.2019.2941916
Goldberg, D.E., Holland, J.H.: Genetic algorithms and machine learning. Mach. Learn. 3, 95–99 (1988). https://doi.org/10.1023/A:1022602019183
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN’95 - International Conference on Neural Networks. vol. 4, pp. 1942–1948 (1995). https://doi.org/10.1109/ICNN.1995.488968
Dorigo, M., Birattari, M., Stutzle, T.: Ant colony optimization. IEEE Comput. Intell. Mag. 1, 28–39 (2006). https://doi.org/10.1109/MCI.2006.329691
Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39, 459–471 (2007). https://doi.org/10.1007/s10898-007-9149-x
Yang, X.S.: Firefly algorithm. In: Nature-inspired metaheuristic algorithms. Luniver Press, UK (2008)
Yang, X.-S., Deb, S.: Cuckoo search via Lévy flights. In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC). pp. 210–214 (2009). https://doi.org/10.1109/NABIC.2009.5393690
Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014). https://doi.org/10.1016/j.advengsoft.2013.12.007
Mirjalili, S.: Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl. Based Syst. 89, 228–249 (2015). https://doi.org/10.1016/j.knosys.2015.07.006
Askarzadeh, A.: A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput. Struct. 169, 1–12 (2016). https://doi.org/10.1016/j.compstruc.2016.03.001
Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016). https://doi.org/10.1016/j.advengsoft.2016.01.008
Arora, S., Singh, S.: Butterfly optimization algorithm: a novel approach for global optimization. Soft Comput. 23, 715–734 (2019). https://doi.org/10.1007/s00500-018-3102-4
Neri, F., Cotta, C.: Memetic algorithms and memetic computing optimization: a literature review. Swarm Evol. Comput. 2, 1–14 (2012). https://doi.org/10.1016/j.swevo.2011.11.003
Huang, K.-W., Wu, Z.-X., Peng, H.-W., Tsai, M.-C., Hung, Y.-C., Lu, Y.-C.: Memetic particle gravitation optimization algorithm for solving clustering problems. IEEE Access. 7, 80950–80968 (2019). https://doi.org/10.1109/ACCESS.2019.2923979
Huang, K.-W., Wu, Z.-X.: CPO: a crow particle optimization algorithm. Int. J. Comput. Intell. Syst. 12, 426–435 (2019). https://doi.org/10.2991/ijcis.2018.125905658
Wu, Z.-X., Huang, K.-W., Girsang, A.S.: A whole crow search algorithm for solving data clustering. In: 2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI). pp. 152–155 (2018). https://doi.org/10.1109/TAAI.2018.00040
Aarts, E.H.L., Lenstra, J.K.: Local search in combinatorial optimization. Princeton University Press, Princeton (2003)
Shi, Y., Eberhart, R.: A modified particle swarm optimizer. In: 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360). pp. 69–73 (1998). https://doi.org/10.1109/ICEC.1998.699146
Chen, L., Chen, C.L.P., Lu, M.: A multiple-kernel fuzzy c-means algorithm for image segmentation. IEEE Trans. Syst. Man Cybern. Part B Cybern. 41, 1263–1274 (2011)
Sezgin, M., Sankur, B.: Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging. 13, 146–165 (2004). https://doi.org/10.1117/1.1631315
Pham, D.L., Xu, C., Prince, J.L.: Current methods in medical image segmentation. Annu. Rev. Biomed. Eng. 2, 315–337 (2000). https://doi.org/10.1146/annurev.bioeng.2.1.315
Nawaz, M., Enscore, E.E., Ham, I.: A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 11, 91–95 (1983). https://doi.org/10.1016/0305-0483(83)90088-9
Mladenović, N., Hansen, P.: Variable neighborhood search. Comput. Oper. Res. 24, 1097–1100 (1997). https://doi.org/10.1016/S0305-0548(97)00031-2
Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA (1989)
Taillard, E.: Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 64, 278–285 (1993). https://doi.org/10.1016/0377-2217(93)90182-M
Derrac, J., García, S., Molina, D., Herrera, F.: A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 1, 3–18 (2011). https://doi.org/10.1016/j.swevo.2011.02.002
Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Springer Science & Business Media, New York (2013)
Hosseini, M., Navabi, M.S.: Hybrid PSO-GSA based approach for feature selection. J. Ind. Eng. Manag. Stud. 1–15 (2023). https://jiems.icms.ac.ir/article_166460.html
Wang, L., Cao, Q., Zhang, Z., Mirjalili, S., Zhao, W.: Artificial rabbits optimization: a new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 114, 105082 (2022)
Pierezan, J., Coelho, L.D.S.: Coyote optimization algorithm: a new metaheuristic for global optimization problems. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8 (2018)
Wu, Z.-X.: Design and implementation of the whale particle optimization algorithm for solving optimization problems, pp. 1–61 (2020). https://etds.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/ccd=Xn7Ilt/record?r1=1%26h1=1
Acknowledgements
The authors thank Mr. Ze-Xue Wu [50] for continuing to provide comments on this article after graduating from the institute. In addition, this research was supported by the Ministry of Science and Technology of the Republic of China under grant MOST 110-2222-E-992-006-.
Funding
Not applicable.
Author information
Contributions
K-WH and S-HL provided the main concept of this work. K-WH and S-HL wrote the main manuscript text. C-LJ and Z-HH performed the experiments. All authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors confirm that they have no competing interests that are directly or indirectly related to the work submitted for publication.
Ethical Approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Huang, KW., Wu, ZX., Jiang, CL. et al. WPO: A Whale Particle Optimization Algorithm. Int J Comput Intell Syst 16, 115 (2023). https://doi.org/10.1007/s44196-023-00295-6