Abstract
Manta ray foraging optimization (MRFO) tends to get trapped in local optima because it relies only on the directions provided by the previous individual and the best individual to guide its search for the optimal solution. Since enriching population diversity can effectively mitigate this problem, this paper introduces a hierarchical structure and a weighted fitness-distance balance selection to improve the population diversity of the algorithm. The hierarchical structure lets individuals in different groups of the population search for optimal solutions in different regions, expanding the diversity of solutions. In MRFO, greedy selection based solely on fitness can lead to local solutions; we therefore incorporate a distance metric into the selection strategy to increase selection diversity and find better solutions. The resulting algorithm is a hierarchical manta ray foraging optimization with weighted fitness-distance balance selection (HMRFO). Experimental results on the IEEE Congress on Evolutionary Computation 2017 (CEC2017) functions show the effectiveness of the proposed method compared to seven competitive algorithms, while its effect on the algorithm complexity of MRFO is negligible. Applying HMRFO to real-world problems with large dimensions also yields good results in very short computational time, making it a powerful alternative for very high-dimensional problems. Finally, the effectiveness of the method is further verified by analyzing the population diversity of HMRFO.
1 Introduction
Meta-heuristic algorithms [37] are inspired by nature [38]. Based on their sources of inspiration, these algorithms can be classified into five categories [39, 40]: evolution-based algorithms (EA) [41], swarm-based intelligence (SI) [42], physics-based methods (PM) [43], human-based behaviors (HB) [44], and other optimization algorithms, as shown in Table 1. They mainly simulate physical or biological phenomena in nature and establish mathematical models to solve optimization problems [45, 46]. These algorithms possess the characteristics of self-organization, self-adaptation, and self-learning, and have been widely used in many fields, such as biology [47, 48], feature selection [49], optimization computing [50], image classification [51], and artificial intelligence [52, 53].
There are many improved meta-heuristic methods, such as incorporating competitive memory and a dynamic strategy into the mean shift algorithm to optimize dynamic multimodal functions [54], balancing exploration and exploitation by adding a synchronous–asynchronous strategy to the grey wolf optimizer [55], and reformulating the search factor in the algorithm [56]. Manta ray foraging optimization (MRFO) [11] is a recent swarm-based intelligence algorithm proposed in 2020. It has few adjustable parameters, is easy to implement, and can find solutions of a specified precision at low computational cost [11]; it therefore has great research potential. In [57], the opposition-based learning (OBL) method was introduced into MRFO to achieve an effective structure for the optimization problem. In [58], both OBL and self-adaptive methods were applied to MRFO to optimize energy consumption in residential buildings. In [59], the Lévy flight mechanism and chaos theory were added to solve the problem of premature convergence of MRFO, and the improved algorithm was applied to the proton exchange membrane fuel cell system. In [60], a hybrid of simulated annealing and MRFO was used to tune the parameters of a proportional–integral–derivative controller. In [61], fractional-order (FO) calculus was utilized in MRFO to escape from local optima, and the proposed algorithm was applied to image segmentation. In [62], a hybrid algorithm based on MRFO and the gradient-based optimizer (GBO) was adopted to solve economic emission dispatch problems. In [63], the global exploration ability of MRFO was enhanced by combining control parameter adjustment, wavelet mutation, and a quadratic interpolation strategy, and the improved algorithm was used to optimize complex curve shapes. From these references, it can be concluded that MRFO tends to converge prematurely and fall into local optima.
The search operators [64] of meta-heuristic algorithms include exploration and exploitation [65, 66]. Exploration involves using randomly generated individuals to produce different solutions in the search space, which increases the diversity of the population and improves the quality of the solution. Exploitation involves conducting a local search around the best individual, relying on the advantages of the current optimal solution to accelerate the convergence of the algorithm. Meta-heuristic algorithms based on swarm intelligence are prone to falling into local optima and premature convergence. Solving this problem is our motivation for improving the algorithm.
To improve MRFO, it is necessary to maintain population diversity. Population diversity refers to having many non-neighboring individuals in the search space that can generate different solutions. By maintaining population diversity, individuals can be dispersed instead of being gathered around the local optimal solution, thus escaping local optima and generating better solutions to improve solution quality. This paper proposes adding a hierarchical structure [67] and a fitness-distance balance selection method with functional weight (FW) [68] to increase population diversity. To verify the performance of the proposed algorithm, we compared the hierarchical manta ray foraging optimization with weighted fitness-distance balance selection (HMRFO) with seven competitive algorithms on the IEEE CEC2017 benchmark functions. The results show that HMRFO has superior performance and fast convergence speed. Additionally, HMRFO has the same time complexity as MRFO. The performance of HMRFO in four large-dimensional real-world problems illustrates the practicality of HMRFO in solving large-dimensional problems. By comparing the population diversity of HMRFO, MRFO and the latest variant of MRFO on different types of functions from the IEEE CEC2017 benchmark suite, the effectiveness of the proposed method in this paper is visually verified.
The contributions of the present study can be summarized as follows: (1) The hierarchical structure and FW selection method are effective in improving population diversity and avoiding falling into local optima. (2) The hierarchical structure and FW selection method have little effect on the algorithm’s complexity. (3) HMRFO demonstrates superior search performance, fast convergence speed, and high computational efficiency when dealing with large-dimensional problems, making it applicable to such problems.
The remaining sections of this paper are organized as follows:
Section 2 introduces the original MRFO and some selection methods. Section 3 proposes HMRFO. Section 4 presents the experimental results and analysis. Section 5 discusses the parameters and population diversity of HMRFO. Section 6 concludes the paper and suggests future research directions.
2 Preliminaries
2.1 Manta Ray Foraging Optimization
A manta ray is defined as \(X_{i}=\left( x_{i}^{1}, x_{i}^{2}, \ldots , x_{i}^{d}\right) \), where \(i \in \{1,2, \ldots , N\}\) and \(x_{i}^d\) represents the position of the ith individual in the dth dimension. Here, N is the total number of manta rays. MRFO establishes mathematical models based on the three foraging behaviors of the manta ray population. The specific behavioral models are as follows.
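As a sketch of this representation, a population of N candidate solutions can be initialized uniformly at random inside the search bounds (the function and variable names here are illustrative, not from the paper):

```python
import random

def init_population(n, dim, lb, ub, seed=None):
    """Initialize n manta rays uniformly at random in [lb, ub]^dim."""
    rng = random.Random(seed)
    return [[lb + rng.random() * (ub - lb) for _ in range(dim)] for _ in range(n)]

# five manta rays in a 3-dimensional search space bounded by [-100, 100]
pop = init_population(n=5, dim=3, lb=-100.0, ub=100.0, seed=1)
```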
2.1.1 Chain Foraging
When manta rays find food, each manta ray follows the previous manta ray in a row and swims towards the food location. Therefore, except for the first manta ray, the movement direction of other manta rays not only moves towards the food but also towards the front manta rays, forming a chain foraging behavior. The mathematical model is expressed as follows:
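For reference, the chain foraging model as given in the original MRFO paper [11] is:

\[
x_{i}^{d}(t+1)=
\begin{cases}
x_{i}^{d}(t)+r\cdot\left(x_{\text{best}}^{d}(t)-x_{i}^{d}(t)\right)+\alpha\cdot\left(x_{\text{best}}^{d}(t)-x_{i}^{d}(t)\right), & i=1\\[2pt]
x_{i}^{d}(t)+r\cdot\left(x_{i-1}^{d}(t)-x_{i}^{d}(t)\right)+\alpha\cdot\left(x_{\text{best}}^{d}(t)-x_{i}^{d}(t)\right), & i=2,\ldots,N
\end{cases}
\]
\[
\alpha=2 r \sqrt{|\log (r)|}
\]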
where the best individual is defined as \(X_{\text {best}}=\left( x_{\text {best}}^{1}, x_{\text {best}}^{2}, \ldots , x_{\text {best}}^{d}\right) \), \(x_{\text {best}}^{d}(t)\) indicates the position of the best individual in the dth dimension at time t, r is a random vector within [0, 1], and \(\alpha \) is the weight coefficient.
2.1.2 Cyclone Foraging
Manta rays not only follow the manta ray in front of them but also move spirally towards the food. This foraging behavior is called cyclone foraging, and its mathematical equations are expressed as follows:
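Following the formulation in the original MRFO paper [11], the cyclone foraging model around the best individual is:

\[
x_{i}^{d}(t+1)=
\begin{cases}
x_{\text{best}}^{d}+r\cdot\left(x_{\text{best}}^{d}(t)-x_{i}^{d}(t)\right)+\beta\cdot\left(x_{\text{best}}^{d}(t)-x_{i}^{d}(t)\right), & i=1\\[2pt]
x_{\text{best}}^{d}+r\cdot\left(x_{i-1}^{d}(t)-x_{i}^{d}(t)\right)+\beta\cdot\left(x_{\text{best}}^{d}(t)-x_{i}^{d}(t)\right), & i=2,\ldots,N
\end{cases}
\]
\[
\beta=2 e^{r_{1} \frac{T-t+1}{T}} \sin \left(2 \pi r_{1}\right)
\]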
where \(r_{1}\) is a random number in the range of [0, 1], T represents the maximum number of iterations, and \(\beta \) denotes the weight coefficient.
This process iterates around the position of the best individual. To avoid getting trapped in local optima, a new position is randomly selected as the best position in the search space to update the next generation. This improves the algorithm’s ability to explore the global search space and increases the diversity of solutions. The update equations are as follows:
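In the original formulation [11], these exploration-oriented update equations are:

\[
x_{\text{rand}}^{d}=L b^{d}+r \cdot\left(U b^{d}-L b^{d}\right)
\]
\[
x_{i}^{d}(t+1)=
\begin{cases}
x_{\text{rand}}^{d}+r\cdot\left(x_{\text{rand}}^{d}-x_{i}^{d}(t)\right)+\beta\cdot\left(x_{\text{rand}}^{d}-x_{i}^{d}(t)\right), & i=1\\[2pt]
x_{\text{rand}}^{d}+r\cdot\left(x_{i-1}^{d}(t)-x_{i}^{d}(t)\right)+\beta\cdot\left(x_{\text{rand}}^{d}-x_{i}^{d}(t)\right), & i=2,\ldots,N
\end{cases}
\]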
where r is a random vector in [0, 1], \(x_{\text {rand}}^{d}\) represents a random position, and \(L b^{d}\) and \(U b^{d}\) denote the lower and upper limits of the dth dimension, respectively.
2.1.3 Somersault Foraging
When manta rays approach food, they perform somersaults and circle around the food to pull it towards themselves. This foraging behavior takes the food (i.e., the best individual) as a pivot, and each individual swims back and forth around the pivot. In other words, the search space is limited between the current position and its symmetrical position with respect to the best individual. As the distance between the individual and the best individual decreases, the search space is also reduced, and the individual gradually approaches the best individual. Therefore, in the later stages of iteration, the range of somersault foraging is adaptively reduced. The expression is given below:
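As defined in the original MRFO paper [11], the somersault foraging model is:

\[
x_{i}^{d}(t+1)=x_{i}^{d}(t)+S \cdot\left(r_{2} x_{\text{best}}^{d}-r_{3} x_{i}^{d}(t)\right), \quad i=1, \ldots, N
\]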
where \(r_{2}\) and \(r_{3}\) are two random numbers in [0, 1], S represents the somersault factor, which determines the range of somersaulting, and is set to 2.
The workflow diagram of MRFO is presented in Fig. 1. The value of rand is used to switch between cyclone foraging and chain foraging. In cyclone foraging, the value of t/T is employed to determine whether to use the best individual position (for local exploitation) or a random position (for global exploration) as the reference position. As the number of iterations (t) increases, the value of t/T gradually rises, and the algorithm shifts from exploration to exploitation. Then, each individual updates its position through somersault foraging and ultimately returns the best solution.
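This switching logic can be sketched compactly as follows (a simplified, minimization-oriented sketch of the published update rules; names such as `mrfo_step` are illustrative, and clipping to the bounds is an assumption about boundary handling):

```python
import math
import random

def mrfo_step(pop, best, t, T, lb, ub, rng):
    """One simplified MRFO generation: chain or cyclone foraging, then somersault."""
    dim = len(best)
    new_pop = []
    for i, x in enumerate(pop):
        prev = new_pop[i - 1] if i > 0 else None  # the manta ray in front
        new_x = []
        for d in range(dim):
            r = rng.random()
            front = prev[d] if prev is not None else best[d]
            if rng.random() < 0.5:
                # cyclone foraging: spiral around a reference position
                r1 = rng.random()
                beta = 2.0 * math.exp(r1 * (T - t + 1) / T) * math.sin(2.0 * math.pi * r1)
                # early iterations favour a random reference (exploration),
                # later iterations favour the best individual (exploitation)
                ref = best[d] if t / T >= rng.random() else lb + rng.random() * (ub - lb)
                v = ref + r * (front - x[d]) + beta * (ref - x[d])
            else:
                # chain foraging: follow the front individual and the best one
                alpha = 2.0 * r * math.sqrt(abs(math.log(r + 1e-300)))
                v = x[d] + r * (front - x[d]) + alpha * (best[d] - x[d])
            new_x.append(min(max(v, lb), ub))
        new_pop.append(new_x)
    # somersault foraging around the best individual (somersault factor S = 2)
    S = 2.0
    out = []
    for x in new_pop:
        r2, r3 = rng.random(), rng.random()
        out.append([min(max(x[d] + S * (r2 * best[d] - r3 * x[d]), lb), ub)
                    for d in range(dim)])
    return out

rng = random.Random(0)
pop = [[rng.uniform(-100.0, 100.0) for _ in range(5)] for _ in range(8)]
best = min(pop, key=lambda x: sum(v * v for v in x))  # sphere objective as a stand-in
nxt = mrfo_step(pop, best, t=1, T=100, lb=-100.0, ub=100.0, rng=rng)
```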
MRFO has few adjustable parameters, low computational cost, and is less affected by the increase in problem size, making it a powerful alternative for solving very high-dimensional problems [69].
2.2 Selection Methods and Discussion
There are currently six basic selection methods, which are:
1. Random selection: in MRFO, for example, an individual is randomly generated in the search space and used as the reference position for other individuals in cyclone foraging.
2. Greedy selection based on fitness value: in MRFO, \(X_{\text {best}}\) represents the best individual based on fitness.
3. Adaptive selection based on ordinal: in wind-driven optimization (WDO) [70], i denotes the rank of an individual among all population members based on the pressure value.
4. Probabilistic selection: both roulette wheel and tournament strategies are probabilistic selection methods. For example, in ant colony optimization (ACO) [18], the next city for the ants to visit is chosen through roulette wheel selection.
5. Fitness-distance balance selection [64]: this is the latest selection method and has been successfully applied in several algorithms [64, 69, 71, 72]. The fitness-distance balance with functional weight (FW) selection used in this paper is an improved variant of this method [68].
6. Combined selection: a combination of at least two of the other selection methods.
Traditional selection methods evaluate the quality of individuals based solely on the magnitude of their fitness, which can improve the convergence speed of the algorithm but also easily leads to local optima. Therefore, an increasing number of alternative selection methods are being used to replace traditional ones. In [73], maximum clique and edge centrality are used to select genes with maximum relevance and minimum redundancy. In [74], users are clustered into different groups using graph clustering, food ingredients are embedded using deep learning techniques, and based on user and food information, the top few foods are recommended to the target customers. Adding further selection techniques and criteria to pick more suitable individuals has become increasingly common. Similarly, the FW method in this paper adds a distance metric. With fitness as the only criterion, it is easy to settle on a local solution; introducing a distance metric enables the algorithm to escape local solutions and find better solutions with higher fitness values farther away, thus increasing the diversity of the solutions obtained.
3 Proposed HMRFO
3.1 Motivation
According to the No-Free-Lunch theorem [75, 76], no single algorithm can find the best global optimal solution for all optimization problems, and MRFO likewise has its own limitations. In MRFO, a swarm-based algorithm, the rear manta rays are influenced by most of the manta rays in front of them, so the swarm is prone to being attracted by local points and falling into local optima. Enriching the population diversity is an effective way to overcome this problem. Therefore, this paper adopts the fitness-distance balance with functional weight (FW) selection method [68] and employs a hierarchical structure [67] to update the population, resulting in the proposed algorithm, HMRFO.
3.2 Description of HMRFO
In MRFO, the fitness value of each individual is calculated using the following equations:
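One reconstruction consistent with these symbol definitions (stated as an assumption: for a minimization problem, a lower objective value should map to a higher fitness) is a min–max normalization:

\[
normG_{i}=\frac{G_{i}-\min _{j} G_{j}}{\max _{j} G_{j}-\min _{j} G_{j}}, \qquad F_{i}=1-normG_{i}
\]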
where \(G_{i}\) represents the objective function value of the ith individual, \(normG_{i}\) is the normalized value of \(G_{i}\), and \(F_{i}\) is the fitness value of the ith individual.
In MRFO, only the position of the best individual based on fitness is obtained, which can easily lead to falling into a local solution. To address this issue, the fitness-distance balance with functional weight (FW) selection method is added to HMRFO. This selection method considers both fitness and the distance between each individual and the best individual, which effectively maintains population diversity, increases the number of solutions, and improves solution quality. As a result, the algorithm can escape local optima and enhance its exploration ability. The mathematical equations for this selection method are as follows:
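A plausible form consistent with this description (an assumption: a Euclidean distance combined linearly with the weight \(\omega \), with \(F_{i}\) and \(D_{i}\) normalized to comparable scales) is:

\[
D_{i}=\left\| X_{i}-X_{\text {best}}\right\| =\sqrt{\sum_{d}\left( x_{i}^{d}-x_{\text {best}}^{d}\right) ^{2}}, \qquad S_{i}=\omega F_{i}+(1-\omega ) D_{i}
\]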
where \(D_{i}\) represents the distance between the ith individual and the best individual, \(F_{i}\) is the fitness value, \(S_{i}\) denotes the score of the ith individual, and \(\omega \) is the functional weight, randomly generated from a Gaussian distribution with \(\mu =3/4\) and \(\sigma =1/12\), following [68].
After obtaining the score \(S_{i}\), the population is sorted based on the score \(S_{i}\). The higher the score, the greater the contribution of the individual to the optimization problem, and the higher its rank, the more likely it is to be the optimal individual. This approach overcomes the disadvantage of relying solely on fitness to obtain the optimal individual, improves population diversity, and prevents the algorithm from being trapped in local optima. The effectiveness of this method has been demonstrated in [64, 68, 69, 71].
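The FW scoring just described can be sketched in a few lines (a hedged sketch assuming min–max normalization of both fitness and distance; the function name `fw_scores` and the sphere objective in the usage example are illustrative, not from the paper):

```python
import math
import random

def fw_scores(pop, objectives, rng):
    """Weighted fitness-distance balance (FW) scores: higher is better."""
    best = pop[objectives.index(min(objectives))]
    # normalized fitness: 1 for the lowest objective value, 0 for the highest
    g_min, g_max = min(objectives), max(objectives)
    span = (g_max - g_min) or 1.0
    fit = [1.0 - (g - g_min) / span for g in objectives]
    # Euclidean distance of each individual to the best individual, normalized
    dist = [math.sqrt(sum((a - b) ** 2 for a, b in zip(x, best))) for x in pop]
    d_max = max(dist) or 1.0
    dist = [d / d_max for d in dist]
    # functional weight drawn from a Gaussian with mu = 3/4, sigma = 1/12 (per [68])
    w = min(max(rng.gauss(3 / 4, 1 / 12), 0.0), 1.0)
    return [w * f + (1.0 - w) * d for f, d in zip(fit, dist)]

rng = random.Random(0)
pop = [[rng.uniform(-100.0, 100.0) for _ in range(3)] for _ in range(6)]
obj = [sum(v * v for v in x) for x in pop]  # sphere objective as a stand-in
scores = fw_scores(pop, obj, rng)
ranking = sorted(range(len(pop)), key=lambda i: -scores[i])  # best score first
```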
MRFO updates the population only based on \(X_{\text {best}}\) in somersault foraging. To enhance population diversity, a hierarchical structure is added to the somersault foraging of MRFO. This hierarchical structure is divided into three layers, as described below:
- 60% of the population in the next generation is updated using Eq. (13), where \(x_{PR_{1}}^{d}\) is the position of an individual randomly selected from the top \(PR_{1}\) individuals sorted by FW.
- 30% of the population in the next generation is updated using Eq. (14), where \(x_{PR_{2}}^{d}\) is the position of an individual randomly selected from the top \(PR_{2}\) individuals sorted by FW.
- The remaining 10% of the population in the next generation is updated using Eq. (7), where \(x_{\text {best}}^{d}\) still represents the position of the individual with the best fitness value.
The schematic diagram of this hierarchical structure is shown in Fig. 2. Equations (13) and (14) are as follows:
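A natural reading of Eqs. (13) and (14), consistent with the somersault model of Sect. 2.1.3 (stated here as an assumption), replaces the pivot \(x_{\text{best}}^{d}\) with the FW-selected reference:

\[
x_{i}^{d}(t+1)=x_{i}^{d}(t)+S \cdot\left(r_{2} x_{P R_{1}}^{d}-r_{3} x_{i}^{d}(t)\right)
\]
\[
x_{i}^{d}(t+1)=x_{i}^{d}(t)+S \cdot\left(r_{2} x_{P R_{2}}^{d}-r_{3} x_{i}^{d}(t)\right)
\]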
Using a hierarchical structure to update the population is more effective in maintaining population diversity. Updating the population only with \(X_{\text {best}}\) in somersault foraging is too simplistic and may lead to local optima, which cannot achieve the goal of global optimization. The workflow of HMRFO is shown in Fig. 3, and Algorithm 1 presents the pseudocode of HMRFO.
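The three-layer split can be sketched as follows (a hedged sketch under the paper's 60/30/10 allocation; the name `hierarchical_pivots` is illustrative, and using the FW-rank leader as the third layer's fitness-best individual is a simplification):

```python
import random

def hierarchical_pivots(ranking, n, pr1=0.8, pr2=0.6, rng=None):
    """Assign a somersault pivot index to each of n individuals.

    ranking: individual indices sorted by FW score, best first.
    Layer 1 (60% of the population): pivot drawn from the top pr1 fraction.
    Layer 2 (30%): pivot drawn from the top pr2 fraction.
    Layer 3 (remaining 10%): pivot is the leading individual (here the FW-rank
    leader, a simplification; the paper uses the individual best by fitness).
    """
    rng = rng or random.Random()
    n1, n2 = int(0.6 * n), int(0.3 * n)
    pivots = []
    for i in range(n):
        if i < n1:
            pivots.append(ranking[rng.randrange(max(1, int(pr1 * n)))])
        elif i < n1 + n2:
            pivots.append(ranking[rng.randrange(max(1, int(pr2 * n)))])
        else:
            pivots.append(ranking[0])
    return pivots

# with an identity ranking of ten individuals, the layers are 6/3/1
pivots = hierarchical_pivots(list(range(10)), 10, rng=random.Random(0))
```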
4 Experimental Results and Analysis
4.1 Experimental Settings
In this study, the proposed algorithm’s performance is verified using the IEEE Congress on Evolutionary Computation 2017 (CEC2017) benchmark functions [77]. The IEEE CEC2017 test suite includes 30 functions; however, F2 is excluded due to instability. Among the 29 benchmark functions, there are two unimodal functions (F1 and F3), seven simple multimodal functions (F4–F10), ten hybrid functions (F11–F20), and ten composition functions (F21–F30).
In all experiments, the population size (N) was set to 100. The maximum number of function evaluations (NFEs) was set to \(10,000 * D\), where D represents the dimension of the problem. The search range was set to \([-100, 100]\). Each algorithm was run 51 times on each function. The best values among all compared algorithms in the following tables are shown in bold. All algorithms were implemented in MATLAB R2021b on a PC with a 2.60GHz Intel(R) Core(TM) i7-9750H CPU and 16GB RAM. The source code of HMRFO is released at https://toyamaailab.github.io/sourcedata.html.
4.2 Performance Evaluation Criteria
The performance of HMRFO is evaluated using the following four criteria:
(1) Mean and standard deviation (std) of the optimization errors between the obtained optimal values and the known real optimal values. Since all objective functions are minimization problems, the minimum mean values (i.e., the best values) are highlighted in boldface.
(2) Non-parametric statistical tests. The Wilcoxon rank-sum test [78,79,80] compares the obtained p-value against the significance level \(\alpha =0.05\) for the proposed algorithm and each compared algorithm. A p-value \(\le \) 0.05 indicates a significant difference between the two algorithms: the symbol “\(+\)” denotes that the proposed algorithm is superior to its competitor, while “−” denotes that it is significantly worse. A p-value > 0.05 indicates no significant difference, recorded as “\(\approx \)”. “W/T/L” indicates how many times the proposed algorithm has won, tied, and lost against its competitor, respectively. The Friedman test [81], another non-parametric statistical test, is also used, with the mean optimization errors as test data. The smaller the Friedman rank, the better the algorithm’s performance; the minimum value is highlighted in boldface.
(3) Box-and-whisker diagrams, which show the robustness and accuracy of the algorithm’s solutions. The blue box’s lower edge, red line, and upper edge indicate the first quartile, the second quartile (median), and the third quartile, respectively. The lines above and below the blue box indicate the maximum and minimum non-outliers, and the red “\(+\)” symbols display outliers. The height of the box represents the fluctuation of the solutions, and the median represents their average level.
(4) Convergence graphs, which intuitively show the convergence speed and accuracy of the algorithm and reveal whether the improved algorithm jumps out of local solutions.
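For instance, the Friedman mean ranks used in criterion (2) can be computed from a matrix of mean errors (algorithms × functions) as follows (a generic sketch, not the paper's exact script; tied values receive the average of the tied ranks):

```python
def friedman_mean_ranks(errors):
    """errors[a][f]: mean error of algorithm a on function f (lower is better).

    Returns the Friedman mean rank of each algorithm across all functions.
    """
    n_alg, n_fun = len(errors), len(errors[0])
    ranks = [0.0] * n_alg
    for f in range(n_fun):
        col = sorted(range(n_alg), key=lambda a: errors[a][f])
        r, pos = [0.0] * n_alg, 0
        while pos < n_alg:
            end = pos
            # extend the group while the next algorithm has the same error (tie)
            while end + 1 < n_alg and errors[col[end + 1]][f] == errors[col[pos]][f]:
                end += 1
            avg = (pos + end) / 2 + 1  # average 1-based rank for the tie group
            for k in range(pos, end + 1):
                r[col[k]] = avg
            pos = end + 1
        for a in range(n_alg):
            ranks[a] += r[a]
    return [s / n_fun for s in ranks]

# three algorithms on four functions; algorithm 0 always wins
ranks = friedman_mean_ranks([[1, 1, 1, 1], [2, 3, 2, 3], [3, 2, 3, 2]])  # [1.0, 2.5, 2.5]
```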
4.3 Comparison for Competitive Algorithms
To evaluate the effectiveness and search performance of HMRFO, seven competitive algorithms are compared: MRFO [11], adaptive wind driven optimization (AWDO) [70], a hybrid algorithm that combines particle swarm optimization and gravitational search algorithm (PSOGSA) [82], reptile search algorithm (RSA) [12], grey wolf optimizer (GWO) [13], whale optimization algorithm (WOA) [17], and brain storm optimization (BSO) [15].
Algorithm 1: Pseudocode of HMRFO.
Among them, RSA, GWO, WOA, and BSO are meta-heuristic algorithms based on swarm intelligence. The comparison is made on the 29 benchmark functions from IEEE CEC2017 with dimensions of 10, 30, 50, and 100. AWDO is a variant of WDO that uses the covariance matrix adaptation evolution strategy (CMAES) to update parameters adaptively. PSOGSA is a hybrid algorithm that combines the exploration of GSA with the exploitation of PSO. RSA is a meta-heuristic algorithm inspired by the predatory behavior of crocodiles. GWO, WOA, and BSO are swarm intelligence algorithms inspired by wolves, whales, and brainstorming, respectively. The parameter settings of these algorithms are shown in Table 2. The comparative experimental results are presented in Tables 4, 5, 6, 7, and 3.
In Table 4, HMRFO, MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO achieved the best mean values on 23, 5, 0, 1, 0, 1, 0 and 2 functions, respectively. According to the W/T/L metric, HMRFO outperformed MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO on 20, 28, 27, 29, 24, 29 and 25 functions, respectively, indicating that HMRFO performs well on functions with 10 dimensions.
In Table 5, HMRFO obtained the best mean values on 24 functions, which is the algorithm with the largest number of best mean values. According to W/T/L, HMRFO surpassed MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO on 23, 28, 27, 29, 26, 29 and 26 functions, respectively. This shows that HMRFO has good performance on functions with 30 dimensions.
In Table 6, HMRFO, MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO achieved the best mean values on 20, 3, 0, 1, 0, 4, 0 and 1 functions, respectively. W/T/L demonstrated that HMRFO significantly outperformed the others on 22, 28, 26, 29, 23, 29 and 27 functions, respectively, indicating that HMRFO maintains superior performance on functions with 50 dimensions.
In Table 7, HMRFO obtained the best mean values on 19 functions. According to W/T/L, HMRFO outperformed MRFO, AWDO, PSOGSA, RSA, GWO, WOA and BSO on 21, 28, 26, 29, 22, 29 and 26 functions, respectively. The result shows that HMRFO can optimize functions with 100 dimensions. Therefore, HMRFO has remarkably good performance on functions with low, medium, high and large dimensions.
The Friedman test results in Table 3 further prove that HMRFO has superior performance. From the table, it can be seen that HMRFO performs best on IEEE CEC2017 functions with 10, 30, 50 and 100 dimensions.
Box-and-whisker diagrams and convergence graphs are provided for four different types of functions from the IEEE CEC2017 benchmark suite. Figures 4 and 5 show box-and-whisker diagrams of errors obtained by eight algorithms on IEEE CEC2017 functions with 10, 30, 50, and 100 dimensions. The horizontal axis represents the eight algorithms, and the vertical axis represents the error values. From Figs. 4 and 5, it can be observed that the blue box of HMRFO is the flattest, and its red median line is the lowest. This indicates that HMRFO has superior and stable performance.
Figures 6 and 7 show convergence graphs of average optimizations obtained by eight algorithms on IEEE CEC2017 functions with 10, 30, 50, and 100 dimensions. The horizontal axis represents the number of iterations, and the vertical axis represents the log value of average optimizations. From Figs. 6 and 7, it is clear that the curves of HMRFO are the lowest and the convergence speed is fast. Compared to the original MRFO in the convergence graphs, HMRFO can find a better solution, jump out of local optimization, avoid premature convergence, improve the solution quality, and have high optimization efficiency. This clearly demonstrates that the improved method in this paper is effective, and the population diversity has indeed been improved. The advantages of HMRFO are not fully reflected on unimodal functions. On more complex multimodal, hybrid, and composition functions, HMRFO can search for smaller values and converge quickly, showing strong competitiveness.
4.4 Algorithm Complexity
The algorithm’s complexity affects the usability and practicality of the proposed algorithm. Algorithms with high computational complexity are usually avoided due to their high computation cost. Hence, an effective algorithm should not only possess excellent optimization ability but also exhibit fast convergence speed and low computational complexity. In this subsection, we report the central processing unit (CPU) running time consumed by all tested algorithms on the IEEE CEC2017 functions with 10, 30, 50, and 100 dimensions, and discuss the impact of the proposed improvements on the algorithm complexity of MRFO.
The maximum number of function evaluations for all algorithms is set to be the same. The CPU running time results are presented in Table 8. From the table, it can be observed that the computational time of WOA is the minimum. HMRFO and MRFO have similar computational time and require very little time. On the other hand, PSOGSA and RSA are the most complex algorithms and take the longest time.
Figure 8 is plotted to display the computational time more visibly. The horizontal axis represents the eight algorithms, and the vertical axis represents the log value of CPU time. From the bar graph, it is evident that the computational complexities of HMRFO and MRFO are close and reasonable. The effect of the improved method in this paper on the algorithm complexity of MRFO is negligible. The low computational complexity and minimal computation cost make HMRFO suitable for high-dimensional problems and engineering problems in the future.
4.5 Real-World Optimization Problems with Large Dimensions
Four real-world optimization problems with large dimensions from the IEEE Congress on Evolutionary Computation 2011 (CEC2011) [83] were used to evaluate the practicality of HMRFO. These problems are the hydrothermal scheduling problem (HSP), dynamic economic dispatch (DED) problem, large-scale transmission pricing problem (LSTPP), and static economic load dispatch (ELD) problem. The objective of HSP is to minimize the total fuel cost for thermal system operation, while DED is to determine the 24-h power generation schedule. LSTPP is a large-dimensional problem as transmission pricing is affected by many factors. The objective function of ELD is to minimize the fuel cost of generating units during a specific period of operation. The dimensions of HSP, DED, LSTPP, and ELD are 96, 120, 126, and 140, respectively. All of them are large-dimensional problems. HMRFO and MRFO were each run 51 times on each problem individually. The experimental results and running time are listed in Tables 9 and 10, respectively.
In Table 9, on the HSP, DED, and ELD problems, both HMRFO and MRFO found similar Best values. On the LSTPP problem, MRFO explored a better Best value, but after 51 independent runs on each problem, HMRFO obtained smaller Mean values on all four problems. These results demonstrate that HMRFO is practical for optimizing real-world problems with large dimensions. In Table 10, although all four problems are large-dimensional, both HMRFO and MRFO require minimal computational time to complete the optimization, illustrating the great potential of HMRFO for engineering problems and large-dimensional optimization problems.
5 Discussions
5.1 Analysis for Parameters of HMRFO
The parameters \(\mu \) and \(\sigma \) used in the FW selection method of HMRFO were validated in [68], where the algorithm was shown to perform best with \(\mu =3/4\) and \(\sigma =1/12\); these values are retained in this paper.
In the hierarchical structure of HMRFO, two layers randomly select an individual from the FW ranking to update the population, so this selection is crucial for the overall performance of the algorithm. Specifically, the values of \(PR_{1}\) and \(PR_{2}\) can significantly affect the performance of HMRFO, and the optimal combination of these parameters maximizes it. As \(PR_{1}\) and \(PR_{2}\) both range from 0 to 1, eleven combinations were tested on the IEEE CEC2017 benchmark functions with 30 dimensions, as shown in Tables 11 and 12, where HMRFO with \(PR_{1}=0.8\) and \(PR_{2}=0.6\) serves as the reference algorithm in the statistical results (W/T/L). According to W/T/L, HMRFO with \(PR_{1}=0.8\) and \(PR_{2}=0.6\) is better than the other ten combinations, and it achieves fourteen minimum mean values, the most of all parameter combinations. Therefore, HMRFO with \(PR_{1}=0.8\) and \(PR_{2}=0.6\) performs best.
5.2 Analysis for Individuals per Layer of HMRFO
In the hierarchical structure of HMRFO introduced earlier, the first layer (\(L_{1}\)) contains 60% of individuals, the second layer (\(L_{2}\)) contains 30% of individuals, and the third layer (\(L_{3}\)) contains 10% of individuals. In this section, we analyze the reasons behind this allocation. First, we set the second layer (\(L_{2}\)) to be 20%, 30%, and 40% respectively, and the third layer (\(L_{3}\)) to be 5%, 10%, and 15%, respectively, so that \(L_{1}=100\%-L_{2}-L_{3}\), resulting in a total of nine combinations tested on IEEE CEC2017 benchmark functions with 30 dimensions, presented in Tables 13 and 14. According to the W/T/L results in the tables, HMRFO with \(L_{1}=60\%\), \(L_{2}=30\%\), and \(L_{3}=10\%\) performs the best.
5.3 Analysis of Population Diversity
The FW selection method and hierarchical structure presented in this paper can enhance the population diversity of the MRFO algorithm. To better visualize the population diversity of HMRFO and MRFO, the following equations, taken from [84], are used to calculate it:
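A standard form of this mean-distance-to-centroid measure, consistent with the definitions that follow, is:

\[
\operatorname{Div}(t)=\frac{1}{N} \sum_{i=1}^{N}\left\| X_{i}(t)-\bar{x}(t)\right\| , \qquad \bar{x}(t)=\frac{1}{N} \sum_{i=1}^{N} X_{i}(t)
\]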
where N is the population size and \(\bar{x}\) is the mean point of the population.
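Assuming the mean-distance-to-centroid form of this measure, it can be computed directly (an illustrative sketch; the function name is not from the paper):

```python
import math

def population_diversity(pop):
    """Mean Euclidean distance of individuals from the population mean point."""
    n, dim = len(pop), len(pop[0])
    mean = [sum(x[d] for x in pop) / n for d in range(dim)]
    return sum(math.sqrt(sum((x[d] - mean[d]) ** 2 for d in range(dim)))
               for x in pop) / n

# two points symmetric about their mean (1, 0): each lies at distance 1
div = population_diversity([[0.0, 0.0], [2.0, 0.0]])  # 1.0
```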
To provide a clear understanding of how the proposed method compares with existing methods in terms of diversity, we also included the latest variant of MRFO, fractional-order Caputo manta ray foraging optimization (FCMRFO) [85], and evaluated the diversity changes of unimodal function F3, multimodal function F4, hybrid function F16, and composition function F29. Figure 9 shows the population diversity of HMRFO, MRFO and FCMRFO on these four functions with 30 dimensions. The figure indicates that HMRFO has a higher population diversity than MRFO and FCMRFO on F3, F16, and F29, suggesting that the proposed method can effectively improve diversity. On F4, at the beginning of the iteration, the algorithm focuses on exploration, resulting in higher population diversity for HMRFO than MRFO and FCMRFO. In the late stage of the iteration, the algorithm focuses on exploitation, leading to lower population diversity of HMRFO than FCMRFO, but still higher than that of MRFO. Thus, HMRFO can perform effective search and avoid being trapped in local optima.
6 Conclusion and Future Work
In this paper, we propose a hierarchical manta ray foraging optimization with weighted fitness-distance balance selection (HMRFO) by combining a hierarchical structure with the latest improved selection method. The proposed method aims to increase population diversity to solve the problem of MRFO with premature convergence and trapping in local optima. To verify the performance of HMRFO, we compare it with MRFO and six state-of-the-art algorithms on IEEE CEC2017 functions. The experimental results demonstrate that HMRFO has superior performance and can find better solutions, escape local optima, and converge fast, indicating that the proposed method effectively increases population diversity. In terms of algorithm complexity, HMRFO and MRFO have similar computational time, suggesting that the added improved method has little effect on the algorithm complexity of MRFO. We also apply HMRFO and MRFO to optimize large-dimensional real-world problems and find that HMRFO has good practicality, especially for large-dimensional problems, as it takes less time and has low computation cost. This is valuable information for studying large-dimensional optimization problems. Finally, the curves of population diversity of HMRFO and MRFO on four different types of problems from IEEE CEC2017 further confirm that the improved method in this paper can successfully enrich population diversity.
After conducting experiments, we have discovered the following two advantages of HMRFO:
(1) The incorporation of FW and the hierarchical structure significantly enhances the population diversity of HMRFO. This enables the algorithm to escape local optima, avoid premature convergence, and improve solution quality by considering different solutions in the search space.
(2) HMRFO has a fast convergence rate and low computational complexity, making it a cost-effective approach for optimizing large-dimensional problems.
In future work, the following studies could be considered:
(1) The number of individuals per layer in this study is fixed, which may be a limitation since different problems may favor different layer sizes. In the future, the hierarchical structure could be further improved by incorporating information interaction between layers; for example, the number of individuals in each layer could be dynamically adjusted based on an evaluation metric.
(2) The FW selection method used in this paper could be applied to improve other meta-heuristic algorithms.
(3) HMRFO could be applied to tasks such as solar photovoltaic parameter estimation, dendritic neural models, and multi-objective optimization.
Data Availability
Related data and material can be found at https://toyamaailab.github.io.
Abbreviations
MRFO: Manta ray foraging optimization
HMRFO: Hierarchical manta ray foraging optimization with weighted fitness-distance balance selection
EA: Evolution-based algorithms
SI: Swarm-based intelligence
PM: Physics-based methods
HB: Human-based behaviors
OBL: Opposition-based learning
FO: Fractional-order
GBO: Gradient-based optimizer
FW: Fitness-distance balance selection method with functional weight
CEC: Congress on Evolutionary Computation
NFEs: Maximum number of function evaluations
AWDO: Adaptive wind-driven optimization
CMAES: Covariance matrix adaptive evolutionary strategy
PSOGSA: Hybrid algorithm combining particle swarm optimization and the gravitational search algorithm
CPU: Central processing unit
HSP: Hydrothermal scheduling problem
DED: Dynamic economic dispatch problem
LSTPP: Large-scale transmission pricing problem
ELD: Static economic load dispatch problem
FCMRFO: Fractional-order Caputo manta ray foraging optimization
References
Kramer, O.: Genetic Algorithm Essentials, vol. 679. Springer (2017)
Beyer, H.-G., Schwefel, H.-P.: Evolution strategies-a comprehensive introduction. Nat. Comput. 1(1), 3–52 (2002)
Kenneth, V.P.: Differential evolution. In: Zelinka, I., Snášel, V., Abraham, A. (eds.) Handbook of Optimization. Intelligent Systems Reference Library, vol 38. Springer, Berlin, Heidelberg (2013)
Moscato, P., Mendes, A., Berretta, R.: Benchmarking a memetic algorithm for ordering microarray data. Biosystems 88(1), 56–75 (2007)
De Jong, K.: Evolutionary computation: a unified approach. In: Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion, pp. 185–199 (2016)
Passino, K.M.: Bacterial foraging optimization. Int. J. Swarm Intell. Res. 1(1), 1–16 (2010)
Meng, Z., Pan, J.-S.: Monkey king evolution: a new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization. Knowl.-Based Syst. 97, 144–157 (2016)
Uymaz, S.A., Tezel, G., Yel, E.: Artificial algae algorithm (AAA) for nonlinear global optimization. Appl. Soft Comput. 31, 153–171 (2015)
Yang, X.-S., Gandomi, A.H.: Bat algorithm: a novel approach for global engineering optimization. Eng. Comput. 29(5), 464–483 (2012)
Dasgupta, D.: Artificial Immune Systems and their Applications. Springer Science & Business Media (2012)
Zhao, W., Zhang, Z., Wang, L.: Manta ray foraging optimization: an effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 87, 103300 (2020)
Abualigah, L., Elaziz, M.A., Sumari, P., Geem, Z.W., Gandomi, A.H.: Reptile search algorithm (RSA): a nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 191, 116158 (2022)
Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014)
Wang, D., Tan, D., Liu, L.: Particle swarm optimization algorithm: an overview. Soft. Comput. 22(2), 387–408 (2018)
Shi, Y.: Brain storm optimization algorithm. In: International Conference in Swarm Intelligence, pp. 303–309. Springer (2011)
Shadravan, S., Naji, H.R., Bardsiri, V.K.: The sailfish optimizer: a novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 80, 20–34 (2019)
Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016)
Dorigo, M., Stützle, T.: Ant colony optimization: overview and recent advances. In: Gendreau, M., Potvin, J.Y. (eds.) Handbook of Metaheuristics. International Series in Operations Research & Management Science, vol 146. Springer, Boston, MA (2019)
Dhiman, G., Kumar, V.: Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 114, 48–70 (2017)
Yang, X.-S.: Firefly algorithm, levy flights and global optimization. In: Bramer, M., Ellis, R., Petridis, M. (eds.) Research and Development in Intelligent Systems XXVI. Springer, London (2010)
Połap, D., Woźniak, M.: Red fox optimization algorithm. Expert Syst. Appl. 166, 114107 (2021)
Abualigah, L., Shehab, M., Alshinwan, M., Alabool, H.: Salp swarm algorithm: a comprehensive survey. Neural Comput. Appl. 32(15), 11195–11215 (2020)
Cuevas, E., Cienfuegos, M., Zaldívar, D., Pérez-Cisneros, M.: A swarm optimization algorithm inspired in the behavior of the social-spider. Expert Syst. Appl. 40(16), 6374–6384 (2013)
Fausto, F., Cuevas, E., Valdivia, A., González, A.: A global optimization algorithm inspired in the behavior of selfish herds. Biosystems 160, 39–55 (2017)
Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
Bayraktar, Z., Komurcu, M., Werner, D.H.: Wind driven optimization (WDO): a novel nature-inspired optimization algorithm and its application to electromagnetics. In: 2010 IEEE Antennas and Propagation Society International Symposium, pp. 1–4. IEEE, (2010)
Kaveh, A., Bakhshpoori, T.: Water evaporation optimization: a novel physically inspired optimization algorithm. Comput. Struct. 167, 69–85 (2016)
Zhao, W., Wang, L., Zhang, Z.: A novel atom search optimization for dispersion coefficient estimation in groundwater. Futur. Gener. Comput. Syst. 91, 601–610 (2019)
Hashim, F.A., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W., Mirjalili, S.: Henry gas solubility optimization: a novel physics-based algorithm. Futur. Gener. Comput. Syst. 101, 646–667 (2019)
Doğan, B., Ölmez, T.: A new metaheuristic for numerical function optimization: vortex search algorithm. Inf. Sci. 293, 125–145 (2015)
Venkata Rao, R., Savsani, V.J., Balic, J.: Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems. Eng. Optim. 44(12), 1447–1462 (2012)
Gajawada, S.: Entrepreneur: artificial human optimization. Trans. Mach. Learn. Artif. Intell. 4(6), 64–70 (2016)
Moosavi, S.H.S., Bardsiri, V.K.: Poor and rich optimization algorithm: a new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 86, 165–181 (2019)
Huan, T.T., Kulkarni, A.J., Kanesan, J., Huang, C.J., Abraham, A.: Ideology algorithm: a socio-inspired optimization methodology. Neural Comput. Appl. 28(1), 845–876 (2017)
Punnathanam, V., Kotecha, P.: Yin-yang-pair optimization: a novel lightweight optimization algorithm. Eng. Appl. Artif. Intell. 54, 62–79 (2016)
Philip Chen, C.L., Zhang, T., Chen, L., Tam, S.C.: I-ching divination evolutionary algorithm and its convergence analysis. IEEE Trans. Cybern. 47(1), 2–13 (2017)
Ezugwu, A.E., Shukla, A.K., Nath, R., Akinyelu, A.A., Agushaka, J.O., Chiroma, H., Muhuri, P.K.: Metaheuristics: a comprehensive overview and classification along with bibliometric analysis. Artif. Intell. Rev. 54(6), 4237–4316 (2021)
Tang, J., Liu, G., Pan, Q.: A review on representative swarm intelligence algorithms for solving optimization problems: applications and trends. IEEE/CAA J. Autom. Sin. 8(10), 1627–1643 (2021)
Hare, W., Nutini, J., Tesfamariam, S.: A survey of non-gradient optimization methods in structural engineering. Adv. Eng. Softw. 59, 19–28 (2013)
Abualigah, L., Diabat, A.: Advances in sine cosine algorithm: A comprehensive survey. Artif. Intell. Rev. 54(4), 2567–2608 (2021)
Fonseca, C.M., Fleming, P.J.: An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 3(1), 1–16 (1995)
Krause, J., Cordeiro, J., Parpinelli, R.S., Lopes, H.S.: A survey of swarm algorithms applied to discrete optimization problems. In: Swarm Intelligence and Bio-Inspired Computation, pp. 169–191. Elsevier (2013)
Biswas, A., Mishra, K.K., Tiwari, S., Misra, A.K.: Physics-inspired optimization algorithms: a survey. J. Optim. 2013, Article ID 438152. https://doi.org/10.1155/2013/438152
Kosorukoff, A.: Human based genetic algorithm. In: 2001 IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace (Cat.No.01CH37236), volume 5, pp. 3464–3469. IEEE (2001)
Eiben, A.E., Smith, J.: From evolutionary computation to the evolution of things. Nature 521(7553), 476–482 (2015)
Boussaïd, I., Lepagnot, J., Siarry, P.: A survey on optimization metaheuristics. Inf. Sci. 237, 82–117 (2013)
Yang, Y., Lei, Z., Wang, Y., Zhang, T., Peng, C., Gao, S.: Improving dendritic neuron model with dynamic scale-free network-based differential evolution. IEEE/CAA J. Autom. Sin. 9(1), 99–110 (2022)
Hong, W.-J., Yang, P., Tang, K.: Evolutionary computation for large-scale multi-objective optimization: a decade of progresses. Int. J. Autom. Comput. 18, 155–169 (2021)
Jiang, Y., Luo, Q., Wei, Y., Abualigah, L., Zhou, Y.: An efficient binary gradient-based optimizer for feature selection. Math. Biosci. Eng. 18(4), 3813–3854 (2021)
Zhao, Z., Liu, S., Zhou, M.C., Abusorrah, A.: Dual-objective mixed integer linear program and memetic algorithm for an industrial group scheduling problem. IEEE/CAA J. Autom. Sin. 8(6), 1199–1209 (2020)
Yousri, D., Elaziz, M.A., Abualigah, L., Oliva, D., Al-qaness, M.A.A., Ewees, A.A.: COVID-19 X-ray images classification based on enhanced fractional-order cuckoo search optimizer using heavy-tailed distributions. Appl. Soft Comput. 101, 107052 (2021)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nat. Mach. Intell. 3(1), 9–15 (2021)
Ji, J., Gao, S., Cheng, J., Tang, Z., Todo, Y.: An approximate logic neuron model with a dendritic structure. Neurocomputing 173, 1775–1783 (2016)
Cuevas, E., Gálvez, J., Toski, M., Avila, K.: Evolutionary-mean shift algorithm for dynamic multimodal function optimization. Appl. Soft Comput. 113, 107880 (2021)
Rodríguez, A., Camarena, O., Cuevas, E., Aranguren, I., Valdivia-G, A., Morales-Castañeda, B., Zaldívar, D., Pérez-Cisneros, M.: Group-based synchronous-asynchronous grey wolf optimizer. Appl. Math. Model. 93, 226–243 (2021)
Díaz, P., Pérez-Cisneros, M., Cuevas, E., Avalos, O., Gálvez, J., Hinojosa, S., Zaldivar, D.: An improved crow search algorithm applied to energy problems. Energies 11(3), 571 (2018)
Izci, D., Ekinci, S., Eker, E., Kayri, M.: Improved manta ray foraging optimization using opposition-based learning for optimization problems. In: 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp 1–6. IEEE (2020)
Feng, J., Luo, X., Gao, M., Abbas, A., Yi-Peng, X., Pouramini, S.: Minimization of energy consumption by building shape optimization using an improved manta-ray foraging optimization algorithm. Energy Rep. 7, 1068–1078 (2021)
Sheng, B., Pan, T., Luo, Y., Jermsittiparsert, K.: System identification of the PEMFCs based on balanced manta-ray foraging optimization algorithm. Energy Rep. 6, 2887–2896 (2020)
Micev, M., Ćalasan, M., Ali, Z.M., Hasanien, H.M., Abdel Aleem, S.H.E.: Optimal design of automatic voltage regulation controller using hybrid simulated annealing - manta ray foraging optimization algorithm. Ain Shams Eng. J. 12(1), 641–657 (2021)
Elaziz, M.A., Yousri, D., Al-qaness, M.A.A., AbdelAty, A.M., Radwan, A.G., Ewees, A.A.: A Grunwald-Letnikov based manta ray foraging optimizer for global optimization and image segmentation. Eng. Appl. Artif. Intell. 98, 104105 (2021)
Hassan, M.H., Houssein, E.H., Mahdy, M.A., Kamel, S.: An improved manta ray foraging optimizer for cost-effective emission dispatch problems. Eng. Appl. Artif. Intell. 100, 104155 (2021)
Gang, H., Li, M., Wang, X., Wei, G., Chang, C.-T.: An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl.-Based Syst. 240, 108071 (2022)
Kahraman, H.T., Aras, S., Gedikli, E.: Fitness-distance balance (FDB): a new selection method for meta-heuristic search algorithms. Knowl.-Based Syst. 190, 105169 (2020)
Alba, E., Dorronsoro, B.: The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. IEEE Trans. Evol. Comput. 9(2), 126–142 (2005)
Lynn, N., Suganthan, P.N.: Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 24, 11–24 (2015)
Wang, Y., Gao, S., Yang, Yu., Cai, Z., Wang, Z.: A gravitational search algorithm with hierarchy and distributed framework. Knowl.-Based Syst. 218, 106877 (2021)
Wang, K., Tao, S., Wang, R.-L., Todo, Y., Gao, S.: Fitness-distance balance with functional weights: a new selection method for evolutionary algorithms. IEICE Trans. Inform. Syst. E–104D(10), 1789–1792 (2021)
Aras, S., Gedikli, E., Kahraman, H.T.: A novel stochastic fractal search algorithm with fitness-distance balance for global numerical optimization. Swarm Evol. Comput. 61, 100821 (2021)
Bayraktar, Z., Komurcu, M.: Adaptive wind driven optimization. In: Proceedings of the 9th EAI International Conference on Bio-Inspired Information and Communications Technologies (Formerly BIONETICS), pp. 124–127. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering) (2016)
Tang, Z., Tao, S., Wang, K., Bo, L., Todo, Y., Gao, S.: Chaotic wind driven optimization with fitness distance balance strategy. Int. J. Comput. Intell. Syst. 15(1), 46 (2022)
Zhao, W., Zhang, H., Zhang, Z., Zhang, K., Wang, L.: Parameters tuning of fractional-order proportional integral derivative in water turbine governing system using an effective SDO with enhanced fitness-distance balance and adaptive local search. Water 14(19), 3035 (2022)
Azadifar, S., Rostami, M., Berahmand, K., Moradi, P., Oussalah, M.: Graph-based relevancy-redundancy gene selection method for cancer diagnosis. Comput. Biol. Med. 147, 105766 (2022)
Rostami, M., Oussalah, M., Farrahi, V.: A novel time-aware food recommender-system based on deep learning and graph clustering. IEEE Access 10, 52508–52524 (2022)
Abualigah, L., Diabat, A., Geem, Z.W.: A comprehensive survey of the harmony search algorithm in clustering applications. Appl. Sci. 10(11), 3827 (2020)
Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)
Awad, N.H., Ali, M.Z., Liang, J.J., Qu, B.Y., Suganthan, P.N.: Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Technical Report (2016)
García, S., Molina, D., Lozano, M., Herrera, F.: A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’2005 special session on real parameter optimization. J. Heuristics 15(6), 617 (2008)
Luengo, J., García, S., Herrera, F.: A study on the use of statistical tests for experimentation with neural networks: Analysis of parametric test conditions and non-parametric tests. Expert Syst. Appl. 36(4), 7798–7808 (2009)
García, S., Fernández, A., Luengo, J., Herrera, F.: Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 180(10), 2044–2064 (2010)
Carrasco, J., García, S., Rueda, M.M., Das, S., Herrera, F.: Recent trends in the use of statistical tests for comparing swarm and evolutionary computing algorithms: Practical guidelines and a critical review. Swarm Evol. Comput. 54, 100665 (2020)
Mirjalili, S., Mohd Hashim, S.Z.: A new hybrid PSOGSA algorithm for function optimization. In: 2010 International Conference on Computer and Information Application, pp. 374–377. IEEE (2010)
Das, S., Suganthan, P.N.: Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems. Technical Report, Jadavpur University, Kolkata and Nanyang Technological University, pp. 341–359 (2010)
Wang, K., Wang, Y., Tao, S., Cai, Z., Lei, Z., Gao, S.: Spherical search algorithm with adaptive population control for global continuous optimization problems. Appl. Soft Comput. 132, 109845 (2023)
Yousri, D., AbdelAty, A.M., Al-qaness, M.A.A., Ewees, A.A., Radwan, A.G., Elaziz, M.A.: Discrete fractional-order Caputo method to overcome trapping in local optima: Manta ray foraging optimizer as a case study. Expert Syst. Appl. 192, 116355 (2022)
Acknowledgements
This work was mainly supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643, Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145, and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.
Funding
This research was partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant JP22H03643, Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) under Grant JPMJSP2145, and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation under Grant JPMJFS2115.
Author information
Contributions
ZT: methodology, software, writing—original draft preparation. KW: methodology, software. ST: methodology, Software. YT: methodology, software, writing—reviewing and editing. RW: writing—reviewing and editing. SG: conceptualization, methodology, software, supervision, writing—review and editing. All authors read and approved the final manuscript.
Ethics declarations
Conflict of Interest
The authors declare no conflict of interest.
Ethics Approval and Consent to Participate
Not applicable.
Consent for Publication
Not applicable.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Tang, Z., Wang, K., Tao, S. et al. Hierarchical Manta Ray Foraging Optimization with Weighted Fitness-Distance Balance Selection. Int J Comput Intell Syst 16, 114 (2023). https://doi.org/10.1007/s44196-023-00289-4