1 Introduction

1.1 Background and Literature Review

With the progress of society, people making decisions increasingly seek the best method to keep a system in its optimal state. Metaheuristic algorithms have emerged in recent years as a general alternative for real-life problems in fields such as economics, industrial manufacturing, physical science research, civil engineering design, manufacturing systems, and communication networks. When a problem's dimension increases substantially, traditional mathematical or physical optimization methods cannot solve it [1] because of the complex relationships among high-dimensional variables. Nature-inspired optimization algorithms therefore came into being, broadly comprising evolutionary algorithms and swarm intelligence algorithms. With the growth of biology and AI, swarm intelligence algorithms have developed rapidly. They explore the solution space randomly and update the best-known location at every iteration of the agents within the search range. Because of the effectiveness and convenience of swarm algorithms [2,3,4,5,6,7], many researchers have devoted themselves to their study.

Most metaheuristics are inspired by the behavior and physical phenomena of biological groups in nature. For example, the ant colony optimization algorithm (ACO) [8] is inspired by ants' foraging strategy: ants always search for the optimal route to food. Particle swarm optimization (PSO) [9] is inspired by the swarming of insects, animal herds, birds, etc. The basic idea of the grey wolf optimizer (GWO) [10] is the leadership hierarchy and hunting rules of grey wolves. Moth-flame optimization (MFO) [11] mimics the transverse-orientation navigation mechanism of moths. The multi-verse optimizer (MVO) [12] is inspired by concepts from the cosmology of multiple universes. The main inspiration of the salp swarm algorithm (SSA) [13] is the salp-chain phenomenon, which helps salps achieve better sailing and foraging in the ocean. Metaheuristics have been applied in many domains [14,15,16,17,18,19,20,21,22,23,24], such as solar photovoltaic models, solid oxide fuel cells (SOFC), hydropower production, truss optimization, and feature selection, and obtain excellent performance because of their efficiency, simplicity, and ability to avoid local optima. However, no algorithm can claim to tackle all optimization problems or to exhibit outstanding performance on every issue. Hence, continued improvement of optimization algorithms is essential to tackle more problems.

The Marine Predators Algorithm (MPA) [25] shares the advantages of other metaheuristics, such as efficiency, flexibility, simplicity, and excellent robustness, so it is effective for real-world problems. For example, Dinh PH et al. [26] used the MPA to generate the optimal parameters for synthesizing low-frequency components; the results show that MPA performs excellently in improving the quality of fused images. To settle the optimal power flow (OPF) problem, Islam, M.Z. et al. [27] adopted the MPA and designed an objective function involving fuel cost, power loss, etc., validating its outstanding performance on the IEEE 30-bus test system.

Besides, many scholars have devoted themselves to MPA variants, which perform outstandingly on global optimization problems. Yousri, D et al. [28] applied concept learning to MPA and then used the improved MPA for feature selection problems. Elaziz, MA et al. [29] used differential evolution operators to improve the exploration stage of the original MPA and applied it to photovoltaic models; the stable and excellent results show its strength in PV modeling. Yousri, D et al. [30] proposed a new metaheuristic, CLMPA, combining comprehensive learning with MPA, and applied it to identify the best parameters of a supercapacitor equivalent circuit. Sahlol AT et al. [31] introduced fractional order into MPA to put forward an improved algorithm, FO-MPA, for the hybrid classification of COVID-19 images with improved classification accuracy. Yousri, D et al. [32] combined comprehensive learning and multi-swarm strategies with MPA to generate an enhanced algorithm, CLDMMPA, and used it to solve solid oxide fuel cell (SOFC) models. To address multilevel-threshold image segmentation, Abd Elaziz, M et al. [33] proposed a quantum marine predators algorithm, which obtains excellent results in terms of convergence and segmentation quality. Aiming at losses and voltage deviations in distribution systems, Ahmad Eid et al. [34] introduced the MPA into distribution systems and obtained good results. To better handle the Joint Regularized Semi-Supervised Extreme Learning Machine (JRSSELM) problem, Yang, WB et al. [35] introduced a multi-strategy MPA to optimize the parameter extraction process, and MSMPA-JRSSELM performs well in logging oil-formation identification.

1.2 Novelty and Contributions

From the literature, MPA-based variants find it difficult to balance exploitation and exploration effectively, and most of them are built only on sharing historical experience; the ability to jump out of local optima still needs to be improved. Although the above variants have improved MPA's exploration ability, they still easily fall into local optima when tackling some practical problems [81], and there remains room to improve their convergence, consistency, and reliability.

Aiming to address these shortcomings of MPA, an improved MPA is established based on two main mechanisms: the outpost mechanism [36] and a differential evolution [37] mutation with simulated annealing [38] (DE-SA) mechanism. The outpost mechanism enhances the algorithm's convergence efficiency and accuracy, while the DE-SA mechanism increases solution diversity. Since the original MPA initializes its population randomly, which may cause an uneven distribution of solutions, the tent chaotic map [40] is introduced in the initialization stage: the better N individuals are selected from 2N individuals for iteration, which increases the diversity of the initial population and drives it toward the optimal direction. After the tent chaotic map processes the initial population, the outpost mechanism, consisting of greedy selection and Gaussian mutation [36], is introduced. The outpost individuals perform exploration to decide whether the larger group should move forward; this mechanism significantly improves the convergence efficiency and accuracy of MPA. Afterward, the DE-SA mechanism generates a new solution and decides whether to accept it, greatly increasing solution diversity and making the search less prone to falling into local optima. The modified MPA is tested on thirty CEC2014 benchmark problems and, for real-world problems, on engineering design problems and photovoltaic models.

This paper’s major contributions are as follows:

  1. An improved and efficient Marine Predators Algorithm variant, ODMPA, is proposed.

  2. The outpost mechanism and the DE-SA strategy are introduced to remedy the premature convergence and slow convergence accuracy of MPA.

  3. ODMPA is validated on the IEEE CEC2014 benchmark functions and, by statistical analysis, ranks first among conventional and advanced metaheuristics.

  4. ODMPA is applied to three classical engineering design problems and to photovoltaic models, where it obtains the lowest results compared with the other algorithms.

The study is organized as follows. Section 2 briefly discusses the background and conception of MPA. Section 3 presents the outpost mechanism, consisting of Gaussian mutation and greedy selection, and the differential evolution mutation mechanism; it is worth noting that simulated annealing enriches the DE mechanism. This section also describes in detail how the strategies are added to the original MPA. Section 4 discusses the performance of ODMPA on the IEEE CEC2014 benchmark functions, engineering design problems, and photovoltaic models. The last section presents the conclusions and future work.

2 Overview of the Marine Predators Algorithm (MPA)

The Marine Predators Algorithm (MPA) involved in this study is a recent metaheuristic optimization algorithm. Its inspiration mainly comes from the different motion modes of marine predators under different conditions. When predators encounter prey in nature, Levy and Brownian motions are suitable encounter strategies. According to the information in the predation process and the velocity ratio and motion pattern between predator and prey, the predator adopts Levy or Brownian motion to achieve the best predation strategy for capturing the prey.

Like other nature-inspired metaheuristics, the initial solutions of MPA are produced randomly within the search range, i.e., between the lower and upper bounds of the solution space. The number of search agents is predefined, and each individual represents one search agent:

$$\begin{aligned} {X_0} = {X_{\min }} + rand({X_{\max }} - {X_{\min }}) \end{aligned}$$
(1)

where \({X_{\min }}\) and \({X_{\max }}\) represent the search space’s lower and upper bound, and rand is a random number belonging to [0,1].
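Eq. (1) can be sketched in a few lines of NumPy (an illustrative sketch; function and variable names are our own, not from the paper):

```python
import numpy as np

def initialize_population(n, d, x_min, x_max, rng=None):
    """Random initialization per Eq. (1): X0 = Xmin + rand * (Xmax - Xmin)."""
    rng = np.random.default_rng() if rng is None else rng
    return x_min + rng.random((n, d)) * (x_max - x_min)

# e.g. 5 search agents in a 3-dimensional space bounded by [-100, 100]
prey = initialize_population(5, 3, -100.0, 100.0)
```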

MPA considers the top predator to have the best fitness value and the most potential, so the best solution is nominated to construct the Elite matrix. The Elite matrix is built from the top predator and also has d dimensions. Both the Elite and Prey matrices participate in each iteration of the predator's position migration.

$$\begin{aligned} Elite = \left[ \begin{array}{l} X_{1,1}^t\mathrm{{ }}X_{1,2}^t\mathrm{{ }}...\mathrm{{ }}X_{1,d}^t\\ X_{2,1}^t\mathrm{{ }}X_{2,2}^t\mathrm{{ }}...\mathrm{{ }}X_{2,d}^t\\ \vdots \\ \vdots \\ X_{n,1}^t\mathrm{{ }}X_{n,2}^t\mathrm{{ }}...\mathrm{{ }}X_{n,d}^t \end{array} \right] Prey = \left[ \begin{array}{l} {X_{1,1}}\mathrm{{ }}{X_{1,2}}\mathrm{{ }}...\mathrm{{ }}{X_{1,d}}\\ {X_{2,1}}\mathrm{{ }}{X_{2,2}}\mathrm{{ }}...\mathrm{{ }}{X_{2,d}}\\ \vdots \\ \vdots \\ {X_{n,1}}\mathrm{{ }}{X_{n,2}}\mathrm{{ }}...\mathrm{{ }}{X_{n,d}} \end{array} \right] \end{aligned}$$
(2)

where n represents the population size, t represents the current iteration, and d represents the problem dimension. \({X^t}\) denotes the best predator, and Elite consists of n duplications of the best solution. \({X_{i,j}}\) denotes the \({i_{th}}\) prey’s \({j_{th}}\) dimension. If the top predator is updated, the Elite will be refreshed by the substituted solution. MPA initializes the Prey matrix and chooses the optimal agent as the top predator to generate the Elite matrix.
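Constructing the Elite matrix of Eq. (2) amounts to replicating the best agent n times; a minimal sketch, assuming minimization and illustrative names:

```python
import numpy as np

def build_elite(prey, fitness):
    """Form the Elite matrix (Eq. 2) by tiling the best agent n times.
    `fitness` holds one objective value per row of `prey` (minimization)."""
    best = prey[np.argmin(fitness)]
    return np.tile(best, (prey.shape[0], 1))

pop = np.array([[1.0, 2.0], [3.0, 4.0], [0.0, 1.0]])
fit = np.array([5.0, 25.0, 1.0])   # third agent is the top predator
elite = build_elite(pop, fit)
```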

MPA simulates the widespread foraging strategy of ocean predators by considering three different velocity ratios between predators and prey. In the following phases, v denotes the velocity ratio of prey to predator. The definitions and formulas of the three stages are given below:

Phase 1: High Velocity

In this circumstance the prey swims more rapidly than the predator (\(v \ge 10\)), so the predator's wisest foraging strategy is to remain stationary. The prey and predator are the searcher and target, respectively. That is, while \(Iter < 1/3 Max\_ Iter\), the update rule is:

$$\begin{aligned} \begin{array}{l} {\overrightarrow{step} _i} = \overrightarrow{{R_B}} \otimes (\overrightarrow{Elit{e_i}} - \overrightarrow{{R_B}} \otimes {\overrightarrow{Prey} _i})\\ \overrightarrow{Pre{y_i}} = \overrightarrow{Pre{y_i}} + P.\overrightarrow{R} \otimes {\overrightarrow{step} _i} \qquad i = 1,\ldots ,n \end{array} \end{aligned}$$
(3)

where \({\overrightarrow{R_B}}\) is a normal distribution vector which denotes the Brownian motion. \({\overrightarrow{R}}\) is a vector of the random numbers between 0 and 1. Iter is the present iteration, and \(Max\_Iter\) is the maximum iteration number. \(P=0.5\) is a constant number.
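The Phase 1 update of Eq. (3) can be sketched as follows (illustrative NumPy; the element-wise product \(\otimes\) becomes `*`):

```python
import numpy as np

def phase1_update(prey, elite, P=0.5, rng=None):
    """High-velocity phase (Eq. 3): Brownian steps around the Elite matrix."""
    rng = np.random.default_rng() if rng is None else rng
    RB = rng.standard_normal(prey.shape)   # Brownian-motion vector R_B
    R = rng.random(prey.shape)             # uniform random vector in [0, 1]
    step = RB * (elite - RB * prey)
    return prey + P * R * step

rng = np.random.default_rng(0)
prey = rng.random((4, 2))
elite = np.tile(prey[0], (4, 1))           # pretend agent 0 is the top predator
new_prey = phase1_update(prey, elite, rng=rng)
```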

Phase 2: Unit Velocity

In this phase the prey and predator swim at essentially the same velocity (\(v \approx 1\)), mimicking a situation in which the trail of the prey conducts the movement of both. The roles of predator and prey are not fixed; each can be the searcher or the target. Exploration and exploitation are equally important here, so the population is divided into two halves. While \(1/3Max\_Iter< Iter < 2/3Max\_Iter\), the formula for the first half is:

$$\begin{aligned} \begin{array}{l} {\overrightarrow{step} _i} = \overrightarrow{{R_L}} \otimes (\overrightarrow{Elit{e_i}} - \overrightarrow{{R_L}} \otimes {\overrightarrow{Prey} _i})\\ \overrightarrow{Pre{y_i}} = \overrightarrow{Pre{y_i}} + P.\overrightarrow{R} \otimes {\overrightarrow{step} _i} \qquad i = 1,\ldots ,n/2 \end{array} \end{aligned}$$
(4)

where \(\overrightarrow{{R_L}} \) is a Levy movement vector, and \(\overrightarrow{{R_L}} \otimes \overrightarrow{Prey} \) presents the prey motion. The formula of the second half:

$$\begin{aligned} \begin{array}{l} \overrightarrow{ste{p_i}} = \overrightarrow{{R_B}} \otimes (\overrightarrow{{R_B}} \otimes \overrightarrow{Elit{e_i}} - {\overrightarrow{Prey} _i}) \\ \overrightarrow{Pre{y_i}} = \overrightarrow{Elit{e_i}} + P.CF \otimes \overrightarrow{ste{p_i}} \qquad i = n/2,\ldots ,n \end{array} \end{aligned}$$
(5)

where \(\overrightarrow{{R_B}} \otimes \overrightarrow{Elit{e_i}} \) represents the movement of the predator, and CF is the step-size control parameter for the predator's movement, defined as follows:

$$\begin{aligned} CF = {\left( 1 - \frac{{Iter}}{{Max\_Iter}}\right) ^{(\frac{{2 \cdot Iter}}{{Max\_Iter}})}} \end{aligned}$$
(6)
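CF from Eq. (6) is a one-liner; note that it decays from 1 at the first iteration to 0 at the last:

```python
def cf(iter_, max_iter):
    """Adaptive step-size control parameter of Eq. (6)."""
    return (1 - iter_ / max_iter) ** (2 * iter_ / max_iter)
```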

Phase 3: Low Velocity

In the last phase of the iteration the predator swims more rapidly than the prey (\(v = 0.1\)). The prey is the target, so the predator's wisest foraging strategy is Levy movement. While \(2/3Max\_Iter< Iter < Max\_Iter\), this phase is presented as:

$$\begin{aligned} \begin{array}{l} \overrightarrow{ste{p_i}} = \overrightarrow{{R_L}} \otimes (\overrightarrow{{R_L}} \otimes \overrightarrow{Elit{e_i}} - {\overrightarrow{Prey} _i}) \\ \overrightarrow{Pre{y_i}} = \overrightarrow{Elit{e_i}} + P.CF \otimes \overrightarrow{ste{p_i}} \qquad i = 1,\ldots ,n \end{array} \end{aligned}$$
(7)

where \(\overrightarrow{{R_L}} \otimes \overrightarrow{Elit{e_i}}\) represents the Levy movement of the predator.

Besides the Levy and Brownian movements in the three phases, another influential factor is the environmental effect of fish aggregating devices (FADs). MPA uses long jumps to avoid getting stuck in sub-optimal solutions, and the jumping mode is formulated as follows:

$$\begin{aligned} \overrightarrow{Pre{y_i}} = \left\{ \begin{array}{l} {\overrightarrow{Prey} _i} + CF\left[ {{{\overrightarrow{X} }_{\min }} + \overrightarrow{R} \otimes ({{\overrightarrow{X} }_{\max }} - {{\overrightarrow{X} }_{\min }})} \right] \\ \quad \otimes \overrightarrow{U} \qquad if \ r \le FADs \\ {\overrightarrow{Prey} _i} + \left[ {FADs(1 - r) + r} \right] \\ \quad \times ({\overrightarrow{Prey} _{r1}} - {\overrightarrow{Prey} _{r2}}) \qquad \ \ \ if \ r > FADs \end{array} \right. \end{aligned}$$
(8)

where r is a random number between 0 and 1, \({\overrightarrow{X} _{\min }}\) and \({\overrightarrow{X} _{\max }}\) represent the lower and upper bound of the search space, and \({\overrightarrow{Prey} _{r1}}\) and \({\overrightarrow{Prey} _{r2}}\) are randomly selected, mutually exclusive search agents in the Prey matrix. \(\overrightarrow{U} \) denotes a random vector whose elements only can be 0 or 1, and FADs = 0.2 is the probability of FADs effect.
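The FADs jump of Eq. (8) might look like the following sketch (illustrative names; we assume, as is conventional for MPA, that the binary vector U is built by thresholding a uniform vector against FADs):

```python
import numpy as np

def fads_effect(prey, x_min, x_max, iter_, max_iter, FADs=0.2, rng=None):
    """FADs long-jump of Eq. (8): with probability FADs, agents take a long
    jump inside the bounds; otherwise they move along a random difference."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = prey.shape
    CF = (1 - iter_ / max_iter) ** (2 * iter_ / max_iter)   # Eq. (6)
    r = rng.random()
    if r <= FADs:
        U = (rng.random((n, d)) < FADs).astype(float)       # binary vector U
        R = rng.random((n, d))
        return prey + CF * (x_min + R * (x_max - x_min)) * U
    idx1, idx2 = rng.permutation(n), rng.permutation(n)     # random agents r1, r2
    return prey + (FADs * (1 - r) + r) * (prey[idx1] - prey[idx2])

rng = np.random.default_rng(0)
prey = rng.uniform(-10, 10, (6, 2))
out = fads_effect(prey, -10.0, 10.0, 100, 1000, rng=rng)
```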

Finally, the marine memory operation is used to update positions. Similar to greedy selection, it preserves the better solution and makes the population iterate in a better direction: after the Prey matrix is updated, each new solution is evaluated to decide whether to update the Elite matrix. Ocean memory greatly improves MPA's performance across iterations. The main process of MPA is displayed in Algorithm 1.

Fig. 1 Flowchart of outpost mechanism

3 Proposed ODMPA

Although the MPA is well known for its advantages of efficient foraging strategy and simple structure, the algorithm easily falls into the sub-optimum for complex optimization problems. In order to avoid the premature phenomenon of the algorithm and improve the diversity of solutions, the outpost mechanism and differential evolution mutation with simulated annealing (DE-SA) mechanism are introduced into the original MPA to propose ODMPA.

3.1 Outpost Mechanism

The outpost mechanism [36] includes the greedy selection and Gaussian mutation phases, and its process is shown in Fig. 1. In the first phase, the prey compares its current fitness value with that of the last iteration. If the current fitness is better, the prey's position is replaced with the current location; otherwise, the previous solution is kept:

$$\begin{aligned}{} & {} \left\{ \begin{array}{l} [u] = min(fitness(Pre{y_{curr}}),fitness(Pre{y_{last}}))\\ Pre{y_i} = Pre{y_u} \end{array} \right. \nonumber \\ {}{} & {} \qquad i = 1,...n \end{aligned}$$
(9)

where u denotes the prey’s location.

In the second phase, the prey's position-search distribution is simulated by a Gaussian distribution. During the foraging process of ocean predators and prey, even after the optimal individual has been found, it continues to seek a better position by conducting random searches near its current location. The Gaussian distribution is defined by,

$$\begin{aligned} f(x) = \frac{1}{{\sigma \sqrt{2\pi } }}{e^{ - \frac{{{{(x - \mu )}^2}}}{{2{\sigma ^2}}}}}, \quad - \infty< x < \infty \end{aligned}$$
(10)

The distribution density of individuals is obtained from the characteristics of the normal distribution; when \(\mu = 0\) and \({\sigma } = 1\), the normal distribution becomes the standard normal distribution. Following the literature [36], the standard normal distribution is adopted here: \({\sigma ^2}\) and \(\mu \) are the variance of the search agents in the solution space and the average value of all preys, respectively, so we also set \({\sigma } = 1\), \(\mu = 0\). In the outpost mechanism, the mutated individual is defined:

$$\begin{aligned} Pre{y_i}' = Pre{y_i} \otimes (1 + Gaus), \qquad i = 1,\ldots ,n \end{aligned}$$
(11)

where, in Eq. (11), \(Pre{y_i}'\) and \(Pre{y_i}\) represent the \({i_{th}}\) prey after and before Gaussian mutation, respectively.
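The two outpost steps, Gaussian mutation per Eq. (11) followed by greedy selection, can be sketched as below (illustrative; `sphere` stands in for the real objective, and vectorized per-row fitness is our convention):

```python
import numpy as np

def sphere(pop):
    """Per-row fitness of a population matrix (minimization)."""
    return np.sum(pop ** 2, axis=1)

def outpost_mutation(prey, fitness_fn, rng=None):
    """Outpost step: Gaussian mutation (Eq. 11) plus greedy selection."""
    rng = np.random.default_rng() if rng is None else rng
    gaus = rng.standard_normal(prey.shape)   # standard normal: mu=0, sigma=1
    mutated = prey * (1 + gaus)              # Eq. (11)
    keep = fitness_fn(mutated) < fitness_fn(prey)   # greedy selection
    return np.where(keep[:, None], mutated, prey)
```

Because of the greedy selection, the returned population's fitness is never worse than the input's, row by row.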

3.2 Differential Evolution Mutation with Simulated Annealing

3.2.1 Differential Evolution (DE)

The differential evolution (DE) algorithm, proposed by Storn and Price in 1997 [37], is a classical and influential global optimization algorithm for solving many optimization problems. Its steps are mutation, crossover, and selection, which are simple and effective. Because of its great power, DE has been successfully applied in many fields.

Like MPA, DE begins by generating an initial \(N \times D\) matrix, and N and D denote the number of the population size and the dimension, respectively. The initialization formula of differential evolution is defined:

$$\begin{aligned} {Z_i} = {Z_{min}} + rand \times ({Z_{max}} - {Z_{min}}) \end{aligned}$$
(12)

where \({Z_{min}}\) and \({Z_{max}}\) are the lower and upper bounds of an individual, \({Z_{i}}\) represents the \({i_{th}}\) individual, and rand is a number randomly generated between 0 and 1. Then, using the mutation operator, the solution is updated to \(M_{_i}^t\) by the following equation:

$$\begin{aligned} M_i^t = Z_{r1}^t + F \times (Z_{_{r2}}^t - Z_{_{r3}}^t) \end{aligned}$$
(13)

where \({Z_{r1}}\), \({Z_{r2}}\), and \({Z_{r3}}\) are distinct individuals selected by three randomly generated indices in the range 1 to n, with \(r1 \ne r2 \ne r3 \ne i\). t represents the current iteration, and F denotes the mutation factor, set to 0.5. DE then generates the solution \(V_i^t\) by introducing the crossover operator CR:

$$\begin{aligned} V_i^t = \left\{ \begin{array}{l} M_i^t \qquad if(rand \le CR)\\ Z_i^t \qquad otherwise \end{array} \right. \end{aligned}$$
(14)

where \(rand \in [0,1]\) and CR is the crossover probability, which determines whether the solution is mutated; it is set to 0.2 following conventional DE. The final selection step keeps whichever of \(Z_i^t\) and \(V_i^t\) has the better fitness:

$$\begin{aligned} Z_i^{t + 1} = \left\{ \begin{array}{l} V_i^t \qquad if(fit(V_i^t) < fit(Z_i^t)) \\ Z_i^t \qquad otherwise \end{array} \right. \end{aligned}$$
(15)
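One DE generation combining Eqs. (13)-(15) can be sketched as follows (illustrative, minimization; `sphere` stands in for the real objective):

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def de_step(pop, fitness_fn, F=0.5, CR=0.2, rng=None):
    """One DE/rand/1 generation: mutation (Eq. 13), crossover (Eq. 14),
    greedy selection (Eq. 15)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        # three distinct random indices, all different from i (r1 != r2 != r3 != i)
        candidates = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])              # Eq. (13)
        trial = np.where(rng.random(d) <= CR, mutant, pop[i])   # Eq. (14)
        if fitness_fn(trial) < fitness_fn(pop[i]):              # Eq. (15)
            new_pop[i] = trial
    return new_pop
```

The selection step guarantees that no individual's fitness worsens from one generation to the next.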

3.2.2 Simulated Annealing (SA)

The simulated annealing (SA) algorithm, proposed by the physicist S. Kirkpatrick et al. [38], is a classical, efficient, and easy-to-implement optimization method. Its inspiration comes from solid annealing. First, the solid is heated to a very high temperature to increase its internal energy; as the temperature continues to grow, the solid turns into a disordered state. Then, as the temperature slowly decreases, the internal particles of the solid become orderly. Eventually, the solid achieves a state of internal-energy equilibrium when the temperature drops to a minimum threshold.

SA simulates the gradual decrease in temperature of a high-temperature solid, aiming to reduce defects and minimize the system energy. SA follows the Metropolis criterion [39]: even when the previous solution's fitness value is better, the current solution still has a certain probability of being accepted. The acceptance probability is given by:

$$\begin{aligned} P = \left\{ \begin{array}{ll} 1 &{} \quad if(fit(pre{y_{new}}) \le fit(prey))\\ \exp \left( \frac{{fit(prey) - fit(pre{y_{new}})}}{T}\right) &{} \quad if(fit(pre{y_{new}}) > fit(prey)) \end{array} \right. \end{aligned}$$
(16)

where \(fit(pre{y_{new}})\) and fit(prey) denote the fitness of the newly generated solution after DE mutation and of the present solution, respectively, and T is the current temperature.
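The Metropolis rule of Eq. (16) is a small function (minimization; T is the current temperature):

```python
import math

def accept_probability(fit_old, fit_new, T):
    """Metropolis acceptance rule of Eq. (16), for minimization."""
    if fit_new <= fit_old:
        return 1.0                              # better solution: always accept
    return math.exp((fit_old - fit_new) / T)    # worse: accept with probability < 1
```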

3.2.3 Process of the DE-SA Mechanism

The DE-SA mechanism combines differential evolution and simulated annealing, inheriting the advantages of both. It not only generates competitive new solutions but also accepts suboptimal solutions with a certain probability, which significantly improves MPA's exploration ability. Figure 2 depicts the detailed process of the DE-SA mechanism within MPA.

Fig. 2 The detailed process of differential evolution mutation with simulated annealing

3.3 Tent Chaotic Map

Besides, ODMPA introduces the tent chaotic map to increase the diversity of the initial population. Chaos is a common phenomenon in nonlinear systems: within a specific range, chaotic variables change with randomness, ergodicity, and regularity, and chaotic mapping operators significantly impact the optimization process. The tent map [40] has better ergodic uniformity and higher optimization efficiency than the logistic map. The tent map is therefore used to perturb the population after initialization, generating a new \(N \times D\) tent matrix; the top N individuals are then selected from the 2N individuals for iteration, increasing the initial population's variety. The tent map is formulated as follows:

$$\begin{aligned} {x_{i + 1}} = \left\{ \begin{array}{l} {x_i}/\alpha \qquad \qquad \qquad \ 0 \le {x_i} \le \alpha \\ (1 - {x_i})/(1 - \alpha ) \qquad \alpha < {x_i} \le 1 \end{array} \right. \end{aligned}$$
(17)

where \(\alpha \in [0,1]\); the value of \(\alpha \) affects the chaotic distribution, which in turn affects the generation of the initial population. Experience shows that taking \(\alpha \) between 0.55 and 0.56 may produce a more even distribution of chaotic values. Therefore, \(\alpha \) values of 0.55, 0.555, and 0.56 (increments of 0.005) were tested, and the best results on the CEC2014 benchmark functions were achieved with \(\alpha = 0.555\).
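The tent map of Eq. (17), plus a tent-based population generator, can be sketched as below (illustrative; the seeding and scaling scheme is our assumption, since the text only specifies the map itself):

```python
import numpy as np

def tent_map(x, alpha=0.555):
    """One tent-map iteration per Eq. (17), for scalar x in [0, 1]."""
    return x / alpha if x <= alpha else (1 - x) / (1 - alpha)

def tent_init(n, d, x_min, x_max, alpha=0.555, rng=None):
    """Chaotic counterpart of random initialization: iterate the tent map
    per dimension and scale the values into [x_min, x_max]."""
    rng = np.random.default_rng() if rng is None else rng
    pop = np.empty((n, d))
    x = rng.random(d)                       # random seed values in [0, 1)
    for i in range(n):
        x = np.where(x <= alpha, x / alpha, (1 - x) / (1 - alpha))
        pop[i] = x_min + x * (x_max - x_min)
    return pop
```

In ODMPA such a tent matrix would be concatenated with the random one and the best N of the 2N individuals kept.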

3.4 The Proposed ODMPA

In this section, the outpost mechanism and DE-SA mechanism are introduced into the conventional MPA to construct an improved MPA, called ODMPA. Figure 3 demonstrates the detailed framework of ODMPA. The improved algorithm contains three main modifications: the tent chaotic map, the outpost mechanism, and the DE-SA mutation. First, the tent map perturbs the population after initialization, creating a new matrix from which the top N individuals are selected; this disturbance benefits the initial population and increases solution diversity. Then, the outpost mechanism and DE-SA mechanism are introduced into MPA in sequence. The outpost mechanism includes two main steps: first, the prey compares its current fitness value with the previous one and updates its position only if moving forward is worthwhile, which greatly enhances convergence speed; second, the prey explores near the best value, a motion simulated by Gaussian mutation. After the outpost mechanism, the most important component, the DE-SA mechanism, is applied. It generates competitive new solutions through mutation, crossover, and selection, and, because of the simulated annealing component, it also accepts suboptimal solutions with a certain probability, which enhances solution diversity and the ability to escape local optima.

Fig. 3 The detailed framework of ODMPA

4 Experiment Results and Analysis

In Sect. 4.1, ODMPA is compared with other algorithms on the CEC2014 [41] benchmark functions, including MPA, classical swarm intelligence algorithms, the latest swarm intelligence algorithms from the last two years, MPA variants, and advanced algorithms. The comparison with classical algorithms and improved MPA variants verifies whether ODMPA is indeed improved; the comparison with advanced algorithms demonstrates its effectiveness. In Sect. 4.2, ODMPA is validated on engineering problems, including WBD [79], PVD [78], and SRD [80]. Moreover, in Sect. 4.3, ODMPA is applied to static and dynamic photovoltaic models. Engineering design problems and photovoltaic models are real-world problems; compared with its counterparts, ODMPA successfully extracts the unknown parameters with minimum error. The results show that ODMPA is highly effective in solving practical problems.

For the CEC2014 benchmark functions, the mean value (Average) and standard deviation (Stdv) of the results were adopted to validate performance, and Table 4 lists the overall results. The Friedman test [42], a nonparametric statistical test, is used to assess the performance of ODMPA and the other swarm algorithms statistically, and the "+/=/−" results for each algorithm are calculated through it. From this overall information, the average ranking value (ARV) can be calculated; ODMPA's ARV is 4.55, and the lowest ARV indicates that ODMPA's performance excels that of its counterparts. All experiments were carried out in MATLAB R2016a on a Core i7 CPU with 8 GB RAM.

Table 1 Detail of the IEEE CEC2014 test functions
Table 2 The parameter settings of the conventional and advanced metaheuristics

4.1 IEEE CEC2014 Benchmark Functions

Benchmark functions are a strong tool [82,83,84,85,86] for evaluating the effectiveness of optimization algorithms. The IEEE CEC2014 benchmark set is a standard test suite divided into four parts, which can test an algorithm's performance well. Table 1 displays the details of the CEC2014 benchmark functions. ODMPA is compared with well-known advanced and original algorithms. The common parameters were set to the same values to ensure fair competition, and all experiments were carried out in the same environment. The number of search agents and the dimension are both set to 30, and \(Max\_Iter\) is 1000, following the settings of most experiments. Each algorithm was run for thirty independent trials to guarantee fairness and reliability. Table 2 displays the particular parameter settings of the conventional and advanced metaheuristics. Table 3 provides the average and standard deviation of the best-so-far solutions found in each run. The counterparts include MPA, classical swarm intelligence algorithms, the latest swarm intelligence algorithms from the last two years, MPA variants, and advanced algorithms; the better results obtained by ODMPA in these comparisons, especially against the MPA variants and advanced algorithms, prove the effectiveness of our improvements to MPA.

  • Differential Evolution Algorithm (DE)

  • Gravitational Search Algorithm (GSA) [43]

  • Multi-Verse Optimizer (MVO)

  • Manta Ray Foraging Optimization (MRFO) [44]

  • Marine Predators Algorithm (MPA)

  • Harris Hawks Optimization (HHO) [45]

  • Pathfinder Algorithm (PFA) [46]

  • Sine Cosine Algorithm (SCA) [47]

  • Slime Mould Algorithm (SMA) [48]

  • Sparrow Search Algorithm (SSA) [49]

  • Grey Wolf Optimizer (GWO)

  • Tunicate Swarm Algorithm (TSA) [50]

  • Salp Swarm Algorithm (SSAsalp)

  • Whale Optimization Algorithm (WOA) [51]

  • COOT Bird Optimization Method (COOT) [52]

  • African Vultures Optimization Algorithm (AVOA) [53]

  • Expanded GWO (Ex-GWO) [54]

  • Modified Self-adaptive Marine Predators Algorithm (MMPA) [55]

  • Enhanced Marine Predators Algorithm (EMPA)

  • Improved Marine Predators Algorithm (IMPA) [56]

Table 3 Results of ODMPA and other algorithms on IEEE CEC2014 functions

Table 3 displays the mean (Average) and standard deviation (Stdv) results. They show that ODMPA obtains the smallest values overall, indicating that ODMPA outperforms the other algorithms in solution precision. The minimum values are bolded in the table. Friedman's rank test fully uses the average fitness values of the algorithms to perform a nonparametric test and produce a combined ranking via the ARV values. The ARV values, calculated with SPSS, are shown in Table 4. ODMPA ranks first with the lowest ARV value, verifying that ODMPA performs best in finding the optimal solution.

Figures 4, 5, and 6 show the convergence curves. The unimodal functions F1–F3 have no sub-optima, so they are suitable for evaluating the elementary performance of ODMPA. The optimal values found by ODMPA on F1, F2, and F3 are closer to the global optima than those of the original MPA. The numerical results show that ODMPA's search ability and convergence speed are vastly enhanced on unimodal functions: the DE-SA mechanism helps generate new feasible solutions within the bounds, the outpost mechanism enhances convergence efficiency, and tent mapping in the initialization phase brings the initial population closer to the optima. On F4 and F7, almost all algorithms' best values are close to the optima, and ODMPA's convergence speed is relatively faster than the original MPA and most other algorithms. On F8 and F9, although ODMPA converges more slowly in the early iterations, it finally reaches the global optimal value as iterations increase. On F12, ODMPA's convergence speed and optimal value are better than those of its counterparts.

F18 and F19 are taken as representative hybrid functions. Their convergence curves show that ODMPA has the strongest capacity to find the best value. The composition functions F23 and F24 exhibit the same behavior. These results prove that ODMPA can obtain excellent results on complex optimization problems.

Table 4 Overall ranks of ODMPA and other algorithms by the Friedman assessment
Fig. 4
The iteration curve of ODMPA on CEC2014 benchmark functions

Fig. 5
The iteration curve of ODMPA on CEC2014 benchmark functions

Fig. 6
The iteration curve of ODMPA on CEC2014 benchmark functions

4.2 ODMPA for Engineering Design Problems

Engineering application problems are also mainstream tests for a proposed algorithm, since they better assess its ability to solve real-world problems. These problems, typically from the engineering domain, must be optimized within a constrained search space. ODMPA is applied to three real-life constrained engineering design problems: the pressure vessel design (PVD) problem [78], the welded beam design (WBD) problem [79], and the speed reducer design (SRD) problem [80]. The mathematical models of engineering problems contain numerous constraints, so choosing a proper approach to handle them is essential. Here, a penalty function is added to the original objective function to obtain an augmented objective function. The work in [57] surveys many types of penalty functions, including segregated GA functions, death penalty functions, static penalty functions, etc. For engineering problems, a simple and practical approach that keeps infeasible solutions out of the search is preferable, so the death penalty function is adopted. The penalty factor weighs the constraint violation and is usually set to a large positive number.
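A minimal sketch of the death-penalty idea follows; the wrapper, toy problem, and penalty constant are illustrative, not the paper's exact implementation:

```python
def death_penalty(objective, constraints, penalty=1e10):
    """Wrap a constrained objective with a death penalty.

    constraints: list of functions g(x) that must satisfy g(x) <= 0.
    Any infeasible candidate receives a huge constant fitness, so the
    search effectively discards it.
    """
    def augmented(x):
        if any(g(x) > 0 for g in constraints):
            return penalty  # infeasible: "killed" by the large factor
        return objective(x)
    return augmented

# Toy example: minimise x^2 subject to x >= 1 (written as 1 - x <= 0).
f = death_penalty(lambda x: x * x, [lambda x: 1.0 - x])
print(f(2.0))  # 4.0 (feasible point, true objective value)
print(f(0.5))  # 1e+10 (infeasible point, rejected)
```

Any of the engineering models below can be passed through such a wrapper before being handed to the optimizer.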

4.2.1 Welded Beam Design Problem

The welded beam design is a continuous engineering optimization problem whose main target is to minimize the cost of the welded beam. The task is to find an optimal set of dimensions: the thickness of the beam (h), the length of the beam (l), the height of the beam (t), and the width of the beam (b). The WBD problem, with its seven constraints, is defined in Eq. (18).

$$\begin{aligned} \begin{array}{l} \overrightarrow{x} = [{x_1},{x_2},{x_3},{x_4}] = [h,l,t,b],\\ \mathrm{{Minimize: }}f(\overrightarrow{x} ) = 1.10471{x_2}x_1^2 + 0.04811{x_3}{x_4}(14 + {x_2}),\\ \mathrm{{Subject to: }}\\ {g_1}(\overrightarrow{x} ) = \tau (\overrightarrow{x} ) - {\tau _{\max }} \le 0\\ {g_2}(\overrightarrow{x} ) = \sigma (\overrightarrow{x} ) - {\sigma _{\max }} \le 0\\ {g_3}(\overrightarrow{x} ) = \delta (\overrightarrow{x} ) - {\delta _{\max }} \le 0\\ {g_4}(\overrightarrow{x} ) = {x_1} - {x_4} \le 0\\ {g_5}(\overrightarrow{x} ) = P - {P_c}(\overrightarrow{x} ) \le 0\\ {g_6}(\overrightarrow{x} ) = 0.125 - {x_1} \le 0\\ {g_7}(\overrightarrow{x} ) = 0.10471x_1^2 + 0.04811{x_3}{x_4}(14 + {x_2}) - 5 \le 0,\\ Variable\mathrm{{ }}range\mathrm{{ }}0.1 {\le } {x_1} {\le } 2,0.1 {\le } {x_2} {\le } 10,0.1 {\le } {x_3} {\le } 10,0.1 {\le } {x_4} {\le } 2,\\ where\mathrm{{ }}\tau (\overrightarrow{x} ) = \sqrt{{{(\tau ')}^2} + 2\tau '\tau ''\frac{{{x_2}}}{{2R}} + {{(\tau '')}^2}} ,\\ \tau ' = \frac{P}{{\sqrt{2} {x_1}{x_2}}},\tau '' = \frac{{MR}}{J},M = P(L + \frac{{{x_2}}}{2}),\\ R = \sqrt{\frac{{x_2^2}}{4} + {{\left( {\frac{{{x_1} + {x_3}}}{2}} \right) }^2}} ,\\ J = 2\left\{ {\sqrt{2} {x_1}{x_2}\left[ {\frac{{x_2^2}}{4} + {{\left( {\frac{{{x_1} + {x_3}}}{2}} \right) }^2}} \right] } \right\} ,\\ \sigma (\overrightarrow{x} ) = \frac{{6PL}}{{{x_4}x_3^2}},\delta (\overrightarrow{x} ) = \frac{{4P{L^3}}}{{E{x_4}x_3^2}},\\ {P_c}(\overrightarrow{x} ) = \frac{{4.013E\sqrt{\frac{{x_3^2x_4^6}}{{36}}} }}{{{L^2}}}\left( {1 - \frac{{{x_3}}}{{2L}}\sqrt{\frac{E}{{4G}}} } \right) ,\\ P = 6000lb,L = 14in.,{\delta _{\max }} = 0.25in.,E = 30 \times {10^6}psi,\\ G = 12 \times {10^6}psi,{\tau _{\max }} = 13600psi,{\sigma _{\max }} = 30000psi. \end{array} \end{aligned}$$
(18)
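The cost and constraint functions of Eq. (18) can be transcribed directly. The sketch below encodes the model only (no optimizer attached); the sample point is arbitrary, and each expression is transcribed as written in Eq. (18):

```python
import math

# Welded beam design model, transcribed from Eq. (18); x = [h, l, t, b].
P, L, E, G = 6000.0, 14.0, 30.0e6, 12.0e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def wbd_cost(x):
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def wbd_constraints(x):
    """Values g1..g7; each must be <= 0 for a feasible design."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2.0) * h * l)
    M = P * (L + l / 2.0)
    R = math.sqrt(l ** 2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * h * l * (l ** 2 / 4.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J
    tau = math.sqrt(tau_p ** 2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp ** 2)
    sigma = 6.0 * P * L / (b * t ** 2)
    delta = 4.0 * P * L ** 3 / (E * b * t ** 2)   # as written in Eq. (18)
    Pc = (4.013 * E * math.sqrt(t ** 2 * b ** 6 / 36.0) / L ** 2
          * (1.0 - t / (2.0 * L) * math.sqrt(E / (4.0 * G))))
    return [tau - TAU_MAX,
            sigma - SIGMA_MAX,
            delta - DELTA_MAX,
            h - b,
            P - Pc,
            0.125 - h,
            0.10471 * h ** 2 + 0.04811 * t * b * (14.0 + l) - 5.0]

x = [0.2, 3.5, 9.0, 0.2]      # arbitrary point inside the variable bounds
print(round(wbd_cost(x), 6))  # 1.670124
```

Combined with a death-penalty wrapper, `wbd_cost` and `wbd_constraints` form the augmented objective the optimizer actually minimizes.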

Many algorithms have been applied to the WBD problem with good results. Comparison results against other well-known traditional and advanced algorithms, including BA [58], RO [59], GSA, IHS [60], hHHO-SCA [61], MTSA [62], and CPSO [63], are displayed in Table 5. The results demonstrate that ODMPA outperforms the other algorithms, with a best cost of 1.7017.

Table 5 WBD problem’s results on different algorithms

4.2.2 Pressure Vessel Design Problem

The main target of this engineering problem is to minimize the total cost of the pressure vessel. The PVD problem has four variables: the thickness of the shell (\({T_s}\)), the thickness of the head (\({T_h}\)), the inner radius (R), and the length of the shell (L). The four constraints and the structural formulation are given by Eq. (19).

$$\begin{aligned}&\overrightarrow{x} = [{x_1},{x_2},{x_3},{x_4}] = [{T_s},{T_h},R,L],\nonumber \\&\mathrm{{Minimize: }}f(\overrightarrow{x} ) = 0.6224{x_1}{x_3}{x_4} {+} 1.7781{x_2}x_3^2 \nonumber \\&\qquad {+} 3.1661x_1^2{x_4} {+} 19.84x_1^2{x_3},\nonumber \\&\mathrm{{Subject to:}}\nonumber \\&{g_1}(\overrightarrow{x} ) = - {x_1} + 0.0193{x_3} \le 0,\nonumber \\&{g_2}(\overrightarrow{x} ) = - {x_2} + 0.00954{x_3} \le 0,\nonumber \\&{g_3}(\overrightarrow{x} ) = - \pi {x_4}x_3^2 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0,\nonumber \\&{g_4}(\overrightarrow{x} ) = {x_4} - 240 \le 0,\nonumber \\&\mathrm{{where}}\nonumber \\&\mathrm{{0}}\mathrm{{.0625}} {\le } {x_1} {\le } \mathrm{{6}}\mathrm{{.1875}},\mathrm{{0}}\mathrm{{.0625}} {\le } {x_2} \nonumber \\&\qquad {\le } \mathrm{{6}}\mathrm{{.1875}},10 {\le } {x_3} {\le } 200,10 {\le } {x_4} {\le } 200. \end{aligned}$$
(19)
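Eq. (19) is simpler to transcribe; the sketch below again encodes only the model (no optimizer), with an arbitrary sample point:

```python
import math

# Pressure vessel design model, transcribed from Eq. (19); x = [Ts, Th, R, L].
def pvd_cost(x):
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def pvd_constraints(x):
    """Values g1..g4; each must be <= 0 for a feasible design."""
    Ts, Th, R, L = x
    return [-Ts + 0.0193 * R,
            -Th + 0.00954 * R,
            -math.pi * L * R ** 2 - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0,
            L - 240.0]

x = [1.0, 1.0, 50.0, 100.0]   # arbitrary point inside the variable bounds
print(round(pvd_cost(x), 2))  # 8865.86
```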

The comparison results of ODMPA against other well-known traditional and advanced algorithms on the PVD problem, including BA, IHS, PSO, GA, ES [64], SGOA [2], and MMPA [55], are shown in Table 6. The results demonstrate that ODMPA outperforms the other algorithms, with a best cost of 5835.5822.

Table 6 PVD problem’s results on different algorithms

4.2.3 Speed Reducer Design Problem

This engineering problem aims to minimize the cost of the speed reducer. The SRD problem has seven variables: the width of the reducer (b), the module of the teeth (m), the number of teeth on the pinion (z), the length of the first shaft (\(l_1\)), the length of the second shaft (\(l_2\)), the diameter of shaft 1 (\(d_1\)), and the diameter of shaft 2 (\(d_2\)). The eleven constraints and the mathematical formulation of SRD are shown in Eq. (20).

$$\begin{aligned}&\overrightarrow{x} = [{x_1},{x_2},{x_3},{x_4},{x_5},{x_6},{x_7}]\nonumber \\&\mathrm{{Minimize: }}f(\overrightarrow{x} ) = 0.7854{x_1}x_2^2(3.3333x_3^2 + 14.9334{x_3} \nonumber \\&\qquad - 43.0934) - 1.508{x_1}(x_6^2 + x_7^2) \nonumber \\&\qquad \mathrm{{ }}+ 7.4777(x_6^3 + x_7^3) + 0.7854({x_4}x_6^2 + {x_5}x_7^2),\nonumber \\&\mathrm{{Subject to:}}\nonumber \\&{g_1}(\overrightarrow{x} ) = \frac{{27}}{{{x_1}x_2^2{x_3}}} - 1 \le 0,\nonumber \\&{g_2}(\overrightarrow{x} ) = \frac{{397.5}}{{{x_1}x_2^2x_3^2}} - 1 \le 0,\nonumber \\&{g_3}(\overrightarrow{x} ) = \frac{{1.93x_4^3}}{{{x_2}x_6^4{x_3}}} - 1 \le 0,\nonumber \\&{g_4}(\overrightarrow{x} ) = \frac{{1.93x_5^3}}{{{x_2}x_7^4{x_3}}} - 1 \le 0,\nonumber \\&{g_5}(\overrightarrow{x} ) = \frac{\left[ \left( \frac{745{x_4}}{{x_2}{x_3}} \right)^2 + 16.9 \times 10^6 \right]^{1/2}}{110x_6^3} - 1 \le 0,\nonumber \\&{g_6}(\overrightarrow{x} ) = \frac{\left[ \left( \frac{745{x_5}}{{x_2}{x_3}} \right)^2 + 157.5 \times 10^6 \right]^{1/2}}{85x_7^3} - 1 \le 0,\nonumber \\&{g_7}(\overrightarrow{x} ) = \frac{{{x_2}{x_3}}}{{40}} - 1 \le 0,\nonumber \\&{g_8}(\overrightarrow{x} ) = \frac{{5{x_2}}}{{{x_1}}} - 1 \le 0,\nonumber \\&{g_9}(\overrightarrow{x} ) = \frac{{{x_1}}}{{12{x_2}}} - 1 \le 0,\nonumber \\&{g_{10}}(\overrightarrow{x} ) = \frac{{1.5{x_6} + 1.9}}{{{x_4}}} - 1 \le 0,\nonumber \\&{g_{11}}(\overrightarrow{x} ) = \frac{{1.1{x_7} + 1.9}}{{{x_5}}} - 1 \le 0,\nonumber \\&\text {where}\nonumber \\&2.6 \le {x_1} \le 3.6,0.7 \le {x_2} \le 0.8,17 \le {x_3} \nonumber \\&\qquad \le 28,7.3 \le {x_4} \le 8.3,\nonumber \\&7.3 \le {x_5} \le 8.3,2.9 \le {x_6} \le 3.9,5.0 \le {x_7} \le 5.5. \end{aligned}$$
(20)
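The SRD model transcribes the same way; in the sketch below, no optimizer is attached, the sample point is arbitrary, and g6 pairs \(x_5\) with \(x_7\) as in the standard Golinski speed reducer formulation:

```python
import math

# Speed reducer design model; x = [b, m, z, l1, l2, d1, d2].
def srd_cost(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def srd_constraints(x):
    """Values g1..g11; each must be <= 0 for a feasible design."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [27.0 / (x1 * x2 ** 2 * x3) - 1.0,
            397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1.0,
            1.93 * x4 ** 3 / (x2 * x6 ** 4 * x3) - 1.0,
            1.93 * x5 ** 3 / (x2 * x7 ** 4 * x3) - 1.0,
            math.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 16.9e6)
            / (110.0 * x6 ** 3) - 1.0,
            math.sqrt((745.0 * x5 / (x2 * x3)) ** 2 + 157.5e6)
            / (85.0 * x7 ** 3) - 1.0,
            x2 * x3 / 40.0 - 1.0,
            5.0 * x2 / x1 - 1.0,
            x1 / (12.0 * x2) - 1.0,
            (1.5 * x6 + 1.9) / x4 - 1.0,
            (1.1 * x7 + 1.9) / x5 - 1.0]

x = [3.0, 0.7, 17.0, 7.3, 7.3, 3.0, 5.0]  # arbitrary point inside the bounds
print(round(srd_cost(x), 4))
```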

On this problem, ODMPA is compared with other algorithms, including WCA [65], HEAA [66], MDE [67], m-HHO [68], EHHO, GLF-GWO [69], and m-SSA [70]. The results in Table 7 demonstrate that ODMPA outperforms the other algorithms, with an optimal cost of 2753.9866.

Table 7 SRD problem’s results on different algorithms

4.3 Static and Dynamic Photovoltaic Models

The performance of a PV system depends on effective modeling, and swarm intelligence algorithms can effectively solve the PV parameter identification problem; compared with other algorithms, ODMPA performs better on this problem, again showing its superiority. The problem is a nonlinear optimization task whose main target is to extract the parameters of photovoltaic models; both static and dynamic models are considered in this paper. An objective function is designed so that accurate photovoltaic models can be reproduced efficiently, with the parameters to be identified as its decision variables. Considering the sensitivity of the model parameters, the root-mean-square error (RMSE) between the measured and estimated currents is introduced as the final evaluation standard in these experiments.

4.3.1 Single-Diode Model

Modeling of photovoltaic modules is usually based on an equivalent circuit model with lumped parameters. The theory of the SDM rests on the Shockley equation [71]. The output current is given by Eqs. (21) and (22):

$$\begin{aligned}{} & {} {I_L} = {I_{ph}} - {I_d} - {I_{sh}}\nonumber \\{} & {} {I_d} = {I_{sd}} \cdot [\exp \left( {\frac{{{V_L} + {R_S} \cdot {I_L}}}{{n \cdot {V_t}}}} \right) - 1]\nonumber \\{} & {} {I_{sh}} = \frac{{{V_L} + {R_S} \cdot {I_L}}}{{{R_{sh}}}}\nonumber \\{} & {} \text {where}\nonumber \\{} & {} {V_t} {=} \frac{{kT}}{q},k {=} 1.3806503 {\times } {10^{ - 23}},\nonumber \\{} & {} q {=} 1.60217646 {\times } {10^{ - 19}}. \end{aligned}$$
(21)

According to the above equations, the output current expression can be derived:

$$\begin{aligned} \begin{array}{l} \overrightarrow{x} = [{x_1},{x_2},{x_3},{x_4},{x_5}] = [{I_{ph}},{I_{sd}},{R_S},{R_{sh}},n],\\ {I_L} = {I_{ph}} - {I_{sd}} \cdot \left[ {\exp \left( {\frac{{{V_L} + {R_S} \cdot {I_L}}}{{n \cdot {V_t}}}} \right) - 1} \right] - \frac{{{V_L} + {R_S} \cdot {I_L}}}{{{R_{sh}}}}. \end{array} \end{aligned}$$
(22)
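Because Eq. (22) is implicit in \(I_L\) (the current appears on both sides), evaluating the model requires a root-finding step. The sketch below uses bisection, which is safe because the residual is strictly decreasing in \(I_L\); the parameter values are illustrative, of the order commonly reported for the RTC France cell, and are not taken from this paper's tables:

```python
import math

# Boltzmann constant and electron charge, as in Eq. (21).
k, q = 1.3806503e-23, 1.60217646e-19

def sdm_current(VL, Iph, Isd, Rs, Rsh, n, T=306.15):
    """Solve the implicit SDM equation (22) for I_L at load voltage VL."""
    Vt = k * T / q

    def residual(I):
        return (Iph
                - Isd * (math.exp((VL + Rs * I) / (n * Vt)) - 1.0)
                - (VL + Rs * I) / Rsh
                - I)

    # residual() is strictly decreasing in I, so one sign change brackets
    # the unique root; the bracket below covers the usual I-V curve range.
    lo, hi = -Iph, 2.0 * Iph
    for _ in range(100):  # plain bisection
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters (close in magnitude to published RTC France fits).
I = sdm_current(0.5, Iph=0.7608, Isd=3.23e-7, Rs=0.0364, Rsh=53.72, n=1.4812)
print(round(I, 3))
```

The DDM of Eq. (23) is handled the same way, with a second diode term added to the residual.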

4.3.2 Double-Diode Model

In harsh external environments, the SDM tends to incur compound current losses. The DDM accounts for this situation and compensates for it. The DDM's output current is given by Eq. (23):

$$\begin{aligned} \overrightarrow{x}&= [{x_1},{x_2},{x_3},{x_4},{x_5},{x_6},{x_7}] \nonumber \\&=[{I_{ph}},{I_{s{d_1}}},{I_{s{d_2}}},{R_S},{R_{sh}},{n_1},{n_2}],\nonumber \\ {I_L}&= {I_{ph}} - {I_{s{d_1}}} \cdot \left[ {\exp \left( {\frac{{{V_L} + {I_L} \cdot {R_S}}}{{{n_1} \cdot {V_t}}}} \right) - 1} \right] \nonumber \\&\qquad - {I_{s{d_2}}} \cdot \left[ {\exp \left( {\frac{{{V_L} + {I_L} \cdot {R_S}}}{{{n_2} \cdot {V_t}}}} \right) - 1} \right] \nonumber \\&\qquad - \frac{{{V_L} + {I_L} \cdot {R_S}}}{{{R_{sh}}}},\nonumber \\&\quad where \nonumber \\ {V_t}&= \frac{{kT}}{q},k = 1.3806503 \times {10^{ - 23}},\nonumber \\ q&= 1.60217646 \times {10^{ - 19}}. \end{aligned}$$
(23)
Table 8 The value range of photovoltaic models
Table 9 Comparison results for SDM problem (data: RTC France)
Table 10 Comparison results for DDM problem (data: RTC France)

4.3.3 PV Module Model

In practical cases, a PV module consists of several solar cells connected in series and in parallel. The current formula of the SDM for the PV module is expressed in Eq. (24).

$$\begin{aligned} \overrightarrow{x}&= [{x_1},{x_2},{x_3},{x_4},{x_5},{x_6},{x_7}] \nonumber \\&= [{N_p},{N_s},{I_{ph}},{I_{sd}},{R_S},{R_{sh}},n],\nonumber \\ {I_L}&= {I_{ph}}{N_p} - {I_{sd}}{N_p}\left[ {\exp \left( {\frac{{{V_L} + ({N_s}{R_S}{I_L}/{N_p})}}{{n{N_s}{V_t}}}} \right) - 1} \right] \nonumber \\&\qquad - \frac{{{V_L} + ({N_s}{R_S}{I_L}/{N_p})}}{{\left( {{N_s}{R_{sh}}/{N_p}} \right) }},\nonumber \\&\text {where}\nonumber \\ {V_t}&= \frac{{kT}}{q},k = 1.3806503 \times {10^{ - 23}},\nonumber \\ q&= 1.60217646 \times {10^{ - 19}}. \end{aligned}$$
(24)

where \({N_p}\) and \({N_s}\) denote the numbers of solar cells in parallel and in series, respectively. The parameter settings are the same as for the SDM.

4.3.4 Objective Function

The target of the algorithms optimizing the photovoltaic models is to minimize the gap between the estimated and measured currents. To validate the performance of the photovoltaic models, the choice of evaluation standard is crucial. The root-mean-square error (RMSE) is appropriate, so the original problem is turned into the minimization of the RMSE. Its mathematical formula is given by Eq. (25):

$$\begin{aligned} RMSE = \sqrt{\frac{{\sum \nolimits _{i = 1}^N {{{({I_{simulated}} - {I_{measured}})}^2}} }}{N}} \end{aligned}$$
(25)

where \({I_{measured}}\) represents the actual measured current, \({I_{simulated}}\) is the current calculated by formulas (21)–(24), and N indicates the number of current data points.
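Eq. (25) translates directly into code; a minimal sketch with made-up current values:

```python
import math

def rmse(simulated, measured):
    """Root-mean-square error between estimated and measured currents, Eq. (25)."""
    n = len(measured)
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured)) / n)

# Toy data: three current pairs, one of which disagrees by 2 A.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.1547
```

In the experiments, `simulated` comes from solving Eqs. (21)–(24) at each measured voltage, so minimizing this RMSE over the model parameters is the objective the algorithms optimize.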

Two literature datasets [72] are used to extract the parameters of the PV models. The RTC France cell provides twenty-six current–voltage data pairs, measured at an irradiance of 1000 W/m\(^2\) and a temperature of 33 \(^\circ \)C, while the Photowatt-PWP201 module, with thirty-six polysilicon cells, provides twenty-five data pairs measured at 45 \(^\circ \)C.

ODMPA and its counterparts are run on MATLAB R2016a, and each algorithm is executed thirty times to guard against chance results. The number of search agents and the dimension are both 30, and the maximum number of evaluations is 300,000. The parameter ranges of the PV models are displayed in Table 8.

Table 11 Comparison results for PV module problem (data: Photo-watt-PWP201)

4.3.5 Identify SDM and DDM on the RTC France

Tables 9 and 10 record the RMSE values obtained by running the SDM and DDM with various algorithms, together with the parameter groups corresponding to the optimal values. ODMPA shows the best results, which indicates that it is more reliable and effective.

4.3.6 Identify the Photovoltaic Cell Module on the Photo-Watt-PWP201

Table 11 records the RMSE values obtained by running the PV module model with various algorithms, together with the parameter groups corresponding to the optimal values. The results show that ODMPA obtains the lowest RMSE values.

These practical experiments prove that, when extracting the unknown parameters of different models, ODMPA consistently attains optimal results.

5 Conclusion and Future Direction

This study proposes a new MPA variant, ODMPA. The improved algorithm has stronger search capability and can quickly escape when trapped in a local optimum. To retain the advantages of the original MPA while overcoming its shortcomings, the main framework of MPA is preserved. First, the outpost mechanism is introduced after the three-stage velocity ratio, which improves the exploitation ability in the neighborhood of feasible solutions and enhances convergence accuracy. Then, the DE-SA mechanism is introduced into MPA after the FADs stage to generate a new solution, and the best candidate is selected between the traditional MPA update and the DE-SA update. The DE-SA strategy improves the diversity of solutions, which enlarges the space that can be explored and avoids premature convergence. ODMPA is employed to tackle the CEC2014 benchmark problems, engineering design problems, and photovoltaic models. Based on the Friedman assessment, ODMPA ranks first on the 30-function CEC2014 test set. Together with the convergence curves, the empirical results demonstrate that ODMPA outperforms the other algorithms in terms of search capability and convergence speed. Moreover, on complex real-life problems such as engineering design and parameter extraction of PV cells, ODMPA also performs outstandingly in finding optimal results.

Accordingly, ODMPA is an optimizer with great potential. It can be applied in information fusion [76], recommender systems [77], and machine learning, and it holds promise for extension to discrete or multi-objective optimization problems. Given the excellent results of combining MPA with DE, hybridizing MPA with other optimizers is also a valuable future research direction.