Velocity pausing particle swarm optimization: a novel variant for global optimization

Particle swarm optimization (PSO) is one of the most well-regarded metaheuristics, with remarkable performance when solving diverse optimization problems. However, PSO faces two main problems that degrade its performance: slow convergence and local optima entrapment. In addition, the performance of this algorithm substantially degrades on high-dimensional problems. In the classical PSO, particles can move in each iteration with either slower or faster speed. This work proposes a novel idea called velocity pausing where particles in the proposed velocity pausing PSO (VPPSO) variant are supported by a third movement option that allows them to move with the same velocity as they did in the previous iteration. As a result, VPPSO has a higher potential to balance exploration and exploitation. To avoid the PSO premature convergence, VPPSO modifies the first term of the PSO velocity equation. In addition, the population of VPPSO is divided into two swarms to maintain diversity. The performance of VPPSO is validated on forty-three benchmark functions and four real-world engineering problems. According to the Wilcoxon rank-sum and Friedman tests, VPPSO significantly outperforms seven prominent algorithms on most of the tested functions in both low- and high-dimensional cases. Due to its superior performance in solving complex high-dimensional problems, VPPSO can be applied to solve diverse real-world optimization problems. Moreover, the velocity pausing concept can be easily integrated with new or existing metaheuristic algorithms to enhance their performance. The Matlab code of VPPSO is available at: https://uk.mathworks.com/matlabcentral/fileexchange/119633-vppso.


Introduction
Optimization is an essential process that helps to achieve the best performance in many scientific fields such as engineering and artificial intelligence. As a consequence, the development of effective optimization algorithms is crucial. The need for such development has recently increased due to the growing difficulty of optimization problems [1]. Although traditional optimization approaches can be used to solve optimization problems, they have two main limitations: they require gradient information, which makes them unable to solve non-differentiable functions, and they are prone to local optima entrapment, particularly when solving complex problems that have numerous local optima [2].
Metaheuristic algorithms are an effective way to solve diverse optimization problems regardless of their characteristics [3][4][5]. Due to its robustness, efficiency and simplicity, particle swarm optimization (PSO) has become one of the most widely used metaheuristic algorithms [6]. In addition, PSO has demonstrated superior performance when solving a wide range of optimization problems in various areas such as wireless communications [7,8] and artificial intelligence [9,10]. Other applications of PSO include truss layout [11], prestress design [12,13], image segmentation [14] and flat-foldable origami tessellations [15]. Nonetheless, PSO still severely faces the problem of premature convergence [6,16,17]. Moreover, the performance of PSO on high-dimensional problems is poor [6]. This motivates the development of novel PSO variants that can overcome the limitations of the classical PSO algorithm and its state-of-the-art versions.
In PSO, the iterative process is split into two stages: exploration and exploitation. Exploration performs extensive search at the early stages of the search process in order to move toward the optimal solution [18]. It is essential that PSO algorithms have strong exploration abilities in order to escape from local optima entrapment. On the other hand, exploitation focuses on regions that have a great potential to be the place where the optimal solution can be found. Balancing between exploration and exploitation is crucial in order to be able to locate optimal solutions [19].
The no free lunch (NFL) theorem [20] states that an optimization algorithm that performs well on a given set of problems achieves poor performance when it is tested on a different class of problems. Many state-of-the-art PSO variants and metaheuristic algorithms have shown promising results on a certain class of optimization problems; nonetheless, they have shown degraded performance when they solve different sets of problems. This motivates the development of new PSO variants that can achieve the best solutions when they are applied to a diverse set of optimization problems.
This work proposes a novel PSO variant called velocity pausing particle swarm optimization (VPPSO). The main contributions of this work can be summarized as follows:
• A novel idea called velocity pausing is proposed, where particles are provided with a third movement option (besides the faster or slower speeds of the classical PSO algorithm) that allows them to move with the same velocity as they did in the previous iteration.
• The proposed VPPSO algorithm modifies the first term of the classical PSO velocity equation to avoid premature convergence.
• To maintain diversity, a two-swarm strategy is implemented where particles in the first swarm update their positions based on the classical PSO mechanism, whereas the remaining particles follow the global best position only to update their positions.
• A comprehensive comparative analysis that validates the effectiveness of VPPSO is carried out. The performance of VPPSO is evaluated on 23 classical benchmark functions, the CEC2019 test suite, the CEC2020 test functions and 4 real-world engineering problems. The performance of VPPSO on high-dimensional problems is also evaluated. VPPSO is compared with PSO, a recent high-performance PSO variant and five recent prominent metaheuristic algorithms.
The purpose of this work is to develop a high-performance, robust PSO variant that can be used to optimize complex real-world problems. The rest of this work is organized as follows. Section 2 presents the related work, which includes the classical PSO algorithm and its existing variants. In Sect. 3, the proposed VPPSO algorithm is described in detail. Section 4 presents the results of VPPSO and the competing algorithms and provides an in-depth discussion. The performance of VPPSO on real-world engineering problems is presented in Sect. 5. Section 6 concludes this work, while Sect. 7 provides some potential research directions that can help to improve the PSO performance further.

Literature review
In this section, the preliminaries and essential definitions of PSO are first introduced. This includes the PSO source of inspiration and its mechanism. Although the original PSO algorithm has shown good optimization performance, it still faces some limitations such as local optima entrapment and slow convergence. This has motivated researchers to develop new PSO variants to tackle the aforementioned issues. Several related works on alleviating the PSO drawbacks are reviewed and discussed in the second subsection.

Particle swarm optimization
PSO was introduced by Kennedy and Eberhart [21], and its mechanism is inspired by the social behaviors of bird flocking and fish schooling. In PSO, a swarm of particles flies through the search space to seek an optimal solution [22,23]. Each particle i of the swarm in the D-dimensional space has a position and a velocity that can be written as follows:

V_i = [V_i1, V_i2, ..., V_iD], i = 1, 2, ..., N    (1)

X_i = [X_i1, X_i2, ..., X_iD], i = 1, 2, ..., N    (2)

where V_i and X_i are the velocity and position vectors of particle i, respectively, D is the number of dimensions and N is the swarm size. At the beginning of the PSO optimization process, the velocity and position of each particle are randomly generated within specific ranges. During the PSO iterative process, particle i is guided by the global best particle (gbest = [gbest_1, gbest_2, ..., gbest_D]), which is the best particle found so far, and by its personal best position (Pbest_i = [Pbest_i1, Pbest_i2, ..., Pbest_iD]) to update its velocity and position, respectively, as follows:

V_id(t+1) = w V_id(t) + c1 r1 (Pbest_id(t) - X_id(t)) + c2 r2 (gbest_d(t) - X_id(t))    (3)

X_id(t+1) = X_id(t) + V_id(t+1)    (4)

where w is the inertia weight, c1 and c2 are the cognitive and social acceleration coefficients, respectively, and r1 and r2 are two random variables distributed uniformly in the range [0, 1]. The role of the inertia weight w is to avoid the velocity explosion problem faced by the standard PSO algorithm [21]. The acceleration coefficients c1 and c2 control the speed of a particle toward Pbest and gbest, respectively. These three PSO parameters (w, c1 and c2) play a crucial role in balancing the PSO exploration and exploitation abilities [24,25]. Equation (3) is the core of the PSO algorithm, and it is the most essential formula needed to develop novel PSO variants.
After a particle updates its velocity and position, its personal best position is updated as follows:

Pbest_i(t+1) = X_i(t+1), if f(X_i(t+1)) < f(Pbest_i(t)); otherwise Pbest_i(t+1) = Pbest_i(t)    (5)

In Eq. (5), the personal best position of particle i is updated only if the fitness of the newly generated position X_i is better than the current fitness of Pbest_i. The next step in PSO is to update gbest as the best of all personal best positions:

gbest(t+1) = arg min over Pbest_i, i = 1, ..., N of f(Pbest_i(t+1))    (6)

The PSO process is repeated until a stopping criterion is satisfied.
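To make the update rules concrete, the following is a minimal, self-contained sketch of the classical PSO loop described by Eqs. (1)-(6). The function name, parameter defaults (w = 0.7, c1 = c2 = 1.5) and the sphere objective are illustrative choices, not settings prescribed by this paper.

```python
import random

def pso(f, dim, n_particles, iters, lb, ub, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal classical PSO: velocity/position updates (Eqs. 3-4) plus
    Pbest and gbest bookkeeping (Eqs. 5-6). Minimizes f over [lb, ub]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Eq. (3): inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Eq. (4): position update, clamped to the search range
                X[i][d] = min(max(X[i][d] + V[i][d], lb), ub)
            fx = f(X[i])
            if fx < pbest_f[i]:            # Eq. (5): personal best update
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:           # Eq. (6): global best update
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = pso(sphere, dim=5, n_particles=20, iters=200, lb=-5.0, ub=5.0)
```

With these convergent parameter settings the swarm drives the sphere function close to its optimum at the origin.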

Literature review of related works on PSO improvement
PSO has been modified by several strategies such as adjustment of the PSO controlling parameters [26][27][28], multi-swarm schemes [29,30], hybridization [31,32] and new velocity updating mechanisms [33]. The controlling parameters of PSO, namely the inertia weight w, the cognitive component c1 and the social component c2, have a direct impact on the searching behavior of PSO [34].
Choosing the optimal values of w, c1 and c2 is a challenging task, since values that perform well on certain optimization problems may achieve poor performance on other sets of problems [6]. Many research efforts have attempted to develop new inertia weight strategies that aim to balance exploration and exploitation. One of the most well-known approaches is the time-varying inertia weight [35], which decreases linearly throughout the iterative process. In [35], the inertia weight is updated at each iteration as follows:

w(t) = w_max - (w_max - w_min) * t / T    (7)

where w_max and w_min represent the maximum and minimum values of the inertia weight, T is the maximum number of iterations and t is the current iteration. Other common inertia weight approaches proposed to enhance the PSO performance are adaptive inertia weight [36][37][38][39], linearly decreasing inertia weight [27], nonlinear time-varying inertia weight [40][41][42], quadratic inertia weight [43], exponentially decreasing inertia weight [44,45] and chaotic inertia weight [46]. On the other hand, significant studies have attempted to improve the PSO performance by adjusting the acceleration coefficients c1 and c2. The authors in [47] proposed a self-organizing hierarchical PSO where the two acceleration coefficients vary with time (HPSO-TVAC). In HPSO-TVAC, c1 and c2 are initially assigned large and small values, respectively, to enable strong exploration at the beginning of the search process. Conversely, c1 and c2 take small and large values, respectively, at the final stages of the iterative process to allow particles to exploit the search space thoroughly. The values of c1 and c2 are updated at each iteration as follows:

c1(t) = (c1f - c1i) * t / T + c1i    (8)

c2(t) = (c2f - c2i) * t / T + c2i    (9)

where the i and f subscripts represent the initial and final values, respectively. The authors in [48] proposed a fitness-based multi-role PSO (FMPSO) algorithm that adjusts its controlling parameters based on fitness.
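The two parameter schedules above can be sketched as follows. The boundary values (w_max = 0.9, w_min = 0.4, and the TVAC endpoints 2.5/0.5) are commonly used settings, assumed here for illustration rather than taken verbatim from [35] or [47].

```python
def linear_inertia(t, T, w_max=0.9, w_min=0.4):
    """Eq. (7): inertia weight decreasing linearly from w_max to w_min."""
    return w_max - (w_max - w_min) * t / T

def tvac(t, T, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """HPSO-TVAC schedules (Eqs. 8-9): c1 shrinks over time (weaker cognitive
    pull, strong early exploration) while c2 grows (stronger social pull,
    strong late exploitation)."""
    c1 = (c1_f - c1_i) * t / T + c1_i
    c2 = (c2_f - c2_i) * t / T + c2_i
    return c1, c2
```

At t = 0 the swarm explores (large w and c1, small c2); at t = T the schedules reach their exploitation-oriented endpoints.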
Similarly, a unique adaptive PSO (UAPSO) algorithm is developed in [25] to assign each particle unique values of the inertia weight, c1 and c2 based on its fitness. A phasor PSO (PPSO) algorithm is proposed in [49] where the first PSO velocity term that contains the inertia weight w is omitted, whereas c1 and c2 are replaced by phasor coefficients. Multi-swarm techniques, where particles are grouped into several sub-swarms based on a certain criterion, have been widely used to enhance the PSO performance. In [50], a cooperative PSO (CPSO) approach is proposed where a number of swarms cooperate to optimize different segments of the solution vector. The work in [51] proposed a novel improved PSO algorithm based on an individual difference evolution mechanism (IDE-PSO). According to particles' performance throughout the iterative process, particles are divided into several sub-swarms. The authors in [52] presented a new multi-swarm PSO algorithm based on a dynamic learning strategy (PSO-DLS). In the proposed approach, particles are divided into conventional and communication particles, where conventional particles perform exploitation while communication particles explore the search space. Using differential mutation operations, a two-swarm PSO algorithm is proposed in [53]. The authors in [54] proposed a multipopulation cooperative PSO (MPCPSO) algorithm that implements a difference mutation operator to achieve better exploration. Another multi-swarm PSO variant is proposed in [55] where the total population is split into a main swarm and a hovering swarm. Utilizing an elite learning strategy, the authors in [56] presented a dynamic multi-swarm PSO (DMS-PSO-EL) algorithm.
One of the most common approaches in the field of metaheuristics that can help to enhance the performance is hybridization where the best properties of two algorithms are combined to develop a more efficient algorithm. In [31], a novel hybrid PSO with genetic algorithm (GA) is proposed where the mechanisms of PSO and the operators of GA (crossover and mutation) are implemented together to create a new generation of candidate solutions. The work in [57] hybridized PSO with Ant Colony Optimization (ACO). In the proposed approach, PSO and ACO execute their individual algorithms separately during the iterative process to create their own new solutions. However, the global best solution among the two algorithms is used to update the positions of particles and ants at each iteration. PSO has been also hybridized with other optimization algorithms such as simulated annealing (SA) [58], gray wolf optimization (GWO) [59], firefly algorithm (FA) [60] and whale optimization algorithm (WOA) [61] where in all proposed approaches the hybrid PSO versions outperform the individual PSO algorithm.
Besides the three aforementioned strategies, many works have proposed other methods, such as the implementation of different neighbourhood structures and the development of new velocity updating mechanisms, to enhance the PSO performance. In [62], a Fully Informed PSO (FIPS) algorithm is developed where a particle requires the position information of its neighbors to update its velocity. A new PSO algorithm is developed in [63] by proposing a dynamic neighbourhood strategy that continuously updates the neighbourhood of each particle throughout the iterative process. The four PSO search strategies presented in Comprehensive Learning PSO (CLPSO) [64], Unified PSO (UPSO) [65], Linearly Decreasing Inertia Weight PSO (LDWPSO) [35] and distance-based locally informed PSO (LIPS) [66] are combined into one algorithm to develop a PSO with Strategy Dynamics (SDPSO) algorithm [67]. The authors in [68] proposed an enhanced social learning PSO algorithm that updates the best three particles based on a differential mutation approach. To solve constrained optimization problems, a novel PSO variant named PSO+ is proposed in [69] where the authors introduced a novel strategy to update the positions of particles. A new PSO variant called Generalized PSO (GEPSO) is introduced in [33] where the velocity equation of the classical PSO algorithm is modified by including two new terms. A novel chaotic grouping PSO algorithm that implements a Dynamic Regrouping Strategy (CGPSO-DRS) is proposed in [70]. The work in [71] developed an enhanced PSO algorithm by using complex-order derivatives. In [72], a new PSO variant is developed by applying two strategies: multi-exemplar and forgetting ability. Some recent prominent PSO variants are presented in Table 1. The PSO variants mentioned in this section can be applied to optimize various problems including truss layout [11], image segmentation [14], wireless communications [7], prestress design [12,13] and flat-foldable origami tessellations [15].
Although existing PSO variants have shown that they can significantly improve the performance of the classical PSO algorithm, the effectiveness of [33, 48, 49, 52-54, 56, 63, 68, 71, 72] on real-world optimization problems is not validated. In addition, the performance of [33,48,52,53,56,69,71] on high-dimensional problems is not investigated. In [33, 48, 54-56, 68, 71], the proposed algorithms are compared with PSO variants only, without comparing their performance with other well-known metaheuristics such as GWO and WOA. Finally, the works in [48, 53-56, 68, 70] require a massive number of function evaluations to achieve competitive results.

Velocity pausing particle swarm optimization
This work proposes a novel idea called velocity pausing where a particle does not have to update its velocity at every iteration. In other words, a particle is allowed to move with the same velocity as it did in the previous iteration. This idea gives particles three possible movement options, i.e., slower speed, faster speed and constant speed, unlike the standard PSO algorithm where particles move with only a faster or slower speed. The main advantage of velocity pausing is the addition of a third movement option (constant speed) that can help to balance exploration and exploitation and avoid the severe premature convergence of the classical PSO. The velocity pausing concept can be written mathematically as follows:

V_i(t+1) = w V_i(t) + c1 r3 (Pbest_i(t) - X_i(t)) + c2 r4 (gbest(t) - X_i(t)), if rand < a
V_i(t+1) = V_i(t), otherwise    (10)

where V_i(t) and V_i(t+1) are the velocities of particle i at iterations t and t+1, respectively, rand is a uniform random number in [0, 1] and a is the velocity pausing parameter. If a has a value higher than 1, all particles update their velocities at each iteration exactly as the classical PSO algorithm does. This situation is undesired since no velocity pausing can occur. On the other hand, an extremely low value of a forces particles to move with constant speed and restricts them from moving with a faster or slower speed. Therefore, it is crucial to choose the best value of a to achieve a balanced velocity pausing scenario that can lead to optimal performance. To further help PSO avoid premature convergence, the velocity equation of the conventional PSO algorithm is modified by changing the first velocity term and omitting its inertia weight component as follows:

V_i(t+1) = a(t) V_i(t) + c1 r3 (Pbest_i(t) - X_i(t)) + c2 r4 (gbest(t) - X_i(t))    (11)

where a(t) is a time-varying coefficient whose value is controlled by a constant b, as defined in Eq. (12).
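A sketch of the velocity pausing rule of Eq. (10) follows. The orientation of the branch (classical update when a uniform random number falls below a, pausing otherwise) is inferred from the surrounding discussion, since a = 1 must reduce to the classical PSO; all parameter defaults are illustrative.

```python
import random

def pause_velocity_update(v, x, pbest, gbest, alpha=0.3, w=0.7,
                          c1=2.0, c2=2.0, rng=random):
    """Velocity pausing (Eq. 10): with probability alpha the particle performs
    the classical PSO velocity update; otherwise it pauses, i.e., keeps its
    previous velocity unchanged. alpha = 1 recovers standard PSO."""
    if rng.random() < alpha:
        r3, r4 = rng.random(), rng.random()
        return [w * vd + c1 * r3 * (pb - xd) + c2 * r4 * (gb - xd)
                for vd, xd, pb, gb in zip(v, x, pbest, gbest)]
    return v[:]  # pause: reuse the previous velocity
```

Setting alpha = 0 pauses every particle forever (pure constant-speed motion), which is why an intermediate value such as 0.3 is needed in practice.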
By applying the velocity pausing concept to the modified velocity equation in (11), a particle in VPPSO updates its velocity as follows:

V_i(t+1) = a(t) V_i(t) + c1 r3 (Pbest_i(t) - X_i(t)) + c2 r4 (gbest(t) - X_i(t)), if rand < a
V_i(t+1) = V_i(t), otherwise    (13)

Utilizing Eq. (13), the position of particle i is updated as follows:

X_i(t+1) = X_i(t) + V_i(t+1)    (14)

To maintain diversity and avoid premature convergence, the proposed algorithm divides the total population N into two swarms. The first swarm consists of N_1 particles that update their velocities and positions based on the classical PSO mechanism, except that the first term of the velocity equation is modified and the velocity pausing concept is applied, as shown in Eq. (13). The second swarm has N_2 particles that rely only on gbest to update their positions (Eq. 15). The optimization process of VPPSO starts by randomly generating the velocities and positions of all particles. During the VPPSO iterative process, particles in the first swarm update their velocities and positions based on Eqs. (13) and (14), respectively, while particles in the second swarm update their positions based on Eq. (15). The next step of VPPSO is to evaluate the fitness of all particles. In the first swarm, the personal best positions of particles are updated based on Eq. (5), followed by updating the global best position based on Eq. (6). The global best position is also updated in the second swarm of VPPSO if a particle in the second swarm achieves a better fitness. The VPPSO process is repeated until a stopping criterion is satisfied. The pseudo-code of VPPSO is provided in Algorithm 1. Algorithm 1 combines velocity pausing, the new velocity equation and the two-swarm strategy, which together better balance exploration and exploitation, enhance diversity and support solving complex real-world problems, particularly high-dimensional ones.
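The overall two-swarm procedure can be sketched as below. Because this excerpt does not reproduce Eqs. (12) and (15), the decay a(t) (an exponential in t/T controlled by b) and the second-swarm move (a random perturbation around gbest scaled by a(t)) are assumptions made for illustration, not the paper's exact formulas; the coefficient values are likewise illustrative.

```python
import math
import random

def vppso(f, dim, lb, ub, n1=15, n2=15, iters=300, alpha=0.3,
          c1=2.0, c2=2.0, b=5.0, seed=1):
    """Two-swarm VPPSO sketch: swarm 1 uses velocity pausing with the
    modified first term a(t)*V (Eqs. 13-14); swarm 2 searches around gbest
    only (an Eq. 15 analogue, assumed form). Minimizes f over [lb, ub]^dim."""
    rng = random.Random(seed)
    N = n1 + n2
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(N)]
    V = [[0.0] * dim for _ in range(N)]
    pbest = [x[:] for x in X[:n1]]
    pbest_f = [f(x) for x in pbest]
    gbest = min(X, key=f)[:]
    gbest_f = f(gbest)
    for t in range(1, iters + 1):
        a = math.exp(-(b * t / iters))   # assumed a(t): decays with t (Eq. 12 analogue)
        for i in range(N):
            if i < n1:                   # first swarm: Eqs. 13-14
                if rng.random() < alpha: # classical-style update with a(t) first term
                    r3, r4 = rng.random(), rng.random()
                    V[i] = [a * vd + c1 * r3 * (pb - xd) + c2 * r4 * (gb - xd)
                            for vd, xd, pb, gb in zip(V[i], X[i], pbest[i], gbest)]
                # else: velocity pause, V[i] kept unchanged
                X[i] = [min(max(xd + vd, lb), ub) for xd, vd in zip(X[i], V[i])]
            else:                        # second swarm: move around gbest only
                X[i] = [min(max(gb + a * rng.gauss(0, 1) * abs(gb - xd), lb), ub)
                        for gb, xd in zip(gbest, X[i])]
            fx = f(X[i])
            if i < n1 and fx < pbest_f[i]:   # Eq. (5), first swarm only
                pbest[i], pbest_f[i] = X[i][:], fx
            if fx < gbest_f:                 # gbest updated by both swarms
                gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = vppso(sphere, dim=4, lb=-5.0, ub=5.0)
```

Note that only the first swarm keeps personal-best memory, matching the paper's observation that the second swarm needs no memory savings.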
The flowchart of the proposed VPPSO algorithm is presented in Fig. 1, with the VPPSO modifications highlighted in green. The flowchart shows the first VPPSO modification, which is updating the velocities of particles based on the newly proposed equation: it changes the first term of the original PSO velocity equation to avoid premature convergence and implements velocity pausing to help balance exploration and exploitation. The other modification of VPPSO is the addition of a second swarm whose particles update their positions differently; this two-swarm strategy is needed to enhance diversity. For PSO, VPPSO and other existing metaheuristic algorithms, the gbest vector is entirely replaced at iteration t if its fitness is better than the fitness of gbest at iteration t-1. This is not the optimal replacement approach, as some dimensions of gbest at iteration t may not be better than their corresponding dimensions at iteration t-1. This gbest replacement problem has been tackled in [50]; however, the proposed approach is computationally prohibitive. Novel approaches are needed to replace the gbest vector more efficiently.

Complexity analysis
The complexity of swarm algorithms mainly depends on the population size N, the number of dimensions D, the cost of a function evaluation C and the maximum number of function evaluations. The objective function is evaluated N times at each iteration; thus, the overall number of function evaluations is NT, where T is the maximum number of iterations. In PSO and other swarm algorithms, the complexity can be divided into two parts: initialization and the iterative loop. The initialization phase randomly generates particles and evaluates their fitness, with complexities of O(ND) and O(NC), respectively. As a result, the initialization complexity of PSO is O(ND + NC). The PSO iterative loop consists of position updates, function evaluations and memory savings, with computational complexities of O(TND), O(TNC) and O(TN), respectively. The overall PSO complexity can therefore be written as follows:

O(PSO) = O(ND + NC + TND + TNC + TN)    (16)

The initialization complexity of VPPSO is the same as that of PSO, i.e., O(ND + NC). The complexity of the VPPSO iterative loop is also the same as that of PSO, except that the VPPSO second swarm does not involve memory savings.
The overall VPPSO complexity can be written as follows:

O(VPPSO) = O(ND + NC + TND + TNC + TN_1)    (17)

From (17), it is clear that VPPSO modifies the original PSO algorithm without increasing its complexity. On the contrary, the VPPSO complexity is lower than that of the standard PSO version, as the second swarm of VPPSO does not require the personal best position information used in the original PSO. The complexity of VPPSO could be reduced further by relying less on personal best positions, which require memory savings, and by implementing new low-complexity search strategies. When N_1 = N_2 = N/2, as in this work, the complexity of VPPSO is slightly reduced to:

O(VPPSO) = O(ND + NC + TND + TNC + TN/2)    (18)

Algorithm 1 Pseudo-code of VPPSO

for i = 1 : N do
    Randomly generate the position of particle i (X_i) and set its velocity to zero (V_i = 0)
    Evaluate the fitness of particle i, i.e., f(X_i)
    if i <= N_1 then
        Set Pbest_i = X_i
    end if
    if f(X_i) < f(gbest) then
        Set gbest = X_i
    end if
end for
for t = 1 : T do
    for i = 1 : N do
        if i <= N_1 then
            Update the velocity V_i and position X_i of particle i based on Eqs. 13 and 14, respectively
        else
            Update the position X_i of particle i based on Eq. 15
        end if
    end for
    for i = 1 : N do
        Evaluate the fitness of particle i, i.e., f(X_i)
        if i <= N_1 then
            if f(X_i) < f(Pbest_i) then
                Set Pbest_i = X_i
                if f(Pbest_i) < f(gbest) then
                    Set gbest = Pbest_i
                end if
            end if
        else
            if f(X_i) < f(gbest) then
                Set gbest = X_i
            end if
        end if
    end for
end for
return gbest

Results and discussion

The effectiveness of VPPSO is first validated by testing it on twenty-three classical benchmark functions that have been widely used to evaluate new metaheuristic algorithms and their variants [79][80][81][82][83]. These conventional functions are grouped into three categories: unimodal functions (Table 2), multimodal functions (Table 3) and multimodal functions with fixed dimensions (Table 4). The mathematical expressions of the twenty-three functions are shown in Tables 2, 3 and 4. Functions f14-f23 have fixed dimensions. Moreover, the search ranges of f8-f13 and f14-f23 differ. The performance of the proposed VPPSO algorithm is further validated by testing it on the CEC2019 test suite, which consists of ten benchmark functions. Table 5 lists the names, search ranges, dimensions and optimal values of the CEC2019 functions. To further challenge VPPSO, it is applied to solve the ten CEC2020 complex optimization problems. As shown in Table 6, the CEC2020 test suite consists of one unimodal function (f34), three basic functions (f35-f37), three hybrid functions (f38-f40) and three composition functions (f41-f43); Table 6 also summarizes their names, search ranges and optimal values. VPPSO is compared with the classical PSO algorithm as well as with a recent high-performance PSO variant known as PPSO [49]. PPSO has been shown to outperform several existing well-known PSO variants including CLPSO [64], adaptive particle swarm optimization (APSO) [39] and FIPS [62]. Besides the PSO algorithms, the performance of VPPSO is compared with five prominent recent metaheuristic algorithms: GWO [79], Henry gas solubility optimization (HGSO) [84], the salp swarm algorithm (SSA) [85], WOA [81] and the Archimedes optimization algorithm (AOA) [86].
These five algorithms have shown superior optimization performance when compared with many optimization algorithms, including the equilibrium optimizer (EO) [2], the sine-cosine algorithm (SCA) [87], L-SHADE, GA, the gravitational search algorithm (GSA) [88] and differential evolution (DE) [89]. For all algorithms, results are averaged over 30 independent runs with a population size of 30. The parameter settings of all compared algorithms, which follow the recommendations of their original references, are summarized in Table 7.
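As a quick sanity check on the evaluation budget from the complexity analysis (N evaluations at initialization plus N per iteration, i.e., N(T + 1) in total), the objective can be wrapped with a counter. The wrapper class and the bare-bones loop below are illustrative, not part of the paper's code.

```python
import random

class CountingObjective:
    """Wraps an objective function and counts how often it is evaluated,
    which lets the N(T + 1) evaluation budget be verified empirically."""
    def __init__(self, f):
        self.f, self.calls = f, 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

f = CountingObjective(lambda x: sum(v * v for v in x))
N, T, D = 30, 100, 10
pop = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(N)]
fitness = [f(p) for p in pop]      # initialization: N evaluations
for _ in range(T):                 # iterative loop: N evaluations per iteration
    fitness = [f(p) for p in pop]
# f.calls now equals N * (T + 1)
```

The same wrapper can be reused to compare algorithms fairly under an identical evaluation budget.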

Exploitation analysis
To evaluate the exploitation ability of the proposed approach, its performance is compared with the seven competing algorithms on seven unimodal functions (f1-f7). The statistical results on the unimodal functions, including the average fitness and standard deviation, are recorded in Table 8. From Table 8, it is clear that VPPSO outperforms all other algorithms on all seven functions except f7, on which it achieves competitive results and is ranked second. It can also be noted that VPPSO is the only algorithm that achieves the optimal solutions of f1, f2, f3 and f4. Table 8 further shows that VPPSO achieves a near-optimal solution on f5, whereas all other algorithms perform poorly on the same function. Overall, VPPSO is ranked first according to the Friedman test, as can be seen in Table 8. These results show that VPPSO possesses robust exploitation abilities.

Exploration analysis
The exploration performance of VPPSO is evaluated on 16 multimodal functions (f8-f23) that have different dimensions and different search ranges, as illustrated in Tables 2 and 3. The statistical results of all algorithms on the 16 multimodal functions are provided in Table 8 (f8-f13) and Table 9 (f14-f23).

Impact of high dimensionality
One of the main problems of PSO is its poor performance on high-dimensional problems. Therefore, it is crucial to develop a novel PSO variant that achieves effective and consistent performance on low- and high-dimensional optimization problems. The performance of VPPSO on high-dimensional cases is investigated by increasing the number of dimensions of functions f1-f13 to 100 and 500. Tables 10 and 11 show the comparative results of all algorithms on f1-f13 when D = 100 and D = 500, respectively. As Table 8 (D = 30), Table 10 (D = 100) and Table 11 (D = 500) show, VPPSO achieves a consistent performance on the tested functions, unlike the other algorithms. It is also notable from Tables 10 and 11 that VPPSO still achieves the optimal solutions of f1-f4, f9 and f11 when D = 100 and D = 500. Tables 8, 10 and 11 demonstrate that all other algorithms, particularly PSO and SSA, suffer degraded performance as the number of dimensions increases.
Overall, according to the Friedman mean rank, VPPSO achieves the best high-dimensional performance in comparison with the seven other algorithms, as Tables 10 and 11 show.

Performance of VPPSO on the CEC2019 and CEC2020 test functions
The performance of VPPSO on the CEC2019 test functions is recorded in Table 12. From Table 12, it is clear that VPPSO outperforms all algorithms on 7 functions out of 10. Table 12 also shows that VPPSO and HGSO are able to obtain the optimal solution of f24, while the remaining algorithms achieve poor performance. For f27 and f29, VPPSO achieves the second-best solutions, while the best solutions are achieved by GWO. Based on the Friedman mean rank, VPPSO achieves the best performance, as Table 12 illustrates. The 10 CEC2020 complex optimization problems are used to further challenge VPPSO. Table 13 presents a performance comparison of VPPSO and the other algorithms on the CEC2020 test functions. As Table 13 shows, VPPSO performs better than all compared algorithms on f34, f35, f36, f39 and f43, while its performance on f37 is equal to that of all other algorithms. For the remaining functions, the performance of VPPSO is comparable to the other algorithms. The results in Table 13 demonstrate the strength and superiority of VPPSO in solving complex optimization problems. The Friedman mean rank presented in Table 13 shows that VPPSO achieves the first rank when compared with the 7 well-known, high-performance optimization algorithms.

Sensitivity analysis
This subsection investigates the impact of the VPPSO parameters on its optimization performance. The main parameter expected to have a direct and significant influence on the VPPSO behavior is the velocity pausing parameter a, which can take any value that is less than or equal to one; a value of a = 1 represents the classical PSO algorithm. To study the impact of a on the performance of VPPSO, ten different scenarios are studied where a varies from 1 to 0.1 in steps of 0.1. Another main parameter that can affect the performance of VPPSO is the number of particles per swarm, as VPPSO is a two-swarm algorithm. Three swarm-size cases are studied: N_1 = 20 and N_2 = 10, N_1 = 15 and N_2 = 15, and N_1 = 10 and N_2 = 20, where N_1 and N_2 are the sizes of the PSO swarm and the second swarm, respectively. For each swarm-size case, results are generated for the 23 classical benchmark functions (f1-f23) while considering the 10 different scenarios of a. Tables 14, 15 and 16 present the average fitness and standard deviation for swarm-size cases 1, 2 and 3, respectively, where in each case a is varied from 1 to 0.1. From these three tables, it is evident that performance improves as a decreases from 1 to 0.3, while it starts to degrade when a is less than 0.3. It is also clear from the overall rank that the best performance is achieved when a = 0.3 in all swarm-size cases. For any value of a, it is observed from Tables 14, 15 and 16 that swarm-size case 2 (N_1 = 15 and N_2 = 15) outperforms both swarm-size case 1 and swarm-size case 3. Overall, the best performance is achieved when a = 0.3, N_1 = 15 and N_2 = 15.

Convergence analysis
Convergence to local optima is a major challenge faced by most metaheuristic algorithms, including PSO. To tackle this issue, it is crucial to achieve a proper balance between exploration and exploitation. PSO can easily become trapped in local optima, resulting in poor solution accuracy [6,16]. The convergence curves of VPPSO, PSO and the four best-performing algorithms (according to the Friedman test, as shown later in Table 17), i.e., HGSO, PPSO, GWO and AOA, are presented in Fig. 2. One of the main limitations of PSO is that particles prematurely converge toward a local solution. This problem is clearly visible in Fig. 2e, where PSO prematurely converges to a suboptimal value at the 77th iteration, after which its particles make no further improvement until the end of the search. This happens because of the poor exploration ability of PSO once the algorithm is trapped in a local optimum. In contrast, Fig. 2e shows that VPPSO avoids premature convergence by performing efficient exploration that helps it find better solutions as the number of iterations increases. Figure 2f shows another example in which PSO suffers from premature convergence. From Fig. 2, it is clear that VPPSO avoids premature convergence by balancing exploration and exploitation. Although HGSO shows fast convergence on unimodal functions, it can easily converge to a non-optimal point shortly after the optimization process starts when solving multimodal functions, as Fig. 2 clearly shows.

Statistical significance analysis
To statistically validate the effectiveness of VPPSO, two prominent statistical tests are used: the Friedman test and the Wilcoxon rank-sum test. The Friedman test ranks the algorithms on each problem separately: the best algorithm is ranked first, the next best second, and so on. From Tables 8-11, it is clear that VPPSO achieves the first rank on unimodal and multimodal functions in both low- and high-dimensional cases.
To evaluate the overall performance of VPPSO, the Friedman mean rank is calculated over all tested functions, as shown in Table 17. This table shows that VPPSO achieves the first rank, which indicates its superiority. The Wilcoxon rank-sum test is another widely used statistical test to evaluate the significance of novel metaheuristic algorithms or their variants. At a 0.05 significance level, the results of the pair-wise comparisons between VPPSO and the seven other algorithms are shown in Table 18 for f1−f13 (D = 30) and f14−f24. The results demonstrate that VPPSO is significantly better than the other algorithms.
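Both tests are available in standard statistics libraries. The Python snippet below is an illustrative example with placeholder data (the values are synthetic, not the paper's results) showing how the pair-wise Wilcoxon rank-sum p-values and the joint Friedman test used in this section are typically computed.

```python
import numpy as np
from scipy.stats import friedmanchisquare, ranksums

rng = np.random.default_rng(1)
# Hypothetical best-fitness values of three algorithms over 30 independent
# runs on one problem (placeholder data, lower is better).
alg_a = rng.normal(1.0, 0.2, 30)   # e.g. the proposed variant
alg_b = rng.normal(2.0, 0.4, 30)   # e.g. a baseline
alg_c = rng.normal(1.5, 0.3, 30)   # e.g. another competitor

# Wilcoxon rank-sum: pair-wise significance at the 0.05 level.
stat, p = ranksums(alg_a, alg_b)
a_beats_b = p < 0.05

# Friedman test: ranks all algorithms jointly across the 30 runs.
fstat, fp = friedmanchisquare(alg_a, alg_b, alg_c)
```

In a study like this one, the pair-wise p-values populate a table such as Table 18, while per-problem Friedman ranks are averaged into the mean rank reported in Table 17.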

Engineering problems
The performance of VPPSO is further evaluated by applying it to four well-known engineering optimization problems: welded beam design, speed reducer design, pressure vessel design and tension/compression spring design. Since these four engineering problems involve constraints, particles are divided into valid and invalid ones: a particle that satisfies all constraints is valid; otherwise it is invalid. This work follows one of the most common ways to penalize invalid particles in minimization problems, in which the fitness of each invalid particle is assigned an extremely large value. The parameter settings of all algorithms are exactly the same as in Table 7. The following subsections describe the aforementioned engineering problems and report the results of all compared algorithms.
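The penalty scheme described above can be sketched in a few lines. This is a minimal Python illustration; the penalty constant and the toy problem are assumptions for demonstration, not the paper's exact settings or one of the four benchmark problems.

```python
BIG = 1e20  # assumed "extremely large" penalty for invalid particles

def penalized_fitness(obj, constraints, x):
    """Death-penalty scheme as described in the text: a particle that
    violates any constraint g(x) <= 0 is assigned an extremely large
    fitness, so minimization pushes the swarm toward feasible regions."""
    if any(g(x) > 0 for g in constraints):
        return BIG
    return obj(x)

# Toy example: minimize x0 + x1 subject to x0 >= 1, i.e. g(x) = 1 - x0 <= 0.
obj = lambda x: x[0] + x[1]
cons = [lambda x: 1.0 - x[0]]
```

A feasible point is scored by the true objective, while any infeasible point receives the same large penalty regardless of how slightly it violates the constraints; more graded penalty schemes exist, but this simple one matches the description in the text.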

Welded beam design (WBD)
The welded beam design problem is a well-known engineering benchmark used to test the effectiveness of optimization algorithms. Its purpose is to obtain the lowest fabrication cost by finding the optimal values of the design variables. WBD has four variables and five constraints, and its mathematical formulation is given in Appendix A [90]. The performance of VPPSO on this problem is compared with 13 algorithms, including CPSO [91], IPSO [92], the marine predators algorithm (MPA) [93], GSA, Harris hawks optimization [94] and EO. The best fabrication costs achieved by all compared algorithms, together with the corresponding best variable values, are recorded in Table 19. From Table 19, it is clear that VPPSO achieves the best cost among all algorithms.

Speed reducer design (SRD)
The main objective of this problem is to minimize the weight of a speed reducer subject to constraints associated with components such as the gear teeth, bending stress, surface stress, shaft stresses and transverse deflections of the shafts. The SRD problem consists of 7 variables and 11 constraints that must be satisfied; its mathematical formulation is given in Appendix B. Table 20 presents the best variables and the best results achieved by all compared algorithms. The results show that the best weight is achieved by VPPSO.

Pressure vessel design (PVD)
The pressure vessel design problem is another well-known engineering benchmark used to validate the effectiveness of metaheuristic algorithms. The objective in PVD is to minimize the cost of a pressure vessel. The problem has four variables and four constraints, as shown in Appendix C. Table 21 presents the best solutions of all algorithms, and it is evident that VPPSO achieves the best result.

Tension/compression spring design (TSD)
The main objective of this well-known engineering problem is to find the minimum weight of a tension/compression spring while satisfying its design constraints on shear stress, surge frequency and deflection. Three design variables need to be taken into account: the wire diameter, the mean coil diameter and the number of active coils. The mathematical formulation of TSD is given in Appendix D. The performance of VPPSO and the compared algorithms on the TSD problem is presented in Table 22. According to the results, VPPSO, PSO, GWO, SSA, WOA, AOA and GSA outperform the other algorithms in terms of finding the minimum weight. Overall, the addition of the third movement option has helped VPPSO to better balance exploration and exploitation, as the results in this section show: VPPSO exhibits effective and robust exploration and exploitation in both low- and high-dimensional cases. The implementation of a two-swarm strategy has further assisted VPPSO to maintain diversity and avoid premature convergence. Moreover, the proposed modified velocity equation has played an important role in avoiding undesired rapid movements of particles.
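For concreteness, the TSD model as commonly formulated in the literature can be written as follows, with x = [d, D, N] (wire diameter, mean coil diameter, number of active coils). This is the standard textbook formulation given here as a reference sketch; the paper's exact model is the one in Appendix D.

```python
def tsd_weight(x):
    """Spring weight (N + 2) * D * d^2 in the common textbook model."""
    d, D, N = x
    return (N + 2.0) * D * d * d

def tsd_constraints(x):
    """Constraints g_i(x) <= 0 in the common textbook model:
    minimum deflection, shear stress, surge frequency and geometry."""
    d, D, N = x
    return [
        1.0 - (D**3 * N) / (71785.0 * d**4),
        (4.0*D**2 - d*D) / (12566.0 * (D*d**3 - d**4))
            + 1.0 / (5108.0 * d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1.0,
    ]
```

Combined with a penalty scheme for infeasible particles, this objective is what the algorithms compared in Table 22 minimize.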

Conclusion
A novel PSO variant called velocity pausing particle swarm optimization (VPPSO) is proposed in this work. The main idea of the proposed approach is to give particles an option to move with the same velocity as in the previous iteration. A merit of the velocity pausing approach is that it is not limited to PSO variants: it can also be incorporated into new or existing metaheuristic algorithms to improve their performance. In addition, VPPSO modifies the first term of the standard PSO velocity equation to help avoid the premature convergence of PSO.
To enhance diversity, the proposed approach implements a two-swarm strategy in which particles in the first swarm update their positions based on the classical PSO mechanism while particles in the second swarm update their positions based on the global best position only. The performance of VPPSO is validated on 43 challenging optimization problems: the 23 classical benchmark functions, the 10 CEC2019 test functions and the CEC2020 test suite. Moreover, VPPSO is applied to four real-world engineering problems. According to the statistical results, VPPSO outperforms recent well-known, high-performance optimization algorithms, including PPSO, GWO, HGSO and AOA, on both low- and high-dimensional problems. This performance is achieved because velocity pausing can better balance exploration and exploitation, while the two-swarm strategy and the modified velocity equation enhance diversity and better control the movements of particles, respectively. VPPSO has also shown superior performance on the four real-world constrained engineering problems. These promising results may motivate other researchers to apply VPPSO to optimization problems in their own fields.
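A gbest-only position update of the kind used by the second swarm can be sketched as below. This is an illustrative Python fragment with an assumed shrinking random step; the exact coefficients and random terms of the VPPSO update are defined in the paper.

```python
import numpy as np

def second_swarm_step(x, gbest, t, iters, rng):
    """Illustrative gbest-only position update for the second swarm.
    The random step factor lies in [0, 1) and shrinks over iterations
    (an assumption), so the particle moves part of the way toward the
    global best position without using a personal best or a velocity."""
    step = rng.random(x.shape) * (1.0 - t / iters)
    return x + step * (gbest - x)
```

Because each component moves a fraction of the way toward gbest, this update is purely exploitative; diversity comes from the first swarm, which follows the classical PSO mechanism.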

Future work
Some potential directions that can help to improve the optimization performance of VPPSO and other metaheuristic algorithms are summarized as follows:

• The velocity pausing concept can be integrated with other metaheuristic algorithms to enhance their performance.
• Further work is needed to develop a binary VPPSO version to solve binary optimization problems such as feature selection and the 0-1 knapsack problem.
• Another interesting direction is the development of a multi-objective VPPSO algorithm.
• VPPSO can be hybridized with other recent algorithms such as EO, HGSO and AOA to further improve its performance.
• In terms of applications, VPPSO can be applied to diverse real-world optimization problems such as maintenance scheduling [100], data clustering [101], lot-sizing optimization [102,103] and multilevel thresholding image segmentation [14,104].
• VPPSO can be combined with well-known approaches such as Levy flight and chaotic maps to develop an enhanced version of the algorithm.
• VPPSO can be applied to real-world engineering design problems such as the three-bar truss and the multiple disc clutch brake.

(Note to Table 18: values higher than 0.05 are shown in bold face, and NaN ('Not a Number') is returned by the Wilcoxon test. The symbols '+', '≈' and '−' indicate that VPPSO performs significantly better than, statistically similar to, or significantly worse than the compared algorithm, respectively.)

Data availability Data sharing is not applicable, as no datasets were generated or analyzed in this article.

Declaration
Competing interest The authors declare that they have no financial or personal relationships related to this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.