Abstract
Over the last decade, particle swarm optimization has become increasingly sophisticated as well-balanced exploration and exploitation mechanisms have been proposed. The sequential quadratic programming method, which is widely used for real-parameter optimization problems, demonstrates outstanding local search capability. In this study, two mechanisms are proposed and integrated into particle swarm optimization for single-objective numerical optimization. A novel ratio adaptation scheme is utilized to calculate the proportion of the subpopulations, and the sequential quadratic programming method is invoked intermittently for local search, starting from the best particle, to seek a better solution. The novel particle swarm optimization variant was validated on the CEC2013, CEC2014, and CEC2017 benchmark functions. The experimental results demonstrate impressive performance compared with state-of-the-art particle swarm optimization-based algorithms. Furthermore, the results also illustrate the effectiveness of the two mechanisms when cooperating to achieve significant improvement.
Introduction
Particle swarm optimization (PSO) is a well-known population-based metaheuristic algorithm [1]. Many PSO-based variants and their applications have been proposed in the last decade [2,3,4,5,6,7,8,9,10,11,12]. PSO has been applied to large-scale group decision-making [13], adsorption control of a pipeline robot [14], health estimation for electric vehicles [15], substitution box construction based on quantum-inspired quantum walks [16], feature-related time consumption reduction for surface electromyography [17], and credit risk assessment for personal auto loans [18]. Li et al. [19] indicated that research on the PSO algorithm can be divided into four categories: parameter tuning, topology choices, learning strategy improvements, and integration with other algorithms. Among the large number of PSO-based investigations, hybridization is a popular approach because it can combine several complementary PSO variants or algorithms into a solid framework. In [20], a hybrid genetic algorithm with PSO for multimodal functions was proposed. In [21], the search mechanisms of swallow swarm optimization (SSO) were implemented with PSO to formulate the hybrid particle swallow swarm optimization (HPSSO) algorithm. In [22], a hybrid feature selection algorithm based on PSO was proposed; this variant uses a local search strategy, embedded in the PSO, to select the less correlated and salient feature subset. In [23], a hybrid PSO algorithm that utilizes an adaptive learning strategy (ALPSO) was proposed; a self-learning-based candidate generation strategy and a competitive learning-based prediction strategy were employed to ensure the exploration and exploitation abilities of the algorithm, respectively. In [24], the variable neighborhood search (VNS), used to solve the clustered vehicle routing problem (CluVRP), was employed within PSO to ensure solution intensity and drive solutions toward local optima.
In [25], the inertia weight PSO (iwPSO) [26] and social learning PSO (SLPSO) [27] were combined to form a PSO on single-objective numerical optimization (PSO-sono). The PSO-sono approach demonstrates outstanding performance on CEC2013, CEC2014, and CEC2017 benchmark functions.
The essence of hybridization is to select complementary algorithms and exert their respective strengths; however, an unsuitable subpopulation size may limit these strengths. According to our observations, subpopulation allocation considerably influences algorithm performance because the appropriate subpopulation sizes differ across the stages of evolution. Furthermore, the exploitation capability increases the convergence rate, especially for particles near the global optimum. Hence, employing a proper local search method can improve evolution efficiency. The sequential quadratic programming (SQP) method, proposed by Wilson [28], offers outstanding local search capability and is widely used in real-parameter optimization problems [29,30,31,32,33]. The SQP method can be used to seek a local minimum satisfying the constraints. Theories related to the SQP method can be found in [28, 34,35,36].
In this study, PSO-sono was further reformed by using a novel ratio adaptation scheme (NRAS) to calculate the proportion of the subpopulations and by applying the SQP method intermittently to the best particle generated by either iwPSO or SLPSO, yielding impressive performance. The proposed PSO-based variant is called SQPPSO-sono, and it performs more competitively than the state-of-the-art algorithms: PSO-sono [25], ensemble particle swarm optimizer (EPSO) [37], pyramid particle swarm optimization (PPSO) [38], modified particle swarm optimization (MPSO) [39], terminal crossover and steering-based particle swarm optimization (TCSPSO) [40], and heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [41]. All the compared algorithms were tested on the CEC2013, CEC2014, and CEC2017 benchmark functions. The experimental results demonstrate that SQPPSO-sono not only delivers outstanding performance in most cases but also stronger local search capability when the SQP method cooperates with NRAS. The main highlights of this work are the following:
-
The SQP method and NRAS are introduced in the SQPPSO-sono approach, and the two mechanisms cooperate to deliver significantly enhanced performance.
-
The SQP method yields a sharp convergence rate early in the optimization process because it is applied to the current best particle. It enhances the local search capability of SQPPSO-sono and improves its rankings relative to the other algorithms.
-
NRAS, inspired by the effective butterfly optimizer using the covariance matrix adapted retreat phase [36], is proposed for dividing the subpopulations.
-
A large test suite including all the benchmark functions from the CEC2013, CEC2014, and CEC2017 test suites is used for algorithm validation. To some extent, over-fitting can be avoided owing to this large number of real-parameter single-objective tests, compared with employing a single test suite containing only a few benchmark functions.
The remainder of this paper is organized as follows. “Particle swarm optimization and PSO-sono” reviews PSO and PSO-sono. “The sequential quadratic programming based particle swarm optimization” presents the novel strategy and the proposed SQPPSO-sono algorithm; the integration and implementation of NRAS and the SQP method are described in detail. “Performance evaluation” presents the experimental results and the parameter settings of the compared algorithms; comparisons with recent state-of-the-art PSO-based algorithms, analysis, discussions, and future research directions are presented in this section as well. “Conclusions” concludes and summarizes the paper.
Particle swarm optimization and PSO-sono
PSO-sono is a variant of PSO with the following improvements: (1) a sorted particle swarm with hybrid paradigms improves the optimization performance; (2) novel adaptation schemes adjust the ratio of each paradigm and the constriction coefficients during evolution; (3) a fully-informed search scheme based on the current best particle in each generation assists PSO in jumping out of local optima and improves performance.
Particle swarm optimization
The velocity and position, which are the basic properties of the particles of the classical PSO, are updated for each particle [42]. The velocity and position of the ith particle at generation t are recorded as \(V_{i}(t)\) and \(X_{i}(t)\), respectively, and the formulas can be presented as follows:
\(r1_{i}^d\) and \(r2_{i}^d\) are randomly generated by uniform distribution in range [0,1], \(d=1,2,...,\text {dim}\), where \(\text {dim}\) is the dimension size. \(P{\text {best}}_{i}^d(t)\) is the ith particle’s previous best solution, and \(G{\text {best}}(t)^d\) is the whole swarm’s best solution, which are as defined below:
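As a concrete illustration, the classical update can be sketched in code as follows; this is a minimal sketch in which the acceleration coefficients \(c_{1}=c_{2}=2.0\) are assumed values for illustration, not parameters taken from the paper:

```python
import numpy as np

def pso_update(X, V, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    """One classical PSO step for the whole swarm.

    X, V, pbest: arrays of shape (pop_size, dim); gbest: shape (dim,).
    c1, c2 are illustrative acceleration coefficients (assumed values).
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(X.shape)  # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = rng.random(X.shape)
    V_new = V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    return X + V_new, V_new
```

When a particle already sits at both its personal best and the global best with zero velocity, the update leaves it in place, matching the intuition behind the formulas.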
Inertia weight PSO and social learning PSO
Meng et al. [25] proposed combining two complementary PSO variants, iwPSO and SLPSO, to promote the performance of single-objective numerical optimization, naming the result PSO-sono. The iwPSO algorithm uses the following equations to calculate each particle’s velocity and position:
where w is the inertia weight of the velocity. \(r1_{i}^d\) and \(r2_{i}^d\) are randomly generated by uniform distribution in range [0,1], \(d=1,2,...,\text {dim}\). \(P{\text {best}}_{i}^d(t)\) is the ith particle’s previous best experience, and \(G{\text {best}}(t)^d\) is the whole swarm’s best experience.
The SLPSO algorithm uses Eqs. 6 and 7 to update the particle’s velocity and position:
where \(V_{i}^d(t+1)\) satisfies:
where \(X_{k}^{d}(t)\) is the dth dimension of a randomly selected particle that is better than the ith particle in the tth generation; the ith particle itself is selected if it is the best particle. \(\overline{X}_{\text {center}}^{d}(t)\) is the center of the population. \(\epsilon \) is the social influence, \(\epsilon =0.01*\frac{\text {dim}}{100}\). \(r0_{i}^d\), \(r1_{i}^d\), and \(r2_{i}^d\) are randomly generated by uniform distribution in range [0,1], \(d=1,2,...,\text {dim}\). \(lp_{i}\) denotes the learning probability of the ith particle, and it obeys the following:
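The SLPSO update described above can be sketched as follows; this is a simplified sketch in which the demonstrator is drawn per dimension from the better-ranked particles, and the learning probability \(lp_{i}\) is passed in directly rather than computed from its defining formula in [27]:

```python
import numpy as np

def slpso_update(X, V, fitness, eps, lp, rng=None):
    """One simplified SLPSO step (a sketch, not the exact SLPSO of [27]).

    Each non-best particle, with probability lp[i], learns per dimension from
    a randomly chosen better particle and from the population center.
    """
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    order = np.argsort(fitness)              # ascending: best particle first
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)
    center = X.mean(axis=0)                  # population center X_center
    X_new, V_new = X.copy(), V.copy()
    for i in range(n):
        if rank[i] == 0 or rng.random() > lp[i]:
            continue                         # best particle (or skipped one) is kept
        better = order[:rank[i]]             # particles ranked better than i
        k = rng.choice(better, size=dim)     # one demonstrator per dimension
        r0, r1, r2 = rng.random(dim), rng.random(dim), rng.random(dim)
        V_new[i] = (r0 * V[i] + r1 * (X[k, np.arange(dim)] - X[i])
                    + r2 * eps * (center - X[i]))
        X_new[i] = X[i] + V_new[i]
    return X_new, V_new
```

Note that the best particle never moves in this step, which is why SLPSO pairs naturally with an exploitative partner such as iwPSO.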
The sequential quadratic programming based particle swarm optimization
Compared with PSO-sono, the proposed SQPPSO-sono makes two significant improvements, highlighted in the red boxes in Fig. 1. The framework of SQPPSO-sono is given in Algorithm 1. Figure 1 indicates that the NRAS and SQP methods are embedded into the SQPPSO-sono algorithm at the beginning of each generation and after the best particle is produced by iwPSO or SLPSO, respectively. The NRAS method is applied after the initialization and calculates the ratio r, the proportion of the subpopulations, at the beginning of each generation. Then, iwPSO and SLPSO are run simultaneously according to the allocations, and a global best particle is generated from both subpopulations. Later, the SQP method is applied, starting from the current global best particle, to further enhance the exploitation capability. When the termination condition is satisfied, the algorithm stops running. The major contributions of the proposed algorithm are summarized below:
-
A novel ratio adaptation scheme is employed to calculate the proportion of each subpopulation for iwPSO and SLPSO, rather than using the numbers of successful particles to calculate the proportions of the subpopulations.
-
The SQP method is employed to execute a local search starting from the current global best particle, generated by either iwPSO or SLPSO, to search for a better solution in the current generation. If a better solution is found, the best particle is replaced by it.
Novel ratio adaptation scheme
In PSO-sono, by sorting the particles based on their fitness values, the population of particles is divided into two groups: the better-particle-group (PS1) and the worse-particle-group (PS2), which are evolved by iwPSO and SLPSO, respectively. The ratio r is used for calculating the proportion of the better particles in the entire population; thus, the proportion of the worse particles is \(1-r\). The ratio r in PSO-sono is represented as follows:
where \(ns_{b}\) and \(ns_{w}\) represent the numbers of successful particles in the better-particle-group and worse-particle-group, respectively. Furthermore, r is truncated so that its value remains within the range [0.1, 0.9].
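In code, this success-based update can be sketched as follows (a hedged sketch: the even-split fallback when no particle succeeds is an assumption for illustration, while the truncation bounds follow the text):

```python
def pso_sono_ratio(ns_b, ns_w, lo=0.1, hi=0.9):
    """Sketch of the PSO-sono ratio r: the share of the better-particle-group
    follows its share of successful particles, truncated to [lo, hi] so that
    neither subpopulation vanishes."""
    if ns_b + ns_w == 0:
        return 0.5  # no successes this generation: split evenly (assumed fallback)
    r = ns_b / (ns_b + ns_w)
    return min(max(r, lo), hi)
```

For example, if all successful particles come from the better group, r is clipped to the upper bound 0.9, so the worse group always retains at least 10% of the population.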
In SQPPSO-sono, a novel ratio adaptation scheme (NRAS) is proposed, inspired by [36]. In [36], a ratio adaptation scheme is used for strategy selection, whereas here NRAS is used for dividing the subpopulations. Two factors are considered in calculating the ratio r:
-
The quality of the particles: the lower the fitness value, the higher the quality of the particle.
-
The diversity rate of particles in each subpopulation.
The normalized quality values (nq) are calculated as follows:
The diversity rate is calculated as follows:
where \(\text {dis}(X_{PS_{i},z},X_{PS_{i},\text {best}})\) denotes the Euclidean distance between the zth particle and best particle in \(PS_{i}\). The normalized diversity (nd) is calculated as follows:
In Eq. 9, only the fitness values of the particles are considered when adjusting the subpopulation sizes for PSO-sono, so the algorithm may easily fall into a local optimum. In contrast, the proposed NRAS considers both the fitness values and the diversity of the particles.
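The following sketch illustrates how such a quality-plus-diversity ratio could be computed for the two subpopulations; the min–max normalization and the equal weighting of the two terms are assumptions for illustration, since the exact NRAS formulas are those referenced above:

```python
import numpy as np

def nras_ratio(mean_fit, mean_div, lo=0.1, hi=0.9):
    """A hedged sketch of an NRAS-style ratio for two subpopulations.

    mean_fit: mean fitness of (better-group, worse-group); lower is better.
    mean_div: mean distance-to-best of each group (the diversity rate).
    The normalization and weighting here are illustrative assumptions.
    """
    f = np.asarray(mean_fit, dtype=float)
    d = np.asarray(mean_div, dtype=float)
    nq = (f.max() - f) / (f.max() - f.min() + 1e-12)  # lower fitness -> higher quality
    nd = d / (d.sum() + 1e-12)                        # each group's share of diversity
    score = nq + nd
    r = score[0] / (score.sum() + 1e-12)              # better-group proportion
    return float(np.clip(r, lo, hi))
```

A group that is both fitter and more diverse receives a larger share of the population, while the clipping keeps both groups alive, mirroring the truncation used by PSO-sono.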
Strategy using sequential quadratic programming
In [25], PSO-sono applies the fully-informed search scheme to the best particle in each generation to help the algorithm jump out of local optima. In contrast, the proposed SQPPSO-sono algorithm replaces the fully-informed search scheme with the SQP method to enhance the local search capability. Because function evaluations (FEs) are performed in the SQP method, those evaluations are counted toward the FE budget to achieve a fair comparison.
The essence of the SQP method is to transform a nonlinear problem into a linear problem [43]. The principle of the SQP method is to find a descent direction by modeling a quadratic optimization problem. Nonlinear optimization problems can be represented as follows:
The Lagrangian of the above formulation can be written as follows:
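For reference, a textbook form of this Lagrangian, assuming equality constraints \(h(x)=0\) and inequality constraints \(g(x)\le 0\) (an illustrative reconstruction, not the paper's exact display), is:

```latex
L(x, \lambda, \mu) = f(x) + \lambda^{T} h(x) + \mu^{T} g(x)
```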
where \(\lambda \) and \(\mu \) are Lagrangian multipliers. The SQP method is an iterative procedure that repeatedly builds, at a given iterate \(x_{k}\), a quadratic programming sub-problem; \(x_{k}\) is then used to construct a new iterate \(x_{k+1}\). The sub-problem is established by linearizing the constraints at \(x_{k}\), and it can be written as follows:
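A textbook form of this sub-problem in the search direction \(d\), consistent with the symbols used below (again an illustrative reconstruction), is:

```latex
\min_{d \in \mathbb{R}^{n}} \quad \nabla f(x_{k})^{T} d + \frac{1}{2}\, d^{T} Hf(x_{k})\, d
\qquad \text{s.t.} \quad
h(x_{k}) + \nabla h(x_{k})^{T} d = 0, \quad
g(x_{k}) + \nabla g(x_{k})^{T} d \le 0
```

and the new iterate is taken as \(x_{k+1}=x_{k}+d\).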
Here, \(Hf(x_{k})\) is the Hessian of f at \(x \in R^{n}\). Because the SQP method relies heavily on the initial estimate [44], the proposed SQPPSO-sono applies the SQP method only to the current global best particle. Thus, the method is used to enhance the local search capability of the proposed algorithm.
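In practice, this refinement step can be sketched with an off-the-shelf SQP-type routine; the sketch below uses SciPy's SLSQP as one readily available implementation (the paper's exact SQP routine, iteration budget, and bound handling are not specified here, so `budget` and the replace-if-better logic are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def sqp_refine(f, gbest, bounds, budget=100):
    """Refine the current global best particle with an SQP-type local search.

    Returns the (possibly improved) particle, its fitness, and the number of
    function evaluations consumed, which must be charged to the FE budget.
    """
    res = minimize(f, gbest, method="SLSQP", bounds=bounds,
                   options={"maxiter": budget})
    base = f(gbest)
    if res.success and res.fun < base:
        return res.x, res.fun, res.nfev       # replace gbest with the better solution
    return np.asarray(gbest), base, res.nfev  # keep gbest unchanged

# Example: refine a point on the sphere function
best, best_f, used = sqp_refine(lambda x: float(np.sum(x ** 2)),
                                np.array([1.0, 1.0]), [(-5, 5), (-5, 5)])
```

Charging `res.nfev` to the overall FE budget is what makes the comparison with the other PSO variants fair, as discussed above.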
Performance evaluation
The recent PSO-based variants were selected in this study to evaluate the performance of SQPPSO-sono, and tests based on the CEC2013, CEC2014, and CEC2017 benchmark functions were performed. The PSO-based variants, namely PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO, were considered. For the sake of fairness, the FEs for each variant with the same dimension size were set to identical values.
Parameter settings
The general form of the maximum FEs is \(10^{4} \times \) D; hence, the FEs for all tested algorithms were set to \(10 \times 10^{4}\), \(30 \times 10^{4}\), \(50 \times 10^{4}\), and \(100 \times 10^{4}\) for 10D, 30D, 50D, and 100D, respectively. Table 1 lists the parameter settings for the proposed SQPPSO-sono algorithm and the compared PSO variants. We used the recommended default settings, tuned by the original investigators to yield the best performance, for the state-of-the-art PSO variants.
Experimental results
To better measure the performance and reliability of the proposed algorithm, the statistics of the rankings for SQPPSO-sono, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO, tested on the CEC2013, CEC2014, and CEC2017 benchmark functions, are presented in Tables 2, 3, and 4, respectively. The top 3 (1st, \(2{\text {nd}}\), and \(3{\text {rd}}\)) rankings of the mean, minimum, and median values are counted for the 10, 30, and 50-dimensional tests for all compared variants on CEC2013 and CEC2014, while the statistics of the 10, 30, 50, and 100-dimensional tests are given for CEC2017. The greatest numbers of top 3 rankings are given in boldface. The convergence characteristic graphs of SQPPSO-sono and the compared PSO variants are presented in Figs. 7 and 8. Due to page limitations, only the evolutionary processes of \(f_{n}\) over 50 runs of the 100-dimensional test are presented herein.
To verify the individual effectiveness of NRAS and the SQP method when cooperating within SQPPSO-sono, validations of SQPPSO-sono without NRAS and without the SQP method were performed. Both validations were tested on the CEC2013, CEC2014, and CEC2017 benchmark functions for 30 dimensions. Two ablation experiments were conducted to further illustrate the effectiveness of both methods. The experimental results are presented in Tables 5, 6, and 7.
The experimental results for the 10, 30, and 50-dimensional tests on CEC2013 and CEC2014 are presented in Tables SF-1 to SF-6 in the supplementary file, respectively. Additionally, the experimental results for the 10, 30, 50, and 100-dimensional tests on CEC2017 can be found in Tables SF-7 to SF-10 of the supplementary file. These tables summarize the outcomes based on the minimum, median, and mean values, along with their standard deviations, obtained from the last generation of 50 independent runs for each function. Furthermore, the tables include rankings for the minimum, median, and mean values. The smallest minimum, median, and mean values among all recent PSO-based variants are highlighted in boldface, as are the top 1 rankings for the minimum, median, and mean values. To emphasize reliability and enable effective comparisons, the tables retain additional decimal places, showcasing robust variant performances for certain values, rather than using scientific notation.
The Wilcoxon signed-rank test (WST) was performed for SQPPSO-sono versus PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO. Tables SF-1 to SF-10 of the supplementary file also list the outcomes of the WST at the 5% significance level. In the tables, ‘\(\le \)’ and ‘\(\ge \)’ indicate that SQPPSO-sono performs worse or better, respectively, on \(f_{n}\) than the compared variant. ‘<’ and ‘>’ indicate that SQPPSO-sono performs significantly worse or better than the compared variants, and ‘\(=\)’ implies that there is no difference in the performances of the compared variants. In most cases, SQPPSO-sono surpasses PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO.
Comparison and analysis
In CEC2013 benchmark functions, \(f_{1}-f_{5}\), \(f_{6}-f_{20}\), and \(f_{21}-f_{28}\) are unimodal, multimodal, and composition functions, respectively. In CEC2014 benchmark functions, \(f_{1}-f_{3}\), \(f_{4}-f_{16}\), \(f_{17}-f_{22}\), and \(f_{23}-f_{30}\) are unimodal, multimodal, hybrid, and composition functions, respectively. In CEC2017 benchmark functions, \(f_{1}-f_{3}\), \(f_{4}-f_{10}\), \(f_{11}-f_{20}\), and \(f_{21}-f_{30}\) are unimodal, multimodal, hybrid, and composition functions, respectively.
In Table 2, SQPPSO-sono shows the largest number of 1st rankings (7 times) for mean values on the 50-dimensional test; the largest numbers of 1st rankings (10, 9, and 10 times) for minimum values on the 10, 30, and 50-dimensional tests; and the largest numbers of 1st rankings (9 and 9 times) for median values on the 30 and 50-dimensional tests. The overall number of 1st rankings of SQPPSO-sono for the 30-dimensional test is 25, equivalent to HCLPSO; the overall number of 1st rankings of SQPPSO-sono for the 50-dimensional test is the best (26 times), whereas the overall number of 1st rankings of HCLPSO for the 10-dimensional test is the best (26 times), where SQPPSO-sono only ranks \(3{\text {rd}}\).
In Table 3, SQPPSO-sono has the largest number of 1st rankings for mean, minimum, and median values on the 30 and 50-dimensional tests; thus, the overall numbers of 1st rankings (both 28 times) of SQPPSO-sono on the 30 and 50-dimensional tests are also the best. HCLPSO has the best overall number of 1st rankings (27 times) on the 10-dimensional test, while SQPPSO-sono has the second largest number (25 times).
In Table 4, SQPPSO-sono has the largest number of 1st rankings on all dimensional tests except for the ranking (9 times) of mean values on the 10-dimensional test, which is the second best. The overall numbers of 1st rankings of SQPPSO-sono for the 10, 30, 50, and 100-dimensional tests are 35, 37, 35, and 38, respectively, which are the best among all compared algorithms.
According to the statistics of the different dimensional tests on CEC2013, CEC2014, and CEC2017, SQPPSO-sono demonstrated impressive performance on the high-dimensional tests and outperformed the compared algorithms in most cases. To better review the rankings, radar maps are given in Fig. 2.
In Table 5, in the 30-dimensional test on CEC2013, SQPPSO-sono performs better on \(f_{2}-f_{3}\), \(f_{6}-f_{7}\), \(f_{9}-f_{13}\), \(f_{17}-f_{19}\), and \(f_{24}-f_{27}\), 16 functions in total, whereas SQPPSO-sono without NRAS and without the SQP method only performs better on 4 and 8 functions, respectively. In the 30-dimensional test on CEC2014, SQPPSO-sono performs better on \(f_{1}-f_{2}\), \(f_{4}\), \(f_{13}-f_{15}\), \(f_{17}\), \(f_{19}-f_{20}\), \(f_{22}\), \(f_{24}-f_{28}\), and \(f_{30}\), 16 functions in total, whereas SQPPSO-sono without NRAS and without the SQP method only performs better on 7 and 7 functions, respectively. In the 30-dimensional test on CEC2017, SQPPSO-sono performs better on \(f_{2}\), \(f_{4}\), \(f_{6}\), \(f_{8}\), \(f_{11}-f_{12}\), \(f_{14}\), \(f_{17}\), \(f_{20}\), \(f_{22}\), \(f_{26}-f_{27}\), and \(f_{29}-f_{30}\), 14 functions in total, whereas SQPPSO-sono without NRAS and without the SQP method only performs better on 10 and 6 functions, respectively. The experimental results not only illustrate the effectiveness of NRAS and the SQP method individually but also demonstrate their successful cooperation on a large test suite containing all types of functions, including unimodal, multimodal, hybrid, and composition functions.
In Table 6, ablation experiments are conducted to better illustrate the effectiveness of the NRAS method. Three fixed pairs of proportions are evaluated for iwPSO and SLPSO: 30–70%, 50–50%, and 70–30%. Compared with these fixed proportions, SQPPSO-sono achieves better mean values for multiple CEC2013, CEC2014, and CEC2017 benchmark functions. Specifically, against the 30–70% proportion, SQPPSO-sono performs better on 19, 20, and 20 functions; against the 50–50% proportion, it excels on 20, 22, and 23 functions; and against the 70–30% proportion, it achieves better results on 21, 22, and 26 functions. The NRAS method thus shows a significant improvement in terms of efficiency.
In Table 7, ablation experiments were conducted to better illustrate the effectiveness of the SQP method. Two local search mechanisms, the interior point method [45] and CMA-ES [46], were applied for SQPPSO-sono. When compared with the interior point method, SQPPSO-sono performed better in 17, 19, and 21 functions on CEC2013, CEC2014, and CEC2017 benchmark functions, respectively. When compared with CMA-ES, SQPPSO-sono performed better in 15, 17, and 18 functions on CEC2013, CEC2014, and CEC2017 benchmark functions, respectively. Although the SQP method emerged as the winner in pairwise comparisons, it has also provided us with inspiration for developing hybrid local search mechanisms in the future.
In Figs. 7 and 8, SQPPSO-sono produces extremely sharp convergence rates in the very early generations of the optimization process for \(f_{1}\), \(f_{3}-f_{4}\), \(f_{10}-f_{15}\), \(f_{17}-f_{18}\), \(f_{25}\), \(f_{28}\), and \(f_{30}\). A characteristic of SQPPSO-sono is that it maintains sharp convergence rates in the very early (nearly at the beginning of the) generations and remains almost flat in the later generations of the evolutionary process. However, this feature also has drawbacks. When optimizing some of the problems, it is difficult to jump out once the process falls into local optima. For example, the optimization of \(f_{2}\), \(f_{5}-f_{6}\), \(f_{8}\), \(f_{10}\), \(f_{20}\), \(f_{22}\), and \(f_{27}\) leaves significant room for improvement. Due to the early intervention of the SQP method, the algorithm may prematurely fall into local optima on some benchmark functions. One research direction is to identify a method that complements the SQP method and works with it to form a synergistic mechanism, allowing for a jump-out approach. The local search capability is strongly enhanced in the current search strategy used by SQPPSO-sono; however, the search behavior, particularly keeping population diversity in the earlier generations of the run, needs to be improved further in the future.
From Table SF-1, in the 10-dimensional test on the CEC2013 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{2}\) and \(f_{10}\); the smallest mean value on \(f_{4}\); the smallest minimum values on \(f_{3}\), \(f_{5}-f_{8}\), \(f_{16}\), and \(f_{28}\); and the smallest median values on \(f_{3}-f_{5}\) and \(f_{27}\). From Table SF-4, in the 10-dimensional test on the CEC2014 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{2}\), \(f_{17}-f_{18}\), and \(f_{30}\); the smallest mean and median values on \(f_{20}\); and the smallest minimum values on \(f_{4}-f_{7}\) and \(f_{25}-f_{28}\). From Table SF-7, in the 10-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{2}\), \(f_{4}\), \(f_{12}-f_{15}\), and \(f_{18}\); the smallest mean value on \(f_{19}\); the smallest minimum values on \(f_{3}\), \(f_{6}\), \(f_{9}\), \(f_{11}\), \(f_{20}-f_{21}\), and \(f_{24}\); and the smallest median values on \(f_{3}\), \(f_{9}\), and \(f_{11}\). In general, SQPPSO-sono is less competitive on the CEC2013, CEC2014, and CEC2017 composition functions. However, SQPPSO-sono continues to demonstrate outstanding performance on hybrid functions.
From Table SF-2, in the 30-dimensional test on the CEC2013 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{2}-f_{4}\), \(f_{6}-f_{7}\), \(f_{10}\), and \(f_{16}\); and both the smallest minimum and median values on \(f_{26}\) and \(f_{28}\). From Table SF-5, in the 30-dimensional test on the CEC2014 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{4}\), \(f_{17}\), \(f_{20}-f_{21}\), and \(f_{26}\); the smallest minimum value on \(f_{15}\); the smallest mean and median values on \(f_{18}\); and the smallest median value on \(f_{29}\). From Table SF-8, in the 30-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}\), \(f_{3}-f_{4}\), \(f_{12}-f_{15}\), \(f_{18}\), and \(f_{30}\); the smallest mean values on \(f_{9}\) and \(f_{19}\); the smallest minimum values on \(f_{2}\), \(f_{16}\), \(f_{25}\), and \(f_{26}\); and the smallest median and minimum values on \(f_{22}\) and \(f_{28}\). Compared with HCLPSO, TCSPSO, and EPSO, SQPPSO-sono is less competitive on the CEC2013, CEC2014, and CEC2017 composition functions, respectively. However, the smallest values of SQPPSO-sono cover all types of functions on the CEC2013, CEC2014, and CEC2017 benchmark functions.
From Table SF-3, in the 50-dimensional test on the CEC2013 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{2}-f_{4}\), \(f_{6}\), \(f_{10}\), and \(f_{16}\); the smallest mean value on \(f_{7}\); the smallest minimum values on \(f_{12}\), \(f_{18}\), \(f_{26}\), and \(f_{28}\); and the smallest median values on \(f_{7}\), \(f_{13}\), and \(f_{28}\). From Table SF-6, in the 50-dimensional test on the CEC2014 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{4}\), \(f_{13}\), \(f_{17}\), and \(f_{20}-f_{21}\); the smallest mean value on \(f_{18}\); the smallest minimum value on \(f_{15}\); and the smallest median values on \(f_{15}\) and \(f_{26}\). From Table SF-9, in the 50-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}\), \(f_{3}-f_{4}\), \(f_{12}\), \(f_{14}\), \(f_{18}\), and \(f_{30}\); the smallest mean values on \(f_{13}\), \(f_{15}\), and \(f_{19}\); the smallest minimum values on \(f_{5}\), \(f_{7}-f_{8}\), \(f_{17}\), \(f_{21}-f_{24}\), and \(f_{28}\); and the smallest median values on both \(f_{13}\) and \(f_{19}\). In general, the SQPPSO-sono performance on the CEC2017 composition functions was remarkably better than on the CEC2013 and CEC2014 composition functions.
From Table SF-10, in the 100-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}\), \(f_{3}-f_{4}\), \(f_{12}\), \(f_{14}-f_{15}\), \(f_{18}\), and \(f_{28}\); the smallest mean values on \(f_{13}\), \(f_{25}\), and \(f_{30}\); the smallest minimum values on \(f_{5}\), \(f_{7}-f_{8}\), \(f_{17}\), \(f_{21}\), \(f_{23}-f_{24}\), and \(f_{29}\); and the smallest median values on \(f_{13}\), \(f_{25}\), and \(f_{30}\). In general, SQPPSO-sono demonstrates outstanding performance in the 100-dimensional test on all types of functions.
In summary, SQPPSO-sono demonstrated superior performance across unimodal, multimodal, hybrid, and composition functions in the 30, 50, and 100-dimensional tests conducted on the CEC2013, CEC2014, and CEC2017 benchmark functions. SQPPSO-sono is less competitive in the 10-dimensional tests on CEC2013 and CEC2014. In particular, SQPPSO-sono exhibited better performance on unimodal functions; this can be attributed to the application of the SQP method, which amplifies the local search capability.
Search behavior of SQPPSO-sono
In Fig. 3, the best fitness values with their sources from a single run are plotted using different colors; green represents the best particle from the iwPSO group; blue represents the best particle from the SLPSO group; red represents the best particle from replacement by a better solution generated by the SQP method. The search modes can be summarized as follows:
-
Both iwPSO and SLPSO conduct an effective search in the whole optimization process.
-
Either iwPSO, SLPSO, or both conduct effective searches in the very early generations; then, the SQP method conducts an effective search in the later generations and lasts until the end.
-
Either iwPSO, SLPSO, or both conduct effective searches in the very early generations; then, the SQP method conducts an effective search for a long while, and iwPSO or SLPSO suddenly conducts an effective search for a while. The SQP method later conducts an effective search until the end.
-
The SQP method conducts an effective search almost at the beginning of the optimization process. Either iwPSO, SLPSO, or both conduct an effective search at some point and cooperate with the SQP method to achieve a sharp convergence rate.
From the above search modes, a step-descent evolutionary process, like a cliff fall, is common; this is consistent with Figs. 7 and 8, and the convergence rate is very sharp. From the figures, in the various evolutionary stages, the SQP method plays an important role in enhancing the local search capability in the very early optimization process, sometimes even at the beginning of the optimization process; thus, it provides an ideal condition for enhancing the global search of SQPPSO-sono. In the future, one possible research direction could be to reduce the frequency of usage of the SQP method and increase the usage of a global search method, possibly using niching methods [47]. According to our laboratory observations, the interior-point method [48, 49] significantly outperforms the SQP method on a few optimization problems; thus, another possible research direction is to design an ensemble local search strategy using both the SQP and interior-point methods.
Two-dimensional tests were conducted for SQPPSO-sono with each method removed individually. The 2-dimensional particles in diverse generations (0, 10, 20, 30) on \(f_{4}\), \(f_{8}\), and \(f_{26}\) are plotted in Figs. 4, 5, and 6. From the figures, it can be observed that the particles are more scattered during the evolutionary process when SQPPSO-sono does not employ the SQP method; the particles are also more scattered in the later generations for SQPPSO-sono without the SQP method as a local search mechanism. SQPPSO-sono without NRAS leads to more gathering of particles due to the loss of population diversity, whereas using NRAS maintains the population diversity even in the later generations of the evolution. The figures provide strong evidence that the proposed methods are effective. Thus, the global and local search capabilities of the proposed SQPPSO-sono are well balanced (Figs. 7, 8).
Algorithmic computational complexity
The computational complexity of the proposed SQPPSO-sono algorithm was determined as delineated in the CEC2017 benchmark competition [50]. All experiments were conducted on the following system:
-
CPU: Intel Core i7-1165G7 @ 2.80 GHz
-
RAM: 16 GB
-
OS: Windows 10
-
Software: Matlab 2018b
In Table 8, the computational complexity of the SQPPSO-sono algorithm on 10, 30, 50, and 100 dimensions is presented. In this table, \(T_{0}\) is the time taken to execute the following code:
x = 0.55;
for i = 1:1000000
    x = x + x; x = x/2; x = x*x;
    x = sqrt(x); x = log(x); x = exp(x); x = x/(x + 2);
end
\(T_{1}\) is the time to evaluate benchmark function \(f_{18}\) alone for 200,000 evaluations in D dimensions, and \(T_{2}\) is the execution time of the SQPPSO-sono algorithm on function \(f_{18}\) for 200,000 evaluations in D dimensions. \(\hat{T}_{2}\) is the mean of \(T_{2}\) over 5 runs. As presented in Table 8, \(T_{1}\), \(\hat{T}_{2}\), and \((\hat{T}_{2}-T_{1})/T_{0}\) scale roughly linearly with the number of dimensions. Table 9 compares the computational complexity of the SQPPSO-sono algorithm with that of the compared PSO-based variants for 30 dimensions; the same system and the same procedure were used to calculate the computational complexity of each compared algorithm.
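The complexity protocol above can be sketched as a small timing harness. This is an illustrative Python sketch, not the authors' Matlab code; the benchmark function and algorithm passed in are stand-ins supplied by the caller.

```python
import math
import time


def t0():
    # Baseline timing loop from the CEC complexity protocol:
    # one million iterations of fixed floating-point operations.
    start = time.perf_counter()
    x = 0.55
    for _ in range(1000000):
        x = x + x; x = x / 2; x = x * x
        x = math.sqrt(x); x = math.log(x); x = math.exp(x); x = x / (x + 2)
    return time.perf_counter() - start


def t1(f, dim, evals=200000):
    # Time `evals` evaluations of the benchmark function alone.
    point = [0.5] * dim
    start = time.perf_counter()
    for _ in range(evals):
        f(point)
    return time.perf_counter() - start


def t2_hat(run_algorithm, runs=5):
    # Mean full-algorithm runtime over `runs` independent runs.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        run_algorithm()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)


def complexity(t0_time, t1_time, t2_mean):
    # The reported normalized measure (T2_hat - T1) / T0.
    return (t2_mean - t1_time) / t0_time
```

In the paper's setup, `f` would be benchmark function \(f_{18}\) and `run_algorithm` a full SQPPSO-sono run with a 200,000-evaluation budget.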
In Table 9, the computational complexity of SQPPSO-sono ranks \(3{\text {rd}}\) (excluding the SQPPSO-sono variants without NRAS and without SQP), behind PSO-sono and PPSO. The NRAS method is less efficient than the proportional subpopulation-allocation mechanism of PSO-sono, as \((\hat{T}_{2}-T_{1})/T_{0}\) is only 83.3907 without the NRAS method. The SQP method is more efficient than the fully-informed search scheme employed by PSO-sono, as \((\hat{T}_{2}-T_{1})/T_{0}\) is 129.5536 with the SQP method and 145.3576 without it. Although the computational cost of SQPPSO-sono is slightly higher than that of PSO-sono, its performance is much more efficient and reliable; this small sacrifice is worthwhile. Among all variants, the computational cost of SQPPSO-sono remains very competitive.
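As a quick sanity check, the relative cost of each mechanism follows directly from the \((\hat{T}_{2}-T_{1})/T_{0}\) values quoted above; this small illustrative snippet uses only those numbers.

```python
# Normalized complexity values (T2_hat - T1) / T0 quoted from Table 9
# for 30 dimensions.
full = 129.5536          # SQPPSO-sono with both NRAS and the SQP method
without_nras = 83.3907   # NRAS disabled
without_sqp = 145.3576   # SQP local search disabled

# NRAS adds roughly 55% to the normalized runtime ...
nras_overhead = (full - without_nras) / without_nras

# ... while the SQP method reduces it by about 11%, since it replaces
# the costlier fully-informed search scheme.
sqp_saving = (without_sqp - full) / without_sqp

print(f"NRAS overhead: {nras_overhead:.1%}, SQP saving: {sqp_saving:.1%}")
```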
Conclusions
This paper proposed a sequential quadratic programming (SQP)-based strategy for particle swarm optimization on single-objective numerical optimization (SQPPSO-sono). To improve its performance, we proposed a novel ratio adaptation scheme (NRAS) that divides the subpopulation sizes by considering both the fitness values and the diversity of particles. In SQPPSO-sono, the NRAS is responsible for dividing the subpopulations, and the SQP method, replacing the fully-informed search scheme, is intermittently invoked to enhance the exploitation capability. The NRAS considers both the quality and the diversity of particles during evolution to balance the exploration and exploitation capabilities effectively. Meanwhile, the SQP method exploits the area around the current global best particle immediately after its generation, even in the earlier iterations; this strategic timing and targeted application significantly enhance the algorithm’s efficiency. To evaluate the effectiveness of the NRAS and the SQP method, SQPPSO-sono was validated with each mechanism enabled individually; the experimental results illustrate that SQPPSO-sono performs better with both mechanisms than with either one alone. All compared algorithms were tested on the CEC2013, CEC2014, and CEC2017 benchmark functions with diverse dimensions, and the experimental results demonstrate that SQPPSO-sono has outstanding performance in most cases. Future research directions include reducing the usage frequency of the SQP method, increasing the global search mechanisms, and designing an ensemble strategy using both the SQP and interior-point methods that can dynamically switch according to the landscape in various evolutionary stages.
References
Talbi E-G (2009) Metaheuristics: from design to implementation. Wiley, New York
Cura T (2012) A particle swarm optimization approach to clustering. Expert Syst Appl 39(1):1582–1588
Sun C, Zeng J, Pan J, Xue S, Jin Y (2013) A new fitness estimation strategy for particle swarm optimization. Inf Sci 221:355–370
Boudjelaba K, Ros F, Chikouche D (2014) Potential of particle swarm optimization and genetic algorithms for fir filter design. Circuits Syst Signal Process 33(10):3195–3222
Pant S, Anand D, Kishor A, Singh SB (2015) A particle swarm algorithm for optimization of complex system reliability. Int J Perform Eng 11(1):33
Liu Q, Wei W, Yuan H, Zhan Z-H, Li Y (2016) Topology selection for particle swarm optimization. Inf Sci 363:154–173
Kiran MS (2017) Particle swarm optimization with a new update mechanism. Appl Soft Comput 60:670–678
Hajihassani M, Jahed-Armaghani D, Kalatehjari R (2018) Applications of particle swarm optimization in geotechnical engineering: a comprehensive review. Geotech Geol Eng 36(2):705–722
Song A, Chen W-N, Tianlong G, Yuan H, Kwong S, Zhang J (2019) Distributed virtual network embedding system with historical archives and set-based particle swarm optimization. IEEE Trans Syst Man Cybern Syst 51(2):927–942
Zhang Y, Liu X, Bao F, Chi J, Zhang C, Liu P (2020) Particle swarm optimization with adaptive learning strategy. Knowl Based Syst 196:105789
Chen Y, Yan J, Feng J, Sareh P (2021) Particle swarm optimization-based metaheuristic design generation of non-trivial flat-foldable origami tessellations with degree-4 vertices. J Mech Des 143(1)
Yang X, Li H, Yu X (2022) Adaptive heterogeneous comprehensive learning particle swarm optimization with history information and dimensional mutation. Multim Tools Appl 1–33
Liu F, Zhang J, Liu T (2020) A pso-algorithm-based consensus model with the application to large-scale group decision-making. Complex Intell Syst 6(2):287–298
Yilin Yu, Yanli X, Wang F, Li W, Mai X, Hao W (2021) Adsorption control of a pipeline robot based on improved pso algorithm. Complex Intell Syst 7(4):1797–1803
Zhi Y, Wang H, Wang L (2022) A state of health estimation method for electric vehicle li-ion batteries using ga-pso-svr. Complex Intell Syst 8(3):2167–2182
Abd-El-Atty B (2023) Efficient s-box construction based on quantum-inspired quantum walks with pso algorithm and its application to image cryptosystem. Complex Intell Syst
Jie J, Liu K, Zheng H, Wang B, Dai R (2021) High dimensional feature data reduction of multichannel semg for gesture recognition based on double phases pso. Complex Intell Syst 7(4):1877–1893
Rao C, Liu Y, Goh M (2023) Credit risk assessment mechanism of personal auto loan based on pso-xgboost model. Complex Intell Syst 9(2):1391–1414
Li W, Meng X, Huang Y, Zhang-Hua F (2020) Multipopulation cooperative particle swarm optimization with a mixed mutation strategy. Inf Sci 529:179–196
Kao Y-T, Zahara E (2008) A hybrid genetic algorithm and particle swarm optimization for multimodal functions. Appl Soft Comput 8(2):849–857
Kaveh A, Bakhshpoori T, Afshari E (2014) An efficient hybrid particle swarm and swallow swarm optimization algorithm. Comput Struct 143:40–59
Moradi P, Gholampour M (2016) A hybrid particle swarm optimization for feature subset selection by integrating a novel local search strategy. Appl Soft Comput 43:117–130
Wang F, Zhang H, Li K, Lin Z, Yang J, Shen X-L (2018) A hybrid particle swarm optimization algorithm using adaptive learning strategy. Inf Sci 436:162–177
Islam MA, Gajpal Y, ElMekkawy TY (2021) Hybrid particle swarm optimization algorithm for solving the clustered vehicle routing problem. Appl Soft Comput 110:107655
Meng Z, Zhong Y, Mao G, Liang Y (2022) Pso-sono: a novel pso variant for single-objective numerical optimization. Inf Sci 586:176–191
Shi Y, Eberhart R (1998) A modified particle swarm optimizer. In: 1998 IEEE international conference on evolutionary computation proceedings, IEEE world congress on computational intelligence (Cat. No. 98TH8360). IEEE, pp 69–73
Cheng R, Jin Y (2015) A social learning particle swarm optimization algorithm for scalable optimization. Inf Sci 291:43–60
Gill PE, Wong E (2012) Sequential quadratic programming methods. In: Mixed integer nonlinear programming. Springer, Berlin, pp 147–224
Richards ZD (2009) Constrained particle swarm optimisation for sequential quadratic programming. Int J Model Identif Control 8(4):361–367
Elaiw AM, Xia X, Shehata AM (2012) Dynamic economic dispatch using hybrid de-sqp for generating units with valve-point effects. Math Probl Eng 2012
Zhang Y, Yao F, Iu HH-C, Fernando T, Wong KP (2013) Sequential quadratic programming particle swarm optimization for wind power system operations considering emissions. J Mod Power Syst Clean Energy 1(3):227–236
Wenxing X, Geng Z, Zhu Q, Xiangbai G (2013) A piecewise linear chaotic map and sequential quadratic programming based robust hybrid particle swarm optimization. Inf Sci 218:85–102
Rajabi F, Rezaie B, Rahmani Z (2016) A novel nonlinear model predictive control design based on a hybrid particle swarm optimization-sequential quadratic programming algorithm: Application to an evaporator system. Trans Inst Meas Control 38(1):23–32
Boggs PT, Tolle JW (1995) Sequential quadratic programming. Acta Numerica 4:1–51
Nocedal J, Wright SJ (2006) Sequential quadratic programming. Numer Optim 529–562
Kumar A, Misra RK, Singh D (2017) Improving the local search capability of effective butterfly optimizer using covariance matrix adapted retreat phase. In: 2017 IEEE congress on evolutionary computation, pp 1835–1842
Lynn N, Suganthan P (2017) Ensemble particle swarm optimizer. Appl Soft Comput 55:533–548
Li T, Shi J, Deng W, Hu Z (2022) Pyramid particle swarm optimization with novel strategies of competition and cooperation. Appl Soft Comput 121:108731
Liu H, Zhang X, Liangping T (2020) A modified particle swarm optimization using adaptive strategy. Expert Syst Appl 152:113353
Zhang XW, Liu H, Zhang T, Wang QW, Wang Y, LiangPing T (2019) Terminal crossover and steering-based particle swarm optimization algorithm with disturbance. Appl Soft Comput 85:105841
Lynn N, Suganthan PN (2015) Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evolut Comput 24:11–24
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol 4. IEEE Press, pp 1942–1948
Modares H, Naghibi-Sistani MB (2011) Solving nonlinear optimal control problems using a hybrid IPSO-SQP algorithm. Eng Appl Artif Intell 24(3):476–484
Costa CBB, da Costa AC, Filho RM (2005) Mathematical modeling and optimal control strategy development for an adipic acid crystallization process. Chem Eng Process Process Intensif 44(7):737–753
Nemirovski AS, Todd MJ (2008) Interior-point methods for optimization. Acta Numerica 17:191–234
Mohamed AW, Hadi AA, Fattouh AM, Jambi KM (2017) Lshade with semi-parameter adaptation hybrid with cma-es for solving cec 2017 benchmark problems. In: 2017 IEEE congress on evolutionary computation (CEC). IEEE, 2017, pp 145–152
Shir OM, Bäck T (2005) Niching in evolution strategies. In: Proceedings of the 7th annual conference on genetic and evolutionary computation, pp 915–916
Karmarkar N (1984) A new polynomial-time algorithm for linear programming. Combinatorica 4(5):373–395
Forsgren A, Gill PE, Wright MH (2002) Interior methods for nonlinear optimization. SIAM Rev 44(4):525–597
Awad NH, Ali MZ, Suganthan PN, Liang JJ, Qu BY (2017) Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Technical report, Nanyang Technological University, Singapore; Jordan University of Science and Technology, Jordan; Zhengzhou University, Zhengzhou, China
Acknowledgements
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or other personal interest of any nature in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.
Funding
Funding was provided by The Laboratory Work Research Project of Zhejiang Higher Education Association (Grant number: ZD202201).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Hong, L., Yu, X., Tao, G. et al. A sequential quadratic programming based strategy for particle swarm optimization on single-objective numerical optimization. Complex Intell. Syst. 10, 2421–2443 (2024). https://doi.org/10.1007/s40747-023-01269-z