Introduction

Particle swarm optimization (PSO) is a well-known population-based metaheuristic algorithm [1]. Many PSO-based variants and their applications have been proposed in the last decade [2,3,4,5,6,7,8,9,10,11,12]. PSO has been applied to large-scale group decision-making [13], adsorption control of pipeline robots [14], health estimation for electric vehicles [15], substitution box construction based on quantum-inspired quantum walks [16], feature-related time consumption reduction for surface electromyography [17], and credit risk assessment for personal auto loans [18]. Li et al. [19] indicated that research on the PSO algorithm can be divided into four categories: parameter tuning, topology choices, learning strategy improvements, and integration with other algorithms. Among the large number of PSO-based investigations, hybridization is a popular approach because it can combine several complementary PSO variants or algorithms into a solid framework. In [20], a hybrid genetic algorithm with PSO for multimodal functions was proposed. In [21], the search mechanisms of swallow swarm optimization (SSO) were implemented with PSO to formulate the hybrid particle swallow swarm optimization (HPSSO) algorithm. In [22], a hybrid feature selection algorithm based on PSO was proposed; this variant uses a local search strategy, embedded in the PSO, to select the less correlated and salient feature subset. In [23], a hybrid PSO algorithm that utilizes an adaptive learning strategy (ALPSO) was proposed; a self-learning-based candidate generation strategy and a competitive learning-based prediction strategy were employed to ensure the exploration and exploitation abilities of the algorithm, respectively. In [24], the variable neighborhood search (VNS), originally used to solve the clustered vehicle routing problem (CluVRP), was employed in PSO to ensure solution intensity and drive solutions toward local optima.
In [25], the inertia weight PSO (iwPSO) [26] and social learning PSO (SLPSO) [27] were combined to form a PSO on single-objective numerical optimization (PSO-sono). The PSO-sono approach demonstrates outstanding performance on CEC2013, CEC2014, and CEC2017 benchmark functions.

The essence of hybridization is to select complementary algorithms and exert their respective strengths; however, an unsuitable subpopulation size may limit these strengths. According to our observations, subpopulation allocation considerably influences algorithm performance due to economies of scale in the diverse stages of evolution. Furthermore, exploitation capability increases the convergence rate, especially for particles near the global optimum. Hence, employing a proper local search method can improve evolution efficiency. The sequential quadratic programming (SQP) method, proposed by Wilson [28], has outstanding local search capability and is widely used in real-parameter optimization problems [29,30,31,32,33]. The SQP method seeks a local minimum satisfying the constraints. Theories related to the SQP method can be found in [28,34,35,36].

In this study, PSO-sono was further reformed by introducing a novel ratio adaptation scheme (NRAS) for calculating the proportions of the subpopulations and by intermittently applying the SQP method to the best particle generated by either iwPSO or SLPSO. The proposed PSO-based variant, called SQPPSO-sono, performs more competitively than the state-of-the-art algorithms: PSO-sono [25], ensemble particle swarm optimizer (EPSO) [37], pyramid particle swarm optimization (PPSO) [38], modified particle swarm optimization (MPSO) [39], terminal crossover and steering-based particle swarm optimization (TCSPSO) [40], and heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [41]. All the compared algorithms were tested on the CEC2013, CEC2014, and CEC2017 benchmark functions. The experimental results demonstrate that SQPPSO-sono not only performs outstandingly in most cases but also exhibits better local search capability when cooperating with NRAS. The main highlights of this work are the following:

  • The SQP method and NRAS are introduced in the SQPPSO-sono approach, and the two mechanisms cooperate and significantly promote enhanced performance.

  • The SQP method yields a sharp convergence rate early in the optimization process because it is applied to the current best particle. It enhances the local search capability of SQPPSO-sono and improves its rankings relative to the other algorithms.

  • NRAS, inspired by an effective butterfly optimizer using the covariance-matrix-adapted retreat phase [36], is proposed for dividing the subpopulations.

  • A large test suite including all the benchmark functions from the CEC2013, CEC2014, and CEC2017 test suites is used for algorithm validation. Compared with employing only one test suite with a few benchmark functions, the large number of real-parameter single-objective optimization tests helps avoid over-fitting to some extent.

The remainder of this paper is organized as follows. “Particle swarm optimization and PSO-sono” presents PSO and PSO-sono. “The sequential quadratic programming based particle swarm optimization” presents the novel strategy and the proposed SQPPSO-sono algorithm; the integration and implementation of NRAS and the SQP method are described in detail. “Performance evaluation” presents the experimental results and the parameter settings of the compared algorithms, along with comparisons with recent state-of-the-art PSO-based algorithms, analysis, discussions, and future research directions. “Conclusions” summarizes the paper.

Particle swarm optimization and PSO-sono

PSO-sono is a variant of PSO with the following improvements: (1) a sorted particle swarm with hybrid paradigms improves the optimization performance; (2) novel adaptation schemes adjust the ratio of each paradigm and the constriction coefficients during evolution; (3) a fully-informed search scheme based on the current best particle in each generation helps PSO jump out of local optima and improves performance.

Particle swarm optimization

In classical PSO, the velocity and position, which are the basic properties of each particle, are updated in every generation [42]. The velocity and position of the ith particle at generation t are denoted \(V_{i}(t)\) and \(X_{i}(t)\), respectively, and are updated as follows:

$$\begin{aligned} V_{i}^d(t+1)= & {} V_{i}^d(t)+c_{1}* r1_{i}^d*(P{\text {best}}_{i}^d(t)-X_{i}^d(t))\nonumber \\{} & {} +c_{2}* r2_{i}^d*(G{\text {best}}^d(t)-X_{i}^d(t)) \end{aligned}$$
(1)
$$\begin{aligned} X_{i}^d(t+1)= & {} X_{i}^d(t)+V_{i}^d(t+1) \end{aligned}$$
(2)

\(r1_{i}^d\) and \(r2_{i}^d\) are randomly generated from a uniform distribution in the range [0,1], \(d=1,2,...,\text {dim}\), where \(\text {dim}\) is the dimension size. \(P{\text {best}}_{i}^d(t)\) is the ith particle’s previous best solution, and \(G{\text {best}}^d(t)\) is the whole swarm’s best solution; they are defined as follows:

$$\begin{aligned} P{\text {best}}_{i}^d(t)= & {} \text {min}\{\text {fit}(X_{i}^d(1)),\text {fit}(X_{i}^d(2)),.....,\text {fit}(X_{i}^d(t))\} \nonumber \\ \end{aligned}$$
(3)
$$\begin{aligned} G{\text {best}}^d(t)= & {} \text {min}\{\text {fit}(P{\text {best}}_{1}^d(t)),\nonumber \\{} & {} \text {fit}(P{\text {best}}_{2}^d(t)),.....,\text {fit}(P{\text {best}}_{N}^d(t))\} \end{aligned}$$
(4)
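As a concrete illustration, Eqs. 1 and 2 can be written as a few lines of NumPy. This is a hedged sketch: the function name and the acceleration coefficients \(c_{1}=c_{2}=2.0\) are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    """One classical PSO update (Eqs. 1 and 2).

    X, V, pbest have shape (N, dim); gbest has shape (dim,).
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(X.shape)  # uniform in [0, 1], drawn per particle and dimension
    r2 = rng.random(X.shape)
    V_new = V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X_new = X + V_new
    return X_new, V_new
```

With all attractors at the origin and zero initial velocity, the swarm stays put, which is a quick sanity check of the update.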

Inertia weight PSO and social learning PSO

Meng et al. [25] proposed combining two complementary PSO variants, iwPSO and SLPSO, to promote the performance of single-objective numerical optimization; the resulting algorithm is named PSO-sono. The iwPSO algorithm uses the following equation to calculate the particle’s velocity, with the position updated as in Eq. 2:

$$\begin{aligned} V_{i}^d(t+1)= & {} w*V_{i}^d(t)+c_{1}*r1_{i}^d*(P{\text {best}}_{i}^d(t)-X_{i}^d(t))\nonumber \\{} & {} +c_{2}*r2_{i}^d*(G{\text {best}}^d(t)-X_{i}^d(t)) \end{aligned}$$
(5)

where w is the inertia weight of the velocity. \(r1_{i}^d\) and \(r2_{i}^d\) are randomly generated from a uniform distribution in the range [0,1], \(d=1,2,...,\text {dim}\). \(P{\text {best}}_{i}^d(t)\) is the ith particle’s previous best experience, and \(G{\text {best}}^d(t)\) is the whole swarm’s best experience.
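A minimal sketch of the iwPSO velocity update (Eq. 5) follows; the default coefficients below are common choices from the PSO literature, not the tuned values used in PSO-sono.

```python
import numpy as np

def iwpso_velocity(V, X, pbest, gbest, w=0.7298, c1=1.496, c2=1.496, rng=None):
    """Inertia-weight velocity update (Eq. 5): w damps the previous velocity."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(V.shape), rng.random(V.shape)
    return w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
```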

The SLPSO algorithm uses Eqs. 6 and 7 to update the particle’s position and velocity, respectively:

$$\begin{aligned} X_{i}^d(t+1)= \left\{ \begin{array}{ll} X_{i}^d(t)+V_{i}^d(t+1),&{} \text {if} \quad p_{i}(t) \le lp_{i} \\ X_{i}^d(t), &{} {\text {otherwise}.} \\ \end{array} \right. \end{aligned}$$
(6)

where \(V_{i}^d(t+1)\) satisfies:

$$\begin{aligned} \left\{ \begin{array}{ll} I_{i}^{d}(t)=X_{k}^{d}(t)-X_{i}^{d}(t)\\ C_{i}^{d}(t)=\overline{X}_{\text {center}}^{d}(t)-X_{i}^{d}(t)\\ V_{i}^d(t+1)=r1_{i}^d*V_{i}^d(t)+r2_{i}^d*I_{i}^{d}(t)+r3_{i}^d*\epsilon *C_{i}^{d}(t)\\ \end{array} \right. \end{aligned}$$
(7)

where \(X_{k}^{d}(t)\) is the dth dimension of a particle randomly selected from those better than the ith particle in the tth generation; the ith particle itself is selected if it is the best particle. \(\overline{X}_{\text {center}}^{d}(t)\) is the center of the population. \(\epsilon \) is the social influence factor, \(\epsilon =0.01*\frac{\text {dim}}{100}\). \(r1_{i}^d\), \(r2_{i}^d\), and \(r3_{i}^d\) are randomly generated from a uniform distribution in the range [0,1], \(d=1,2,...,\text {dim}\). \(lp_{i}\) denotes the learning probability of the ith particle and, with \(ps\) denoting the population size, obeys the following:

$$\begin{aligned} lp_{i}=\left( 1-\frac{i-1}{ps}\right) ^{\ln \left( \lceil \frac{\text {dim}}{100} \rceil \right) } \end{aligned}$$
(8)
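The SLPSO moves in Eqs. 6–8 can be sketched as follows. The sketch sorts the swarm best-to-worst, lets each non-best particle imitate a randomly chosen better particle and the swarm center, and skips the update with probability \(1-lp_{i}\); it is a simplified illustration, not the reference implementation.

```python
import math
import numpy as np

def slpso_step(X, V, fitness, rng=None):
    """One SLPSO-style update (Eqs. 6-8) for a minimization problem."""
    rng = rng or np.random.default_rng()
    ps, dim = X.shape
    order = np.argsort(fitness)              # ascending fitness: index 0 is the best
    X, V = X[order].copy(), V[order].copy()
    eps = 0.01 * dim / 100.0                 # social influence factor
    center = X.mean(axis=0)                  # population center (Eq. 7)
    for i in range(1, ps):                   # the best particle (i = 0) is kept as-is
        # Eq. 8, with the 0-based index i standing in for the 1-based i - 1;
        # note lp = 1 whenever dim <= 100, since ln(1) = 0.
        lp = (1.0 - i / ps) ** math.log(math.ceil(dim / 100.0))
        if rng.random() <= lp:
            k = rng.integers(0, i)           # a randomly chosen better particle
            r1, r2, r3 = rng.random(dim), rng.random(dim), rng.random(dim)
            V[i] = r1 * V[i] + r2 * (X[k] - X[i]) + r3 * eps * (center - X[i])
            X[i] = X[i] + V[i]
    return X, V
```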

The sequential quadratic programming based particle swarm optimization

Compared with PSO-sono, the proposed SQPPSO-sono makes two significant improvements, shown in the red boxes in Fig. 1. The framework of SQPPSO-sono is given in Algorithm 1. Figure 1 indicates that the NRAS and SQP methods are embedded into the SQPPSO-sono algorithm at the beginning of each generation and after the best particle is produced by iwPSO or SLPSO, respectively. The NRAS method is applied after initialization and calculates the ratio r, the proportion of the subpopulation, at the beginning of each generation. Then, iwPSO and SLPSO are run simultaneously according to the allocations, and a global best particle is generated from both subpopulations. Later, the SQP method is applied, starting from the current global best particle, to further enhance the exploitation capability. When the termination condition is satisfied, the algorithm stops. The major contributions of the proposed algorithm are summarized below:

Fig. 1
figure 1

Main workflow of SQPPSO-sono

  • A novel ratio adaptation scheme is employed to calculate the proportion of each subpopulation for iwPSO and SLPSO, rather than using the number of successful particles to calculate the proportions.

  • The SQP method is employed to execute a local search starting from the current global best particle, generated by either iwPSO or SLPSO, to seek a better solution in the current generation. If a better solution is found, the best particle is replaced by it.
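Putting the pieces together, one possible reading of the loop described above is the runnable sketch below, tried on a simple sphere function. The group sizes, coefficients, simplified ratio update (diversity term omitted), and use of scipy's SLSQP solver as a stand-in SQP are all illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

def sqppso_sono_sketch(f, dim=10, ps=40, max_fes=5000, seed=0):
    """Condensed SQPPSO-sono-style loop: ratio -> iwPSO + SLPSO-like moves -> SQP."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-100.0, 100.0, (ps, dim))
    V = np.zeros((ps, dim))
    fit = np.apply_along_axis(f, 1, X)
    pbest, pfit = X.copy(), fit.copy()
    fes, r = ps, 0.5
    while fes < max_fes:
        order = np.argsort(fit)              # sort best-to-worst
        X, V, fit = X[order], V[order], fit[order]
        pbest, pfit = pbest[order], pfit[order]
        n1 = max(2, int(round(r * ps)))      # size of the better-particle group
        gbest = X[0].copy()
        for i in range(ps):
            r1, r2 = rng.random(dim), rng.random(dim)
            if i < n1:                       # iwPSO move for the better group
                V[i] = 0.73 * V[i] + 1.5 * r1 * (pbest[i] - X[i]) \
                       + 1.5 * r2 * (gbest - X[i])
            else:                            # SLPSO-like move for the worse group
                k = rng.integers(0, i)
                V[i] = r1 * V[i] + r2 * (X[k] - X[i])
            X[i] = X[i] + V[i]
        fit = np.apply_along_axis(f, 1, X)
        fes += ps
        improved = fit < pfit
        pbest[improved], pfit[improved] = X[improved], fit[improved]
        b = int(np.argmin(fit))              # SQP refinement of the global best
        res = minimize(f, X[b], method='SLSQP', options={'maxiter': 20})
        fes += res.nfev                      # charge SQP evaluations to the FE budget
        if res.fun < fit[b]:
            X[b], fit[b] = res.x, res.fun
        q1, q2 = fit[:n1].min(), fit[n1:].min()
        k1 = 1.0 - q1 / (q1 + q2 + 1e-12)    # quality terms (diversity omitted)
        k2 = 1.0 - q2 / (q1 + q2 + 1e-12)
        r = min(0.9, max(0.1, k1 / (k1 + k2 + 1e-12)))
    b = int(np.argmin(fit))
    return X[b], float(fit[b])
```

On a smooth unimodal function, the SQP refinement dominates almost immediately, which mirrors the sharp early convergence reported later in the paper.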

Novel ratio adaptation scheme

In PSO-sono, by sorting the particles based on their fitness values, the population of particles is divided into two groups: the better-particle-group (PS1) and the worse-particle-group (PS2), which are evolved by iwPSO and SLPSO, respectively. The ratio r is used for calculating the proportion of the better particles in the entire population; thus, the proportion of the worse particles is \(1-r\). The ratio r in PSO-sono is represented as follows:

$$\begin{aligned} {r=\frac{ns_{b}}{ns_{b}+ns_{w}}}, \end{aligned}$$
(9)

where \(ns_{b}\) and \(ns_{w}\) represent the numbers of successful particles in the better-particle-group and the worse-particle-group, respectively. Furthermore, r is truncated to the range [0.1, 0.9] whenever it falls outside this range.

In SQPPSO-sono, a novel ratio adaptation scheme (NRAS) is proposed, inspired by [36]. In [36], a ratio adaptation scheme is used for strategy selection, whereas here NRAS is used for dividing the subpopulations. Two factors are considered in calculating the ratio r:

  • The quality of particles: the lower the fitness value, the higher the quality of the particle.

  • The diversity rate of particles in each subpopulation.

The normalized quality values (nq) are calculated as follows:

$$\begin{aligned} nq_{i}=\frac{\text {fitness}(X_{PS_{i},\text {best}})}{\text {fitness}(X_{PS_{1},\text {best}})+\text {fitness}(X_{PS_{2},\text {best}})}, \ {\forall } i=1,2. \end{aligned}$$
(10)

The diversity rate is calculated as follows:

$$\begin{aligned} {\text {div}_{i}=\sum \limits _{z=2}^{PS_{i}}\text {dis}(X_{PS_{i},z},X_{PS_{i},\text {best}}), \ {\forall } i=1,2}, \end{aligned}$$
(11)

where \(\text {dis}(X_{PS_{i},z},X_{PS_{i},\text {best}})\) denotes the Euclidean distance between the zth particle and best particle in \(PS_{i}\). The normalized diversity (nd) is calculated as follows:

$$\begin{aligned} {nd_{i}=\frac{\text {div}_{i}}{\text {div}_{1}+\text {div}_{2}}, \ {\forall }i=1,2}. \end{aligned}$$
(12)

In Eq. 9, only the fitness values of particles are considered to adjust the subpopulation sizes in PSO-sono; hence, the algorithm may easily fall into a local optimum. In contrast, the proposed NRAS considers both the fitness values and the diversity of particles:

$$\begin{aligned} k_{i}= & {} (1-nq_{i})+nd_{i},\forall i=1,2 \end{aligned}$$
(13)
$$\begin{aligned} r= & {} \text {max}\left( 0.1,\text {min}\left( 0.9,\frac{k_{1}}{k_{1}+k_{2}}\right) \right) . \end{aligned}$$
(14)
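Eqs. 10–14 translate almost directly into code. The sketch below assumes non-negative fitness values (true of the benchmark error measures) and exactly two subpopulations; the helper names are our own.

```python
import numpy as np

def diversity(group, best):
    """Eq. 11: sum of Euclidean distances from each particle to the group best."""
    return float(np.linalg.norm(group - best, axis=1).sum())

def nras_ratio(best_fit, div):
    """Eqs. 10-14: NRAS ratio from the two groups' best fitness and diversity."""
    best_fit, div = np.asarray(best_fit, float), np.asarray(div, float)
    nq = best_fit / best_fit.sum()                    # Eq. 10: normalized quality
    nd = div / div.sum()                              # Eq. 12: normalized diversity
    k = (1.0 - nq) + nd                               # Eq. 13
    return float(np.clip(k[0] / k.sum(), 0.1, 0.9))  # Eq. 14: truncated to [0.1, 0.9]
```

Equal quality and equal diversity give r = 0.5; a group that is both fitter and more diverse pushes r toward the 0.9 cap.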

Strategy using sequential quadratic programming

In [25], PSO-sono applies a fully-informed search scheme to the best particle in each generation to help the algorithm jump out of local optima. In contrast, the proposed SQPPSO-sono algorithm replaces the fully-informed search scheme with the SQP method to enhance the local search capability. Because function evaluations are performed within the SQP method, they are counted toward the total FEs to achieve a fair comparison.

The essence of the SQP method is to approximate a nonlinear problem by a sequence of quadratic programming sub-problems with linearized constraints [43]. The principle of the SQP method is to find a descent direction by modeling a quadratic optimization problem. Nonlinear optimization problems can be represented as follows:

$$\begin{aligned} {\text {minimize} \ f(x), \ \text {such} \ \text {that} \ \left\{ \begin{array}{ll} h(x) = 0 \\ g(x) \le 0 \\ \end{array} \right. } \end{aligned}$$
(15)
Algorithm 1
figure a

General framework of SQPPSO-sono

The Lagrangian of the above formulation can be written as follows:

$$\begin{aligned} {L(x,\lambda ,\mu )=f(x)+\lambda ^{T}h(x)+\mu ^{T}g(x)}, \end{aligned}$$
(16)

where \(\lambda \) and \(\mu \) are Lagrange multipliers. The SQP method is an iterative procedure that repeatedly approximates the problem at a given iterate \(x_{k}\) by a quadratic programming sub-problem, whose solution is used to construct the new iterate \(x_{k+1}\). The sub-problem is established by linearizing the constraints at \(x_{k}\) and can be written as follows:

$$\begin{aligned} \begin{aligned} \text {minimize} \ f^{'}(x_{k})(x-x_{k})+\frac{1}{2}(x-x_{k})^{T}Hf(x_{k})(x-x_{k}), \\ \ \text {such} \ \text {that} \ \left\{ \begin{array}{ll} h(x_{k}) + h^{'}(x_{k})(x-x_{k}) = 0 \\ g(x_{k}) + g^{'}(x_{k})(x-x_{k}) \le 0 \\ \end{array} \right. \\ \end{aligned} \end{aligned}$$
(17)

Here, \(Hf(x_{k})\) is the Hessian of f at \(x_{k} \in R^{n}\). Because the SQP method relies heavily on the initial estimate [44], the proposed SQPPSO-sono applies the SQP method only to the current global best particle. Thus, the method enhances the local search capability of the proposed algorithm.
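This refinement step can be sketched with scipy, whose 'SLSQP' solver is an SQP-type method; treating it as the paper's SQP implementation is an assumption, and the iteration cap is ours.

```python
import numpy as np
from scipy.optimize import minimize

def sqp_refine(f, gbest, fbest, bounds=None):
    """Run an SQP-type local search from the current global best particle.

    Returns the (possibly improved) solution, its fitness, and the number of
    function evaluations consumed, to be charged to the global FE budget.
    """
    res = minimize(f, np.asarray(gbest, float), method='SLSQP',
                   bounds=bounds, options={'maxiter': 50})
    if res.fun < fbest:                  # replace gbest only on improvement
        return res.x, float(res.fun), res.nfev
    return np.asarray(gbest, float), fbest, res.nfev
```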

Performance evaluation

The recent PSO-based variants were selected in this study to evaluate the performance of SQPPSO-sono, and tests based on the CEC2013, CEC2014, and CEC2017 benchmark functions were performed. The PSO-based variants, namely PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO, were considered. For the sake of fairness, the FEs for each variant with the same dimension size were set to identical values.

Parameter settings

The general form of the maximum FEs is \(10^{4} \times D\); hence, the FEs for all tested algorithms were set to \(10 \times 10^{4}\), \(30 \times 10^{4}\), \(50 \times 10^{4}\), and \(100 \times 10^{4}\) for 10D, 30D, 50D, and 100D, respectively. Table 1 lists the parameter settings for the proposed SQPPSO-sono algorithm and the compared PSO variants. We used the recommended default settings, tuned by the original investigators and yielding the best performance for the state-of-the-art PSO variants.

Table 1 Parameter settings for the compared PSO variants

Experimental results

To better measure the performance and reliability of the proposed algorithm, the statistics of the rankings for SQPPSO-sono, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO, tested on the CEC2013, CEC2014, and CEC2017 benchmark functions, are presented in Tables 2, 3, and 4, respectively. The top-3 (1st, \(2{\text {nd}}\), and \(3{\text {rd}}\)) rankings of the mean, minimum, and median values are counted for the 10, 30, and 50-dimensional tests for all compared variants on CEC2013 and CEC2014, while the statistics of the 10, 30, 50, and 100-dimensional tests are given for CEC2017. The greatest numbers of top-3 rankings are given in boldface. The convergence characteristics graphs of SQPPSO-sono and the compared PSO variants are presented in Figs. 7 and 8. Due to page limitations, only the evolutionary processes of \(f_{n}\) over 50 runs of the 100-dimensional test are presented herein.

To verify the effectiveness of the NRAS and the SQP method individually when cooperating with SQPPSO-sono, validations of SQPPSO-sono without the NRAS and without the SQP method were performed. Both validations were tested on the CEC2013, CEC2014, and CEC2017 benchmark functions for 30 dimensions. Two ablation experiments were conducted to further illustrate the effectiveness of both methods. The experimental results are presented in Tables 5, 6, and 7.

The experimental results for the 10, 30, and 50-dimensional tests on CEC2013 and CEC2014 are presented in Tables SF-1 to SF-6 in the supplementary file, respectively. Additionally, the experimental results for the 10, 30, 50, and 100-dimensional tests on CEC2017 can be found in Tables SF-7 to SF-10 of the supplementary file. These tables summarize the outcomes based on the minimum, median, and mean values, along with their standard deviations, obtained from the last generation of 50 independent runs for each function. Furthermore, the tables include rankings for the minimum, median, and mean values. The smallest minimum, median, and mean values among all recent PSO-based variants are highlighted in boldface, as are the top 1 rankings for the minimum, median, and mean values. To emphasize reliability and enable effective comparisons, the tables retain additional decimal places, showcasing robust variant performances for certain values, rather than using scientific notation.

The Wilcoxon signed-rank test (WST) was performed for SQPPSO-sono versus PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO. Tables SF-1 to SF-10 of the supplementary file also list the outcomes of the WST at the 5% significance level for these comparisons. In the tables, ‘\(\le \)’ and ‘\(\ge \)’ indicate that SQPPSO-sono performs worse or better, respectively, on \(f_{n}\) than the compared variant; ‘<’ and ‘>’ indicate that SQPPSO-sono performs significantly worse or better; and ‘\(=\)’ implies that there is no difference in the performances of the compared variants. In most cases, SQPPSO-sono surpasses PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO.

Comparison and analysis

In CEC2013 benchmark functions, \(f_{1}-f_{5}\), \(f_{6}-f_{20}\), and \(f_{21}-f_{28}\) are unimodal, multimodal, and composition functions, respectively. In CEC2014 benchmark functions, \(f_{1}-f_{3}\), \(f_{4}-f_{16}\), \(f_{17}-f_{22}\), and \(f_{23}-f_{30}\) are unimodal, multimodal, hybrid, and composition functions, respectively. In CEC2017 benchmark functions, \(f_{1}-f_{3}\), \(f_{4}-f_{10}\), \(f_{11}-f_{20}\), and \(f_{21}-f_{30}\) are unimodal, multimodal, hybrid, and composition functions, respectively.

In Table 2, SQPPSO-sono shows the largest number of 1st rankings for mean values on the 50-dimensional test (7 times); the largest number of 1st rankings for minimum values on the 10, 30, and 50-dimensional tests (10, 9, and 10 times); and the largest number of 1st rankings for median values on the 30 and 50-dimensional tests (9 times each). The overall number of 1st rankings of SQPPSO-sono on the 30-dimensional test is 25, equal to that of HCLPSO. The overall number of 1st rankings of SQPPSO-sono on the 50-dimensional test is the best (26 times), whereas HCLPSO achieves the best overall number of 1st rankings on the 10-dimensional test (26 times), on which SQPPSO-sono only ranks \(3{\text {rd}}\).

In Table 3, SQPPSO-sono has the largest number of 1st rankings for mean, minimum, and median values on the 30 and 50-dimensional tests; thus, its overall numbers of 1st rankings on the 30 and 50-dimensional tests (28 times each) are also the best. HCLPSO has the best overall number of 1st rankings (27 times) on the 10-dimensional test, while SQPPSO-sono has the second largest (25 times).

In Table 4, SQPPSO-sono has the largest number of 1st rankings on all dimensional tests except for the mean values on the 10-dimensional test, where it is the second best (9 times). The overall numbers of 1st rankings of SQPPSO-sono for the 10, 30, 50, and 100-dimensional tests are 35, 37, 35, and 38 times, respectively, which are the best among all compared algorithms.

Table 2 Statistics of rankings for SQPPSO-sono, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO on CEC2013 benchmark functions
Table 3 Statistics of rankings for SQPPSO-sono, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO on CEC2014 benchmark functions
Table 4 Statistics of rankings for SQPPSO-sono, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO on CEC2017 benchmark functions

According to the statistics of the different dimensional tests on CEC2013, CEC2014, and CEC2017, SQPPSO-sono demonstrated impressive performance on the high-dimensional tests and outperformed the compared variants in most cases. To better review the rankings, radar maps are given in Fig. 2.

In Table 5, on the 30-dimensional test on CEC2013, SQPPSO-sono performs better on \(f_{2}-f_{3}\), \(f_{6}-f_{7}\), \(f_{9}-f_{13}\), \(f_{17}-f_{19}\), and \(f_{24}-f_{27}\), 16 functions in total, whereas SQPPSO-sono without the NRAS and without the SQP method performs better on only 4 and 8 functions, respectively. On the 30-dimensional test on CEC2014, SQPPSO-sono performs better on \(f_{1}-f_{2}\), \(f_{4}\), \(f_{13}-f_{15}\), \(f_{17}\), \(f_{19}-f_{20}\), \(f_{22}\), \(f_{24}-f_{28}\), and \(f_{30}\), 16 functions in total, whereas SQPPSO-sono without the NRAS and without the SQP method performs better on only 7 and 7 functions, respectively. On the 30-dimensional test on CEC2017, SQPPSO-sono performs better on \(f_{2}\), \(f_{4}\), \(f_{6}\), \(f_{8}\), \(f_{11}-f_{12}\), \(f_{14}\), \(f_{17}\), \(f_{20}\), \(f_{22}\), \(f_{26}-f_{27}\), and \(f_{29}-f_{30}\), 14 functions in total, whereas SQPPSO-sono without the NRAS and without the SQP method performs better on only 10 and 6 functions, respectively. The experimental results not only illustrate the effectiveness of the NRAS and the SQP method individually but also demonstrate their successful cooperation on a large test suite containing all types of functions, including unimodal, multimodal, hybrid, and composition functions.

In Table 6, ablation experiments are conducted to better illustrate the effectiveness of the NRAS method. Three fixed pairs of proportions for iwPSO and SLPSO are evaluated: 30–70%, 50–50%, and 70–30%. Compared with these fixed proportions, SQPPSO-sono obtains better mean values for multiple CEC2013, CEC2014, and CEC2017 benchmark functions. Specifically, against the 30–70% proportion, SQPPSO-sono performs better on 19, 20, and 20 functions; against the 50–50% proportion, it excels on 20, 22, and 23 functions; and against the 70–30% proportion, it achieves better results on 21, 22, and 26 functions. The NRAS method thus shows a significant improvement in efficiency.

In Table 7, ablation experiments were conducted to better illustrate the effectiveness of the SQP method. Two alternative local search mechanisms, the interior point method [45] and CMA-ES [46], were substituted into SQPPSO-sono. Compared with the interior point method, SQPPSO-sono performed better on 17, 19, and 21 functions on the CEC2013, CEC2014, and CEC2017 benchmark functions, respectively. Compared with CMA-ES, SQPPSO-sono performed better on 15, 17, and 18 functions, respectively. Although the SQP method emerged as the winner in the pairwise comparisons, these results also provide inspiration for developing hybrid local search mechanisms in the future.

Fig. 2
figure 2

Radar map of the rankings for mean and minimum values on the 50-dimensional tests. The circumference and radius scale represent the benchmark functions and rankings, respectively

In Figs. 7 and 8, SQPPSO-sono produces extremely sharp convergence rates in the very early generations of the optimization process for \(f_{1}\), \(f_{3}-f_{4}\), \(f_{10}-f_{15}\), \(f_{17}-f_{18}\), \(f_{25}\), \(f_{28}\), and \(f_{30}\). A characteristic of SQPPSO-sono is that it achieves sharp convergence rates in the very early generations, nearly at the beginning of the run, and remains almost flat in the later generations of the evolutionary process. However, this feature also has drawbacks: when optimizing some problems, it is difficult to jump out once the process falls into a local optimum. For example, the optimization of \(f_{2}\), \(f_{5}-f_{6}\), \(f_{8}\), \(f_{10}\), \(f_{20}\), \(f_{22}\), and \(f_{27}\) leaves significant room for improvement. Due to the early intervention of the SQP method, the algorithm may prematurely fall into local optima on some benchmark functions. One research direction is to identify a method that complements the SQP method and forms a synergistic jump-out mechanism with it. The local search capability is strongly enhanced by the current search strategy used in SQPPSO-sono; however, the search behavior, particularly maintaining population diversity in the earlier generations of the run, needs further improvement in the future.

Table 5 The mean values and standard deviations, averaged over 50 runs, of SQPPSO-sono and SQPPSO-sono without using a single component for the 30-dimensional test on CEC2013, CEC2014, and CEC2017 benchmark functions
Table 6 Mean values and standard deviations, averaged over 50 runs, of SQPPSO-sono and SQPPSO-sono using different fixed subpopulation ratios for the 30-dimensional test on CEC2013, CEC2014, and CEC2017 benchmark functions
Table 7 Mean values and standard deviations, averaged over 50 runs, of SQPPSO-sono and SQPPSO-sono using different local search mechanisms for the 30-dimensional test on CEC2013, CEC2014, and CEC2017 benchmark functions

From Table SF-1, in the 10-dimensional test on the CEC2013 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{2}\) and \(f_{10}\); the smallest mean value on \(f_{4}\); the smallest minimum values on \(f_{3}\), \(f_{5}-f_{8}\), \(f_{16}\), and \(f_{28}\); and the smallest median values on \(f_{3}-f_{5}\) and \(f_{27}\). From Table SF-4, in the 10-dimensional test on the CEC2014 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{2}\), \(f_{17}-f_{18}\), and \(f_{30}\); the smallest mean and median values on \(f_{20}\); and the smallest minimum values on \(f_{4}-f_{7}\) and \(f_{25}-f_{28}\). From Table SF-7, in the 10-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{2}\), \(f_{4}\), \(f_{12}-f_{15}\), and \(f_{18}\); the smallest mean value on \(f_{19}\); the smallest minimum values on \(f_{3}\), \(f_{6}\), \(f_{9}\), \(f_{11}\), \(f_{20}-f_{21}\), and \(f_{24}\); and the smallest median values on \(f_{3}\), \(f_{9}\), and \(f_{11}\). In general, SQPPSO-sono is less competitive on the CEC2013, CEC2014, and CEC2017 composition functions. However, it continues to demonstrate outstanding performance on the hybrid functions.

Fig. 3
figure 3

Sources of best particle in the process of evolution of single run for 30-dimensional test

From Table SF-2, in the 30-dimensional test on the CEC2013 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{2}-f_{4}\), \(f_{6}-f_{7}\), \(f_{10}\), and \(f_{16}\); and both the smallest minimum and median values on \(f_{26}\) and \(f_{28}\). From Table SF-5, in the 30-dimensional test on the CEC2014 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{4}\), \(f_{17}\), \(f_{20}-f_{21}\), and \(f_{26}\); the smallest minimum value on \(f_{15}\); the smallest mean and median values on \(f_{18}\); and the smallest median value on \(f_{29}\). From Table SF-8, in the 30-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}\), \(f_{3}-f_{4}\), \(f_{12}-f_{15}\), \(f_{18}\), and \(f_{30}\); the smallest mean values on \(f_{9}\) and \(f_{19}\); the smallest minimum values on \(f_{2}\), \(f_{16}\), \(f_{25}\), and \(f_{26}\); and the smallest median and minimum values on \(f_{22}\) and \(f_{28}\). Compared with HCLPSO, TCSPSO, and EPSO, SQPPSO-sono is less competitive on the CEC2013, CEC2014, and CEC2017 composition functions, respectively. However, the smallest values attained by SQPPSO-sono cover all types of functions on the CEC2013, CEC2014, and CEC2017 benchmark functions.

From Table SF-3, in the 50-dimensional test on the CEC2013 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{2}-f_{4}\), \(f_{6}\), \(f_{10}\), and \(f_{16}\); the smallest mean value on \(f_{7}\); the smallest minimum values on \(f_{12}\), \(f_{18}\), \(f_{26}\), and \(f_{28}\); and the smallest median values on \(f_{7}\), \(f_{13}\), and \(f_{28}\). From Table SF-6, in the 50-dimensional test on the CEC2014 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}-f_{4}\), \(f_{13}\), \(f_{17}\), and \(f_{20}-f_{21}\); the smallest mean value on \(f_{18}\); the smallest minimum value on \(f_{15}\); and the smallest median values on \(f_{15}\) and \(f_{26}\). From Table SF-9, in the 50-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}\), \(f_{3}-f_{4}\), \(f_{12}\), \(f_{14}\), \(f_{18}\), and \(f_{30}\); the smallest mean values on \(f_{13}\), \(f_{15}\), and \(f_{19}\); the smallest minimum values on \(f_{5}\), \(f_{7}-f_{8}\), \(f_{17}\), \(f_{21}-f_{24}\), and \(f_{28}\); and the smallest median values on both \(f_{13}\) and \(f_{19}\). In general, the performance of SQPPSO-sono on the CEC2017 composition functions was remarkably better than on the CEC2013 and CEC2014 composition functions.

From Table SF-10, in the 100-dimensional test on the CEC2017 benchmark functions, SQPPSO-sono attains the smallest minimum, median, and mean values on \(f_{1}\), \(f_{3}-f_{4}\), \(f_{12}\), \(f_{14}-f_{15}\), \(f_{18}\), and \(f_{28}\); the smallest mean values on \(f_{13}\), \(f_{25}\), and \(f_{30}\); the smallest minimum values on \(f_{5}\), \(f_{7}-f_{8}\), \(f_{17}\), \(f_{21}\), \(f_{23}-f_{24}\), and \(f_{29}\); and the smallest median values on \(f_{13}\), \(f_{25}\), and \(f_{30}\). In general, SQPPSO-sono demonstrates outstanding performance on all types of functions in the 100-dimensional test.

In summary, SQPPSO-sono demonstrated superior performance across unimodal, multimodal, hybrid, and composition functions in the 30-, 50-, and 100-dimensional tests on the CEC2013, CEC2014, and CEC2017 benchmark functions, although it is less competitive in the 10-dimensional tests on CEC2013 and CEC2014. In particular, SQPPSO-sono performed better on unimodal functions; this can be attributed to the SQP method, which amplifies the local search capability.

Search behavior of SQPPSO-sono

In Fig. 3, the best fitness values and their sources from a single run are plotted in different colors: green represents the best particle from the iwPSO group; blue represents the best particle from the SLPSO group; and red represents a replacement of the best particle by a better solution generated by the SQP method. The observed search modes can be summarized as follows:

  • Both iwPSO and SLPSO conduct effective searches throughout the whole optimization process.

  • Either iwPSO, SLPSO, or both conduct effective searches in the very early generations; the SQP method then conducts an effective search in the later generations that lasts until the end.

  • Either iwPSO, SLPSO, or both conduct effective searches in the very early generations; the SQP method then searches effectively for a long stretch, iwPSO or SLPSO briefly takes over, and the SQP method again searches effectively until the end.

  • The SQP method conducts an effective search from almost the beginning of the optimization process. Either iwPSO, SLPSO, or both conduct an effective search at some point and cooperate with the SQP method to achieve a sharp convergence rate.

In the above search modes, the evolutionary process commonly descends in steps, like a cliff fall; this is consistent with Figs. 7 and 8, where the convergence rate is very sharp. As the figures show, across the various evolutionary stages the SQP method plays an important role in enhancing the local search capability very early in the optimization process, and sometimes even at its beginning; this in turn provides ideal conditions for enhancing the global search of SQPPSO-sono. One possible future research direction is to reduce the frequency of use of the SQP method and rely more on global search, probably using niching methods [47]. According to our laboratory observations, the interior-point method [48, 49] significantly outperforms the SQP method on a few optimization problems; thus, another possible direction is to design an ensemble local search strategy that uses both the SQP and interior-point methods.
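The per-generation bookkeeping behind Fig. 3 can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' implementation: the sphere objective, the Gaussian perturbations standing in for the iwPSO and SLPSO groups, and the crude contraction step standing in for the SQP refinement are all assumptions made for illustration; only the logging of which mechanism produced each improvement of the global best mirrors the plotting scheme described above.

```python
import random

# Minimal sketch (not the authors' implementation) of the source tracking
# plotted in Fig. 3: per generation, each mechanism proposes a candidate,
# and the label of whichever one improves the global best is logged.

def sphere(x):
    return sum(v * v for v in x)

def perturb(x, scale):
    return [v + random.gauss(0.0, scale) for v in x]

random.seed(1)
dim = 5
gbest = [random.uniform(-5.0, 5.0) for _ in range(dim)]
gbest_f = sphere(gbest)
history = []  # one source label per generation, as colored in Fig. 3

for gen in range(100):
    candidates = {
        "iwPSO": perturb(gbest, 0.5),     # stand-in for the iwPSO group
        "SLPSO": perturb(gbest, 0.2),     # stand-in for the SLPSO group
        "SQP": [0.9 * v for v in gbest],  # crude local-descent surrogate
    }
    source = None
    for label, x in candidates.items():
        f = sphere(x)
        if f < gbest_f:
            gbest, gbest_f, source = x, f, label
    history.append(source)  # None if no mechanism improved the best

print(history.count("iwPSO"), history.count("SLPSO"), history.count("SQP"))
```

Coloring `history` per generation (green, blue, red) would reproduce the kind of plot described above; runs in which the "SQP" label dominates the later generations correspond to the second search mode.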

Fig. 4

Plot of 2-dimensional particles in diverse generations (0, 10, 20, 30) on benchmark function 4 of CEC2017

Fig. 5

Plot of 2-dimensional particles in diverse generations (0, 10, 20, 30) on benchmark function 8 of CEC2017

Fig. 6

Plot of 2-dimensional particles in diverse generations (0, 10, 20, 30) on benchmark function 26 of CEC2017

Fig. 7

Evolutionary process of 100D for the proposed SQPPSO-sono algorithm, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO, averaged over 50 runs for \(f_{1}-f_{15}\) on CEC2017 benchmark functions. The X-axis represents the function evaluations (FEs), and the Y-axis represents the fitness values of the PSO variants

Fig. 8

Evolutionary process of 100D for the proposed SQPPSO-sono algorithm, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO, averaged over 50 runs for \(f_{16}-f_{30}\) on CEC2017 benchmark functions. The X-axis represents the function evaluations (FEs), and the Y-axis represents the fitness values of the PSO variants

Two-dimensional tests were conducted for SQPPSO-sono with each mechanism individually disabled. The 2-dimensional particles in diverse generations (0, 10, 20, 30) on \(f_{4}\), \(f_{8}\), and \(f_{26}\) are plotted in Figs. 4, 5, and 6. The figures show that the particles are more scattered during the evolutionary process, including the later generations, when SQPPSO-sono does not employ the SQP method as a local search mechanism. Without the NRAS, the particles gather more closely owing to a loss of population diversity, whereas the NRAS maintains population diversity even in the later generations of the evolution. The figures provide strong evidence that the proposed methods are effective; thus, the global and local search capabilities of the proposed SQPPSO-sono are well balanced (Figs. 7, 8).
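The scattering and gathering of particles described above can be quantified. The paper plots particle positions directly, so the metric below, the mean Euclidean distance from each particle to the swarm centroid, is only one common illustrative choice, not the authors' measure:

```python
import math
import random

# Illustrative sketch: quantify the particle "scattering" visible in
# Figs. 4-6 as the mean Euclidean distance to the swarm centroid.
# This metric is an assumption for illustration only.

def diversity(swarm):
    dim = len(swarm[0])
    n = len(swarm)
    centroid = [sum(p[d] for p in swarm) / n for d in range(dim)]
    return sum(math.dist(p, centroid) for p in swarm) / n

random.seed(0)
# a widely scattered swarm versus one gathered near the origin
scattered = [[random.uniform(-100.0, 100.0) for _ in range(2)] for _ in range(40)]
gathered = [[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(40)]

print(diversity(scattered) > diversity(gathered))
```

Tracking such a metric over generations would make the contrast described above explicit: disabling the SQP method would keep the value high in later generations, while disabling the NRAS would drive it toward zero.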

Algorithmic computational complexity

The computational complexity of the proposed SQPPSO-sono algorithm was determined as delineated in the CEC2017 benchmark competition [50]. All experiments were conducted on the following system:

  • CPU: Intel Core i7-1165G7 @ 2.80 GHz 1.69 GHz

  • RAM: 16GB

  • OS: Windows 10

  • Software: Matlab 2018b

In Table 8, the computational complexity of the SQPPSO-sono algorithm on 10, 30, 50, and 100 dimensions is presented. In this table, \(T_{0}\) is the time consumed by executing the following statements:

    x = (double) 0.55;
    for i = 1 : 1000000
        x = x + x;  x = x / 2;  x = x * x;
        x = sqrt(x);  x = log(x);  x = exp(x);  x = x / (x + 2);
    end

\(T_{1}\) is the time to execute benchmark function \(f_{18}\) alone for 200,000 evaluations in \(D\) dimensions, and \(T_{2}\) is the execution time of the SQPPSO-sono algorithm on \(f_{18}\) for 200,000 evaluations in \(D\) dimensions. \(\hat{T}_{2}\) is the mean value of \(T_{2}\) averaged over 5 runs. As presented in Table 8, \(T_{1}\), \(\hat{T}_{2}\), and \((\hat{T}_{2}-T_{1})/T_{0}\) scale linearly with the number of dimensions. Table 9 compares the computational complexity of the SQPPSO-sono algorithm with that of the compared PSO-based variants on 30 dimensions; the same system and the same procedure were used to calculate the computational complexity of each compared algorithm.
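The timing protocol above can be sketched in Python rather than Matlab. The sphere function standing in for \(f_{18}\) and the placeholder random-search optimizer are illustrative assumptions; only the structure of the \(T_{0}\), \(T_{1}\), \(\hat{T}_{2}\), and \((\hat{T}_{2}-T_{1})/T_{0}\) measurements follows the CEC2017 protocol, and the evaluation budget is reduced so the sketch runs quickly (the benchmark specifies 200,000 evaluations):

```python
import math
import random
import time

# Sketch of the CEC2017 complexity protocol described above. A sphere
# function stands in for benchmark function f18, and a placeholder random
# search stands in for the optimizer; both are assumptions for illustration.

def measure_T0():
    start = time.perf_counter()
    x = 0.55
    for _ in range(1_000_000):
        x = x + x; x = x / 2; x = x * x
        if x == 0.0:  # Matlab's log(0) yields -Inf; math.log(0) raises
            x = 0.55
        x = math.sqrt(x); x = math.log(x); x = math.exp(x); x = x / (x + 2)
    return time.perf_counter() - start

def sphere(v):  # stand-in for benchmark function f18
    return sum(c * c for c in v)

def measure_T1(dim, evals):
    v = [0.5] * dim
    start = time.perf_counter()
    for _ in range(evals):
        sphere(v)
    return time.perf_counter() - start

def measure_T2(dim, evals):
    best_f = float("inf")  # placeholder optimizer consuming the budget
    start = time.perf_counter()
    for _ in range(evals):
        f = sphere([random.uniform(-1.0, 1.0) for _ in range(dim)])
        best_f = min(best_f, f)
    return time.perf_counter() - start

evals = 20_000  # reduced from the 200,000 specified by the benchmark
T0 = measure_T0()
T1 = measure_T1(30, evals)
T2_hat = sum(measure_T2(30, evals) for _ in range(5)) / 5  # mean over 5 runs
print(f"(T2_hat - T1) / T0 = {(T2_hat - T1) / T0:.2f}")
```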

In Table 9, the computational complexity of SQPPSO-sono ranks \(3{\text {rd}}\) (excluding SQPPSO-sono without NRAS and without SQP), behind PSO-sono and PPSO. The NRAS method is less efficient than the proportional allocation of subpopulations in PSO-sono, as \((\hat{T}_{2}-T_{1})/T_{0}\) is only 83.3907 without the NRAS method. The SQP method is more efficient than the fully-informed search scheme employed by PSO-sono, as \((\hat{T}_{2}-T_{1})/T_{0}\) is 129.5536 and 145.3576 with and without the SQP method, respectively. Although the computational cost of SQPPSO-sono is slightly higher than that of PSO-sono, its performance is much more efficient and reliable; this small sacrifice in computational cost is worthwhile. Among all variants, the computational cost of SQPPSO-sono remains very competitive.

Table 8 Computational complexity of the SQPPSO-sono algorithm. The comparison is conducted according to the suggestion of the CEC2017 competition
Table 9 Computational complexity of SQPPSO-sono, PSO-sono, EPSO, PPSO, MPSO, TCSPSO, and HCLPSO on 30 dimensions. The comparison is conducted according to the suggestion of the CEC2017 competition

Conclusions

This paper proposed a sequential quadratic programming (SQP)-based novel strategy for particle swarm optimization on single-objective numerical optimization (SQPPSO-sono). To promote its performance, a novel ratio adaptation scheme (NRAS) was proposed that divides the subpopulation sizes by considering both the fitness values and the diversity of particles. In SQPPSO-sono, the NRAS is responsible for dividing the subpopulations, and the SQP method, which replaces the fully-informed search scheme, is intermittently invoked to enhance the exploitation capability. Because the NRAS considers both the quality and the diversity of particles during evolution, it balances the exploration and exploitation capabilities effectively. Meanwhile, the SQP method exploits the area around the current global best particle immediately after its generation, even in the earlier iterations; this strategic timing and targeted application significantly enhance the algorithm's efficiency. To evaluate the effectiveness of the NRAS and the SQP method, SQPPSO-sono was also validated with a single mechanism, either the NRAS or the SQP method; the experimental results illustrate that SQPPSO-sono performs better with both mechanisms than with either one individually. All compared algorithms were tested on the CEC2013, CEC2014, and CEC2017 benchmark functions with diverse dimensions, and the experimental results demonstrate that SQPPSO-sono has outstanding performance in most cases. Future research directions for SQPPSO-sono include reducing the frequency of use of the SQP method, strengthening the global search mechanisms, and designing an ensemble strategy, based on both the SQP and interior-point methods, that can dynamically switch according to the landscape in the various evolutionary stages.