1 Introduction

Metaheuristic search (MHS) algorithms have great potential to achieve feasible solutions for complicated optimization problems from a wide variety of scientific fields [1, 2]. They imitate processes in nature and perform a heuristic-driven search with two complementary search operators: exploitation and exploration. Unlike gradient-based optimization methods, MHS methods do not need derivative information of the objective function and are independent of the problem form [3]. Treating the problem as a black box, randomness, the ability to avoid suboptimal regions, and ease of implementation are some of the advantages of these algorithms [2, 4]. Because of these advantages, MHS algorithms have attracted great attention and have become a focus of artificial intelligence researchers.

MHS algorithms are designed by taking inspiration from various methods, processes, and concepts found in nature. From this point of view, MHS algorithms can be separated into four classes: swarm intelligence, evolution, human, and physics-based [5,6,7]. Swarm intelligence-based metaheuristic methods simulate the social and collective behavior of creatures such as insects, birds, etc. The best-known member of this class is particle swarm optimization (PSO) [8]. Other popular methods of swarm-based metaheuristics include the African vultures optimization algorithm (AVOA) [9], grasshopper optimization algorithm (GOA) [10], artificial bee colony (ABC) [11], Aquila optimizer (AO) [12], grey wolf optimizer (GWO) [4], marine predators algorithm (MPA) [13], nutcracker optimization algorithm (NOA) [14], red fox optimization (RFO) [15], artificial hummingbird algorithm (AHA) [16], salp swarm algorithm (SSA) [17], artificial gorilla troops optimizer (GTO) [18], and moth-flame optimization (MFO) [19]. Evolutionary-inspired algorithms mimic the evolution of organisms in nature. These algorithms use operators inspired by biological phenomena such as crossover and mutation [20, 21]. The most popular evolutionary optimizers are the genetic algorithm (GA) [22] and differential evolution (DE) [23]. Driving training-based optimization (DTBO) [24], student psychology-based optimization (SPBO) [25], the imperialist competitive algorithm (ICA) [26], and gaining–sharing knowledge (GSK) [27] belong to the class of human-based metaheuristics. Some examples of physics-based metaheuristics developed by simulating physical laws are Fick's law algorithm (FLA) [21], the vortex search algorithm (VSA) [28], and the Henry gas solubility optimizer [29]. Given the above-mentioned techniques, it can be said that dozens of MHS algorithms inspired by different processes have been developed. The main reason for this diversity can be attributed to the No Free Lunch (NFL) theorem [30], which states that no optimization algorithm can give the best results for all optimization problems. In this respect, researchers constantly develop new optimization methods or improve the convergence capability of existing optimizers, because the optimization of complex problems is a challenging task and there is always room for improvement.

The optimization process of metaheuristic search algorithms starts with a randomly generated population. This helps the algorithm explore a wide range of the search space and prevents it from getting stuck in local optima in the first iterations. Then, the steps of the metaheuristic search process lifecycle are applied to find the optimal solution [2, 31]. These steps are selection, search, and update [32]. In the first step, guide solutions are selected from members of the population. Since these guides are also inputs of the search operators, their successful selection significantly affects the optimization capability of the algorithm [33]. Greedy, tournament, roulette wheel, sequential, and random methods are commonly used approaches for selecting guide solutions. In addition, the fitness-distance balance (FDB) selection technique developed by Kahraman et al. [33] in 2020 has gained a strong reputation in a short time. In the second step, local and global searches are performed with exploitation and exploration operators [34,35,36]. In the third step, it is determined which individuals in the population will survive and which will be killed. In almost all MHS algorithms, survivor selection follows the greedy approach [37]: the individual with the better fitness value survives, while the other is killed.

A greedy approach to survivor selection means that the individuals with the best fitness value are always selected to survive, regardless of their genetic diversity. This can lead to a population of individuals that are all very similar to each other, which can reduce the overall genetic diversity of the population [37]. To overcome this drawback, Kahraman et al. [37] proposed a novel solution candidate update approach called the natural survivor method (NSM). In this approach, the survival chance of individuals is evaluated by the NSM score, a measure of the adaptability of an individual to its environment. The NSM score is computed by considering both the fitness value of the individual and its relationship with the other members of the population. In [37], the authors applied the NSM to the survivor selection design of the LSHADE-SPACMA [38], teaching–learning-based artificial bee colony (TLABC) [39], and stochastic fractal search (SFS) [40] algorithms. Based on the Friedman test results among 25 competitive algorithms, the following rankings were obtained: LSHADE-SPACMA is 3rd, SFS is 10th, and TLABC is 18th, while NSM-LSHADE-SPACMA is 1st, NSM-SFS is 6th, and NSM-TLABC is 9th. The results show that the NSM approach eliminates the premature convergence problem of population-based metaheuristic algorithms by providing effective diversity. To the best of the author's knowledge, no MHS algorithm other than these three (LSHADE-SPACMA, TLABC, and SFS) has so far been designed with the NSM update mechanism. To develop powerful algorithms with superior convergence ability, the design of algorithms with NSM deserves further investigation.

The artificial hummingbird algorithm (AHA) is a bio-inspired swarm intelligence optimization technique. It imitates the intelligent flight abilities and foraging strategies of hummingbirds to produce feasible solutions to optimization problems. AHA's unique memory and special biological background are important features that distinguish it from other bio-inspired metaheuristics. The algorithm has attracted great attention from researchers and has been applied to numerous problems in the optimization area. Although AHA achieved promising solutions for many optimization problems from various disciplines of science, it suffers from poor exploration of the search space and stagnation in local optima in multimodal and complex search spaces. To eliminate these disadvantages and increase the convergence performance of the algorithm, researchers have focused on strengthening the algorithm with various operators. For example, Ebeed et al. [41] enhanced the original AHA's exploitation ability with bandwidth motion and its exploration ability with fitness-distance balance and Lévy flight operators. Bhattacharjee et al. [42] combined the AHA algorithm with the opposition-based learning (OBL) rule. Emam et al. [43] boosted the convergence accuracy of the original AHA using OBL and a local escape operator. Ghafari and Mansouri [44] introduced a novel version of AHA improved with a chaos mechanism and OBL. In another study, Jamal et al. [45] integrated Lévy flight and pitch adjustment motion into the optimization procedure of traditional AHA. As can be seen from the above-mentioned studies, literature works have generally centered on the selection and search (global and local) steps of the metaheuristic search process lifecycle to compensate for the shortcomings of AHA. At this point, researchers have focused either on the effective selection of guide solutions or on strengthening the search operators. To the best of the author's knowledge, no study has so far aimed to improve the update mechanism of the AHA algorithm. To fill this gap, this paper centers on redesigning the update mechanism of AHA. In this direction, the NSM updating mechanism was applied to determine which solution candidates in the population survive or are killed, and a new metaheuristic called NSM-AHA was developed. The superiority of the proposed algorithm over the original AHA and state-of-the-art literature algorithms is validated on the CEC 2017 and CEC 2020 benchmark problems. To generalize the performance of NSM-AHA, the algorithm is applied to the solution of two constrained engineering problems: parameter extraction of the single-diode solar cell model and optimum design of a power system stabilizer.

The following remarks give the main contributions of this paper.

  • Integration of the natural survivor method into the optimization procedure of the AHA algorithm. This helps avoid local optima and provides better exploration.

  • Development of a novel metaheuristic called NSM-AHA for global and engineering optimization problems.

  • The superiority of the proposed algorithm is validated on 39 CEC benchmark functions and two constrained real-world engineering problems.

  • Comprehensive comparisons between the NSM-AHA and state-of-the-art optimizers have been presented. Friedman test results demonstrated that the proposed NSM-AHA was highlighted as the best method with a mean rank value of 5.74 among the 22 algorithms.

  • The proposed algorithm is applied to the optimization of single-diode solar cell parameters and the design of a power system stabilizer for the first time.

The organization of the remaining sections is as follows: Section 2 introduces the methodology. Section 3 provides the settings of experimental studies. Section 4 gives the results of four experimental studies conducted to demonstrate the efficiency of the NSM-AHA algorithm. The first two of these experiments are about the optimization of the CEC 2017 and CEC 2020 benchmark functions with NSM-AHA and the original AHA. The third experiment compares NSM-AHA with powerful algorithms from the literature. The last experiment includes the application of the NSM-AHA to constrained engineering problems. Section 5 elaborates on the conclusions of the study.

2 Methodology

This section introduces the basics of the proposed NSM-AHA algorithm. It comprises four subsections that introduce the optimization process of MHS algorithms, the natural survivor method, the basics of the AHA algorithm, and the proposed NSM-AHA, respectively.

2.1 Optimization process of MHS algorithms

The optimization process of swarm-based MHS algorithms can be divided into two stages: initialization and the metaheuristic search process lifecycle. The details of these stages are as follows [36]:

  (i) Initialization: In the first stage, a set of random solutions called the population (\(P\)) is created. Assume that n and d denote the numbers of solutions and design variables, and that \({\text{LB}}\) and \({\text{UB}}\) represent the lower and upper boundaries of the design variables. Accordingly, the population matrix \(P\) is generated using Eqs. (1) and (2); a code sketch follows Eq. (2).

    $$p_{i,j} = \left( {{\text{UB}}_{j} - {\text{LB}}_{j} } \right) \times {\text{rand}} + {\text{LB}}_{j}$$
    (1)
    $$P = \left[ {\begin{array}{*{20}c} {P_{1} } \\ \vdots \\ {P_{n} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {p_{1,1} } & \cdots & {p_{1,d} } \\ \vdots & \ddots & \vdots \\ {p_{n,1} } & \cdots & {p_{n,d} } \\ \end{array} } \right]_{n \times d}$$
    (2)
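For illustration, the initialization stage of Eqs. (1) and (2) can be written in a few lines of Python. This is a minimal sketch assuming NumPy; the function name and the example bounds are ours, not part of the original formulation.

```python
import numpy as np

def init_population(n, d, lb, ub, rng=None):
    """Eq. (1) applied row-wise: p_ij = (UB_j - LB_j) * rand + LB_j."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(lb, d).astype(float)
    ub = np.broadcast_to(ub, d).astype(float)
    return (ub - lb) * rng.random((n, d)) + lb  # the n x d population matrix of Eq. (2)

P = init_population(n=30, d=10, lb=-100.0, ub=100.0)  # e.g., CEC-style bounds
```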
  (ii) Metaheuristic search process: The second stage typically comprises three steps: selection, search, and update. In the first step, solution candidates (individuals) to guide the search process are selected using sequential, greedy, random, probabilistic, or FDB-based selection methods. The selection of guides is critical for the convergence of an algorithm: if they are chosen poorly, the algorithm may not converge at all, or it may converge to a suboptimal solution. In the second step, exploitation and exploration are performed by nature-inspired search operators. In the third step, based on the update mechanism of the algorithm, it is determined which individuals in the population will survive and which ones will be killed [32, 36].

Analyzing the update mechanisms of MHS methods, it is observed that the majority of them adopt a fitness value-based update. The "survival of the fittest" principle is a cornerstone of many optimization algorithms. In these algorithms, a population of individuals is iteratively evaluated and updated, and individuals with better objective function values are more likely to be selected for survival [34, 46,47,48]. It is well known that such an update mechanism improves the exploitation capacity of an MHS algorithm. However, selecting survivors solely on the fitness criterion can lead to the selection of similar solutions and therefore a reduction in genetic diversity. This can negatively affect the convergence success of the MHS algorithm and cause premature convergence. To overcome these problems, Kahraman et al. [37] proposed a novel approach called the "natural survivor method (NSM)" for the selection of individuals that survive in the population. The authors applied NSM to evolutionary, biology, and physics-based MHS algorithms and demonstrated that the proposed update mechanism significantly improved the optimization performance of the algorithms. The next subsection describes the NSM mechanism used for survivor selection in population-based MHS algorithms.

2.2 Natural survivor method

The natural survivor method (NSM) is an effective and powerful update mechanism designed for population management in MHS algorithms. According to this method, the survivors of the population are determined by the value of the NSM score. Three criteria are considered in the calculation of the NSM score: the contribution to the guides, the contribution to the population, and the contribution to the objective function. The individual with the better NSM score survives [37].

In the population (\(P\)) given in Eq. (2), the i-th individual is denoted by a 1 \(\times\) d dimensional vector (\(P_{i} = [p_{i,1} ,p_{i,2} , \ldots , p_{i,d} ])\). The NSM score calculation requires distance information between individuals. The Euclidean distance between individuals \(P_{i}\) and \(P_{j}\) is calculated using Eq. (3). As can be seen in the equation, normalized vectors (\({\text{norm}}\,P_{i} \; {\text{and norm}}\,P_{j}\)) are used to prevent the i-th and j-th individuals from dominating each other in the distance calculation [37].

$${\text{ED}} \left( {P_{i} , P_{j} } \right) = \sqrt {\mathop \sum \limits_{z = 1}^{d} \left( {{\text{norm}}\,P_{i,\left[ z \right]} - {\text{norm}}\,P_{j,\left[ z \right]} } \right)^{2} }$$
(3)
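As a sketch, Eq. (3) can be computed for the whole population at once. We assume min-max normalization of each design variable across the population, which is one common way to obtain the normalized vectors \({\text{norm}}\,P\); NumPy is assumed and the helper name is ours.

```python
import numpy as np

def normalized_distances(P):
    """Pairwise Euclidean distances of Eq. (3) on min-max normalized coordinates."""
    lo, hi = P.min(axis=0), P.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)      # guard against constant dimensions
    N = (P - lo) / span                         # normP_i, dimension by dimension
    diff = N[:, None, :] - N[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))    # ED(P_i, P_j) for all pairs i, j
```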

The criteria considered in the NSM score calculation and their definitions are introduced in detail below.

  (i) First criterion: "Contribution to guides"

Assume that \(P_{{{\text{mating\_pool}}}}\) indicates the vector containing individuals that will join the mating pool and guide the search process. The contribution of i-th individual (\(P_{i}\)) to the guides in the mating pool vector is represented with the \(P_{{\left[ i \right]{\text{NSM}}_{{{\text{mps}}}} }}\). The pseudocode used to calculate the \(P_{{\left[ i \right]{\text{NSM}}_{{{\text{mps}}}} }}\) is given in Algorithm 1 [37].

Algorithm 1: Pseudocode for calculation of "contribution to guides"

  (ii) Second criterion: "Contribution to population"

\(P_{{\left[ i \right]{\text{NSM}}_{{{\text{PS}}}} }}\) denotes the contribution of the i-th individual (\(P_{i}\)) to the population \(P\). Algorithm 2 gives the pseudocode used to calculate the \(P_{{\left[ i \right]{\text{NSM}}_{{{\text{PS}}}} }}\) score value [37].

Algorithm 2: Pseudocode for calculation of "contribution to population"

  (iii) Third criterion: "Contribution to objective function"

The individual strength of the i-th solution is expressed by the objective function score (\(P_{{\left[ i \right]{\text{NSM}}_{{{\text{OFS}}}} }}\)) and its value is calculated as follows:

$$\begin{gathered} n \in {\mathbb{N}}^{ + } ,\quad \forall i = 1, \ldots ,n:\quad J_{i} = f\left( {P_{i} } \right),\quad - \infty < J_{i} < + \infty \hfill \\ {\text{norm}}\,J_{i} = \frac{{J_{i} - J_{{{\text{min}}}} }}{{J_{{{\text{max}}}} - J_{{{\text{min}}}} }} \hfill \\ \left\{ {\begin{array}{*{20}c} {{\text{if }}\,{\text{goal}}\,{\text{ is}}\,{\text{ minimization}}} & {F_{\left[ i \right]} = 1 - {\text{norm}}\,J_{i} } \\ {{\text{if }}\,{\text{goal}}\,{\text{ is}}\,{\text{ maximization}}} & {F_{\left[ i \right]} = {\text{norm}}\,J_{i} } \\ \end{array} } \right. \hfill \\ \end{gathered}$$
(4)

The NSM score value of the i-th individual (\(P_{{\left[ i \right]{\text{NSM}}_{{{\text{Score}}}} }}\)) is calculated using Eq. (5). The NSM score value corresponds to the survival skill of the relevant solution candidate [37].

$$P_{{\left[ i \right]{\text{NSM}}_{{{\text{Score}}}} }} = w_{{{\text{mps}}}} \times P_{{\left[ i \right]{\text{NSM}}_{{{\text{mps}}}} }} + w_{{{\text{PS}}}} \times P_{{\left[ i \right]{\text{NSM}}_{{{\text{PS}}}} }} + w_{{{\text{OFS}}}} \times P_{{\left[ i \right]{\text{NSM}}_{{{\text{OFS}}}} }}$$
(5)
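A hedged sketch of Eqs. (4) and (5) is given below. Since the pseudocode of Algorithms 1 and 2 is reproduced here only as figures, the two distance-based criteria are approximated as mean normalized distances to the guides and to the whole population; the exact formulas in [37] may differ.

```python
import numpy as np

def minmax(x):
    """Min-max normalization to [0, 1]; returns zeros for a constant vector."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def nsm_scores(P, J, guide_idx, w_mps, w_ps, w_ofs, minimize=True):
    """NSM score of every individual (Eq. 5) from the three criteria."""
    D = normalized_distances(P)                    # from the sketch after Eq. (3)
    mps = minmax(D[:, guide_idx].mean(axis=1))     # approx. contribution to guides
    ps = minmax(D.mean(axis=1))                    # approx. contribution to population
    ofs = 1.0 - minmax(J) if minimize else minmax(J)  # Eq. (4)
    return w_mps * mps + w_ps * ps + w_ofs * ofs
```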

The \({\text{NSM}}_{{{\text{mps}}}}\) and \({\text{NSM}}_{{{\text{PS}}}}\) terms used in the NSM score calculation contribute to exploration, while \({\text{NSM}}_{\text{OFS}}\) contributes to exploitation. By dynamically adjusting the weight coefficients of these terms, survivor selection can be steered. In the first steps of optimization, it is important to preserve diversity in the population, because diversity helps to ensure that the algorithm does not get stuck in a local optimum. The \({w}_{\text{mps}}\) and \({w}_{\text{PS}}\) coefficients can be used to control the amount of diversity in the population: increasing their values in the first iterations makes the algorithm more likely to select individuals that differ from each other. In the final steps of optimization, it is important to focus on the neighborhood search, because the neighborhood search is more likely to find better solutions near the optimum. In this regard, the value of \({w}_{\text{OFS}}\) is increased in the final iterations so that the neighborhood search becomes dominant. The pseudocode used to dynamically adjust the weight coefficients (\(w_{{{\text{mps}}}}\), \(w_{{{\text{PS}}}}\), and \(w_{{{\text{OFS}}}}\)) is given in Algorithm 3 [37].

Algorithm 3: Pseudocode for adjusting the NSM weight coefficients
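Since Algorithm 3 is reproduced only as a figure, the snippet below is an illustrative linear schedule that follows the trend described above (diversity weights high early, \(w_{\text{OFS}}\) high late); it is not necessarily the exact rule used in [37].

```python
def nsm_weights(t, t_max):
    """Illustrative weight schedule: exploration early, exploitation late."""
    r = t / t_max                      # progress of the search in [0, 1]
    w_ofs = r                          # objective-function weight grows
    w_mps = w_ps = (1.0 - r) / 2.0     # diversity weights decay; weights sum to 1
    return w_mps, w_ps, w_ofs
```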

Figure 1 gives the general steps to follow for the design of MHS algorithms with NSM. As shown in the figure, the initial population is first created with the default settings. Following this, the search process lifecycle is started: the guide solution candidates are selected, the optimum solution is searched for using the exploration and exploitation operators, and the solution candidates are updated. The NSM mechanism comes into play when determining the solution candidates that will survive. Accordingly, the NSM score is calculated by applying Step 1, Step 2, and Step 3 for each solution. Finally, NSM score-based survivor selection is performed as shown in Step 4.

Fig. 1: Flowchart for integration of the NSM update mechanism into MHS algorithms

2.3 Original AHA algorithm

The artificial hummingbird algorithm (AHA) [16] is a swarm-based metaheuristic that simulates the exclusive flight skills and foraging strategies of hummingbirds. The algorithm implements guided, territorial, and migrating foraging strategies and memorizes food sources with a visiting table concept. During the foraging process, the axial, diagonal, and omnidirectional flight skills of hummingbirds are used. In the AHA algorithm, the food source represents the solution candidate. The nectar refilling rate of the food source indicates the solution quality. The steps followed by the AHA method for optimization are given in Algorithm 4.

Algorithm 4: Optimization procedure of the AHA algorithm

As shown in Algorithm 4, the AHA algorithm starts the optimization by generating a random set of solutions and a visiting table. The position of i-th food source and P vector are generated as shown in Eqs. (1) and (2). The visit table is created as shown in Eq. (6) [49].

$${\text{VT}}_{{\left[ {i,j} \right]}} = \left\{ {\begin{array}{*{20}c} {{\text{if}}\, i = j} & {{\text{null}}} \\ {{\text{if}} \,i \ne j} & 0 \\ \end{array} } \right. \,\,i,j = 1,2, \ldots ,n$$
(6)
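A direct reading of Eq. (6), assuming NumPy and using NaN to encode the null diagonal:

```python
import numpy as np

def init_visit_table(n):
    """Visit table of Eq. (6): 0 for i != j, null (NaN here) on the diagonal."""
    vt = np.zeros((n, n))
    np.fill_diagonal(vt, np.nan)   # a hummingbird does not visit its own source
    return vt
```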

Following the initialization process, the metaheuristic search process is initiated and continues until the termination criterion is satisfied. The algorithm simulates three foraging behaviors to perform a local and global search. These behaviors are described in detail below [16, 49, 50].

2.3.1 Guided foraging

Guided foraging is a strategy that hummingbirds use to find food sources that are likely to be profitable. Hummingbirds prefer to visit food sources with a high nectar-refilling rate, as this allows them to obtain food quickly and easily. Equation (7) gives the mathematical model of guided foraging.

$$v_{i} \left( {t + 1} \right) = p_{{i,{\text{tar}}}} \left( t \right) + \acute{\alpha} \times {\text{DP}} \times \left( {p_{i} \left( t \right) - p_{{i,{\text{tar}}}} \left( t \right)} \right)$$
(7)

where \(p_{i} \left( t \right)\) and \(v_{i} \left( {t + 1} \right)\) represent the current and new positions of the i-th food source, \(p_{{i,{\text{tar}}}} \left( t \right)\) indicates the position of the target food source, \(\acute{\alpha}\) is the guiding factor drawn from the normal distribution N∼(0, 1), and \(\text{DP}\) is the flight pattern vector. Mathematical models of axial, diagonal, and omnidirectional flights for a d-dimensional search space are given in Eqs. (8)–(10), respectively.

$${\text{DP}}^{i} = \left\{ {\begin{array}{*{20}c} 1 & {{\text{if}}\;i = {\text{randi}}\left( {\left[ {1,d} \right]} \right)} \\ 0 & {{\text{else}}} \\ \end{array} } \right.\quad i = 1,2, \ldots ,d$$
(8)
$${\text{DP}}^{i} = \left\{ {\begin{array}{*{20}c} 1 & {{\text{if}}\;i = O\left( j \right),\;j \in \left[ {1,s} \right],\;O = {\text{randperm}}\left( s \right),\;s \in \left[ {2,r_{1} \times \left( {d - 2} \right) + 1} \right]} \\ 0 & {{\text{else}}} \\ \end{array} } \right.$$
(9)
$${\text{DP}}^{i} = 1\quad i = 1,2, \ldots ,d$$
(10)
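The three flight patterns of Eqs. (8)–(10) can be sketched as follows. For simplicity, the diagonal subset size is drawn from [2, d − 1] instead of the exact \([2, r_1(d-2)+1]\) interval, so this is an approximation that assumes d ≥ 3.

```python
import numpy as np

def flight_pattern(kind, d, rng):
    """Direction-switch vector DP for axial, diagonal, and omnidirectional flight."""
    dp = np.zeros(d)
    if kind == "axial":                        # Eq. (8): a single random axis
        dp[rng.integers(d)] = 1.0
    elif kind == "diagonal":                   # Eq. (9): a random subset of axes
        s = rng.integers(2, d)                 # simplified subset size, 2 <= s <= d-1
        dp[rng.permutation(d)[:s]] = 1.0
    else:                                      # Eq. (10): all axes active
        dp[:] = 1.0
    return dp
```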

Using the fitness value-based survivor selection method, the position of the i-th food source is updated based on Eq. (11). The pseudocode of guided foraging behavior is given in Algorithm 5 [16].

$$p_{i} \left( {t + 1} \right) = \left\{ {\begin{array}{*{20}c} {v_{i} \left( {t + 1} \right) } & {f\left( {p_{i} \left( t \right)} \right) > f\left( {v_{i} \left( {t + 1} \right)} \right)} \\ {p_{i} \left( t \right) } & {f\left( {p_{i} \left( t \right)} \right) \le f\left( {v_{i} \left( {t + 1} \right)} \right)} \\ \end{array} } \right.$$
(11)
Algorithm 5: Guided foraging
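Putting Eqs. (7) and (11) together, one guided-foraging move with the original fitness value-based (greedy) update can be sketched as below; `f` is the objective function and the helper name is ours.

```python
import numpy as np

def guided_foraging_step(p_i, p_target, dp, f, rng):
    """Candidate move of Eq. (7) followed by the greedy update of Eq. (11)."""
    a = rng.normal()                              # guiding factor ~ N(0, 1)
    v = p_target + a * dp * (p_i - p_target)      # new candidate food source
    f_v, f_p = f(v), f(p_i)
    return (v, f_v) if f_v < f_p else (p_i, f_p)  # better fitness survives
```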

2.3.2 Territorial foraging

After the hummingbird visits and feeds on the target food source, it tends to look for a new food source in the neighboring area within its territory. The process of searching for a new food source in the neighborhood of any hummingbird relative to its current position can be modeled as follows:

$$v_{i} \left( {t + 1} \right) = p_{i} \left( t \right) + \beta \times {\text{DP}} \times p_{i} \left( t \right)$$
(12)

where \(\beta\) represents the territorial factor, whose value is drawn from N∼(0, 1). The selection of the survivor between the food source obtained by the territorial foraging strategy (\(v_{i} \left( {t + 1} \right)\)) and the i-th food source (\(p_{i} \left( t \right)\)) in the population is performed based on Eq. (11): the food source with the better fitness value survives, while the other one dies. The visit table is also updated after the territorial foraging strategy has been applied. Algorithm 6 gives the pseudocode of the territorial foraging strategy [16].

Algorithm 6: Territorial foraging

2.3.3 Migration foraging

When food is scarce in an area frequented by a hummingbird, the hummingbird migrates to a more distant food source to feed. Accordingly, if the iteration counter exceeds the migration coefficient, the hummingbird located at the worst food source migrates to a randomly generated food source in the search space, and the visit table is then updated. The mathematical model of the migration foraging strategy is given in Eq. (13). Algorithm 7 describes the pseudocode of the migration foraging strategy [16].

$$p_{{{\text{wor}}}} \left( {t + 1} \right) = {\text{rand}} \times ({\text{UB}} - {\text{LB}}) + {\text{LB}}$$
(13)
Algorithm 7: Migration foraging

2.4 Proposed NSM-AHA algorithm

The original AHA algorithm adopts a fitness value-based approach for survivor selection. Accordingly, if the fitness value of \(P_{{i,{\text{new}}}}\) is less than that of \(P_{i}\), then \(P_{{i,{\text{new}}}}\) survives and the other solution dies. Such an approach can result in fitness-based dominance and thus a weakening of genetic diversity in the population, which causes the AHA algorithm to suffer from premature convergence. To overcome this drawback, this paper combines the NSM mechanism with the AHA algorithm and proposes a new metaheuristic called NSM-AHA. The proposed algorithm uses the NSM score value to determine which solution candidates survive and which die during the optimization process. The flowchart of the proposed NSM-AHA algorithm is given in Fig. 2. As can be seen in the figure, the algorithm uses the NSM score-based survival method instead of the purely fitness value-based approach to update the solutions in the guided (see Step 3) and territorial (see Step 4) foraging stages. Accordingly, the selection of the survivors in these two foraging strategies follows the scheme given in Fig. 1, as sketched below.
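The snippet below sketches how the greedy update of Eq. (11) can be replaced by an NSM-score comparison, reusing the `nsm_scores` helper from Sect. 2.2. It is an illustration of the idea; the exact scoring pool used inside NSM-AHA may differ.

```python
import numpy as np

def nsm_update(P, P_new, J, J_new, guide_idx, weights, minimize=True):
    """Survivor selection by NSM score instead of raw fitness (cf. Eq. 11)."""
    pool = np.vstack([P, P_new])                   # score parents and candidates jointly
    scores = nsm_scores(pool, np.concatenate([J, J_new]),
                        guide_idx, *weights, minimize=minimize)
    n = len(P)
    keep_new = scores[n:] > scores[:n]             # higher NSM score survives
    return (np.where(keep_new[:, None], P_new, P),
            np.where(keep_new, J_new, J))
```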

Fig. 2: Flowchart of the proposed NSM-AHA algorithm

3 Experimental settings

This study adopts IEEE CEC standards in experiments performed to compare the optimization capability of MHS algorithms. This ensures that the algorithms are competing under fair conditions. The experimental settings considered in the present study can be summarized as follows:

  • The search process termination criterion is the maximum number of fitness function evaluations (maxFEs) and its value is set to 10,000 \(\times\) Dimension.

  • The CEC 2017 [51] and CEC 2020 [52] benchmark functions are used to evaluate the performance of optimization algorithms.

  • For each benchmark function, algorithms are run 51 times.

  • The number of design parameters is varied; 30-, 50-, and 100-dimensional optimization is considered.

  • Wilcoxon statistical test [53] is implemented with a significance level of 5% to determine whether the difference between the two sets of data is statistically significant.

  • The parameter settings given in the original articles of competitive algorithms are used.

  • All algorithms are run on an Intel(R) Core(TM) i7-3770K CPU @ 3.40 GHz with 8 GB RAM and an x64-based processor.

4 Results and analysis

This section summarizes and discusses the results of the experimental studies conducted to scrutinize the effectiveness of the proposed NSM-AHA algorithm. The results are presented in four subsections. Sections 4.1 and 4.2 analyze the optimization ability of the proposed NSM-AHA and the original AHA in solving the CEC 2017 and CEC 2020 problems, respectively. Section 4.3 compares NSM-AHA with recently introduced and well-known algorithms from the literature. Section 4.4 covers the application of the NSM-AHA algorithm to the solution of constrained engineering problems.

4.1 Experiment-1: optimization of IEEE CEC 2017 test suite benchmark functions

This experiment evaluates the solution capability of the NSM-AHA and AHA algorithms on the CEC 2017 test suite problems. The benchmark functions in the test suite are designed to challenge optimizers and to cover a wide range of problem characteristics. Unimodal functions have a single global optimum, and it is relatively easy to find the optimal solution. On the other hand, multimodal functions involve multiple local optima. For this reason, algorithms are likely to get caught in local solution traps and converge prematurely. Overcoming these problems requires the algorithm to exhibit an effective exploration capability. The search spaces of hybrid and composition functions are more complex than those of the other two function types. To converge to the global optimum on such functions, the algorithm must successfully balance exploration and exploitation. The statistical error results of the NSM-AHA and AHA algorithms on the unimodal (F1–F3), multimodal (F4–F10), hybrid (F11–F20), and composition (F21–F30) functions are given in Tables 1, 2, and 3. In the tables, the lowest mean and standard deviation error results for each benchmark function are highlighted in italics. In this study, the solution error measure is defined as the difference between the best fitness value (f(x)) obtained by the algorithm in one run and the global optimum value (f(x*)) of the benchmark function.

Table 1 Comparison of error statistics obtained from 51 runs for unimodal and multimodal functions of CEC 2017
Table 2 Comparison of error statistics obtained from 51 runs for hybrid functions of CEC 2017
Table 3 Comparison of error statistics obtained from 51 runs for composition functions of CEC 2017

Table 1 gives the comparative optimization results for the unimodal (F1–F3) and multimodal (F4–F10) problems of CEC 2017. As per the results in the table, the AHA algorithm obtained relatively better results on the unimodal F1 and F3 functions compared to NSM-AHA. The proposed NSM-AHA obtained the lowest mean error values on all multimodal functions, and its standard deviation results were lower in 16 of the 21 experiments conducted on the F4–F10 problems. The clear superiority of NSM-AHA over the original AHA on multimodal functions illustrates that the exploration capability of the proposed algorithm is stronger.

Comparative optimization results for the hybrid and composition problems of CEC 2017 are given in Tables 2 and 3, respectively. Inspecting Table 2, in the optimization of the F12, F14, F16, F17, F18, and F20 functions, the mean error of NSM-AHA is better than that of AHA in all search spaces. In the 50- and 100-dimensional optimization of the F13, F15, and F19 functions, the proposed algorithm converged to a lower mean error value. On the composition benchmark functions (see Table 3), the mean and standard deviation error values provided by the proposed algorithm are lower than those of AHA. Considering the optimization results of the hybrid and composition functions together, the solution quality of NSM-AHA is superior to that of AHA. This indicates that NSM-AHA can successfully balance exploration and exploitation.

The curves in Fig. 3 were plotted to examine the convergence behavior of the NSM-AHA and AHA algorithms. The F1, F6, F16, and F21 benchmark functions of CEC 2017 evaluate how well algorithms perform local optimum avoidance, exploration, and exploitation. As can be seen in the convergence graphs of the unimodal F1 function, the NSM-AHA algorithm reached a better value than the classic AHA in all search spaces. This confirms that the proposed method fulfills the neighborhood (exploitation) search task more effectively. The convergence curves of the multimodal F6 function demonstrate that the original AHA method gets stuck at local optima and converges prematurely. In contrast, the NSM-based AHA algorithm provides effective diversity and converges toward the global optimum stably. Considering the convergence curves of the hybrid F16 and composition F21 benchmark functions, it can be seen that NSM-AHA strikes the exploration–exploitation balance more successfully than AHA. Overall, the proposed method consistently converges toward the global optimum in search spaces of different complexity.

Fig. 3: Convergence curves of the algorithms for benchmark functions in the CEC 2017 test suite

The Wilcoxon signed-rank test was applied with a significance level of α = 0.05 to check whether the NSM-AHA algorithm is statistically better than the original AHA. For the statistical analysis, 8874 (2 \(\times\) 29 \(\times\) 51 \(\times\) 3) data items acquired from Experiment-1 were used. The dataset covers 2 algorithms, 29 benchmark functions, 51 independent runs per benchmark function, and 3 problem dimensions. Wilcoxon test results between the NSM-AHA and AHA algorithms are given in Table 4. In the table, "+" indicates that the proposed algorithm performs better than the competitor algorithm (the null hypothesis is rejected), "−" indicates the reverse, and "=" shows that there is no statistical difference between NSM-AHA and the competitor.
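As an illustration of this protocol, the comparison on one benchmark function could be coded as below, assuming SciPy and two arrays holding the 51 run errors of each algorithm (placeholder data here, not the actual experimental results).

```python
import numpy as np
from scipy.stats import wilcoxon

err_nsm_aha = np.random.rand(51)   # placeholder: 51 run errors of NSM-AHA
err_aha = np.random.rand(51)       # placeholder: 51 run errors of AHA

stat, p = wilcoxon(err_nsm_aha, err_aha)   # paired signed-rank test
if p >= 0.05:
    verdict = "="                          # no statistically significant difference
else:
    verdict = "+" if np.median(err_nsm_aha) < np.median(err_aha) else "-"
```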

Table 4 Wilcoxon signed-rank test results between NSM-AHA and AHA in CEC 2017 test suite

As per the statistical results in Table 4, the Wilcoxon scores obtained in 30, 50, and 100-dimensional optimization of the CEC 2017 test suite functions are 18/7/4, 18/10/1, and 18/9/2, respectively. Considering the Wilcoxon score of 30-dimensional optimization, it is seen that the results are in favor of NSM-AHA in 18 benchmark problems, algorithms exhibited similar performance in 7 benchmark problems, and AHA achieves better results in 4 benchmark problems. Wilcoxon scores obtained for 50 and 100-dimensional experiments show that the NSM-AHA algorithm has a good optimization ability even for high-dimensional problems.

4.2 Experiment-2: optimization of IEEE CEC 2020 test suite benchmark functions

In Experiment-2, the optimization ability of the NSM-AHA and AHA methods is evaluated using ten benchmark functions of CEC 2020. The error statistics of algorithms on unimodal (F1), multimodal (F2–F4), hybrid (F5–F7), and composition (F8–F10) benchmark functions are given in Table 5. In the table, the lowest statistical error results for each benchmark function are shown in italics.

Table 5 Comparison of error statistics obtained from 51 runs on CEC 2020 benchmark functions

As can be seen from Table 5, the NSM-AHA reached a lower mean error value in the 30-dimensional search space of the F1 problem, while the AHA algorithm converged to a better value in the 50- and 100-dimensional optimization. For the same benchmark function, the proposed algorithm gave a lower standard deviation value. The close performance on the unimodal F1 problem presumably arises because both algorithms fulfill the exploitation task successfully. On the F2 and F3 multimodal benchmark functions, the mean and standard deviation results provided by NSM-AHA are better. In the other multimodal problem (F4), all algorithms reached the global optimum. The successful search performance of NSM-AHA in multimodal search spaces containing many local solution traps shows that the exploration capability of the proposed method is superior to that of AHA. The mean value results show that the NSM-AHA obtained better values on all of the hybrid functions (F5–F7). For the same function type, the proposed method achieved the best standard deviation result in all experiments except the 100-dimensional optimization of the F5 problem. In 8 out of 9 experiments on the composition benchmark functions, NSM-AHA produced lower mean error results than the original AHA algorithm. The standard deviation values of AHA are better than those of NSM-AHA in the 30-dimensional experiment of the F8 problem and both the 50- and 100-dimensional experiments of the F10 problem. In the remaining experiments on the F8, F9, and F10 functions, the performance of NSM-AHA was superior. From the numerical results of the hybrid and composition functions, it is observed that the proposed algorithm successfully balances exploration and exploitation and converges to the global optimum in complex search spaces more successfully than AHA.

Figure 4 shows the distribution of the solution error measures over 51 independent runs on the CEC 2020 benchmark functions. Inspecting the box plots for the unimodal F1 problem, NSM-AHA performs the exploitation task more successfully than the AHA algorithm in the low-dimensional (D = 30) optimization. In the medium- and high-dimensional optimization of the F1 problem, both algorithms exhibit a competitive search performance. The box plots of the multimodal F3 problem show that the solution quality of the NSM-AHA algorithm remains consistently superior as the optimization dimension increases. On the hybrid F5 and composition F9 problems, the proposed algorithm converges to the global optimum more successfully than AHA. The outstanding performance of NSM-AHA indicates that the NSM update mechanism has successfully enhanced the exploration–exploitation capacity of the original AHA.

Fig. 4: Box plots of the algorithms for CEC 2020 benchmark functions

Wilcoxon test results between NSM-AHA and AHA for CEC 2020 benchmark problems are summarized in Table 6. As per the results in the table, Wilcoxon test scores were obtained as 8/1/1, 7/3/0, and 7/3/0 in 30, 50, and 100-dimensional optimization. Accordingly, in 30-dimensional optimization, the NSM-AHA statistically outperformed the AHA algorithm in 8 out of 10 problems, while it performed worse in only 1 problem. Considering 50- and 100-dimensional optimization, the proposed algorithm outperformed AHA in 7 out of 10 problems, while the algorithms achieved similar results in 3 problems. Overall, Wilcoxon test results showed that the NSM-AHA algorithm was more successful in optimizing CEC 2020 problems compared to the original AHA.

Table 6 Wilcoxon test results for NSM-AHA and AHA in CEC 2020 test suite

4.3 Experiment-3: comparison of proposed NSM-AHA algorithm with state-of-the-art metaheuristics

Experiment-3 compares the performance of NSM-AHA with that of well-known algorithms in the literature. In this direction, twenty-one competing algorithms were selected: chaos game optimization (CGO) [54], salp swarm algorithm (SSA) [17], barnacles mating optimizer (BMO) [55], grey wolf optimizer (GWO) [4], artificial ecosystem-based optimization (AEO) [56], particle swarm optimization (PSO) [8], turbulent flow of water-based optimization (TFWO) [57], supply–demand-based optimization (SDO) [58], artificial bee colony (ABC) [11], equilibrium optimizer (EO) [59], Henry gas solubility optimization (HGSO) [29], gravitational search algorithm (GSA) [60], artificial hummingbird algorithm (AHA) [16], differential evolution (DE) [23], lightning search algorithm (LSA) [61], Harris hawks optimization (HHO) [62], moth-flame optimization (MFO) [19], dynamic differential annealed optimization (DDAO) [63], whale optimization algorithm (WOA) [7], crow search algorithm (CSA) [64], and sine–cosine algorithm (SCA) [65]. The present study used the Friedman test [53] to rank the overall search performance of the competing algorithms. In this direction, 131,274 (22 algorithms × 39 benchmark functions × 51 runs × 3 dimensions) items of data obtained from the experimental studies were used. Table 7 gives the Friedman test results of the 22 competing algorithms.
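A minimal sketch of the Friedman ranking, assuming SciPy and a (functions × algorithms) matrix of mean errors (placeholder data below, not the actual results); a lower mean rank is better.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

mean_err = np.random.rand(39, 22)         # placeholder: 39 functions x 22 algorithms
ranks = rankdata(mean_err, axis=1)        # rank the algorithms on each function
mean_rank = ranks.mean(axis=0)            # Friedman mean rank of each algorithm
stat, p = friedmanchisquare(*mean_err.T)  # do the rankings differ significantly?
```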

Table 7 Friedman ranking of competing algorithms

The Friedman test was performed on the mean solutions obtained from 51 algorithm runs. A lower Friedman test score indicates a better performance of the algorithm. As can be seen from Table 7, SDO shows the best performance in the 30-dimensional optimization of CEC 2017 benchmark problems, while PSO is more successful in 50- and 100-dimensional optimization. For the same test suite, the proposed NSM-AHA algorithm ranked 4th, 2nd, and 3rd among the 22 competing algorithms in the 30, 50, and 100-dimensional optimizations, respectively. The NSM-AHA algorithm consistently ranked 1st in all experiments of the CEC 2020 test suite.

Based on the "mean rank" performance metric, the NSM-AHA, EO, PSO, SDO, and ABC algorithms ranked in the top five of the Friedman test. Taking all results together, the proposed NSM-AHA ranks 1st, while the original AHA algorithm ranks 8th out of the 22 competing algorithms. This demonstrates that the optimization capability of the original AHA algorithm has been significantly improved by the NSM method.

4.4 Experiment-4: engineering application of NSM-AHA algorithm

Experiment-4 analyzes the ability of the NSM-AHA and other competing algorithms to solve constrained engineering problems. In this direction, NSM-AHA, EO, PSO, SDO, and ABC, which are among the top five algorithms in terms of Friedman test results, were applied to solve two engineering problems: (i) optimization of the single-diode solar cell model parameters and (ii) design of power system stabilizer.

4.4.1 Optimization of single-diode solar cell model (SDSCM) parameters

The efficient operation of solar PV systems largely depends on the accurate forecasting of solar cell parameters [66]. It is possible to estimate these parameters using the solar cell mathematical model that describes the behavior of the solar cell. The single-diode solar cell model (SDSCM) is one of the most commonly used models in solar cell design [67]. The electrical equivalent of SDSCM is given in Fig. 5 [68].

Fig. 5: Single-diode solar cell model

As shown in Fig. 5, the PV cell output current can be written as follows:

$$I={I}_{\text{ph}}-{I}_{\text{d}}-{I}_{\text{sh}}$$
(14)

where \({I}_{\text{ph}}\), \({I}_{\text{d}}\), and \({I}_{\text{sh}}\) denote the photocurrent, diode current, and shunt resistor current, respectively. \({I}_{\text{d}}\) and \({I}_{\text{sh}}\) are calculated using Eqs. (15) and (16), respectively [69].

$${I}_{\text{d}}={I}_{\text{sd}}\left[\text{exp }\left(\frac{V+ I{ R}_{\text{s}}}{{\psi V}_{t}}\right)-1\right]$$
(15)
$${I}_{\text{sh}}=\frac{V+ I{ R}_{\text{s}}}{{R}_{\text{sh}}}$$
(16)

where \({I}_{\text{sd}}\) is the reverse saturation current of the diode, \({R}_{\text{s}}\) and \({R}_{\text{sh}}\) are the series and shunt resistances, \(V\) represents the output voltage of the solar cell, \(\psi\) is the diode's ideality factor, and \({V}_{t}\) is the thermal voltage. Equation (17) gives the calculation of \({V}_{t}\). In the equation, K (1.3807 \(\times\) 10−23 J/K) and Q (1.6021 \(\times\) 10−19 C) indicate the Boltzmann constant and the magnitude of the electron charge, respectively. T represents the junction temperature in Kelvin.

$${V}_{t}=\frac{K\times T}{Q}$$
(17)

The general expression of diode current can be derived as shown in Eq. (18) [68, 69]. Accordingly, there are five parameters (\({I}_{\text{ph}},{I}_{\text{sd}},{R}_{\text{s}},{R}_{\text{sh}},\text{ and }\psi\)) that need to be optimized in the design of a single-diode solar cell model.

$$I = I_{{{\text{ph}}}} - I_{{{\text{sd}}}} \left[ {\exp \left( {\frac{{V + I R_{{\text{s}}} }}{{\psi V_{t} }}} \right) - 1} \right] - \frac{{V + I R_{{\text{s}}} }}{{R_{{{\text{sh}}}} }}$$
(18)

The main objective in optimizing the single-diode solar cell parameters is to reduce the error between the measured and estimated data as much as possible. To this end, the objective function shown in Eq. (19) was minimized [69,70,71,72].

$${\text{RMSE}} \left( \Delta \right) = \sqrt {\frac{1}{M}\mathop \sum \limits_{m = 1}^{M} f\left( {V_{m} , I_{m} , \Delta } \right)^{2} }$$
(19)

where M is the number of measured current data points and \(\Delta\) is the set of optimization parameters. \(f\left(V, I,\Delta \right)\) indicates the error function, defined as follows [68, 69]:

$$\left\{ {\begin{array}{*{20}c} {f\left( {V, I, \Delta } \right) = I_{{{\text{ph}}}} - I_{{{\text{sd}}}} \left[ {\exp \left( {\frac{{V + I R_{{\text{s}}} }}{{\psi V_{t} }}} \right) - 1} \right] - \frac{{V + I R_{{\text{s}}} }}{{R_{{{\text{sh}}}} }} - I} \\ {\Delta = \left[ {I_{{{\text{ph}}}} ,I_{{{\text{sd}}}} ,R_{{\text{s}}} ,R_{{{\text{sh}}}} ,\psi } \right]} \\ \end{array} } \right.$$
(20)
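The objective of Eqs. (19) and (20) reduces to a few lines of Python. The sketch assumes NumPy and a junction temperature of 306.15 K (33 °C, common for the RTC France cell data); this temperature is our assumption, as it is not stated above.

```python
import numpy as np

K, Q, T = 1.3807e-23, 1.6021e-19, 306.15   # Boltzmann constant, electron charge, assumed T
VT = K * T / Q                             # thermal voltage, Eq. (17)

def sdscm_rmse(delta, V, I):
    """RMSE objective of Eq. (19) over measured (V, I) pairs; error per Eq. (20)."""
    Iph, Isd, Rs, Rsh, psi = delta
    err = Iph - Isd * (np.exp((V + I * Rs) / (psi * VT)) - 1.0) - (V + I * Rs) / Rsh - I
    return np.sqrt(np.mean(err ** 2))
```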

The SDSCM parameters optimized with the NSM-AHA, EO, PSO, SDO, and ABC algorithms and the corresponding objective function values are depicted in Table 8. As per the results in the table, the NSM-AHA and SDO algorithms obtained the minimum root mean square error (RMSE) value of 9.86E − 04. The EO algorithm closely followed these two algorithms, while PSO gave the worst RMSE result.

Table 8 Optimized SDSCM parameters and corresponding RMSE value

It is not enough to compare the performance of optimization algorithms using only the best objective function value; statistical measures also need to be considered. In this direction, each algorithm was run 30 times and the statistical RMSE results were recorded. Table 9 reports the minimum, mean, maximum, and standard deviation results. As per the results in the table, the NSM-AHA gave the best mean and standard deviation RMSE results of 1.00E − 03 and 2.63E − 05, respectively. The other statistical measures likewise confirm that NSM-AHA achieves the best results. Considering the statistical results together, the proposed algorithm comes to the fore in the optimization of single-diode solar cell model parameters.

Table 9 Statistical RMSE metric results of optimization algorithms obtained from 30 runs

The box plots in Fig. 6 show the distribution of the RMSE value over 30 independent runs; a narrower box indicates a more consistent performance. As can be seen in the figure, the proposed NSM-AHA method stably achieved optimal solutions, followed by SDO. PSO gave the worst performance for the optimization of SDSCM parameters among all algorithms.

Fig. 6: Distribution of the RMSE value over 30 runs

The Wilcoxon signed-rank test was implemented to assess the performance of the NSM-AHA algorithm against the competitor optimizers, and the results are given in Table 10. In the table, the "↟" sign indicates that the proposed NSM-AHA algorithm outperforms the competitive optimizer, while the "≋" sign indicates that there is no statistically significant difference between the algorithms. Considering the Wilcoxon test results, the NSM-AHA algorithm outperforms all other algorithms except SDO. Although the proposed algorithm is statistically equivalent to SDO, it can be said that NSM-AHA is relatively better, as its R+ value is higher.

Table 10 Wilcoxon signed-rank test results for optimization of SDSCM parameters

Figure 7 compares the data estimated by NSM-AHA with the experimental data given in the Appendix (Table 12). The I–V and P–V curves in the figure show that the estimated data overlap closely with the experimental data. Taking all results together, it can be said that NSM-AHA is the best-fitting algorithm for solar cell parameter estimation.

Fig. 7: Calculated data with the proposed NSM-AHA algorithm: a I–V curve, b P–V curve

4.4.2 Design of power system stabilizer

The primary task of a power system stabilizer (PSS) is to enhance small-signal stability by damping rotor angle oscillations of the generator [73]. The optimal design of the PSS is an important issue because of its critical role in ensuring the stable operation of power systems. However, accurate estimation of the PSS parameters is a difficult task due to the nonlinear structure and high geometric complexity of the problem [74, 75]. To overcome this, this study applies the proposed NSM-AHA and four powerful optimizers (EO, PSO, SDO, and ABC) to the optimization of the PSS parameters. Figure 8 shows the schematic diagram of the single-machine infinite-bus (SMIB) system considered in the study [76]. As can be seen in the figure, the machine is equipped with a PSS to damp rotor angle oscillations. The mathematical model of the system can be defined with Eqs. (21)–(25) [76, 77].

$$\dot{\delta }= {\omega }_{0} (\omega -1)$$
(21)
$$\dot{\omega }=\left[{P}_{\text{m}}-{P}_{\text{e}}-D(\omega -1)\right]/M$$
(22)
$$\dot{{{E}_{q}}^{\prime}}=\left[{E}_{fd}-{{E}_{q}}^{\prime}-\left({x}_{d}-{{x}_{d}}^{\prime}\right) {i}_{d}\right]/{{T}_{d0}}^{\prime}$$
(23)
$$\dot{{E}_{fd}}=\left[{K}_{A} \left({V}_{\text{ref}}-{V}_{T}+{U}_{\text{PSS}}\right)-{E}_{fd}\right]/{T}_{A}$$
(24)
$${P}_{\text{e}}={{E}_{q}}^{\prime}{i}_{q}+{(x}_{q}-{{x}_{d}}^{\prime}){i}_{d} {i}_{q}$$
(25)

where \(\delta\), \(\omega\) and \({\omega }_{0}\) show rotor angle, rotor speed and synchronous speed, respectively. Mechanical power input and electrical power output are denoted by \({P}_{\text{m}}\) and \({P}_{\text{e}}\). \(D\) and \(M\) are the damping coefficient and inertia constant. Excitation system voltage and internal voltage behind \({{x}_{d}}^{\prime}\) are symbolized with \({E}_{fd}\) and \({{E}_{q}}{\prime}\). \({{T}_{d0}}{\prime}\) depicts d-axis open-circuit transient time constant. Time and gain constants for the excitation circuit are shown with \({T}_{A}\) and \({K}_{A}\), respectively. \({i}_{d}\) and \({x}_{d}\) are the stator current and reactance in the d-axis. \({i}_{q}\) and \({x}_{q}\) denote current and synchronous reactance in q-axis circuit. \({{x}_{d}}{\prime}\) depicts transient reactance of d-axis. \({V}_{\text{ref}}\) and \({V}_{\text{T}}\) are the reference and terminal voltages, respectively. PSS output signal is denoted by \({U}_{\text{PSS}}\). The linearized model of the SMIB system incorporating PSS can be defined as follows [76, 77]:

Fig. 8: Single-machine infinite-bus system including PSS [76]

$$\Delta \dot{\delta }={\omega }_{0} \Delta \omega$$
(26)
$$\Delta \dot{\omega }=-\frac{{K}_{1}}{M} \Delta \delta -\frac{D}{M} \Delta \omega - \frac{{K}_{2}}{M} \Delta {{E}_{q}}{\prime}$$
(27)
$$\Delta \dot{{{E}_{q}}{\prime}}=-\frac{{K}_{4}}{{{T}_{d0}}{\prime}} \Delta \delta -\frac{1}{{{{K}_{3} T}_{d0}}{\prime}} \Delta {{E}_{q}}{\prime}+ \frac{1}{{{ T}_{d0}}{\prime}} \Delta {E}_{fd}$$
(28)
$$\Delta \dot{{E}_{fd}}=-\frac{{{K}_{A }K}_{5}}{{T}_{A}} \Delta \delta -\frac{{{K}_{A }K}_{6}}{{T}_{A}} \Delta {{E}_{q}}{\prime}- \frac{1}{{ T}_{A}} \Delta {E}_{fd}+\frac{{K}_{A }}{{T}_{A}} \Delta {U}_{\text{PSS}}$$
(29)

The state–space model of the system defined in the above equations can be expressed as shown in Eq. (30). The terms \(x\left(t\right)\), \(A\), and \(B\) are given in Eqs. (31)–(33) [76, 77].

$$\dot{x}\left(t\right)= Ax\left(t\right)+Bu(t)$$
(30)
$$x\left(t\right)= {\left[\Delta \delta \Delta \omega \Delta {{E}_{q}}{\prime} \Delta {E}_{fd}\right]}^{T}$$
(31)
$$A= \left[\begin{array}{cccc}0& 2\pi f& 0& 0\\ -\frac{{K}_{1}}{M}& -\frac{D}{M}& - \frac{{K}_{2}}{M}& 0\\ -\frac{{K}_{4}}{{{T}_{d0}}{\prime}}& 0& -\frac{1}{{{{K}_{3} T}_{d0}}{\prime}}& \frac{1}{{{ T}_{d0}}{\prime}}\\ -\frac{{K}_{A}{K}_{5}}{{T}_{A}}& 0& -\frac{{{K}_{A }K}_{6}}{{T}_{A}}& - \frac{1}{{ T}_{A}}\end{array}\right]$$
(32)
$$B= {\left[\begin{array}{cccc}0& 0& 0& \frac{{K}_{A }}{{T}_{A}}\end{array}\right]}^{T}$$
(33)

Equation (34) gives the transfer function model of the conventional lead–lag PSS [77].

$${U}_{\text{PSS}}={K}_{\text{PSS}}\left(\frac{{s T}_{\text{W}}}{1+{s T}_{\text{W}}}\right)\left(\frac{1+{s T}_{1}}{1+{s T}_{2}}\right) \left(\frac{1+{s T}_{3}}{1+{s T}_{4}}\right) \Delta \omega$$
(34)

where \({K}_{\text{PSS}}\) is the gain of the PSS and \({T}_{\text{W}}\) is the washout filter time constant, whose value is set to 3. The time constants of the two lead–lag blocks are represented by \({T}_{1}\), \({T}_{2}\), \({T}_{3}\), and \({T}_{4}\). \(\Delta \omega\) and \({U}_{\text{PSS}}\) denote the generator speed deviation and the PSS output voltage, respectively. The optimization scheme of the PSS design is shown in Fig. 9. In the optimization of the PSS parameters, the population size and maximum iteration number are set to 30 and 100, respectively.
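As a sketch, the lead–lag PSS of Eq. (34) can be assembled by multiplying the numerator and denominator polynomials of its three blocks; SciPy is assumed and the helper name is ours.

```python
import numpy as np
from scipy import signal

def pss_transfer_function(K_pss, T1, T2, T3, T4, Tw=3.0):
    """U_PSS(s)/dw(s) of Eq. (34): washout block times two lead-lag blocks."""
    num = K_pss * np.polymul([Tw, 0.0], np.polymul([T1, 1.0], [T3, 1.0]))
    den = np.polymul([Tw, 1.0], np.polymul([T2, 1.0], [T4, 1.0]))
    return signal.TransferFunction(num, den)
```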

Fig. 9: Optimization process of the PSS design

A three-phase fault was applied at t = 0.6 s to one of the transmission lines shown in Fig. 8 and persisted until t = 0.78 s. The fault was cleared by switching the relevant transmission line between 0.78 and 0.87 s, after which the system returned to the pre-fault conditions. The simulation was run for 10 s. For this scenario, the PSS parameters optimized with NSM-AHA, EO, PSO, SDO, and ABC and the corresponding integral time square error (ITSE) values are given in Table 11.

Table 11 Optimized PSS parameters and corresponding objective function values

According to the results given in Table 11, the proposed NSM-AHA obtained the best objective function result with a value of 1.43E − 03. The curves given in Fig. 10 show that the NSM-AHA converged successfully to the optimal value, followed by the ABC algorithm. Figure 11 illustrates the change of the rotor angle with the NSM-AHA, EO, PSO, SDO, and ABC algorithm-based PSS controllers. Inspecting the figure, it is seen that all algorithms successfully damped the oscillations.

Fig. 10: Convergence behavior of the proposed NSM-AHA and competitive algorithms in PSS design

Fig. 11: a Deviation of the rotor angle with different algorithm-based PSS controllers, b zoomed version

5 Conclusion

This paper proposes a novel metaheuristic algorithm called NSM-AHA for solving global and engineering optimization problems. The superiority of the proposed algorithm is demonstrated on the CEC 2017 and CEC 2020 benchmark functions and on two real-world engineering problems: parameter extraction of the single-diode solar cell model and design of a power system stabilizer. Moreover, comprehensive comparisons between NSM-AHA and 21 powerful and up-to-date metaheuristic algorithms have been presented. Accordingly, the following conclusions can be drawn from the experimental studies of this research.

  • The solution quality of the traditional AHA is relatively better than NSM-AHA in unimodal benchmark problems of the CEC 2017 and CEC 2020 test suites.

  • Considering the error statistics and Wilcoxon signed-rank test results, NSM-AHA outperforms the original AHA algorithm on the multimodal, hybrid, and composition benchmark functions of both test suites. This shows that the proposed algorithm exhibits better local optimum avoidance and exploration.

  • Taking the Friedman test results together, NSM-AHA comes to the fore as the best method with a mean Friedman score of 5.74 among the 22 competitive algorithms. In contrast, the overall performance of the original AHA algorithm was ranked 8th with a mean Friedman score of 8.32.

  • The NSM-AHA and SDO algorithms gave the best result in single-diode solar cell model parameter estimation with an RMSE value of 9.86E − 04. The EO algorithm closely followed these two algorithms, while PSO gave the worst RMSE result.

  • The proposed algorithm obtained the minimum value (1.43E − 03) of the ITSE for parameter tuning of PSS. The second best-performing algorithm is ABC. EO and SDO algorithms showed competitive performance. Furthermore, NSM-AHA-based PSS controller successfully damped the oscillations.

To sum up, this study strongly reports that performing survivor selection of the AHA algorithm using the NSM update mechanism improved the optimization capability of the algorithm. This improvement has been confirmed by the optimization of both global and constrained engineering problems. However, limitations of the present paper which can be addressed in future works are as follows:

  • Like the original AHA, the NSM-AHA algorithm chooses between the exploration and exploitation phases using a randomly generated number. Future attempts may be made to adjust the exploration–exploitation balance with a dynamic parameter based on the iteration index.

  • The developed algorithm solves single-objective optimization problems. Future work on the optimization front includes the development of a multi-objective version of the proposed NSM-AHA algorithm.

  • The NSM-AHA can be hybridized with different metaheuristics or search operators to further improve its solution precision.

Table 12 Experimental data of single-diode solar cell model