1 Introduction

In recent years, resource scarcity and growing demand (Feng et al. 2024) have made improving production efficiency a research hotspot (Zhao et al. 2023a, b). As technology advances and problems become more complex, optimization tasks increasingly exhibit multi-objective, large-scale, uncertain, and hard-to-formulate characteristics (Wan et al. 2023). Many real-world problems involve multiple optimization objectives and constraints, whereas traditional optimization algorithms (Inceyol and Cay 2022; Wang et al. 2022) are designed mainly for single-objective problems or problems with few objectives (Atban et al. 2023; Hu et al. 2023; Wang et al. 2023a, b). Faced with such challenging tasks, traditional algorithms may fail to find the optimal solution accurately, or the solving procedure may be overly complicated and time-consuming. Second, the search space of some problems is vast, and traditional optimization algorithms struggle to search it efficiently. In addition, when a problem involves uncertainty and fuzziness (Berger and Bosetti 2020), traditional optimization algorithms handle it poorly, because they rest mainly on deterministic assumptions and constraints; yet uncertainty and randomness are ever-present in areas such as venture capital (Xu et al. 2023a, b), supply chain management (Zaman et al. 2023), and resource scheduling (Al-Masri et al. 2023). Finally, traditional optimization algorithms typically rely on the analytical form of the problem, requiring it to be clearly defined and described mathematically (Kumar et al. 2023). In practice, a problem is often hard to express analytically, or its objective function and constraints are intricate (Wang et al. 2020).
In summary, traditional optimization algorithms often cannot meet the needs and challenges of current optimization tasks.

In this context, meta-heuristic optimization algorithms (Fan and Zhou 2023) have rapidly developed due to their flexibility and gradient-free mechanisms. They have become essential tools for solving production efficiency improvement problems. The flexibility of meta-heuristic optimization algorithms enables them to adapt to diverse production environments and problem scenarios (Melman and Evsutin 2023). Meta-heuristic optimization algorithms can search and explore the problem space based on the characteristics of specific problems to find the best solution or a solution that comes close to the best one (Abdel-Basset et al. 2023a, b, c). Whether facing problems such as product design, production planning, resource allocation, or supply chain management, meta-heuristic optimization algorithms can flexibly adjust and optimize according to actual situations.

Meanwhile, meta-heuristic optimization algorithms are gradient-free (Liu and Xu 2023), which allows them to handle problems without explicit gradient information or continuous derivatives. In many production environments, it is difficult for traditional optimization methods to obtain gradient information analytically. Meta-heuristic algorithms instead exploit local knowledge of the problem through heuristic search and random exploration. Besides high-dimensional and nonlinear problems, this gradient-free approach also suits discrete and constrained problems (Boulkroune et al. 2023).

1.1 Meta-heuristic methods

A meta-heuristic optimization algorithm is an optimization algorithm based on heuristic search (Wang et al. 2023a, b). Such algorithms usually impose no special requirements on the objective function; instead, they search by simulating intelligent behavior in nature (Chen et al. 2023) or other phenomena. They are more likely to find a globally optimal solution, apply to a broader range of problems, and have a certain probability of escaping local optima. Meta-heuristic optimization algorithms are characterized by strong global search ability and robustness (Xu 2023a, b; Zhao et al. 2023a, b): they can find optimal solutions to large-scale, high-dimensional problems and quickly tackle problems for which no polynomial-time algorithm exists or has yet been found. The classification of meta-heuristic optimization algorithms is shown in Fig. 1. Meta-heuristic algorithms, which combine random search with local search to solve challenging optimization problems, are inspired by random phenomena in nature (Bingi et al. 2023). Based on their sources of inspiration, they can be broadly classified into the following four types:

Fig. 1 Classification of metaheuristic algorithms

  (1)

    Algorithms designed based on the behavioral characteristics of biological populations. These models simulate organisms' collective intelligence and collaborative strategies, enabling rapid search of the problem space and the discovery of globally optimal or good approximate solutions; they perform well on continuous and global search problems. Zamani et al. (2022) present the Starling Murmuration Optimizer (SMO), a bio-inspired algorithm modeled on the stunning murmuration behavior of starlings, to solve complex and engineering optimization problems. The SMO introduces a dynamic multi-flock construction and three new search strategies: separating, diving, and whirling. Sand Cat Swarm Optimization (Seyyedabbasi and Kiani 2023) is a meta-heuristic algorithm based on the natural behavior of sand cats, influenced by their capacity to detect low-frequency noise; this trait lets the sand cat find prey both above and below ground. The Squirrel Search Algorithm (SSA) (Jain et al. 2019) is a heuristic algorithm for single-objective optimization based on the feeding habits of wild squirrels: it simulates their food-search strategy, gradually approaching the optimal solution by continuously adjusting the search position and range. The Aquila Optimizer (AO) (Abualigah et al. 2021) primarily mimics eagles' behavior while capturing prey and offers strong optimization ability and fast convergence. The Sea Horse Optimizer (SHO) (Zhao et al. 2023a, b) is inspired by the movement, predation, and reproductive behavior of sea horses in nature. The foraging and navigational habits of African vultures served as the basis for the African Vultures Optimization Algorithm (AVOA) (Abdollahzadeh et al. 2021).
Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995) is a search algorithm built on group collaboration, simulating the foraging behavior of bird flocks. The Chameleon Swarm Algorithm (CSA) (Braik 2021) models the chameleon's dynamic foraging behavior in and around trees, deserts, and swamps. The Mayfly Algorithm (MA) (Zervoudakis and Tsafarakis 2020) is inspired by the flight behavior and mating process of mayflies. The lives and behaviors of wild horses inspired the Wild Horse Optimizer (WHO) (Naruei and Keynia 2022). The Spider Wasp Optimizer (SWO) (Abdel-Basset et al. 2023b) is based on the hunting, nesting, and mating behavior of female spider wasps. The Coati Optimization Algorithm (COA) (Dehghani et al. 2022) is inspired by the natural behavior of coatis. The grey wolf's social structure and hunting strategies served as the basis for the Grey Wolf Optimization (GWO) algorithm (Mirjalili et al. 2014). The Marine Predators Algorithm (MPA) (Faramarzi et al. 2020a, b) draws inspiration from the Brownian and Lévy movements of prey-hunting marine predators. The Ant Lion Optimizer (ALO) (Mirjalili 2015) is modeled on how ants navigate between their nests and food. The bubble-net hunting techniques and natural behavior of humpback whales served as the basis for the Whale Optimization Algorithm (WOA) (Mirjalili and Lewis 2016). The Dandelion Optimizer (DO) (Zhao et al. 2022) simulates the long-distance, wind-borne flight of dandelion seeds; it considers two main factors, wind speed and weather, and introduces Brownian motion and Lévy flight to describe the seeds' trajectories. Golden Jackal Optimization (GJO) (Chopra and Ansari 2022) is inspired by the cooperative hunting behavior of golden jackals in nature.

  (2)

    Algorithms abstracted from human behavior or social phenomena. These models have strong learning ability and adaptability and have demonstrated excellent performance in fields such as image recognition and natural language processing. The Volleyball Premier League (VPL) algorithm (Moghdani and Salimifard 2018) is inspired by the rivalry and interaction among volleyball teams throughout a season. The social learning behavior of humans organized in families in the social environment is the basis for the Social Evolution and Learning Optimization (SELO) algorithm (Kumar et al. 2018). Social Group Optimization (SGO) (Satapathy and Naik 2016) is inspired by learning in social groups. The Cultural Evolution Algorithm (CEA) (Kuo and Lin 2013) is inspired by the process of social transformation. Hunter Prey Optimization (HPO) (Naruei et al. 2021) is inspired by the process of animal hunting. The IbI Logic Algorithm (Mirrashid and Naderpour 2023) is inspired by the logical thinking of the brain.

  (3)

    Algorithms inspired by genetic evolution. These models can handle discrete and multi-objective optimization problems and show strong robustness and global search ability on complex problems. Gene Expression Programming (GEP) (Sharma 2015) uses gene expression programming to model the mathematical relationship among a set of data points; following the laws of genetic inheritance and the principles of natural selection and survival of the fittest, the population evolves continuously to find the fittest chromosome. The processes by which species move from one island to another, new species appear, and species go extinct are the inspiration for Biogeography-Based Optimization (BBO) (Simon 2008), while the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen and Kern 2004) evolves its population by adapting the covariance matrix of its sampling distribution. Symbiotic Organisms Search (SOS) (Cheng and Prayogo 2014) is inspired by symbiosis in biology. Evolution Strategies (ES) (Beyer and Schwefel 2002) are inspired by biological evolution. Genetic Programming (GP) (Koza 1992) is inspired by natural selection.

  (4)

    Algorithms abstracted from physical properties or chemical reactions. By simulating physical phenomena and optimizing the search strategy accordingly, these models can jump between multiple local optima and find the global optimum. The Kepler Optimization Algorithm (KOA) (Abdel-Basset et al. 2023a, b, c) is a physics-based meta-heuristic that predicts the position and motion of planets at any given time using Kepler's laws of planetary motion. The Energy Valley Optimizer (EVO) (Azizi et al. 2023) is a new meta-heuristic inspired by the particle decay modes and stability laws of physics. The Light Spectrum Optimizer (LSO) (Abdel-Basset et al. 2022) is a physics-inspired meta-heuristic modeled on the dispersion of light passing through raindrops at different angles, which generates the colored rainbow spectrum. The Rime Optimization Algorithm (RIME) (Su et al. 2023) constructs a soft-rime search strategy and a hard-rime puncture mechanism, simulating the soft-rime and hard-rime growth processes of rime ice to realize the exploration and exploitation behaviors of optimization methods. Multi-verse Optimization (MVO) (Mirjalili et al. 2016) is inspired by the expansion rates of universes, exploiting the principle that white holes have higher and black holes lower expansion rates, with objects transferred from white holes to black holes through wormholes during the search. The control-volume mass balance model used to estimate dynamic and equilibrium states is the primary inspiration for the Equilibrium Optimizer (EO) (Faramarzi et al. 2020a, b).

1.2 Related work

In this section, we review some recent work.

Banaie-Dezfouli et al. (2023) introduce an improved binary GWO called the extreme-value-based GWO (BE-GWO). It proposes a new cosine transfer function (CTF) to convert the continuous GWO into binary form and then introduces an extreme-value (Ex) search strategy to improve the efficiency of converting binary solutions. Nama et al. (2023) propose a new ensemble algorithm, e-mPSOBSA, combining a reformed Backtracking Search Algorithm (BSA) with PSO. Chakraborty et al. (2022) suggest an enhanced SOS algorithm, nwSOS, for higher-dimensional optimization problems. Nama and Saha (2022) introduce an improved BSA (ImBSA) based on multi-population methods and modified control-parameter settings, combining several mutation strategies. Nama (2021) proposes an improved form of SOS to strike a more stable balance between exploration and exploitation; this technique uses three distinct components: adjusted benefit factors, a modified parasitism phase, and random-weight-based search. To obtain the best DE efficiency, Nama and Saha (2020) propose a new version of the DE algorithm that makes appropriate adjustments to time-consuming control parameters and mutation operators. Nama (2022) offers a quasi-reflection-based slime mould algorithm (QRSMA) that combines SMA with a quasi-reflection-based learning mechanism (QRBL) to improve SMA's performance. Nama, Sharma et al. (2022a, b) propose an improved BSA framework, gQR-BSA, based on quasi-reflection initialization, quantum Gaussian mutation, adaptive parameter execution, and quasi-reflection jumping to change the coordinate structure of BSA. The algorithm adopts adaptive parameter settings, Lagrange interpolation formulas, and a new local search strategy embedded in Lévy flight search to enhance search capability and better balance exploration and exploitation. Nadimi-Shahraki et al. (2023a, b) present a review of the Whale Optimization Algorithm, systematically covering the theoretical basis, improvements, and hybridizations of WOA. Sharma et al. (2022a, b) propose mLBOA, a new variant of BOA, to improve its performance. Sahoo et al. (2023) propose an improved MFO algorithm (m-DMFO) combined with an enhanced dynamic opposite learning (DOL) strategy. Sharma et al. (2022a, b) propose a hybrid sine-cosine butterfly optimization algorithm (m-SCBOA), combining an improved butterfly optimization algorithm with the sine cosine algorithm to obtain excellent exploratory and exploitative search capabilities. Chakraborty et al. (2023) propose a hybrid slime mould algorithm (SMA) to address the issues above and accelerate the exploration behavior of natural slime moulds. Nadimi-Shahraki et al. (2023b) develop an enhanced moth-flame optimization algorithm, MFO-SFR, for global optimization problems. Zamani et al. (2021) propose the Quantum-based Avian Navigation Optimizer Algorithm (QANA), a novel DE algorithm inspired by the remarkably precise navigation of migratory birds over long-distance aerial paths. In QANA, the population is partitioned into multiple flocks to explore the search space effectively, using a proposed self-adaptive quantum orientation and quantum-based navigation with two mutation strategies, DE/quantum/I and DE/quantum/II. Nama et al. (2022a, b) propose a new integrated technique, e-SOSBSA, to thoroughly rebalance intensification and diversification and thereby overcome the shortcomings of traditional SOS.

1.3 Motivation of the work

It should be noted that no algorithm can solve every problem comprehensively. As the 'No Free Lunch' (NFL) theorem (Wolpert and Macready 1997) indicates, no meta-heuristic algorithm is superior for solving every optimization problem: a specific meta-heuristic may achieve excellent results on particular problems yet underperform on others. With the continuous progress of technology and the increasing complexity of problems, some traditional algorithms can no longer solve them effectively. After reviewing the relevant literature, we found that many algorithms have limitations, including insufficient search ability and difficulty converging to the global optimum, which affect their performance. This prompted us to propose a new, more powerful algorithm that overcomes these limitations of existing algorithms and seeks more effective solutions. After careful consideration, we introduce an intelligent optimization algorithm inspired by the black-winged kite. We chose the black-winged kite as our source of inspiration because it exhibits high adaptability and intelligent behavior during attack and migration, which motivated us to develop an algorithm better able to cope with complex problems. These reasons are the main driving force behind our research.

1.4 Contribution and innovation to the work

The contribution and innovation of this article are as follows:

  (1)

    The proposed Black-winged Kite Algorithm (BKA) features a unique biologically inspired design: it not only captures the flight and predatory behavior of black-winged kites in nature but also deeply models their high adaptability to environmental changes and target positions. Imitating this biological mechanism gives the algorithm robust dynamic search capabilities, enabling it to cope effectively with changing optimization environments.

  (2)

    In the black-winged kite algorithm, we introduce a Cauchy mutation strategy, a probability-distribution-based strategy that helps the algorithm jump out of local optima and increases the probability of discovering better solutions in the global search space. This strategy improves the algorithm's ability to locate global optima and offers a new approach to high-dimensional, complex optimization problems.

  (3)

    We integrate a leader strategy that mimics the guiding role of leaders in the kite population, ensuring that the algorithm can effectively exploit the current best solution to guide the search direction. This not only enhances the efficiency with which the algorithm exploits the current search area but also balances the dynamics between exploration and exploitation, ensuring that potentially competitive new areas are not overlooked in the pursuit of optimal solutions.

The remainder of this research is structured as follows: The second section introduces the Black-winged kite's attack strategy and migration behavior (Wu et al. 2023) and develops a mathematical model based on them. The third section analyzes 59 benchmark functions and the test results. Five real-world engineering cases are presented in the fourth section, and the outcomes are examined. This article is summarized, and prospects are suggested in the fifth section.

2 The black-winged kite algorithm (BKA)

In this section, a naturally inspired algorithm called the BKA is proposed.

2.1 Inspiration and behavior of black-winged kites

The black-winged kite is a small bird with a blue-grey upper body and a white lower body. Its notable features include migration and predatory behavior (Ramli and Fauzi 2018). Black-winged kites feed on small mammals, reptiles, birds, and insects, possess strong hovering abilities, and achieve extraordinary hunting success (Wu et al. 2023). Inspired by their hunting skills and migration habits, we established an algorithmic model based on black-winged kites.

2.2 Mathematical model and algorithm

The development of BKA as a simple and effective meta-heuristic optimization method is described in this section. We model the migration and attack stages of the proposed BKA on the black-winged kite's attack strategy and migration behavior. The pseudocode of BKA, presented in Fig. 2, describes the execution process of the algorithm: it lays out the steps and operations for solving a given problem and refines the result through iteration and adjustment.
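As a reading aid alongside the pseudocode of Fig. 2, the overall flow can be sketched in Python. This is a non-authoritative sketch based on our reading of this section: the greedy replacement step, the use of a random flock member's fitness in the migration test, and boundary clipping are assumptions not stated in the text, and all names are illustrative.

```python
import numpy as np

def bka(obj, lb, ub, pop=30, dim=5, T=500, p=0.9, seed=0):
    rng = np.random.default_rng(seed)
    BK = lb + rng.random((pop, dim)) * (ub - lb)       # initialization, Eq. (2)
    fit = np.apply_along_axis(obj, 1, BK)
    leader = BK[np.argmin(fit)].copy()                 # leader selection, Eqs. (3)-(4)
    for t in range(1, T + 1):
        n = 0.05 * np.exp(-2.0 * (t / T) ** 2)         # Eq. (6)
        for i in range(pop):
            r = rng.random()
            if p < r:                                  # attack phase, Eq. (5)
                cand = BK[i] + n * (1 + np.sin(r)) * BK[i]
            else:
                cand = BK[i] + n * (2 * r - 1) * BK[i]
            j = rng.integers(pop)                      # random flock member (assumed)
            c = rng.standard_cauchy(dim)               # Cauchy mutation C(0,1)
            m = 2 * np.sin(r + np.pi / 2)              # Eq. (8)
            if fit[i] < fit[j]:                        # migration phase, Eq. (7)
                cand = cand + c * (cand - leader)
            else:
                cand = cand + c * (leader - m * cand)
            cand = np.clip(cand, lb, ub)               # boundary handling (assumed)
            f_new = obj(cand)
            if f_new < fit[i]:                         # greedy replacement (assumed)
                BK[i], fit[i] = cand, f_new
        leader = BK[np.argmin(fit)].copy()
    return leader, float(fit.min())
```

Each phase is detailed, with its governing equations, in the subsections that follow.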

Fig. 2 Pseudocode of BKA

Fig. 3 a A black-winged kite hovering in the air; b a black-winged kite rushing towards its prey at great speed

2.2.1 Initialization phase

In BKA, creating a set of random solutions is the first step in initializing the population. The following matrix can be used to represent the location of every Black-winged kite (BK):

$$BK = \left[ {\begin{array}{*{20}c} {BK_{1,1} } & {BK_{1,2} } & \cdots & {BK_{1,dim} } \\ {BK_{2,1} } & {BK_{2,2} } & \cdots & {BK_{2,dim} } \\ \vdots & \vdots & \ddots & \vdots \\ {BK_{pop,1} } & {BK_{pop,2} } & \cdots & {BK_{pop,dim} } \\ \end{array} } \right],$$
(1)

where pop is the number of potential solutions, dim is the dimension of the given problem, and $BK_{ij}$ is the jth dimension of the ith black-winged kite. The position of each black-winged kite is distributed uniformly:

$$X_{i} = BK_{lb} + rand(BK_{ub} - BK_{lb} ),$$
(2)

where i is an integer between 1 and pop, $BK_{lb}$ and $BK_{ub}$ are the lower and upper bounds of the ith black-winged kite in the jth dimension, respectively, and rand is a value chosen at random from [0, 1].

In the initialization process, BKA selects the individual with the best fitness value as the leader XL in the initial population, which is considered the optimal location of the Black-winged kites. Here is the mathematical representation of the initial leader XL using the minimum value as an example.

$$f_{best} = \min (f(X_{i} ))$$
(3)
$$X_{L} = X(find(f_{best} == f(X_{i} )))$$
(4)
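To make the initialization phase concrete, the following is a minimal NumPy sketch of Eqs. (1)–(4). The sphere function used as the objective and all function names are illustrative, not part of the original algorithm description.

```python
import numpy as np

def initialize_population(pop, dim, lb, ub, rng):
    # Eq. (2): each black-winged kite is placed uniformly at random in [lb, ub]
    return lb + rng.random((pop, dim)) * (ub - lb)

def select_leader(BK, objective):
    # Eqs. (3)-(4): the individual with the minimum fitness becomes the leader XL
    fitness = np.array([objective(x) for x in BK])
    best = int(np.argmin(fitness))
    return BK[best].copy(), fitness[best]

rng = np.random.default_rng(42)
BK = initialize_population(pop=30, dim=5, lb=-10.0, ub=10.0, rng=rng)
leader, f_best = select_leader(BK, lambda x: float(np.sum(x ** 2)))  # sphere objective
```

With scalar bounds, the same interval applies to every dimension; per-dimension bounds would be length-dim arrays, which NumPy broadcasts in the same expression.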

2.2.2 Attacking behavior

As a predator of small grassland mammals and insects, the black-winged kite adjusts its wing and tail angles according to wind speed during flight, hovers quietly to observe prey, and then dives rapidly to attack. This strategy includes different attack behaviors for global exploration and search. Figure 3a shows a black-winged kite hovering in the air, spreading its wings to maintain balance, and Fig. 3b shows it rushing towards its prey at extremely high speed. Figure 4a shows the black-winged kite hovering in the air, waiting to attack, while Fig. 4b shows it hovering in the air, searching for prey. The attack behavior of black-winged kites is modeled mathematically as follows:

$$y_{t + 1}^{i,j} = \left\{ {\begin{array}{*{20}c} {y_{t}^{i,j} + n\left( {1 + \sin (r)} \right) \times y_{t}^{i,j} } & {p < r} \\ {y_{t}^{i,j} + n \times (2r - 1) \times y_{t}^{i,j} } & {else} \\ \end{array} } \right.$$
(5)
$$n = 0.05 \times e^{{ - 2 \times \left( {\tfrac{t}{T}} \right)^{2} }}$$
(6)
Fig. 4 Two attack strategies of black-winged kites: a hovering in the air, waiting to attack; b hovering in the air, searching for prey

The following is a definition of the characteristics of Eqs. (5) and (6):

  • $y_{t}^{i,j}$ and $y_{t + 1}^{i,j}$ represent the position of the ith black-winged kite in the jth dimension at the tth and (t + 1)th iteration steps, respectively.

  • r is a random number that ranges from 0 to 1, and p is a constant value of 0.9.

  • T is the total number of iterations, and t is the number of iterations that have been completed so far.
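A vectorized sketch of the attack update of Eqs. (5) and (6) might look as follows; drawing one random r per position component is our implementation choice, since the equations as written do not fix this detail.

```python
import numpy as np

def attack_update(y, t, T, rng, p=0.9):
    # Eq. (6): n decays nonlinearly from 0.05 at t = 0 towards 0.05 * e^(-2) at t = T
    n = 0.05 * np.exp(-2.0 * (t / T) ** 2)
    r = rng.random(y.shape)                  # one random number per component
    hover = y + n * (1.0 + np.sin(r)) * y    # Eq. (5), first case (p < r)
    dive = y + n * (2.0 * r - 1.0) * y       # Eq. (5), second case
    return np.where(p < r, hover, dive)
```

Because p = 0.9 and r is uniform on [0, 1], the first case fires for roughly 10% of the components, so the diving behavior dominates.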

2.2.3 Migration behavior

Bird migration is a complex behavior influenced by environmental factors such as climate and food supply (Flack et al. 2022). Birds migrate to adapt to seasonal changes; many migrate south from the north in winter to obtain better living conditions and resources (Lees and Gilroy 2021). Migration is usually led by a leader, whose navigation skills are crucial to the success of the flock. We propose a hypothesis based on bird migration: if the fitness value of the current individual is less than that of a random individual, the leader gives up leadership and joins the migrating population, indicating that it is not suited to lead the population forward (Cheng et al. 2022). Conversely, if the fitness value of the current individual is greater than that of the random individual, it guides the population until it reaches its destination. This strategy dynamically selects capable leaders to ensure a successful migration. Figure 5 shows how the leading bird changes during the migration of black-winged kites. The migration behavior of black-winged kites is modeled mathematically as follows:

$$y_{t + 1}^{i,j} = \left\{ {\begin{array}{*{20}c} {y_{t}^{i,j} + C(0,1) \times \left( {y_{t}^{i,j} - L_{t}^{j} } \right)} & {F_{i} < F_{ri} } \\ {y_{t}^{i,j} + C(0,1) \times \left( {L_{t}^{j} - m \times y_{t}^{i,j} } \right)} & {else} \\ \end{array} } \right.$$
(7)
$$m = 2 \times \sin \left( {r + \pi /2} \right)$$
(8)
Fig. 5 The strategic changes of black-winged kites during migration

The attributes of Eqs. (7) and (8) are defined as follows:

  • $L_{t}^{j}$ represents the leading black-winged kite in the jth dimension at the tth iteration so far.

  • $y_{t}^{i,j}$ and $y_{t + 1}^{i,j}$ represent the position of the ith black-winged kite in the jth dimension at the tth and (t + 1)th iteration steps, respectively.

  • $F_{i}$ represents the fitness value of the current position of the ith black-winged kite at the tth iteration.

  • $F_{ri}$ represents the fitness value of a random position obtained by any black-winged kite at the tth iteration.

  • C(0,1) represents the Cauchy mutation (Jiang et al. 2023), defined as follows:

A one-dimensional Cauchy distribution is a continuous probability distribution with two parameters. The following equation illustrates the probability density function of the one-dimensional Cauchy distribution:

$$f(x,\delta ,\mu ) = \frac{1}{\pi }\frac{\delta }{{\delta^{2} + (x - \mu )^{2} }},\quad - \infty < x < \infty$$
(9)

When δ = 1 and μ = 0, its probability density function takes the standard form, given by the following formula:

$$f(x) = \frac{1}{\pi }\frac{1}{{x^{2} + 1}},\quad - \infty < x < \infty$$
(10)
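Putting Eqs. (7)–(10) together, one plausible per-individual migration step is sketched below in NumPy; `standard_cauchy` draws from the standard Cauchy density of Eq. (10), and the function name and argument layout are illustrative.

```python
import numpy as np

def migration_update(y, fit, fit_rand, leader, rng):
    r = rng.random()
    m = 2.0 * np.sin(r + np.pi / 2.0)      # Eq. (8)
    c = rng.standard_cauchy(y.shape)       # C(0,1): standard Cauchy variates, Eq. (10)
    if fit < fit_rand:                     # Eq. (7), first case (F_i < F_ri)
        return y + c * (y - leader)
    return y + c * (leader - m * y)        # Eq. (7), second case
```

The heavy tails of the Cauchy distribution occasionally produce very large steps, which is precisely what lets the mutation escape local optima.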

2.3 The balance and diversity analyses

Maintaining a good balance between global and local search is an important factor in finding the optimal solution, and it involves both exploring and exploiting the search space. The proportions of global and local search must be balanced so that the algorithm does not converge prematurely and can still find the best solution. To this end, the algorithm uses the parameter p to switch between the two attack behaviors. Meanwhile, the variable n decreases nonlinearly as the iteration count grows, which shifts the algorithm from global search towards local search, enabling it to find the optimal solution faster while avoiding local optima, and thus to solve practical problems better.
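The claimed nonlinear decay of n is easy to verify numerically from Eq. (6); the iteration budget T = 100 below is only an example.

```python
import math

def n_factor(t, T):
    # Eq. (6): n = 0.05 * exp(-2 * (t / T)^2)
    return 0.05 * math.exp(-2.0 * (t / T) ** 2)

values = [n_factor(t, 100) for t in (0, 25, 50, 75, 100)]
# n starts at 0.05 and decays slowly at first, then faster,
# shifting the search from global exploration to local exploitation
```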

Diversity is very important in intelligent optimization algorithms: it helps the population avoid local optima and provides a wide search range, increasing the chances of discovering the global optimum. As in most intelligent optimization algorithms, the individuals of the initial population are generated randomly within a given range, so their positions and objective values differ, giving the population a certain diversity and better exploration of the solution space. Moreover, during the iterations, the Cauchy mutation strategy and reasonable parameter settings increase the algorithm's diversity, improve its global search ability, and help it avoid local optima.

2.4 Computational complexity

We can assess the time and space resources needed for algorithms to handle large-scale problems using computational complexity, a crucial indicator of algorithm efficiency. To better understand the effectiveness and viability of the proposed BKA algorithm, we will conduct a thorough analysis of the time and spatial complexity of the algorithm in this section.

2.4.1 Time complexity

During initialization, the BKA algorithm creates a set of potential solutions used for the subsequent search and optimization. The cost of initialization is typically determined by the initialization method and the size of the problem; with M denoting the number of candidate solutions, the computational complexity of the initialization procedure is O(M). This process, executed once before the algorithm starts, involves generating initial solutions, setting parameters, and other necessary operations. Second, fitness evaluation, a crucial component of BKA, assesses the effectiveness and quality of candidate solutions; its complexity depends on the problem considered and the particular evaluation method. For specific problems, fitness assessment may involve complex computation or simulation, with a time complexity of O(T × M) + O(T × M × D), where T is the maximum number of iterations and D is the dimension of the problem. Finally, updating the black-winged kites is a critical step that generates new candidate solutions based on the current solutions and neighborhood search; its cost depends on the neighborhood search and the update strategy employed. Therefore, the overall runtime complexity of BKA is O(M × (T + T × D + 1)).

2.4.2 Space complexity

The space complexity of the BKA algorithm refers to the additional storage required during its operation, and it is relatively low. The primary space consumption comes from storing candidate solutions, related intermediate results, and temporary variables. Specifically, BKA typically only needs to store the current best solution, the candidate solutions, and some data structures related to the search and optimization process. In the most straightforward implementation, the space complexity of BKA is approximately O(M), where M is the number of candidate solutions, because the algorithm must allocate storage for each candidate solution and update and compare them during iteration. Additional storage is needed for auxiliary variables and intermediate results. Note that the space complexity can change with the particulars of the problem and the implementation: it may increase if more complex data structures or more intermediate results are stored.

3 Experimental results and discussion

This section conducts simulation studies and assesses the effectiveness of BKA in optimization. The experiments are conducted on MATLAB R2022b with a 3.20 GHz 64-bit Core i9 processor and 16 GB of main memory.

3.1 The benchmark set and compared algorithms

The ability of BKA to handle a variety of objective functions is tested in this article using 59 standard benchmark functions: 18 classical benchmark functions, the CEC-2017 test set (Wu et al. 2016), and the CEC-2022 test set (Yazdani et al. 2021). The test results are compared with those of well-known algorithms, including MVO, SCA, GWO, MPA, RIME, ALO, WOA, STOA, DO, GJO, PSO, AVOA, SHO, SCSO, SSA, AO, and COA, to assess the quality of the best solution offered by BKA. The control parameters of these algorithms are all set to the values suggested by their proposers. Three evaluation measures are used to analyze the algorithm's performance thoroughly: average (Avg), standard deviation (Std), and ranking.

(1) The definition of standard deviation is as follows:

$$Std = \sqrt {\frac{1}{m}\sum\limits_{i = 1}^{m} {\left( {F_{i} - Avg} \right)^{2} } }$$
(11)

(2) Ranking: ranking depends on the average fitness value of the algorithm. The algorithm is ranked higher when the average value is lower.
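Eq. (11) and the ranking rule can be applied directly to the per-run final fitness values. The helper below is an illustrative Python sketch (the function and variable names are ours); it uses the population form of the standard deviation, with 1/m under the root, exactly as in Eq. (11), and ranks algorithms by ascending Avg.

```python
import numpy as np

def summarize_runs(final_fitness_per_algorithm):
    """Compute Avg and Std (Eq. 11) over each algorithm's m independent runs,
    then rank the algorithms: a lower Avg earns a higher (smaller) rank."""
    stats = {}
    for name, runs in final_fitness_per_algorithm.items():
        f = np.asarray(runs, dtype=float)
        avg = f.mean()
        std = np.sqrt(np.mean((f - avg) ** 2))  # 1/m inside the root, as in Eq. (11)
        stats[name] = (avg, std)
    ordered = sorted(stats, key=lambda n: stats[n][0])  # lower Avg -> better rank
    ranks = {name: i + 1 for i, name in enumerate(ordered)}
    return stats, ranks
```

For example, an algorithm whose runs end at {1.0, 1.2} (Avg 1.1, Std 0.1) is ranked above one ending at {2.0, 2.2}.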

3.2 Sensitivity analysis

In this section, experiments and analysis are conducted on the algorithm's internal parameters. The key internal parameter of the BKA algorithm is discussed and analyzed to confirm that the proposed default is optimal and reasonable. Specifically, we vary the parameter p of Sect. 2.2.2, which takes effect in the attack mechanism: it controls the switching between the two attack behaviors and is an important parameter affecting the overall accuracy and stability of the algorithm. The parameter p is set to 0.3, 0.5, and 0.7 and compared with the original setting p = 0.9 to show the impact of this parameter on BKA's performance. The comparative experiments were conducted within a unified evaluation framework, with the same population size of 30 and 30 independent runs. The experimental results are shown in Tables 1 and 2.
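The role of p can be illustrated with a one-line gate: each kite draws a uniform random number and performs the first attack behavior when the draw falls below p, otherwise the second. This is a schematic sketch only; the two behavior labels are placeholders, not the published update rules of Sect. 2.2.2.

```python
import random

def choose_attack(p, rng):
    """Gate for the attack-phase switch: a uniform draw below p selects the
    first attack behavior, otherwise the second (labels are illustrative)."""
    return "behavior_1" if rng.random() < p else "behavior_2"

# With p = 0.9, roughly 90% of draws select the first behavior.
rng = random.Random(0)
counts = {"behavior_1": 0, "behavior_2": 0}
for _ in range(10_000):
    counts[choose_attack(0.9, rng)] += 1
```

Lowering p to 0.3, 0.5, or 0.7 shifts this split accordingly, which is exactly the knob varied in Tables 1 and 2.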

Table 1 The influence of parameter p on test results (CEC-2017)
Table 2 The influence of parameter p on test results (CEC-2022)

From Table 1, we can see that for the CEC-2017 test set, BKA achieved the best results among 21 functions at parameter p = 0.9, achieved the same optimal results as p = 0.7 on F23, and did not achieve the best results on only 7 functions. From Table 2, we can see that for the CEC-2022 test set, BKA achieved the best results among 7 functions at parameter p = 0.9, achieved the same optimal results as p = 0.5 on the F3 function, achieved the same optimal results as p = 0.7 on the F4 function, and achieved the same optimal results as p = 0.5 and p = 0.3 on the F12 function. Only two functions did not achieve the optimal results. Through a comprehensive analysis of Tables 1 and 2, we believe that the BKA algorithm can achieve better results in processing optimization when the parameter p = 0.9.

3.3 The results of the algorithm on different test sets

This section used several test sets to gauge how well the recently created meta-heuristic algorithm BKA handled global optimization issues.

3.3.1 Evaluation of 18 functions and qualitative analysis

This test set includes both unimodal and multimodal functions to thoroughly assess the performance of the BKA algorithm (Xie and Huang 2021). The unimodal functions (F1–F9) have a single global optimum and are used to verify the efficacy of the optimization algorithm. The multimodal functions (F10–F18) have many local extrema and are used to assess the algorithm's exploratory power. Tables 3 and 4 provide detailed information on the 18 test functions. The results of all algorithms were obtained using 30 search agents with 500 iterations and 10 independent runs.

Table 3 Unimodal test functions
Table 4 Multimodal test functions

Table 5 shows the results of BKA and the comparison algorithms on the 18 test functions. The ranking in Table 5 is determined by the value of Avg; the lower the value, the higher the ranking. In Figs. 6 and 7, where the vertical axis denotes the fitness value and the horizontal axis the number of iterations, the convergence curves of BKA and the other optimization algorithms at dimension 10 are contrasted. In the unimodal functions, BKA exhibits an advantage over the other algorithms on F1, F3, and F4, even surpassing them by tens of orders of magnitude. However, on F2, F5, and F9, the advantage of BKA is not as apparent as before. On F6, the RIME algorithm has a weak advantage over BKA; on F7 and F8, the WOA algorithm is slightly better than BKA. Although BKA did not achieve the optimal value on all unimodal functions, the difference between BKA and the best-performing algorithm is minimal on those functions where BKA did not achieve the optimal solution. It should be emphasized that although BKA has significant advantages on some unimodal functions, its performance is not entirely dominant compared to the other algorithms; in specific problem domains and function types, other algorithms remain competitive with similar performance. On the multimodal functions, the BKA algorithm achieved the theoretical optimal value of 0 on F10, F11, F13, F15, and F17. On F12 and F18, the BKA algorithm achieved results similar to those of the other algorithms. While the other algorithms are stuck in local optima on F14 and F16, the BKA algorithm still achieves excellent results. These findings show that the BKA algorithm performs well in global search and optimization when dealing with multimodal functions: on most of them, it can accurately find the theoretical optimal value, demonstrating its power in global optimization.
The BKA algorithm's results in functions F12 and F18 are comparable to those of other algorithms, but they still exhibit the BKA algorithm's effectiveness and robustness in handling complex problems. In contrast to other algorithms, the BKA algorithm can avoid hitting local optima and produce results close to the ideal outcome.

Table 5 Simulation results of BKA and comparative algorithm on F1-F18
Fig. 6
figure 6

Convergence analysis of the proposed BKA and competitor algorithms in unimodal functions in dimension 10

Fig. 7
figure 7

Convergence analysis of the proposed BKA and competitor algorithms in multimodal functions in dimension 10

Figure 8 shows the search surface of each benchmark function, the historical search process of BKA, the average fitness curve, and the average convergence curve. The first column displays the search space of each function; observing the search surface provides a precise and intuitive understanding of the function's characteristics. The graph shows that F1, F2, and F8 have only one extremum, while F12, F13, and F17 have multiple extrema. The second column depicts the historical search process of BKA on a global scale, where the red dots represent the positions of the optimal individuals in each generation and the blue dots the positions of ordinary individuals; observing these images gives a more intuitive understanding of the distribution of BKA and the changes in individual positions during the iterations. The third column shows the average fitness of BKA, i.e., the mean objective value of the population at each iteration, which reflects the average trend of the population's evolution. As seen in the images, the average fitness value of the BKA algorithm exhibits strong oscillations in the early iterations, which gradually weaken and flatten out. This reveals that the BKA algorithm explores fully in its early stages and searches extensively across the global space.

Fig. 8
figure 8

Search space, search history, average fitness, and convergence curve of BKA algorithm

Meanwhile, in the later stages of the iteration, we can also observe significant short-term oscillations. These reflect the BKA algorithm's continuous attempts to jump out of local optima in the later stage to find higher accuracy and better solutions, indicating that the algorithm has a certain degree of convergence while continuously striving to improve the quality of the solution. Overall, the BKA algorithm exhibits a strategy of exploration before exploitation during the optimization process: it uses large oscillation amplitudes in the early stages to identify potential optimization directions, while in the later stage it focuses more on fine-tuning, constantly trying to escape local optima to converge to higher accuracy and better results. The fourth column displays the average convergence curve, which shows the optimal solution obtained by the BKA algorithm throughout the entire iteration process. The multimodal function curves decrease gradually during convergence, while the unimodal function curves decrease rapidly as the number of iterations rises. This trend reflects the ability of the BKA algorithm to quickly exit local extrema and inch gradually closer to the global optimal value during the optimization process.

3.3.2 Evaluation of the CEC-2017 suite test

The CEC-2017 suite is chosen as the testing project in this experiment to gauge BKA's effectiveness in resolving optimization issues. The CEC-2017 set contains four different kinds of benchmark functions. It should be noted that the instability of the F2 function may lead to unpredictable optimization results, making algorithm evaluation inconsistent; the F2 function is therefore removed from the CEC-2017 test suite to guarantee the test set's validity and consistency. The search domain for all functions in this test suite is [− 100, 100], and each test function has ten dimensions. The simulation results of all algorithms are obtained using 30 search agents with 1000 iterations and 10 independent runs.

In calculating the CEC-2017 test set, the outcomes of our algorithm and the comparison algorithms are shown in Table 6, with the best effect denoted in bold. From the data in Table 6, it can be concluded that on the 29 test functions of CEC-2017, the BKA algorithm achieved 21 optimal results, accounting for 72.4% and surpassing the other eight algorithms. The box plot, named for its resemblance to a box, is a typical statistical chart: it conveys the degree of dispersion and the distribution interval of univariate data clearly and intuitively while highlighting abnormal values. The box's upper and lower boundaries correspond to the upper and lower quartiles of the data, and the line inside the box marks the median. The shorter the box, the more concentrated the data; the longer the box, the more scattered the data and the worse the stability. Figure 9 shows the box plots of the BKA algorithm and its comparison algorithms on F3, F8, F9, F10, F14, F15, F20, and F26. By observing the chart, we can draw some conclusions. Firstly, the box plots show that the BKA, GJO, PSO, AVOA, and SHO algorithms have almost no outliers, indicating their high stability: on these benchmark functions, their performance is relatively consistent, without significant fluctuations or anomalies. Secondly, the box of the BKA algorithm is shorter and sits at a lower position, meaning that the spread of its solution set on these benchmark functions is small and its solution accuracy is relatively high.

Table 6 Simulation results of BKA and comparative algorithm on CEC-2017 test set
Fig. 9
figure 9

Boxplot of different algorithms on partial functions of CEC-2017 in dimension 10

The heat map is a graphical representation based on color coding, which conveys the magnitude of the data through color intensity and hue, allowing readers to grasp the correlations and trends in the data more intuitively. In Fig. 10, the darker the color, the greater the error of the algorithm. The figure indicates that all algorithms perform poorly on functions F1, F2, F12, F13, F15, F18, F19, and F30, indicating that these functions are relatively difficult. In addition, the SSA algorithm has significant errors on most functions, showing that its performance is weak on these problems. Figure 11 shows the total running time of each algorithm on the CEC-2017 test set. Observing the graph, the running time of the BKA algorithm is at a relatively high level, but it differs by no more than 20 s from PSO, which has the shortest running time. However, it is encouraging that on this test set the performance of the BKA algorithm is significantly better than that of PSO and GJO. This indicates that although the BKA algorithm has a slightly longer runtime, it performs well.

Fig. 10
figure 10

The error performance of different algorithms on the CEC-2017 test set

Fig. 11
figure 11

The total running time of BKA and its comparison algorithm on CEC-2017

3.3.3 Evaluation of the CEC-2022 objective functions

This section further conducts experiments on the algorithm using the most recent CEC-2022 test set to highlight the uniqueness and superiority of the BKA algorithm. The CEC-2022 set includes four different kinds of benchmark functions. In the CEC-2022 test suite, the search domain for all functions is [− 100, 100]. The CEC-2022 test set provides an updated test set and evaluation metrics aimed at comprehensively evaluating the performance of optimization algorithms. We can better understand its performance in the latest environment by comparing the BKA algorithm with the previously mentioned algorithms. The simulation results of all algorithms are obtained using 30 search agents with 1000 iterations and 10 independent runs.

Table 7 shows that the BKA algorithm outperformed the other eight algorithms by achieving the best results on 8 of the 12 test functions, or 66.7% of the total. Figure 12 shows that BKA, GJO, PSO, and AVOA perform well on F1 but all have outliers, indicating that their performance is relatively high yet not stable enough. Except for SSA, the other algorithms perform very well on functions F2 and F6. In Fig. 13, it can be seen that BKA performs stably on all functions, proving that BKA is robust, whereas SSA performs poorly across the functions and cannot handle these challenging tasks. According to the results shown in Fig. 14, we can observe the error behavior of the different algorithms. Large areas of high error appear especially in the color distribution of the heat maps for the F1 and F6 functions, indicating that these two functions pose substantial challenges and that optimizing them is a relatively complex task for most algorithms. The graph also shows that, aside from the SSA and COA algorithms, the performance of the other algorithms is generally reasonable: they achieve lower error levels when processing F1 and F6, demonstrating relatively good performance. Figure 15 shows the total running time of each algorithm on the CEC-2022 test set. The running time of the BKA algorithm is at a relatively high level but differs by no more than 10 s from PSO, which has the shortest running time. This indicates that although the BKA algorithm has a slightly longer runtime, it performs well.

Table 7 Simulation results of BKA and comparative algorithm on CEC-2022 test set
Fig. 12
figure 12

Box plots of different algorithms on the CEC-2022 test set (F1–F6)

Fig. 13
figure 13

Box plots of different algorithms on the CEC-2022 test set (F7–F12)

Fig. 14
figure 14

The error performance of different algorithms on the CEC–2022 test set

Fig. 15
figure 15

The total running time of BKA and its comparison algorithm on CEC–2022

In summary, the reasons why the BKA algorithm can achieve the best results are as follows: The BKA algorithm adopts the Cauchy distribution strategy and has a strong global search ability. Through the global search strategy, the BKA algorithm is highly likely to discover the global optimal solution. The BKA algorithm introduces a leader strategy. By selecting individuals with high fitness values as leaders, others learn and improve the solution through interaction with the leader.

3.4 Nonparametric statistical analysis

To comprehensively evaluate the performance of BKA, we use the Wilcoxon signed-rank test and the Friedman test to compare BKA with its competitor algorithms. The Wilcoxon signed-rank test is a non-parametric method for comparing two sets of related samples; it determines whether the difference in medians between the two samples is significant. The Friedman test is a non-parametric test for comparing multiple sets of related samples; it determines whether there is a significant difference among their medians.
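Both tests are available in SciPy. The snippet below is a sketch on synthetic data (the arrays are made up for illustration, not the paper's results), showing how the "+"/"−"/"=" marks at α = 0.05 used in Tables 8 and 9 can be derived from per-function errors.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(1)
# Hypothetical per-function mean errors for three algorithms (20 functions).
bka = rng.uniform(0.0, 1.0, size=20)
pso = bka + rng.uniform(0.1, 0.5, size=20)  # constructed to be worse than "bka"
gjo = bka + rng.uniform(0.2, 0.6, size=20)

# Pairwise Wilcoxon signed-rank test at alpha = 0.05:
# "+" if the reference algorithm is significantly better (lower error).
stat, p_value = wilcoxon(bka, pso)
mark = "+" if p_value < 0.05 and np.median(bka - pso) < 0 else "="

# Friedman test across all algorithms (needs at least 3 related samples).
f_stat, f_p = friedmanchisquare(bka, pso, gjo)
```

Average Friedman ranks per algorithm (as in Tables 10 and 11) follow from ranking the algorithms within each function and averaging the ranks.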

Tables 8 and 9 list the results of the Wilcoxon test for the different algorithms on the different test sets, all at the 95% significance level (α = 0.05). In Tables 8 and 9, the symbol "+" indicates that the reference algorithm performs better than the comparison algorithm, the symbol "−" indicates that the reference algorithm is not as good as the comparison algorithm, and the symbol "=" indicates no significant difference between the reference and comparison algorithms. Observing the last row of each table, the BKA algorithm has few "−" entries and many "+" and "=" entries, which indicates that, in most cases, the performance of the BKA algorithm is not weaker than that of the comparison algorithms. Tables 10 and 11 list the Friedman test rankings and average rankings of the different algorithms on the different test sets. The BKA algorithm ranks first on most benchmark functions and first in the average rankings. These statistics demonstrate not only the BKA algorithm's excellent performance on individual benchmark functions but, more importantly, allow its practicality across multiple optimization problems to be evaluated more reliably through its overall performance.

Table 8 The Wilcoxon test results of BKA and other comparative algorithms on the CEC-2017 test set (α = 0.05)
Table 9 The Wilcoxon test results of BKA and other comparative algorithms on the CEC-2022 test set (α = 0.05)
Table 10 The Friedman test ranking of BKA and its comparison algorithm on the CEC-2017 test set
Table 11 The Friedman test ranking of BKA and its comparison algorithm on the CEC-2022 test set

3.5 Effectiveness analysis

The overall effectiveness (OE) of the BKA algorithm and the other contender algorithms is computed by Eq. (12) and reported in Table 12, where N is the total number of test functions and \(L_{i}\) is the number of test functions on which the i-th algorithm loses (Nadimi-Shahraki and Zamani 2022). From Table 12, it can be seen that BKA demonstrated its effectiveness with 70.7% excellent results on the CEC-2017 and CEC-2022 test sets, far surpassing the other comparative algorithms.

$$OE_{i} (\% ) = \frac{{N - L_{i} }}{N} \times 100$$
(12)
Table 12 Effectiveness of the BKA and other competitor algorithms
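Eq. (12) is a one-line computation. In the example below, N = 41 (29 CEC-2017 functions plus 12 CEC-2022 functions) and a loss count of L = 12 for BKA are inferred from the 70.7% figure quoted above, so treat them as assumed illustration values rather than figures taken from Table 12.

```python
def overall_effectiveness(n_functions, losses):
    """Eq. (12): OE_i(%) = (N - L_i) / N * 100, where L_i is the number of
    test functions on which algorithm i loses."""
    return {name: (n_functions - l) / n_functions * 100.0
            for name, l in losses.items()}

# 29 CEC-2017 functions + 12 CEC-2022 functions = 41 in total (assumed).
oe = overall_effectiveness(41, {"BKA": 12})
```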

3.6 Limitation analysis

Although BKA has achieved good results on optimization problems, it cannot be ignored that the algorithm still has some shortcomings. It has not achieved optimal results on certain types of optimization problems and has shown insufficient stability across multiple runs. Specifically, this instability may stem from an uneven distribution of the initial parameters, causing the search strategy to vary between runs. In addition, on complex problems with high-dimensional search spaces, the algorithm may converge prematurely or repeatedly during the iterations, reducing the consistency of the results. Meanwhile, although slightly superior in performance, BKA runs relatively slowly, which may become a disadvantage in application scenarios that require fast iteration. To address these limitations, subsequent research should further adjust the initial value distribution, refine the exploration-exploitation mechanism, and consider acceleration strategies, so as to improve the stability and efficiency of the algorithm and better adapt it to various complex optimization problems.

4 BKA for solving engineering problems

This section evaluates how well BKA performs in resolving five elaborate engineering design problems: the design of a tension/compression spring, a pressure vessel, a welded beam, a speed reducer, and a three-bar truss. These well-known engineering problems contain numerous equality and inequality constraints, and the ability of BKA to optimize real-world, constrained problems is evaluated from the perspective of constraint handling. Here, the constrained problems are transformed into unconstrained ones using the straightforward death-penalty method.

Solving constrained optimization problems is a crucial task in both optimization theory and applications. There are numerous methods for handling constraints, including special operators, decoder functions, feasibility-preserving representations, repair algorithms, and penalty functions. The penalty function method is a popular technique from optimization theory: it introduces a penalty term that turns the constraint conditions into a component of the objective function, thereby converting the original constrained problem into an unconstrained one. By adjusting the shape and parameters of the penalty function, the optimal solution of the original problem can then be sought without explicitly handling the constraints. This study resolves these practical engineering problems using the penalty function method.
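A minimal sketch of the death-penalty scheme described above: any candidate violating a constraint g(x) ≤ 0 is assigned a prohibitively large fitness, so an unconstrained optimizer simply never retains it. Function names and the penalty magnitude are our illustrative choices.

```python
def death_penalty(obj, constraints, penalty=1e10):
    """Wrap a constrained objective for an unconstrained optimizer: a point
    violating any constraint g(x) <= 0 receives a huge fixed penalty."""
    def wrapped(x):
        if any(g(x) > 0 for g in constraints):
            return penalty          # "death penalty": infeasible points lose
        return obj(x)
    return wrapped

# Toy example: minimize x^2 subject to x >= 1 (written as 1 - x <= 0).
f = death_penalty(lambda x: float(x[0] ** 2), [lambda x: 1.0 - x[0]])
```

A feasible point returns its true objective, while an infeasible one returns the penalty, which is how each engineering problem below is handed to BKA as an unconstrained task.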

4.1 Pressure vessel design

This engineering challenge aims to reduce the cost of producing cylindrical pressure vessels while meeting four constraints. This problem's resolution can be mathematically stated as follows:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ,h_{4} ] = [T_{s} ,T_{h} ,R,L]\)

$${\text{Minimize}}\quad f(H) = 0.6224h_{1} h_{3} h_{4} + 1.7781h_{2} h_{3}^{2} + 3.1661h_{1}^{2} h_{4} + 19.84h_{1}^{2} h_{3}$$
(13)
$${\text{Subject to}}:l_{1} (H) = 0.0193h_{3} - h_{1} \le 0,$$
(14)
$$l_{2} (H) = 0.00954h_{3} - h_{2} \le 0,$$
(15)
$$l_{3} (H) = 1,296,000 - \pi h_{3}^{2} h_{4} - \frac{4}{3}\pi h_{3}^{3} \le 0,$$
(16)
$$l_{4} (H) = - 240 + h_{4} \le 0$$
(17)

Variables range \(0 \le h_{j} \le 100,\;j = 1,2;\quad 10 \le h_{j} \le 200,\;j = 3,4\)
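Eqs. (13)-(17) translate directly into code. The sketch below (our naming) can be checked against the BKA optimum reported next, H = (0.778433, 0.384690, 40.319619, 200), which yields a cost of about 5887.36 with all four constraints satisfied to within rounding of the reported variables.

```python
import math

def pv_cost(h):
    """Eq. (13): manufacturing cost of the cylindrical pressure vessel."""
    h1, h2, h3, h4 = h
    return (0.6224 * h1 * h3 * h4 + 1.7781 * h2 * h3**2
            + 3.1661 * h1**2 * h4 + 19.84 * h1**2 * h3)

def pv_constraints(h):
    """Eqs. (14)-(17): every entry must be <= 0 for a feasible design."""
    h1, h2, h3, h4 = h
    return [0.0193 * h3 - h1,                                   # Eq. (14)
            0.00954 * h3 - h2,                                  # Eq. (15)
            1_296_000 - math.pi * h3**2 * h4
                - (4.0 / 3.0) * math.pi * h3**3,                # Eq. (16)
            h4 - 240.0]                                         # Eq. (17)
```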

BKA was applied to this problem and obtains the optimal function value \(f(H) = 5887.364927\) with the structure variables \(H = (0.778433,\;0.384690,\;40.319619,\;200)\). Table 13 displays the optimal values and variables reached by BKA and its comparison algorithms, demonstrating how well the algorithm resolved this problem; the lower the value, the better the algorithm performs. The results indicate that BKA has discovered a new structure that achieves lower manufacturing costs than the other structures.

Table 13 The best solutions to the Pressure vessel design problem using various algorithms

4.2 Design issue with tension/compression springs

This engineering challenge aims to reduce the coil's weight while satisfying four constraints, which ensure that the coil design adheres to certain engineering limitations and requirements. The problem can be expressed mathematically as follows:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ] = [d,D,N]\)

$${\text{Minimize}} \quad f(H) = \left( {h_{3} + 2} \right) \times h_{2} h_{1}^{2}$$
(18)
$${\text{Subject to}}:l_{1} (H) = - \frac{{h_{2}^{3} h_{3} }}{{71,785h_{1}^{4} }} + 1 \le 0,$$
(19)
$$l_{2} (H) = \frac{{4h_{2}^{2} - h_{1} h_{2} }}{{12,566\left( {h_{1}^{3} h_{2} - h_{1}^{4} } \right)}} + \frac{1}{{5,108h_{1}^{2} }} - 1 \le 0,$$
(20)
$$l_{3} (H) = 1 - \frac{{140.45h_{1} }}{{h_{2}^{2} h_{3} }} \le 0,$$
(21)
$$l_{4} (H) = - 1 + \frac{{h_{1} + h_{2} }}{1.5} \le 0.$$
(22)

Variables range \(0.05 \le h_{1} \le 2,0.25 \le h_{2} \le 1.3,2 \le h_{3} \le 15\) 
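As a consistency check, Eqs. (18)-(22) in code (our naming; the third constraint uses the standard surge-frequency form 1 − 140.45h₁/(h₂²h₃) ≤ 0 assumed for the garbled Eq. (21)). At the BKA optimum reported below, the weight evaluates to ≈ 0.012670 with the constraints satisfied to within rounding of the reported variables.

```python
def spring_weight(h):
    """Eq. (18): coil weight f(H) = (h3 + 2) * h2 * h1^2."""
    h1, h2, h3 = h
    return (h3 + 2.0) * h2 * h1**2

def spring_constraints(h):
    """Eqs. (19)-(22): every entry must be <= 0 for a feasible design."""
    h1, h2, h3 = h
    return [1.0 - h2**3 * h3 / (71_785.0 * h1**4),              # Eq. (19)
            (4.0 * h2**2 - h1 * h2)
                / (12_566.0 * (h1**3 * h2 - h1**4))
                + 1.0 / (5_108.0 * h1**2) - 1.0,                # Eq. (20)
            1.0 - 140.45 * h1 / (h2**2 * h3),                   # Eq. (21), assumed form
            (h1 + h2) / 1.5 - 1.0]                              # Eq. (22)
```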

Table 14 displays the optimal values and variables that BKA and its comparison algorithm arrived at, illustrating how well the algorithm resolved this issue. BKA can obtain the optimal function value \(f(H) = 0.01267027\) with the structure variables \(H \, = \, (0.051173,0.344426,12.047782)\). The experiments and comparative analysis results demonstrate that the BKA algorithm can produce better solutions when tackling these issues. This discovery provides engineers and decision-makers with a reliable tool and method to improve the design, planning, and decision-making processes and achieve higher-quality engineering solutions.

Table 14 Tension/compression spring design problem optimal outcomes of various algorithms

4.3 Welded beam design

This engineering challenge aims to minimize the welded beam's fabrication cost while satisfying the seven constraints of Eqs. (23)-(29). The welding thickness, the length of the attached bar, the bar's height, and the bar's thickness are the four decision variables to optimize. For this engineering problem, we define an objective function representing the cost of the welded beam, namely:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ,h_{4} ] = [h,l,t,b]\)

$${\text{Minimize}}\quad f(H) = 1.10471h_{2} h_{1}^{2} + 0.04811h_{3} h_{4} \left( {14 + h_{2} } \right)$$

$${\text{Subject to}}:l_{1} (H) = - \tau_{\max } + \tau (h) \le 0,$$
(23)
$$l_{2} (H) = - \sigma_{\max } + \sigma (h) \le 0,$$
(24)
$$l_{3} (H) = - h_{4} + h_{1} \le 0,$$
(25)
$$l_{4} (H) = - 5 + 0.10471h_{1}^{2} + \left( {14 + h_{2} } \right) \times 0.04811h_{3} h_{4} \le 0,$$
(26)
$$l_{5} (H) = - h_{1} + 0.125 \le 0,$$
(27)
$$l_{6} (H) = - \delta_{\max } + \delta (h) \le 0,$$
(28)
$$l_{7} (H) = - P_{c} (h) + P \le 0,$$
(29)

where

$$\tau (h) = \sqrt {\left( {\tau^{\prime}} \right)^{2} + 2\tau^{\prime}\tau^{\prime\prime}\frac{{h_{2} }}{2R} + \left( {\tau^{\prime\prime}} \right)^{2} } ,$$
(30)
$$\tau^{\prime} = \frac{P}{{\sqrt 2 h_{1} h_{2} }},\tau^{\prime\prime} = \frac{MR}{J},$$
(31)
$$M = P\left( {L + \frac{{h_{2} }}{2}} \right),R = \sqrt {\frac{{h_{2}^{2} }}{4} + \left( {\frac{{h_{1} + h_{3} }}{2}} \right)^{2} } ,\delta (h) = \frac{{4PL^{3} }}{{Eh_{3}^{3} h_{4} }},$$
(32)
$$J = 2\left[ {\sqrt 2 h_{1} h_{2} \left\{ {\frac{{h_{2}^{2} }}{12} + \left( {\frac{{h_{1} + h_{3} }}{2}} \right)^{2} } \right\}} \right],\sigma (h) = \frac{6PL}{{h_{4} h_{3}^{2} }},$$
(33)
$$P_{c} (h) = \frac{{4.013E\sqrt {\frac{{h_{4}^{6} h_{3}^{2} }}{36}} }}{{L^{2} }}\left( {1 - \frac{{h_{3} }}{2L}\sqrt{\frac{E}{4G}} } \right)$$
(34)

Variables range \(P = 6{,}000\,{\text{lb}},\;L = 14\,{\text{in}},\;E = 30 \times 10^{6} \,{\text{psi}},\;G = 12 \times 10^{6} \,{\text{psi}},\) \(\tau_{\max } = 13{,}000\,{\text{psi}},\;\sigma_{\max } = 30{,}000\,{\text{psi}},\;\delta_{\max } = 0.25\,{\text{in}},\) \(0.1 \le h_{1} \le 2,\;0.1 \le h_{2} \le 10,\;0.1 \le h_{3} \le 10,\;0.1 \le h_{4} \le 2.\)
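The objective alone is easy to verify in code (a sketch with our naming; the constraint quantities τ, σ, δ, and P_c of Eqs. (23)-(34) are omitted for brevity). At the BKA optimum reported below, H = (0.205730, 3.470488, 9.036622, 0.205730), it evaluates to ≈ 1.724853.

```python
def welded_beam_cost(h):
    """Welded beam objective:
    f(H) = 1.10471*h2*h1^2 + 0.04811*h3*h4*(14 + h2)."""
    h1, h2, h3, h4 = h
    return 1.10471 * h2 * h1**2 + 0.04811 * h3 * h4 * (14.0 + h2)
```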

BKA can obtain the optimal function value \(f(H) = 1.724853\) with the structure variables \(H = (0.205730,\;3.470488,\;9.036622,\;0.205730)\). The results in Table 15 indicate that BKA can bring better solutions to solving such problems. After analysis and comparison, the BKA algorithm can obtain better solutions under the given constraints through its flexible heuristic search and optimization mechanisms. It can adapt to different problem characteristics and solving requirements and has a high success rate and accuracy. This discovery gives engineers and decision-makers a reliable tool and method to improve the design and decision-making process and achieve higher-quality engineering solutions.

Table 15 The Welded Beam Design Problem's best outcomes from the various algorithms

4.4 Speed reducer design problem

This issue aims to reduce the reducer device's weight while meeting 11 constraints. To describe this problem, we can use the following mathematical expression:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ,h_{4} ,h_{5} ,h_{6} ,h_{7} ] = [b,m,p,l_{1} ,l_{2} ,d_{1} ,d_{2} ]\)

$${\text{Minimize}}\quad f(H) = 0.7854h_{1} h_{2}^{2} \left( {3.3333h_{3}^{2} + 14.9334h_{3} - 43.0934} \right) - 1.508h_{1} \left( {h_{6}^{2} + h_{7}^{2} } \right) + 7.4777\left( {h_{6}^{3} + h_{7}^{3} } \right) + 0.7854\left( {h_{4} h_{6}^{2} + h_{5} h_{7}^{2} } \right)$$
(35)
$$l_{1} (H) = \frac{27}{{(h_{1} h_{2}^{2} h_{3} )}} - 1 \le 0$$
(36)
$$l_{2} (H) = \frac{397.5}{{(h_{1} h_{2}^{2} h_{3}^{2} )}} - 1 \le 0$$
(37)
$$l_{3} (H) = \frac{{1.93h_{4}^{3} }}{{(h_{1} h_{3} h_{6}^{4} )}} - 1 \le 0$$
(38)
$$l_{4} (H) = \frac{1}{{(110h_{6}^{3} )}} \times \sqrt {16.9 \times 10^{6} + (\frac{{745h_{4} }}{{h_{2} h_{3} }})^{2} } - 1 \le 0$$
(39)
$$l_{5} (H) = \frac{{1.93h_{5}^{3} }}{{(h_{2} h_{3} h_{7}^{4} )}} - 1 \le 0$$
(40)
$$l_{6} (H) = \frac{1}{{(85h_{7}^{3} )}} \times \sqrt {157.5 \times 10^{6} + (\frac{{745h_{5} }}{{h_{2} h_{3} }})^{2} } - 1 \le 0$$
(41)
$$l_{7} (H) = \frac{{h_{2} h_{3} }}{40} - 1 \le 0$$
(42)
$$l_{8} (H) = 5 \times \frac{{h_{2} }}{{h_{1} }} - 1 \le 0$$
(43)
$$l_{9} (H) = \frac{{h_{1} }}{{12h_{2} }} - 1 \le 0$$
(44)
$$l_{10} (H) = \frac{{1.5h_{6} + 1.9}}{{h_{4} }} - 1 \le 0$$
(45)
$$l_{11} (H) = \frac{{1.1h_{7} + 1.9}}{{h_{5} }} - 1 \le 0$$
(46)

Variable range \(2.6 \le h_{1} \le 3.6,0.7 \le h_{2} \le 0.8,17 \le h_{3} \le 28,7.3 \le h_{4} \le 8.3\)

$$7.3 \le h_{5} \le 8.3,\;2.9 \le h_{6} \le 3.9,\;5 \le h_{7} \le 5.5$$
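Eq. (35) in code as a consistency check (our naming; the constraints l1-l11 of Eqs. (36)-(46) are omitted for brevity). At the BKA optimum reported below it evaluates to ≈ 2994.47.

```python
def reducer_weight(h):
    """Eq. (35): weight of the speed reducer."""
    h1, h2, h3, h4, h5, h6, h7 = h
    return (0.7854 * h1 * h2**2 * (3.3333 * h3**2 + 14.9334 * h3 - 43.0934)
            - 1.508 * h1 * (h6**2 + h7**2)
            + 7.4777 * (h6**3 + h7**3)
            + 0.7854 * (h4 * h6**2 + h5 * h7**2))
```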

The optimal values and corresponding optimal variables obtained by the BKA algorithm and its comparison algorithms are listed in Table 16, offering a simple way to compare how well the various algorithms perform on this problem. BKA can obtain the optimal function value \(f(H) = 2994.47107\) with the structure variables \(H = (3.5,\;0.7,\;17,\;7.3,\;7.71532,\;3.350215,\;5.286654)\). Comparing the BKA algorithm's results with those of the other algorithms shows that it solves the problem more efficiently and produces a better optimal value, suggesting that BKA locates the optimum more effectively and may be closer to the problem's global optimum. These optimal variables serve as crucial guides and references for a deeper comprehension of the problem's solution space and the viability of the optimization results.

Table 16 The best solutions to the Speed reducer design problem using various algorithms

4.5 Three-bar truss design problem

This problem aims to reduce the member structure's weight while maintaining a constant total load. To achieve this goal, we need to consider three constraint conditions: the stress, buckling, and deflection constraints of each steel bar. Firstly, the stress constraint of each steel bar is to ensure that under the design working load, the stress borne by the steel bars in the member will not exceed the limit of their bearing capacity. This is to ensure the safety and reliability of the structure. The limitation of steel bar stress is determined by calculating the strength of the material and the force borne by the steel bar. Secondly, buckling constraint ensures that the member will not experience buckling under stress. Buckling refers to the instability phenomenon of a member under pressure, which may lead to structural failure. To avoid buckling, we need to limit the members' length, cross-sectional shape, and material selection to ensure that the structure can withstand the design load. Finally, deflection constraints ensure the member has sufficient stiffness and stability under stress. Deflection refers to the bending and deformation of a member under external forces. To control deflection, we need to limit the rod's geometric shape, the material's stiffness, and the design conditions' requirements. By simultaneously satisfying these three constraints, engineers can achieve maximum weight reduction in the member structure while maintaining the total load unchanged. This optimization design can reasonably utilize materials and reduce engineering costs while ensuring structural safety and performance. The following is the mathematical expression:

$${\text{Consider variable}}\,H = [h_{1} ,h_{2} ] = [x_{1} ,x_{2} ]$$
$${\text{Minimize}}\,f(H) = (2\sqrt{2} h_{1} + h_{2} ) \times l$$
$${\text{Subject to}}:l_{1} (H) = \frac{{\sqrt{2} x_{1} + x_{2} }}{{\sqrt{2} x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma \le 0$$
(47)
$$l_{2} (H) = \frac{{x_{2} }}{{\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma \le 0$$
(48)
$$l_{3} (H) = \frac{1}{{\sqrt 2 x_{2} + x_{1} }}P - \sigma \le 0$$
(49)
$$l = 100\,{\text{cm}},\;P = 2\,{\text{kN/cm}}^{2} ,\;\sigma = 2\,{\text{kN/cm}}^{2}$$
$${\text{Variables range}}\,(0 \le x_{i} \le 1,i = 1,2)$$

Table 17 compares the optimal values and corresponding optimal variables obtained by the BKA algorithm and the other algorithms, providing the comparative information needed to assess how well each algorithm solves the problem. BKA obtains the optimal function value \(f(H) = 263.895843\) with the structure variables \(H = (0.788675, 0.408248)\). The data show that the BKA algorithm offers a superior solution to this engineering problem.
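Because the objective and constraints (47)–(49) are given in closed form above, the reported solution can be verified directly; a minimal Python sketch:

```python
import math

def three_bar_truss(x1, x2, l=100.0, P=2.0, sigma=2.0):
    """Weight objective and constraints (47)-(49); each constraint must be <= 0.
    Units follow the problem statement: l in cm, P and sigma in kN/cm^2."""
    f = (2 * math.sqrt(2) * x1 + x2) * l
    g = [
        (math.sqrt(2) * x1 + x2) / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma,  # (47)
        x2 / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma,                        # (48)
        1 / (math.sqrt(2) * x2 + x1) * P - sigma,                                     # (49)
    ]
    return f, g

# Reported BKA solution from Table 17
f, g = three_bar_truss(0.788675, 0.408248)
```

At this point, constraint (47) is active (essentially zero), while (48) and (49) hold with slack.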

Table 17 Optimal results of the different algorithms on the Three-bar truss design problem

4.6 Analysis of the results of engineering design problems

By observing the results of the five different types of constrained engineering design problems mentioned above, BKA achieved the optimal results. Below is an analysis of the reasons why BKA achieved optimal results in constraint design problems:

  1. Advantages of swarm intelligence: The BKA algorithm is based on swarm intelligence, which enables interaction and information exchange between individuals in a group. A swarm intelligence algorithm can search for the optimal solution through cooperation among individuals and has strong robustness and global search ability. Therefore, individuals in the BKA algorithm can better explore the solution space and find optimal results through this collaborative effect.

  2. Parameter optimization and adjustment: The BKA algorithm includes several parameters, such as the control parameters of the Cauchy distribution and the leader selection strategy. By tuning these parameters appropriately, the BKA algorithm can better adapt to different engineering examples. Reasonable parameter settings improve the performance and effectiveness of the algorithm on specific problems, thereby yielding optimal results.

  3. Cauchy distribution strategy: The BKA algorithm adopts a Cauchy distribution strategy, which gives it strong global search ability. The Cauchy distribution has heavy tails, which allows a wider search of the solution space and helps the algorithm avoid falling into local optima. Therefore, BKA can traverse more of the solution space in different engineering examples and has a greater probability of finding the global optimal solution.

  4. Leader strategy: The BKA algorithm introduces a leader strategy to guide the entire optimization process. Individuals with high fitness values are selected as leaders, and other individuals learn and improve their solutions through interaction with the leader. Since leaders usually hold relatively good solutions, their guidance drives the whole group toward better solutions. Therefore, the leader strategy allows BKA to accelerate convergence and achieve optimal results in different engineering examples.
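The Cauchy-perturbation and leader-guidance ideas in points 3 and 4 can be illustrated with a generic population update. This is a hypothetical Python sketch: the function names, step-size parameter, and greedy acceptance step are illustrative, not the published BKA update equations.

```python
import math
import random

def cauchy_sample():
    # Standard Cauchy draw via the inverse CDF; its heavy tails produce
    # occasional long jumps, which drives the wide global search.
    return math.tan(math.pi * (random.random() - 0.5))

def bka_like_step(pop, f, scale=0.1):
    """One illustrative iteration: each individual takes a Cauchy-perturbed
    step plus a pull toward the current leader (best individual); a candidate
    replaces its parent only if it improves (greedy acceptance)."""
    fits = [f(x) for x in pop]
    leader = pop[min(range(len(pop)), key=lambda i: fits[i])]
    out = []
    for x, fx in zip(pop, fits):
        cand = [xi + scale * cauchy_sample()        # global exploration
                + random.random() * (li - xi)       # leader guidance
                for xi, li in zip(x, leader)]
        out.append(cand if f(cand) < fx else x)
    return out

# Usage: minimize the sphere function from a random population.
random.seed(0)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
best0 = min(sphere(x) for x in pop)
for _ in range(200):
    pop = bka_like_step(pop, sphere)
best = min(sphere(x) for x in pop)
```

Greedy acceptance guarantees the best fitness never worsens, while the heavy-tailed Cauchy steps retain the long jumps needed to escape local basins.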

5 Conclusion and future works

This article presents the Black-winged Kite Algorithm (BKA), a new swarm intelligence optimization algorithm inspired by the attack and migration behaviors of Black-winged kites. The algorithm mimics the Black-winged kite's predatory skills and integrates a migration strategy to enhance search capability, balancing global exploration and local exploitation. The study's main contents are:

  • Evaluate the performance of BKA using the CEC-2017 test set, CEC-2022 test set, and 18 complex functions, demonstrating superior results across various characteristics and complexities.

  • Statistical validation using the Friedman test and the Wilcoxon signed-rank test, with BKA securing first place, confirming its effectiveness and scientific reliability.

  • Practical application of BKA in five engineering cases involving challenging conditions and constrained search spaces, where it shows significant superiority by quickly converging to high-quality solutions and exhibiting excellent performance.

In future research, BKA can be integrated with other well-known strategies, such as adversarial learning mechanisms (Lian et al. 2023) and chaotic mapping (Liu et al. 2023), to further enhance the optimization performance of the algorithm. BKA can also be used to optimize various engineering problems in the future, such as multi-disc clutch brake design problems (Yu et al. 2020), step cone pulley problems (Nematollahi et al. 2021), etc.