1 Introduction

Many real-world engineering applications in computing, electrical engineering, the Internet of Things, robotics, and related fields center on optimization problems (Li and Zhou 2023). An optimization problem asks for an algorithm that, taking a performance index of the overall project as the optimization objective, finds the combination of schemes or parameters that drives that index to its best value under the given constraints (Singh and Kumar 2023; Jiang et al. 2024). Traditional optimization algorithms such as gradient descent (Rostami et al. 2021), the conjugate gradient method, Newton's method, and the Lagrange multiplier method must search large portions of the parameter space during their execution (Cao et al. 2023). As a result, when the parameter space is large or difficult to traverse, they often take a long time to locate the optimal solution and may even fail to find it at all (Cao et al. 2022). As practical engineering applications grow increasingly complicated, finding accurate and efficient optimization algorithms has therefore become one of the primary research themes of the associated disciplines (Bai et al. 2023).

Drawing inspiration from human intelligence, biological group dynamics, and natural laws, numerous researchers have proposed and studied intelligent optimization algorithms for complicated engineering applications (Tang et al. 2024; Wang et al. 2022a). By modeling certain natural ecosystems, intelligent optimization algorithms can handle difficult optimization problems (Chen et al. 2024). According to their functional properties, these algorithms are classified as general optimization algorithms, evolutionary algorithms, and swarm intelligence optimization algorithms (Yue et al. 2022). The general optimization algorithm is based on the greedy strategy: it always searches for the optimal solution in the local neighborhood of the current position (Yue et al. 2021), the hill-climbing algorithm being the typical representative. However, such an algorithm easily falls into a local optimum during the search and therefore cannot guarantee the global optimum. There are two common fixes. The first is to set a probability parameter: once the search has settled into a local optimum, the algorithm jumps out of the current position with a fixed probability and continues searching; the simulated annealing algorithm is the representative method. The second is to emulate human memory: a tabu list records the local optima already visited, subsequent searches avoid the solutions in the list, and the search range is thereby broadened toward the global space; the tabu search algorithm (Wang et al. 2024; Yue et al. 2023) is the representative method.

The evolutionary algorithm treats parameter replacement as a simulation of biological evolution processes such as inheritance, selection, and mutation, evaluating the corresponding solution after each replacement to arrive at the best possible solution (Bai et al. 2022). Depending on the replacement process, evolutionary algorithms can be categorized into genetic algorithms, cooperative co-evolutionary algorithms (CCEA) (Cai et al. 2021a), immune evolutionary algorithms (IA) (Li et al. 2021a), differential evolution algorithms (DE) (Deng et al. 2021), multi-objective evolutionary algorithms (MOEA) (Falcón-Cardona et al. 2021), and others. The swarm intelligence optimization method resolves challenging optimization problems through bionics (Tawhid and Ibrahim 2023). The fundamental idea is to treat the optimization problem's parameters as individuals within a group, update each individual by simulating animal group behavior, assess the updated parameters with a fitness value, and finally output the parameters that correspond to the optimal solution (Jaafari et al. 2022). Representative swarm intelligence optimization algorithms include the Ant Colony Optimization algorithm (ACO) (Maheshwari et al. 2021), which simulates the foraging behavior of ants; the Particle Swarm Optimization algorithm (PSO) (Yue et al. 2024), which simulates the foraging behavior of birds; the Shuffled Frog Leaping Algorithm (SFLA) (Maaroof et al. 2022), which simulates the foraging process of frogs; the Artificial Bee Colony algorithm (ABC) (Kaya et al. 2022), which simulates the foraging behavior of bee colonies; the Wolf Search Algorithm (WSA) (Devika et al. 2021), which simulates the predation behavior of wolves; and the Dragonfly Algorithm (DA) (Aghelpour et al. 2021), which simulates the predation behavior of dragonflies. These swarm intelligence optimization algorithms are listed in Table 1.

Table 1 Swarm intelligence optimization algorithm

As the table shows, the recent rapid growth of new bionic swarm intelligence algorithms is driven primarily by their self-organization capability (Jaafari et al. 2022). Self-organization means that each parameter in the algorithm operates independently and is changed only in accordance with a predetermined set of rules (Rosado-Olivieri and Brivanlou 2021). This "decentralized" parameter modification process ultimately yields the intelligent optimization of the whole population.

Meng et al. (2014), who first introduced the Chicken Swarm Optimization (CSO) algorithm, describe CSO as an intelligent optimization algorithm that organizes the parameter update process according to the behavior of a chicken flock. The algorithm simulates the population behavior of chickens: it divides the parameter variables into three groups of roosters, hens, and chicks, and sets the parameter self-organization process in accordance with the rigid foraging hierarchy of the flock (Ishikawa et al. 2020), so that solving the optimization problem mirrors how a flock of chickens forages. Since it was first developed, the chicken swarm optimization algorithm has received much interest, and several academics have investigated it from various perspectives. From the viewpoint of algorithm convergence, the author of the literature (Nagarajan 2023) demonstrated the global convergence of the chicken swarm optimization algorithm based on Markov chain theory. Researchers, including the algorithm's original creators, have evaluated the benefits and drawbacks of CSO against other intelligent optimization algorithms on a number of benchmark functions, conducting simulation experiments to determine the algorithm's solution speed and accuracy; CSO outperforms the differential evolution algorithm, the bat algorithm, and the particle swarm algorithm in terms of robustness (Saif et al. 2023). Beyond such evidence and analysis, some academics have enhanced the chicken swarm optimization technique and applied it to real-world engineering challenges. By examining the drawback that hens blindly follow the rooster's parameters and thus readily slip into local optima, the author of the literature (Wang et al. 2023) developed an improved CSO method based on a Gaussian migration strategy. In the literature (Nuvvula et al. 2022), the author applied a differential-improved CSO algorithm to energy management in the smart grid, solving for the optimal solution to reduce power consumption and cost. In the literature (Yu et al. 2022), the author utilized the CSO algorithm for feature extraction and, via comparison studies with various optimization algorithms, confirmed that the flock optimization approach offers high speed and high accuracy in feature extraction settings. Demands on the accuracy and speed of optimization solutions have been increasing in recent years in engineering applications such as data mining (Gordan et al. 2022), wireless sensor networks (Majid et al. 2022), robotics engineering (Macenski et al. 2022), electric power (Silvestre et al. 2021), and feature extraction (Zhang et al. 2022). Through constant innovation, numerous academics have presented fresh use cases and implementation strategies for chicken swarm optimization algorithms. This review therefore concentrates on the most recent chicken swarm optimization algorithms and their applications in particular domains, and looks ahead to the directions in which the algorithm will develop in the future.

The organizational structure of this paper is as follows: Section 2 introduces the biological characteristics behind the CSO algorithm, the CSO algorithm itself, and its advantages and disadvantages. Section 3 introduces improved CSO algorithms. Section 4 introduces applications of the CSO algorithm in different fields. Section 5 discusses the above and looks forward to future research on the CSO algorithm. Section 6 concludes the paper.

2 Chicken swarm optimization

2.1 Chicken characteristics

Chickens are gregarious poultry that frequently coordinate their food-finding efforts in groups (Yan et al. 2021). A flock contains three categories of individuals: roosters, hens, and chicks. The group follows a clear foraging hierarchy that reflects their varied foraging capacities (Carvalho et al. 2022). In this hierarchy, hens, whose foraging abilities are inferior to those of roosters, forage after them, and chicks, with weaker abilities still, forage after the hens (Basha et al. 2023). Figure 1 depicts the population structure of a flock: the rooster occupies the center, the hens are arranged around the rooster, and the chicks are positioned around the hens. Throughout foraging there is thus a natural pattern of mutual learning and rivalry, both between individuals of the same category, such as rooster and rooster or hen and hen, and between members of different categories, such as hen and chick (Jonsson and Vahlne 2023). For example, hens H1 and H2 forage around rooster R2 and learn its foraging patterns, which determine the foraging trajectories of H1 and H2. At the same time, because hen H2 is close to rooster R1, the foraging pattern of R1 also influences H2 to some extent. Chicks C4, C5, and C6 forage around hen H2 and learn her foraging pattern, so H2 determines their foraging trajectories. This coexistence of learning and competition is consistent with the self-organizing character of group intelligence, which motivated the developers to propose Chicken Swarm Optimization (CSO).

Fig. 1
figure 1

Schematic diagram of chicken population structure

2.2 Algorithm model

The optimization object of the intelligent optimization algorithm is the objective function that demands an optimal solution; its independent variable parameters can be represented as n j-dimensional space vectors X, where n denotes the number of vectors, j the dimensionality, and n is any positive integer. The CSO algorithm divides these n vectors into three groups, distinguished by the fitness value f of each individual: in Eq. (1), the rooster group Ri is assigned the RN individuals with the lowest fitness values; in Eq. (2), the chick group Ci is assigned the CN individuals with the highest fitness values; and in Eq. (3), the remaining HN individuals form the hen group Hi. RN, HN, and CN thus represent the numbers of individuals in the rooster, hen, and chick groups, respectively.

$$R_{i} = \left\{ {R_{1} ,R_{2} , \cdots ,R_{RN} } \right\}$$
(1)
$$C_{i} = \left\{ {C_{1} ,C_{2} , \cdots ,C_{CN} } \right\}$$
(2)
$$H_{i} = \left\{ {H_{1} ,H_{2} , \cdots ,H_{HN} } \right\}$$
(3)

Every hen in a flock follows a corresponding dominant rooster, and every chick follows a corresponding mother hen. The foraging positions of roosters, hens, and chicks are updated using the following formulas (Wang et al. 2023b):

1) Position update formula for the rooster group

$$R_{i,j}^{t + 1} = R_{i,j}^{t} [1 + randn(0,\delta^{2} )]$$
(4)
$$\delta^{2} = \left\{ {\begin{array}{*{20}l} {1,} & {f_{i} \le f_{s} } \\ {e^{{\frac{{f_{s} - f_{i} }}{{\left| {f_{i} } \right| + \varepsilon }}}} ,} & {f_{i} > f_{s} ,\;s \in [1,n],\;s \ne i} \\ \end{array} } \right.$$
(5)

wherein, in Eq. (4), \({R}_{i,j}^{t}\) is the position of the i-th rooster in the j-th dimension after t iterations, and \(randn(0,{\delta }^{2})\) is a Gaussian random number with mean 0 and variance \({\delta }^{2}\). In Eq. (5), \(f_{i}\) and \(f_{s}\) are the fitness values of rooster i and of a randomly selected rooster s (\(s \ne i\)), and \(\varepsilon\) is a small but important constant that keeps the denominator from being zero.

2) Position update formula for the hen group

$$H_{i,j}^{t + 1} = H_{i,j}^{t} + k_{1} * rand * (R_{Hi}^{t} - H_{i,j}^{t} ) + k_{2} * rand * (RH^{t} - H_{i,j}^{t} )$$
(6)
$$k_{1} = e^{{\frac{{f_{Hi} - f_{rHi} }}{{\left| {f_{Hi} } \right| + \varepsilon }}}}$$
(7)
$$k_{2} = e^{{f_{RH} - f_{Hi} }}$$
(8)

wherein, in Eq. (6), \({H}_{i,j}^{t}\) is the position of the i-th hen in the j-th dimension after t iterations; rand is a random number in [0, 1]; \({R}_{Hi}^{t}\) is the position of the leader rooster of the i-th hen after t iterations; \(R{H}^{t}\) is the position, after t iterations, of an individual randomly selected from the other roosters and hens, excluding the hen's own leader rooster and the hen itself; k1 is the influence factor of the leader rooster, and k2 is the influence factor of the random individual. In Eq. (7), fHi is the fitness value of the i-th hen and frHi is the fitness value of the rooster that leads her. In Eq. (8), fRH is the fitness value of the randomly selected individual.

3) Position update formula for the chick group

$$C_{i,j}^{t + 1} = C_{i,j}^{t} + F * (Hi_{j}^{t} - C_{i,j}^{t} )$$
(9)

wherein, in Eq. (9), \({C}_{i,j}^{t}\) is the position of the i-th chick in the j-th dimension after t iterations; \(H{i}_{j}^{t}\) is the position of the corresponding mother hen followed by the i-th chick after t iterations; and F is a random number in (0, 2).
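To make the three update rules concrete, the following Python sketch transcribes Eqs. (4)-(9) directly; the function names, array layout, and use of NumPy are our own illustrative choices, not part of the original algorithm description.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 1e-10  # the small constant epsilon of Eq. (5), preventing division by zero

def rooster_update(R, f, i):
    """Eqs. (4)-(5): perturb rooster i with Gaussian noise whose variance
    depends on a comparison with a randomly chosen fellow rooster s."""
    n = len(R)
    s = rng.choice([k for k in range(n) if k != i])
    var = 1.0 if f[i] <= f[s] else np.exp((f[s] - f[i]) / (abs(f[i]) + EPS))
    return R[i] * (1.0 + rng.normal(0.0, np.sqrt(var), size=R.shape[1]))

def hen_update(H_i, R_leader, X_rand, f_Hi, f_leader, f_rand):
    """Eqs. (6)-(8): a hen moves toward its leader rooster and a random individual."""
    k1 = np.exp((f_Hi - f_leader) / (abs(f_Hi) + EPS))  # Eq. (7)
    k2 = np.exp(f_rand - f_Hi)                          # Eq. (8)
    d = H_i.size
    return (H_i + k1 * rng.random(d) * (R_leader - H_i)
                + k2 * rng.random(d) * (X_rand - H_i))

def chick_update(C_i, H_mother):
    """Eq. (9): a chick follows its mother hen with follow coefficient F in (0, 2)."""
    F = rng.uniform(0.0, 2.0)
    return C_i + F * (H_mother - C_i)
```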

2.3 Algorithm steps

Assume that the objective function of the optimization problem is \(F\left(\overrightarrow{X}\right)\), where \(\overrightarrow{X}\) is composed of n m-dimensional space vectors; n represents the number of vectors and m the dimension. Based on the composition of the algorithm operators already defined, the implementation steps of the chicken swarm optimization algorithm are as follows (Ayvaz 2022):

1) Set the chicken flock to contain N individuals, corresponding to \(\overrightarrow{X}\) in the objective function. Assign the numbers RN, HN, CN, and MN to the roosters, hens, chicks, and mother hens, respectively. Establish the number of position dimensions (j), the population reorganization period (G), the maximum number of iterations (Max_G), and the initial iteration count (t = 0). Go to step 2).

2) Check whether t < Max_G holds; if so, proceed to step 3); if not, exit the loop and output the optimal solution.

3) Calculate the fitness value fi of each individual in the chicken population; go to step 4).

4) Determine whether Eq. (10) holds; if so, go to step 5); otherwise go to step 6).

    $$t\,\bmod \,G\, = = \,0$$
    (10)
5) Re-establish the group memberships and leader-follower relationships in order of fitness values, where the roosters have the lowest fitness values, the chicks the highest, and the hens lie in between. Go to step 6).

6) All chicken groups forage: update the position coordinates, calculate the objective function value F, and record the latest optimal solution whenever a better value Fbest appears. Execute t = t + 1 and go to step 2). A condensed sketch of these steps is given below.
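A minimal, self-contained sketch of the full loop follows, assuming a sphere objective and simplified follower assignments (the leader and mother-hen relationships are drawn at random here, the fellow rooster s is chosen cyclically, and the k2 exponent is clipped as a practical safeguard against overflow; none of these details is prescribed by the original algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
EPS = 1e-10

def sphere(x):                        # stand-in objective; any F(X) to minimize works
    return float(np.sum(x ** 2))

# Step 1): flock size, dimension, iteration budget, regrouping period (illustrative)
N, dim, Max_G, G = 50, 10, 200, 10
RN, CN = N // 5, N // 5               # rooster and chick counts
HN = N - RN - CN                      # the rest are hens
X = rng.uniform(-5.0, 5.0, (N, dim))  # uniform initialization in illustrative bounds
fit = np.array([sphere(x) for x in X])
best_f, best_x = fit.min(), X[fit.argmin()].copy()

for t in range(Max_G):                                  # step 2): iteration loop
    if t % G == 0:                                      # step 4): Eq. (10) regrouping
        order = np.argsort(fit)                         # step 5): lowest fitness first
        roosters, hens, chicks = order[:RN], order[RN:RN + HN], order[RN + HN:]
        leader = rng.choice(roosters, size=HN)          # each hen follows a rooster
        mother = rng.choice(hens, size=CN)              # each chick follows a hen
    newX = X.copy()
    for a, i in enumerate(roosters):                    # Eqs. (4)-(5)
        s = roosters[(a + 1) % RN]                      # simplified choice of rooster s
        var = 1.0 if fit[i] <= fit[s] else np.exp((fit[s] - fit[i]) / (abs(fit[i]) + EPS))
        newX[i] = X[i] * (1.0 + rng.normal(0.0, np.sqrt(var), dim))
    for a, i in enumerate(hens):                        # Eqs. (6)-(8)
        r, other = leader[a], rng.integers(N)           # random individual, unfiltered here
        k1 = np.exp((fit[i] - fit[r]) / (abs(fit[i]) + EPS))
        k2 = np.exp(np.clip(fit[other] - fit[i], -50, 50))  # clip: overflow guard, not in Eq. (8)
        newX[i] = X[i] + k1 * rng.random(dim) * (X[r] - X[i]) \
                       + k2 * rng.random(dim) * (X[other] - X[i])
    for a, i in enumerate(chicks):                      # Eq. (9)
        newX[i] = X[i] + rng.uniform(0.0, 2.0) * (X[mother[a]] - X[i])
    X = newX                                            # step 6): evaluate, keep the best
    fit = np.array([sphere(x) for x in X])
    if fit.min() < best_f:
        best_f, best_x = fit.min(), X[fit.argmin()].copy()

print("best objective value found:", best_f)
```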

The flow chart of the chicken swarm optimization algorithm is shown in Fig. 2.

Fig. 2
figure 2

Flowchart of the chicken swarm optimization algorithm

2.4 Algorithm parameter setting

The chicken swarm optimization technique uses the following parameters: N, RN, HN, CN, MN, Max_G, t, j, and G. N, j, and Max_G are general settings of the optimization problem: N and j give the number and dimension of the parameter vectors, and Max_G the iteration budget. These three parameters are usually dictated by the function to be optimized: problems demanding high solution accuracy typically specify a larger Max_G, while complicated objective functions typically require larger N and j. RN, HN, CN, and MN stand for the numbers of roosters, hens, chicks, and mother hens, respectively (Kumar and Pandey 2022). It has been demonstrated that the algorithm performs well when HN is larger than RN, since more hens produce more chicks; the explanation is that hens are better at stabilizing the chicks' parameters. The parameter G of the chicken swarm optimization algorithm is the population reorganization period. Whenever the condition in algorithm step 4) is satisfied, the population is regrouped: chicks whose fitness values have improved as they mature are reclassified as roosters or hens, while roosters and hens whose fitness values have worsened are reclassified as chicks. The magnitude of G determines how often this process occurs. If G is set too small, regrouping occurs too frequently and the algorithm's efficiency declines; if G is set too large, regrouping occurs too rarely and the algorithm easily settles into a local optimum (Gu et al. 2022). Experimental validation suggests restricting parameter G to a moderate range (Singh and Kumar 2023; Tawhid and Ibrahim 2023).

The parameter settings of the CSO algorithm also reflect a certain degree of portability (Deng et al. 2022). Setting the CSO parameters RN and CN to 0 transforms the CSO algorithm into the DE algorithm; setting RN and CN to 0 and replacing the factors k1 and k2 with the constants C1 and C2 of PSO converts the chicken swarm optimization algorithm into the PSO algorithm.
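As a rough illustration of this portability, the snippet below shows the hen update of Eq. (6) with fixed PSO-style constants substituted for k1 and k2; this is a schematic correspondence under our own naming, not a byte-for-byte PSO implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
H = rng.uniform(-1, 1, d)        # a hen's position; with RN = CN = 0 all individuals are hens
p_best = rng.uniform(-1, 1, d)   # leader position, playing the role of PSO's personal best
g_best = rng.uniform(-1, 1, d)   # random informant, playing the role of PSO's global best

c1, c2 = 2.0, 2.0                # fixed PSO acceleration constants replacing k1, k2
H_next = H + c1 * rng.random(d) * (p_best - H) + c2 * rng.random(d) * (g_best - H)
```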

2.5 Analysis of CSO Advantages

The benefits and drawbacks of a swarm intelligence optimization method are determined by its accuracy and computational efficiency (Nadikattu 2021). For a fixed number of iterations Max_G, the computational efficiency depends on the algorithm's complexity, and the accuracy depends on its traversal range (Sabale and Mini 2021). The CSO method has a straightforward procedure and excellent computational efficiency, and because it splits the parameters into three population types with different search ranges, its overall traversal range expands accordingly (Ding et al. 2020). As a result, the CSO method offers clear advantages in computing speed and accuracy (Mansouri et al. 2021). Many academics have analyzed the benefits and drawbacks of the CSO algorithm against other varieties of swarm intelligence optimization algorithms through simulation experiments.

Meng et al. (2014) conducted performance comparisons of the CSO, Particle Swarm Optimization (PSO), Bat Algorithm (BA), and Differential Evolution (DE) algorithms using 12 typical benchmark functions. The comparison of simulation results reveals that CSO offers greater benefits in terms of accuracy, efficiency, and robustness. In the literature (Fu et al. 2019), the author employed three test functions, including Griewank, to compare the performance of CSO, BA, and the Bat Algorithm based on Differential Evolution (DEBA); the simulation findings demonstrate that CSO performs significantly better. In the literature (Kadhuim and Al-Janabi 2023), the author conducted a comparative performance analysis of CSO, PSO, and BA; according to the simulation findings, CSO greatly outperforms PSO and BA in terms of optimization results. Optimization algorithms also face premature convergence, in which the search easily settles into a local optimum. Table 2 displays the comparative outcomes of several methods. The comparative analysis in the table makes clear that, during the search for the optimal solution, the CSO algorithm performs better, is more stable, and is more flexible than other swarm intelligence algorithms (Deb et al. 2020). Nevertheless, the CSO algorithm still suffers from insufficient calculation accuracy and easily slips into a local optimum in the later stage of convergence (Chen et al. 2023).

Table 2 Comparison of CSO and other algorithms

2.6 Analysis of CSO Disadvantages

The CSO algorithm's main drawback is that it easily settles on a local optimum. Two causes are responsible: operators entering a local optimum, and incorrect parameter settings (Qi et al. 2022). Because the algorithm cannot predict the ideal settings in advance, poorly chosen parameters ultimately produce poor output results. The operators are prone to local optima for the following reasons. The rooster plays a crucial role in updating the position of each hen, so an individual hen always follows its rooster when replacing parameters; once the rooster operator falls into a local optimum, the hen operator quickly follows, resulting in rapid convergence of the algorithm (Ren and Long 2021). In turn, because the parameter replacement of the chick operator depends only on the hen operator, a hen trapped in a local optimum quickly drags the chick operator into one as well, further accelerating premature convergence and creating a vicious circle.

3 Improvement of chicken swarm optimization algorithms

When compared to conventional swarm intelligence algorithms, CSO has shown improved efficiency and resilience, which is explained by its streamlined algorithmic architecture, built-in stability, and exceptional portability (Singh et al. 2023). However, crucial factors such as the mother hen population and the recombination coefficient substantially influence the convergence of CSO (Wu 2021). A poor design of these parameters can slow convergence and diminish the algorithm's search accuracy by reducing population diversity, encouraging premature convergence, and increasing the tendency to become stuck in local optima. Like other swarm intelligence algorithms, CSO is prone to premature convergence, which cannot be totally avoided (Yang et al. 2023).

Various enhancement solutions have been presented by numerous scholars to solve the aforementioned inherent restriction. These methods may be generally divided into three groups: conventional CSO upgrades, hybrid CSO enhancements that combine the algorithm with other methods, and additional cutting-edge improvement methods. Notably, compared to the original CSO method, these modified algorithms have consistently shown greater convergence efficiency and global search optimization capabilities (Goldanloo and Gharehchopogh 2022).

3.1 Improved chicken swarm optimization algorithms

The modified CSO method is often designed to accomplish optimization by changing the algorithm's operators, making implementation relatively simple (Gharehchopogh et al. 2020). This section uses the m-CSO algorithm as an illustration of this kind of algorithm, and pays particular attention to introducing the SA-CSO algorithm and the Improved CSO (ICSO) collection of algorithms. Their practical applications are discussed in Sect. 4.

3.1.1 Modified chicken swarm optimization (m-CSO)

The author put forward m-CSO in the literature (Fu et al. 2019). The literature examined the composition and organization of a chicken swarm and argued that the mother hen model has a direct impact on how well the chicken swarm algorithm performs. When the equation for updating the position of the mother hen is changed from Eq. (6) to Eq. (11), experiments using the Rosenbrock test function confirm that the accuracy and stability of the m-CSO algorithm are better than those of the CSO algorithm.

$$M_{i,j}^{t + 1} = M_{i,j}^{t} + k_{1} * rand * (R_{mi}^{t} - M_{i,j}^{t} ) + k_{2} * rand * (M_{i,j}^{t} - RM^{t} )$$
(11)

3.1.2 Improved chicken swarm optimization (ICSO)

By refining the CSO method, researchers have mitigated the issue of local optima with what they call improved CSO (ICSO) algorithms. ICSO is not a single method but a comprehensive collection of algorithms developed by different academics; they are numbered ICSO-I through ICSO-VI below.

In the literature (Wu et al. 2015), the author proposed ICSO-I. Research into the positional dynamics of chicks shows that the chick operator in the CSO algorithm relies exclusively on the position of the hen and is unaffected by the position of the rooster. As a result, the chick operator cannot exploit the information carried by the rooster's position, which means that whenever the hen operator becomes trapped in a local optimum, the chick operator is equally vulnerable to being trapped there as well.

To address this concern, the study changes the chick's position update from Eq. (9) to Eq. (12). Using a set of eight benchmark test functions, ICSO-I is compared with the well-known PSO, BA, and CSO algorithms. The experimental results confirm significant accuracy improvements for ICSO-I, especially noticeable in high-dimensional problem settings, outperforming the three alternative methods under study.

$$C_{i,j}^{t + 1} = \omega * C_{i,j}^{t} + F*(Hi_{j}^{t} - C_{i,j}^{t} ) + C * \left( {Ri_{j}^{t} - C_{i,j}^{t} } \right)$$
(12)

wherein, in Eq. (12), C is a learning parameter representing the extent to which the chicks are influenced by the rooster. \(R{i}_{j}^{t}\) represents the dominant rooster associated with the i-th chick after t iterations. The self-learning coefficient for the chick is denoted by ω, with its value ranging from 0.4 to 0.9. The calculation formula for ω is outlined in Eq. (13).

$$\omega = \omega_{fin} * (\omega_{ini} /\omega_{fin} )^{{[1/(1 + 10 * t/\omega_{\max } )]}}$$
(13)

wherein, in Eq. (13), ωfin is the final value of the iteration, ωini is the initial value, t is the current iteration count, and ωmax is the maximum number of iterations. ω is calculated from the given initial value and the iteration count.

The ICSO-II approach, suggested by the author in the literature (Wu et al. 2018a), addresses the tendency of the hen operator in CSO to fall into local optima by introducing a crossover probability measure. Figure 3 depicts the precise use of the crossover probability enhancement. A default crossover probability (dcp) parameter is set while the algorithm runs. The hen operator is updated using Eq. (6), after which a random number α is drawn from a uniform distribution on [0, 1]. If α < dcp, the two hens with the best fitness values among all hen operators are selected for the crossover calculation of Eqs. (14) and (15), producing offspring operators ofs1 and ofs2. Finally, ofs1 and ofs2 replace the two hen operators with the poorest fitness values. By replacing hen operators with subpar fitness values, the crossover enhancement lowers the likelihood that the algorithm enters a local optimum. Furthermore, within the same iteration, the fitness values of the offspring hen operators may become better than those of some rooster operators; following the hierarchy concept of CSO, such offspring are then reclassified as roosters, increasing the algorithm's accuracy. ICSO-II is assessed against CSO, ABC, PSO, and ACO using the cost function of minimal aerodynamic heating rate. According to experimental simulations, ICSO-II outperforms the other intelligent algorithms in terms of convergence efficiency and accuracy.

$$ofs1 = p * hen_{1}^{t} + (1 - p) * hen_{2}^{t}$$
(14)
$$ofs2 = (1 - p) * hen_{1}^{t} + p * hen_{2}^{t}$$
(15)

wherein, in Eqs. (14) and (15), hen1t and hen2t represent the two hen operators with the best fitness values after t iterations. p is a percentage coefficient that represents the proportion of inheritance from hen1t and hen2t to the offspring ofs1 and ofs2.
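A small sketch of the crossover measure follows, with the dcp value and p chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

def hen_crossover(hen1, hen2, p):
    """Eqs. (14)-(15): blend the two fittest hens into two offspring."""
    ofs1 = p * hen1 + (1.0 - p) * hen2
    ofs2 = (1.0 - p) * hen1 + p * hen2
    return ofs1, ofs2

dcp = 0.8                                    # default crossover probability (illustrative)
hen1, hen2 = rng.random(5), rng.random(5)    # the two best-fitness hen positions
alpha = rng.random()                         # uniform draw on [0, 1]
if alpha < dcp:                              # trigger condition described in the text
    ofs1, ofs2 = hen_crossover(hen1, hen2, p=0.6)
    # ofs1, ofs2 would now replace the two worst-fitness hens
```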

Fig. 3
figure 3

The flow chart of the crossover probability improvement

In the literature (Liang et al. 2020a), the author proposed ICSO-III, which improves both the hen and chick operators of CSO. To overcome the hen operator's tendency to slip into local optima, a modified search technique based on the "Levy flight" is suggested. This technique accounts for the comparatively large share of hen parameters among all individuals: it modifies the hen's search behavior so that occasional long-range jumps are combined with close-range, in-depth searches. The enhanced "Levy flight" search is given in Eq. (16). To avoid the chick operator being dragged into local optima by the hen, the method also switches the chick's search pattern to the chick update of Eq. (12) from ICSO-I. Finally, experiments on eight test functions determine that the ICSO-III algorithm greatly outperforms the CSO and PSO algorithms in terms of accuracy and efficiency.

$$H_{i,j}^{t + 1} = H_{i,j}^{t} + k_{1} * rand * (R_{Hi}^{t} - H_{i,j}^{t} ) + k_{2} * rand * L_{{{\text{evy}}}} (\lambda ) \otimes (RH^{t} - H_{i,j}^{t} )$$
(16)

wherein, in Eq. (16), Levy(λ) represents a random search jump function that follows a Levy distribution, λ is a random number distributed between 1 and 3, and \(\otimes\) denotes the entry-wise (element-by-element) product of vectors.
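The cited paper does not spell out how the Levy-distributed steps are drawn; the sketch below uses Mantegna's algorithm, one common sampler, and applies the steps elementwise as in Eq. (16). All variable names and parameter values are our own illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def levy_step(lam, size):
    """Draw Levy-flight steps via Mantegna's algorithm; lam in (1, 3] is
    mapped to the stability index beta = lam - 1."""
    beta = lam - 1.0
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# Hen step of Eq. (16): the long-range term is scaled elementwise by Levy steps.
d = 5
H, R_leader, X_rand = rng.random(d), rng.random(d), rng.random(d)
k1, k2 = 0.9, 0.5                            # illustrative influence factors
H_next = (H + k1 * rng.random(d) * (R_leader - H)
            + k2 * rng.random(d) * levy_step(2.5, d) * (X_rand - H))
```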

The author in the literature (Wang et al. 2020) provided ICSO-IV as a solution to the chick operator's vulnerability to local optima dominated by the hen operator. The algorithm changes Eq. (9) to Eq. (17): the crucial change keeps the hen operator's hold over the chick operator while adding the rooster operator. In addition, as shown in Eq. (18), a dynamic cosine inertia weight coefficient is included to control the effect of the rooster. The coefficient stays within a bounded interval determined by wmin and wmax; its absolute value sets the degree of the rooster's effect on the chick, and its size varies with the iteration count. By adding a rooster influence component to the chick calculation, this improvement spreads the chick's search more evenly and should lessen the chance that the chick operator enters a local optimum. In reference (Ishikawa et al. 2020), function simulations compare ICSO-IV with the GA and PSO algorithms; the results show that ICSO-IV greatly increases algorithm accuracy compared with both.

$$C_{i,j}^{t + 1} = C_{i,j}^{t} + F * (Hi_{j}^{t} - C_{i,j}^{t} ) + w * (Ri_{j}^{t} - C_{i,j}^{t} )$$
(17)
$$w = \frac{{w_{\max } - w_{\min } }}{2}\cos \frac{\pi }{{t_{\max } }}t + \frac{{w_{\max } - w_{\min } }}{2}$$
(18)

wherein, in Eq. (17), w is the dynamic cosine inertia weight coefficient, and \(R{i}_{j}^{t}\) is the position of the dominant rooster corresponding to the i-th chick after t iterations. In Eq. (18), wmax is the maximum inertia coefficient, wmin is the minimum inertia coefficient, tmax is the maximum number of iterations, and t represents the current iteration count.
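A brief sketch of the weight schedule, transcribing Eq. (18) exactly as printed (with this form the weight decays from wmax - wmin at t = 0 to 0 at t = tmax); the positions, wmin, and wmax are invented for illustration:

```python
import math

def cosine_inertia(t, t_max, w_min=0.4, w_max=0.9):
    """Eq. (18) as printed: a smooth cosine decay of the rooster's influence."""
    return (w_max - w_min) / 2 * math.cos(math.pi * t / t_max) + (w_max - w_min) / 2

# Chick step of Eq. (17) in one dimension, with invented positions:
C_i, H_mother, R_dom = 0.2, 0.8, 0.5
F = 1.3                                   # random follow coefficient in (0, 2)
w = cosine_inertia(t=25, t_max=100)
C_next = C_i + F * (H_mother - C_i) + w * (R_dom - C_i)
```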

ICSO-V was suggested by the author in the literature (Liu et al. 2020a). Because the rooster operator is crucial to the CSO algorithm, both the hen and the chick are liable to fall into a local optimum once the rooster operator reaches a locally optimal solution. To resolve this, the approach incorporates a cosine inertia weight Cip that improves the rooster's capacity for local search. Equation (19), the modified version of Eq. (4), shows where the inertia factor Cip enters, and Eq. (20) defines Cip. Thanks to this inertia component, the rooster operator can switch between global and local searches, which lessens the chance of entering a local optimum. Additionally, ICSO-V addresses the chick operator's sole dependence on the hen operator with a learning-part technique: the chick update of Eq. (9) is extended with the global best operator, whose impact is controlled by a learning coefficient. The purpose of this technique is to lessen the likelihood that the chick operator becomes stuck in local optima. Simulations in the literature (Liu et al. 2020a) employing six distinct test functions show that, compared with the CSO, PSO, and WOA algorithms, ICSO-V produces more accurate computational results, especially for high-dimensional problems.

$$R_{i,j}^{t + 1} = Cip * R_{i,j}^{t} + R_{i,j}^{t} * randn\left( {0,\delta^{2} } \right)$$
(19)
$$Cip = Cip_{\min } + (Cip_{\max } - Cip_{\min } ) * \cos \left( {\pi * \frac{t}{T}} \right)$$
(20)

wherein, in Eq. (19), Cip is the cosine inertia factor, used to enhance the rooster's search capability. In Eq. (20), Cipmax is the maximum value of the inertia factor, set to 0.8; Cipmin is the minimum value, set to 0.3; T is the maximum number of iterations; and t represents the current iteration count.

By altering the equations in the CSO algorithm, the aforementioned group of ICSO algorithms seeks to address the specific flaws of the rooster, hen, and chick operators. This approach to improvement has the advantage of being straightforward and simple to apply, which gives it real algorithmic optimization value. The ICSO family also includes more comprehensive enhancements. In the literature (Bharanidharan and Rajaguru 2020), the author presented ICSO-VI, which improves the CSO algorithm through four enhancement measures: control parameter tuning (CPT), rooster selection (RSEL), controlled randomness optimization (CRO), and the cross-over operator (CO). Figure 4 shows the flowchart of this comprehensive upgrade.

Fig. 4
figure 4

ICSO-VI algorithm flow chart

CPT involves optimizing the control parameters during initialization. In this measure, the algorithm seeks values of the parameters k1, k2, and F that yield high-precision results for the optimization function; experimental testing determined that k1 = 0.7, k2 = 0.5, and F = 0.9 lead to highly accurate results for most optimization functions. RSEL is a recombination strategy for roosters: after the algorithm calculates the fitness values and obtains the groups of roosters, hens, and chicks, the positions of the rooster individuals are reshuffled using the roulette wheel selection technique, so that roosters with worse fitness values are more likely to switch roles. CRO requires two fixed parameters, the position considering rate (PCR) and the position adjusting rate (PAR), and is applied after the computations of Eqs. (4), (6), and (9). CO follows CRO; this measure focuses on the crossover procedure applied to the hen individuals according to Eqs. (14) and (15), after which the two individuals with the worst fitness values are replaced. The four improvement measures of ICSO-VI thus target the CSO algorithm's parameters, rooster equations, hen equations, and overall fitness. Simulation testing shows that the ICSO-VI algorithm greatly increases accuracy compared with the CSO algorithm.

3.1.3 Self-adaptive chicken swarm optimization (SA-CSO)

In the literature (Kumari et al. 2022a), the author proposed the SA-CSO algorithm to address the CSO method's tendency to become stuck in local optima when handling high-dimensional problems. After the position updates of Eqs. (4), (6), and (9), the method applies a self-adaptive improvement strategy that adjusts the positions of the flock by introducing an adaptive parameter (Gheibi et al. 2021), helping the flock escape local optima. Equation (21) defines the adaptive parameter, and Eq. (22), which focuses on a randomly selected flock operator, describes how the position is adjusted. SA-CSO is straightforward to use and makes progress in avoiding local optima. Experimental simulations show that SA-CSO surpasses the CSO, PSO, WOA, and multi-verse optimizer (MVO) (Abualigah 2020) algorithms in terms of precision and accuracy.

$$\alpha \left( i \right)\, = \;\frac{{F_{i} \left( {ts - 1} \right) - F_{i} \left( {ts} \right)}}{{F_{i} \left( {ts - 1} \right)}}$$
(21)
$$C_{l,j}^{ts} (i,po) = C_{l,j}^{ts} (i,po) \times \alpha (i)$$
(22)

wherein, in Eq. (21), \(\alpha\left(i\right)\) is the adjustment parameter for the selected individual i, and \(F_{i} \left( {ts} \right)\) is the fitness value of individual i in the ts-th iteration. In Eq. (22), \({C}_{l,j}^{ts}(i,po)\) is the adjusted position of the selected individual i after the fitness parameter adjustment in the ts-th iteration. Here, l is the ranking position of the individual among all individuals, j represents the dimension, and po is a random number indicating the likelihood of being selected for fitness adjustment.
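A minimal sketch of the self-adaptive adjustment of Eqs. (21)-(22); the fitness values and position are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def adaptive_factor(f_prev, f_curr):
    """Eq. (21): relative fitness change of individual i between two successive
    iterations (assumes f_prev != 0; a guard would be needed in practice)."""
    return (f_prev - f_curr) / f_prev

# Eq. (22): the selected individual's position is rescaled by its factor.
pos = rng.uniform(-1.0, 1.0, 5)                    # invented position of individual i
alpha_i = adaptive_factor(f_prev=3.2, f_curr=2.9)  # invented fitness values
pos = pos * alpha_i                                # adjusted position
```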

3.2 Hybrid improved chicken swarm optimization

Standalone methods fall short of properly resolving real-world high-dimensional, uncertain optimization problems (Radaideh and Shirvan 2021). Hybrid improved chicken swarm algorithms therefore combine the chicken swarm algorithm with other intelligent optimization techniques (Chen et al. 2022). Three hybrid enhanced algorithms are covered in this part: Bat-Chicken Swarm Optimization (CSO-BA), the Chicken Swarm plus Deer Hunting Optimization Algorithm (CSO-DH), and the Hybrid Ant Lion Chicken Swarm Optimization Algorithm (CSO-ALO). Their practical applications are discussed in Sect. 4.

3.2.1 Bat-Chicken swarm optimization (CSO-BA)

In the literature (Kong et al. 2020), the author proposed the CSO-BA algorithm. The Bat Algorithm is a search algorithm that excels at short-range, high-precision search; it controls the direction of algorithm advancement through two factors, the sound-wave loudness Ai and the pulse frequency ri (Bangyal et al. 2022). These factors guarantee that the algorithm advances toward greater precision while retaining a chance of escaping local optima. Following the Bat Algorithm's recommendations for improvement, CSO-BA incorporates its control parameters into the rooster operator of CSO: the rooster update of Eq. (4) is modified to Eq. (23), with Ai and ri defined as shown in Eq. (24). This improvement effectively prevents the rooster operator from getting stuck in local optima and enhances computational precision.

$$R_{i,j}^{t + 1} = \left\{ \begin{gathered} x_{best} + 0.01 * randn\left( {1,N} \right)\quad \quad Rand \ge r_{i}^{t} \hfill \\ R_{i,j}^{t} * randn\left( {0,\sigma^{2} } \right) + \varepsilon * A_{i}^{t} \quad {\kern 1pt} Rand\; < \;r_{i}^{t} \hfill \\ \end{gathered} \right.$$
(23)
$$A_{i}^{t + 1} = \alpha A_{i}^{t} ,\quad r_{i}^{t + 1} = r_{i}^{t} (1 - \exp ( - \lambda t))$$
(24)

wherein, in Eq. (23), \({x}_{best}\) is the global best solution among the N individuals in the chicken swarm, and \(\varepsilon\) is a random number distributed in the range [-1,1]. Wherein, in Eq. (24), \(\alpha\) and \(\lambda\) are limiting parameters, which are often set to a value of 0.9 in most cases.

CSO-BA also improves the mother hen and chick operators. To increase the randomness of the mother hen's location and lessen the likelihood of it becoming caught in local optima, the chicks' parameter F is introduced into the hen update of Eq. (6), yielding Eq. (25). Equation (9) is changed to Eq. (26) to include the rooster's position, which increases the randomness of the chick's position and likewise lessens the possibility of its becoming stuck in local optima.

$$H_{i,j}^{t + 1} = F * H_{i,j}^{t} + k_{1} * rand * (R_{Hi}^{t} - H_{i,j}^{t} ) + k_{2} * rand * (RH^{t} - H_{i,j}^{t} )$$
(25)
$$C_{i,j}^{t + 1} = C_{i,j}^{t} + F * (Hi_{j}^{t} - C_{i,j}^{t} ) + rand * (Ri_{j}^{t} - C_{i,j}^{t} )$$
(26)
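A sketch of the bat-inspired rooster step and parameter decay of Eqs. (23)-(24); the variable names and the illustrative usage at the end are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

def ba_rooster_update(R_i, x_best, A_i, r_i, sigma2):
    """Rooster step of Eq. (23): a small Gaussian move around the global best
    when Rand >= r_i, otherwise the loudness-perturbed CSO-style step."""
    if rng.random() >= r_i:
        return x_best + 0.01 * rng.normal(0.0, 1.0, size=R_i.size)
    eps = rng.uniform(-1.0, 1.0)                       # epsilon in [-1, 1]
    return R_i * rng.normal(0.0, np.sqrt(sigma2), size=R_i.size) + eps * A_i

def ba_param_update(A_i, r_i, t, alpha=0.9, lam=0.9):
    """Eq. (24): loudness decays geometrically; the pulse rate is rescaled
    by (1 - exp(-lam * t)), as printed in the source."""
    return alpha * A_i, r_i * (1.0 - np.exp(-lam * t))

# Illustrative single update of one rooster.
R_i, x_best = rng.random(5), rng.random(5)
A_i, r_i = 1.0, 0.5
R_next = ba_rooster_update(R_i, x_best, A_i, r_i, sigma2=1.0)
A_i, r_i = ba_param_update(A_i, r_i, t=1)
```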

Based on the improved operators, the flowchart of the CSO-BA algorithm is shown in Fig. 5, and the specific steps are as follows:

  • Step 1: Initialize all chicken swarm individuals. The initial positions of the chicken swarm are defined by Eq. (27).

  • Step 2: Apply the CSO algorithm to classify individuals into roosters, mother hens, and chicks according to their fitness values. Update their positions using Eqs. (23), (25), and (26).

  • Step 3: Calculate the updated fitness values and compare them with the previous values. Keep the individuals with better fitness values.

  • Step 4: Update the parameters of sound loudness and frequency.

  • Step 5: Check if the maximum number of iterations is reached. If so, end the loop and output the optimal solution; otherwise, return to Step 2.

    $$x_{i,j} = Lb + (Ub - Lb) * rand$$
    (27)

wherein, in Eq. (27), \({x}_{i,j}\) is a random individual. Ub is the maximum value of the entire search space. Lb is the minimum value of the entire search space, and rand is a random number distributed between 0 and 1.

Fig. 5
figure 5

CSO-BA algorithm flow chart

3.2.2 Chicken swarm-plus deer hunting optimization algorithm (CSO-DH)

The CSO-DH method was suggested by the author in the literature (Gawali and Gawali 2021). Deer Hunting Optimization is an intelligent optimization algorithm motivated by how hunters hunt deer: to arrive at optimal position solutions, it updates the hunter's location according to several hunting strategies (Kanna et al. 2021). Supposing a collection of hunter positions designated P, the data may be represented as illustrated in Eq. (28). Two hunting effect elements are identified, the wind angle and the deer angle, defined by Eqs. (29) and (30) as θ and a, respectively.

$$P = \left\{ {P_{1} ,P_{2} , \cdots ,P_{j} , \cdots ,P_{n} } \right\};1 \le j \le n$$
(28)
$$\theta_{k} = 2\pi r$$
(29)
$$a_{k} = \frac{\pi }{8} \times r$$
(30)

wherein, in Eq. (28), j is the currently selected position, and n is the total number of hunters. Wherein, in Eqs. (29) and (30), r is a random number between 0 and 1, and k represents the current iteration number.

According to Deer Hunting Optimization, three elements determine the hunting position: the optimal hunting position Plead, the wind angle θ and deer angle a that affect the hunt, and the successful hunting position Psuccess. Based on these factors, the strategy updates of the deer hunting algorithm are defined in Eqs. (31), (32), and (33): Eq. (31) represents the optimal-position strategy, Eq. (32) the angle strategy, and Eq. (33) the successful-hunting strategy (Prabhakar and Veena 2023).

$$P_{k + 1} = P^{lead} - M * c * \left| {L \times P^{lead} - P_{k} } \right|$$
(31)
$$P_{k + 1} = P^{lead} - c * \left| {\cos (v) \times P^{lead} - P_{k} } \right|$$
(32)
$$P_{k + 1} = P^{success} - M * c * \left| {L \times P^{success} - P_{k} } \right|$$
(33)

wherein, in Eq. (31), M and L are coefficients defined in Eqs. (34) and (35), respectively, and c is a parameter influenced by the angle factor, ranging between 0 and 2. In Eq. (32), v is an influencing parameter defined as v = Φi+1, where Φi+1 = Φi + di, Φi = θi + π, and di = θi - a.

$$M = \frac{1}{4}\log \left( {k + \frac{1}{{k_{\max } }}} \right) * b$$
(34)
$$L = 2 * r$$
(35)

wherein, in Eq. (34), kmax represents the maximum number of iterations, and b is a random number defined in the range [-1, 1]. Wherein, in Eq. (35), r is a random number defined in the range [0, 1].

The strategy selection method of Deer Hunting Optimization is as follows: when c > 1, Eq. (32) is used to update the hunting position; when c < 1 and L < 1, Eq. (33) is used; when c < 1 and L > 1, Eq. (31) is used.
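A sketch of this strategy-selection logic, transcribing Eqs. (31)-(35) and the c/L rules above; the function name and the argument values in the usage lines are invented for illustration:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def dho_step(P_k, P_lead, P_success, k, k_max, c, v):
    """One Deer Hunting position update, choosing among Eqs. (31)-(33)
    according to the c/L selection rules quoted above."""
    b = rng.uniform(-1.0, 1.0)
    M = 0.25 * math.log(k + 1.0 / k_max) * b          # Eq. (34)
    L = 2.0 * rng.random()                            # Eq. (35)
    if c > 1:                                         # angle strategy, Eq. (32)
        return P_lead - c * np.abs(math.cos(v) * P_lead - P_k)
    if L < 1:                                         # successful-hunt strategy, Eq. (33)
        return P_success - M * c * np.abs(L * P_success - P_k)
    return P_lead - M * c * np.abs(L * P_lead - P_k)  # optimal-position strategy, Eq. (31)

# Invented positions and parameters for a single illustrative step.
P_k, P_lead, P_success = np.array([0.2, 0.4]), np.array([0.5, 0.1]), np.array([0.3, 0.3])
P_next = dho_step(P_k, P_lead, P_success, k=3, k_max=100, c=1.5, v=0.7)
```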

To address the rooster operator's proneness to local optima in the CSO algorithm, the CSO-DH method substitutes the hunting position calculation of Deer Hunting Optimization for the iterative rooster calculation of Eq. (4) (Batra et al. 2021). Specifically, the replacement is triggered by testing whether b < 0: if so, the deer hunting technique updates the rooster operator; otherwise the original rooster equation is applied. The CSO-DH flowchart is displayed in Fig. 6.

Fig. 6
figure 6

CSO-DH algorithm flow chart

According to comparisons of test function results, the CSO-DH algorithm solves high-dimensional engineering problems more accurately and with less error loss than the RL algorithm, DHOA-RL algorithm, and CSO-RL algorithm.

3.2.3 Hybrid ant lion chicken swarm optimization algorithm (CSO-ALO)

In the literature (Deb and Gao 2021), the author proposed the CSO-ALO algorithm. The main principle of the Ant Lion Optimization (ALO) algorithm is to replicate the way antlions pursue their prey in order to accomplish global optimization (Abualigah et al. 2021). Before hunting, the antlion digs a funnel-shaped trap in sandy ground, hides at its bottom, and waits for prey; a wandering ant that accidentally enters the trap is quickly caught and eaten, after which the antlion repairs the trap in preparation for the next hunt (Niu et al. 2022). The ALO algorithm integrates random walks, roulette wheel selection, and elite strategies, and is a population-diverse, highly optimized, easily tuned, and readily implementable search technique (Mani et al. 2018).

Equation (36) shows how the ALO algorithm specifies the ant's random walk procedure. Since the ant's range of movement is limited, using Eq. (36) to simulate the walk directly might produce out-of-bounds positions; therefore, depending on the established limits, Eq. (37) is used to normalize the ant's movement.

$$A\left( t \right) = \left[ {0,sum(2r(t_{1} ) - 1), \cdots ,sum(2r(t_{n} ) - 1)} \right]$$
(36)

wherein, in Eq. (36), A(t) represents the set of positions visited by the ant during its random walk; sum is the cumulative summation operation; t is the step or iteration count of the ant's movement; and r(t) is a random integer taking the value 0 or 1.

$$A_{i}^{t} = \frac{{(A_{i}^{t} - a_{i} ) * (d_{i}^{t} - c_{i}^{t} )}}{{(b_{i} - a_{i} )}} + c_{i}^{t}$$
(37)

wherein, in Eq. (37), \({a}_{i}\) is the minimum value of the i-th dimension variable. \({b}_{i}\) is the maximum value of the i-th dimension variable. \({c}_{i}^{t}\) is the minimum value of the i-th dimension variable during the t-th random walk. \({d}_{i}^{t}\) is the maximum value of the i -th dimension variable during the t-th random walk.
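A short sketch of the random walk of Eq. (36) and its normalization by Eq. (37); the step count and bounds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

def random_walk(n_steps):
    """Eq. (36): cumulative sum of (2*r - 1) with r a random 0/1 integer,
    i.e. a walk of unit steps up or down starting from 0."""
    steps = 2 * rng.integers(0, 2, size=n_steps) - 1
    return np.concatenate(([0], np.cumsum(steps)))

def normalize_walk(walk, c_t, d_t):
    """Eq. (37): rescale the walk from its own range [a, b] into the current
    bounds [c_t, d_t], keeping the ant inside the feasible region."""
    a, b = walk.min(), walk.max()
    return (walk - a) * (d_t - c_t) / (b - a) + c_t

walk = random_walk(100)
bounded = normalize_walk(walk, c_t=-5.0, d_t=5.0)   # illustrative bounds
```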

The movement of ants is influenced by the traps created by antlions. Therefore, a mathematical model is assumed for the antlion traps, as shown in Eq. (38).

$$\left\{ {\begin{array}{*{20}c} {c_{i}^{t} = AL_{j}^{t} + c^{t} } \\ {d_{i}^{t} = AL_{j}^{t} - d^{t} } \\ \end{array} } \right.$$
(38)

wherein, in Eq. (38), \({c}^{t}\) is the minimum value of all variables at the t-th iteration, \({d}^{t}\) is the maximum value of all variables at the t-th iteration, and \(A{L}_{j}^{t}\) is the position of the selected j-th antlion at the t-th iteration.

Only one antlion at a time affects a given ant's movement. To identify which ant is preyed upon by which antlion, the algorithm employs a roulette wheel selection technique (Rani and Garg 2021): the fitter an antlion, the higher its likelihood of catching an ant. As the ant slides toward the antlion, the walk boundaries shrink adaptively, a process the algorithm replicates with Eq. (39).

$$c^{t} = \frac{{c^{t} }}{I},d^{t} = \frac{{d^{t} }}{I}$$
(39)

wherein, in Eq. (39), I represents the scaling factor, defined as shown in Eq. (40).

$$I = \left\{ {\begin{array}{*{20}c} {1,\quad \quad \quad t \le 0.1T} \\ {10^{v} * \frac{t}{T},\,\;t\, > \,0.1T} \\ \end{array} } \right.$$
(40)

wherein, in Eq. (40), T is the maximum number of iterations, and v is a variable that changes as the number of iterations increases.

During the iteration process, if an ant's fitness value becomes lower than that of its corresponding antlion, the antlion is said to have captured the ant (Li et al. 2022a), and Eq. (41) is used to update the antlion's position. The elite antlion is the one with the best fitness rating, and Eq. (42) determines each ant's next position from its walks around the selected antlion and the elite antlion.

$$AL_{j}^{t} = A_{i}^{t} ,\;if\;f\left( {A_{i}^{t} } \right) \le f\left( {AL_{j}^{t} } \right)$$
(41)
$$A_{i}^{t + 1} = \frac{{R_{A}^{t} \left( l \right) + R_{E}^{t} \left( l \right)}}{2}$$
(42)

wherein, in Eq. (41), f is the fitness function. In Eq. (42), \({R}_{A}^{t}\left(l\right)\) is the value generated at the l-th step of the ant's walk around its corresponding antlion after t iterations, \({R}_{E}^{t}\left(l\right)\) is the value generated at the l-th step of the ant's random walk around the elite antlion after t iterations, and l is an arbitrary step index of the random walk.

Based on the defined ALO operators, the algorithm follows the specific steps below:

  • Step 1: Data initialization. Determine the number and dimensions of antlions and ants, and initialize their positions within the feasible domain. Proceed to Step 2.

  • Step 2: Calculate the fitness values for all individuals and determine the elite antlion. Proceed to Step 3.

  • Step 3: Choose antlions using the roulette wheel selection technique and update the boundary parameters according to Eqs. (39) and (40). Based on Eqs. (36) and (37), the ants walk randomly around the selected and elite antlions, and their positions are updated by Eq. (42). Proceed to Step 4.

  • Step 4: Increase the iteration count by 1. Check if the maximum iteration count has been reached. If yes, end the iteration; if no, go back to Step 2. The ALO algorithm flowchart is shown in Fig. 7.

Fig. 7
figure 7

The flow chart of ALO algorithm

The CSO-ALO algorithm is a hybrid of the CSO and ALO algorithms: the ALO algorithm is first used to locate the position of the elite antlion, and the CSO algorithm then further optimizes it (Asna et al. 2022). Figure 8 depicts the flowchart of the hybrid optimization technique. Because the hybrid adds further layers of optimization beyond a typical single algorithm, it benefits accuracy. In the literature (Deb and Gao 2021), the CSO-ALO hybrid was shown to considerably enhance accuracy compared with the ALO and CSO algorithms on various load scheduling problems.

Fig. 8
figure 8

The flow chart of CSO-ALO algorithm

3.3 Other improved chicken swarm optimization algorithms

This section covers other categories of enhanced CSO algorithms, including fractional-order-based improvements, genetic-evolution-based improvements, chaos-theory-based improvements, neural-network-based improvements, and improvements for multi-objective optimization.

3.3.1 Fractional-chicken swarm optimization (Fractional-CSO)

In the literature (Cristin et al. 2021), the author proposed the Fractional-CSO algorithm, which combines fractional calculus with the hierarchy and flock-behavior concepts of CSO. To address the rooster operator's tendency to become caught in local optima, the rooster update of Eq. (4) is changed to Eq. (43); the modified calculation draws on several past iterations, thereby enhancing the global randomness of the chicken flock.

$$R_{x,y}^{t + 1} = R_{x,y}^{t} \left( {\beta + \delta \left( {0,\kappa^{2} } \right)} \right) + \frac{1}{2}\beta R_{x,y}^{t - 1} + \frac{1}{6}\left( {1 - \beta } \right)R_{x,y}^{t - 2} + \frac{1}{24}\beta \left( {1 - \beta } \right)\left( {2 - \beta } \right)R_{x,y}^{t - 3}$$
(43)

wherein, in Eq. (43), \(\beta\) is the fractional order, which weights the contributions of the past rooster positions to the newly calculated term. \({\kappa }^{2}\) is defined as shown in Eq. (44).

$$\kappa^{2} = \left\{ {\begin{array}{*{20}l} {1,} & {if\;f_{x} \le f_{m} ;\;m \in [1,A],\;m \ne x} \\ {\exp \left( {\frac{{f_{m} - f_{x} }}{{\left| {f_{x} } \right| + y}}} \right),} & {otherwise} \\ \end{array} } \right.$$
(44)

wherein, in Eq. (44), f represents the fitness value of the Rooster individual, m is a random individual index, and y is a very small number.
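A sketch of the fractional rooster update of Eq. (43), keeping a four-position history window; the rolling-deque bookkeeping and parameter values are our own illustrative choices:

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(9)

def fractional_rooster_update(R_t, R_t1, R_t2, R_t3, beta, kappa2):
    """Eq. (43): the new rooster position combines the Gaussian-perturbed
    current term with three past positions under fractional-order weights."""
    noise = rng.normal(0.0, np.sqrt(kappa2), size=R_t.size)
    return (R_t * (beta + noise)
            + 0.5 * beta * R_t1
            + (1.0 / 6.0) * (1.0 - beta) * R_t2
            + (1.0 / 24.0) * beta * (1.0 - beta) * (2.0 - beta) * R_t3)

# Keep a rolling window of the last four positions, newest first (illustrative).
d = 5
history = deque([rng.random(d) for _ in range(4)], maxlen=4)
new_pos = fractional_rooster_update(*history, beta=0.6, kappa2=1.0)
history.appendleft(new_pos)
```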

3.3.2 CSO based clustering algorithm with genetic algorithm (CSOCA-GA)

In the literature (Osamy et al. 2020), the author proposed the CSOCA-GA algorithm, whose goal is to solve the discrete data clustering problem. The algorithm first binarizes the chicken flock's individuals with Eq. (45) and employs the discrete individuals as the output of the program's optimal solution, thereby achieving parameter clustering. It also incorporates the crossover operator of the genetic algorithm into each iteration, providing a certain probability of escaping local optima and increasing the accuracy of the algorithm; this prevents the algorithm from getting stuck in subpar local optima and producing inferior clustering results. Figure 9 depicts the genetic algorithm's flowchart.

$$b_{x} = \left\{ {\begin{array}{*{20}c} {1,\quad if\,\frac{1}{{1 + e^{ - x} }}\; > \;0.5} \\ {0,\;\quad \;otherwise\;\;\;\;\;} \\ \end{array} } \right.$$
(45)

wherein, in Eq. (45), x is the fitness value of the corresponding parameter.
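A one-function sketch of the sigmoid binarization of Eq. (45); the sample values are invented:

```python
import numpy as np

def binarize(x):
    """Eq. (45): a component becomes 1 when its sigmoid exceeds 0.5
    (equivalently, when x > 0), and 0 otherwise."""
    return (1.0 / (1.0 + np.exp(-x)) > 0.5).astype(int)

print(binarize(np.array([-2.0, -0.1, 0.3, 4.0])))   # -> [0 0 1 1]
```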

Fig. 9
figure 9

The flowchart of genetic algorithm

3.3.3 Chaotic chicken swarm optimization (CCSO)

Both the parameter initialization and the operator iterations of the CSO algorithm require procedures that produce random numbers. The CCSO algorithm applies chaos theory to these random processes. Experimental findings show that adopting chaotic sequences instead of pseudo-random numbers for population initialization, selection, crossover, and mutation operations can significantly alter the course of the algorithm and produce superior outcomes.

According to the literature (Li et al. 2019), the rooster, hen, and chick operators in the CSO algorithm are randomized using the tent map and logistic map from chaos theory, which raises the likelihood that the algorithm eludes local optima. Five test datasets were employed to demonstrate that CCSO outperforms the CSO and PSO algorithms in a variety of data processing areas.

The tent map is shown in Eq. (46), and the logistic map in Eq. (47).

$$P_{i + 1} = \left\{ {\begin{array}{*{20}l} {P_{i} /Const1,} & {P_{i} < Const1} \\ {Const2 * (1 - P_{i} ),} & {otherwise} \\ \end{array} } \right.$$
(46)
$$P_{i + 1} = Const3 * P_{i} * (1 - P_{i} )$$
(47)

wherein, in Eq. (46), Pi+1 is the new position of the individual, and Pi is the current position. Const1 is set to 0.7, Const2 is set to 3.33. Wherein, in Eq. (47), Const3 is set to 4.
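A sketch of the two chaotic maps as reconstructed in Eqs. (46)-(47), used as drop-in generators for numbers in [0, 1]; the seed value is arbitrary:

```python
def tent_map(p, c1=0.7, c2=3.33):
    """Tent map of Eq. (46): stretch values below the breakpoint, fold above it."""
    return p / c1 if p < c1 else c2 * (1.0 - p)

def logistic_map(p, c3=4.0):
    """Logistic map of Eq. (47), in the standard p * (1 - p) form."""
    return c3 * p * (1.0 - p)

# Generate a short chaotic sequence to stand in for pseudo-random draws in [0, 1].
seq, p = [], 0.317                    # arbitrary seed avoiding the map's fixed points
for _ in range(5):
    p = logistic_map(p)
    seq.append(p)
print(seq)
```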

The SCCSO method, which uses chaos theory to update the position of the individual with the worst fitness value in each iteration, was also proposed in the literature. Equation (48) gives the computation of the new position.

$$X_{w} = \left\{ \begin{gathered} X_{Lr} ,\quad \quad if\;Rand2 \ge 1 - \frac{t}{{t_{\max } }} \hfill \\ X_{Lr} + Rand1 * \left( {2C_{k} - 1} \right),\quad otherwise \hfill \\ \end{gathered} \right.$$
(48)

wherein, in Eq. (48), \(X_{w}\) is the updated position of the worst individual, and \({X}_{Lr}\) is the position of the individual with the worst fitness value. Rand1 and Rand2 are random numbers uniformly distributed in [0,1], t is the current iteration count, tmax is the maximum number of iterations, and Ck is a parameter of the chaotic sequence defined in Eq. (49).

$$C_{k + 1} = 4 * C_{k} * \left( {1 - C_{k} } \right)$$
(49)
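A compact Python sketch of this worst-individual update, assuming a NumPy position vector and illustrative names, might read:

```python
import numpy as np

def sccso_worst_update(x_worst, c_k, t, t_max, rng=None):
    # Eqs. (48)-(49): chaotically perturb the worst individual's position.
    rng = rng or np.random.default_rng()
    if rng.random() >= 1.0 - t / t_max:            # Rand2 branch of Eq. (48)
        x_new = x_worst.copy()
    else:
        x_new = x_worst + rng.random() * (2.0 * c_k - 1.0)
    c_next = 4.0 * c_k * (1.0 - c_k)               # Eq. (49): advance the sequence
    return x_new, c_next
```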

3.3.4 Adaptive chicken swarm optimization (ACSO)

Adaptive Chicken Swarm Optimization (ACSO) targets the low convergence accuracy and the tendency to fall into local optima that the flock algorithm exhibits on complicated high-dimensional problems. By enhancing the iterative process, ACSO selects its parameters adaptively during the run, thereby guaranteeing the flock algorithm's ability to perform deep global searches (Wang et al. 2021a).

The author suggested the ACSO algorithm in the literature. The choice of constants in the tent map and logistic map affects the speed and accuracy of the algorithm and was therefore a topic of discussion. Equation (50) illustrates how ACSO updates the tent map, and the initialization values for the parameters in Eq. (51) are established in accordance with the modified tent map.

$$P_{i + 1} = \left\{ {\begin{array}{*{20}c} {\frac{10}{7}P_{i} ,\quad \quad \quad P_{i} \; < 0.7\quad } \\ {\frac{10}{3}P_{i} \left( {1 - P_{i} } \right),otherwise} \\ \end{array} } \right.\;$$
(50)
$$X_{ini} = X_{\min } + P_{i + 1} \left( {X_{\max } - X_{\min } } \right)$$
(51)

wherein, in Eq. (51), Xini is the initialized parameter in the ACSO algorithm, and Xmax and Xmin are the upper and lower bounds of the parameter, respectively.
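Under the assumption of a scalar parameter, Eqs. (50) and (51) can be sketched as follows (function names are illustrative):

```python
def acso_tent_map(p):
    # Eq. (50): modified tent map used by ACSO.
    return (10.0 / 7.0) * p if p < 0.7 else (10.0 / 3.0) * p * (1.0 - p)

def acso_initialize(p0, n, x_min, x_max):
    # Eq. (51): scale a chaotic sequence into the parameter bounds.
    positions, p = [], p0
    for _ in range(n):
        p = acso_tent_map(p)
        positions.append(x_min + p * (x_max - x_min))
    return positions
```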

To prevent getting stuck in local optima, the ACSO algorithm combines the notions of search rate and search range. To this end, it incorporates the hen's influence factor w, which is composed of the search rate influence factor h and the search range influence factor s, and replaces the hen operator of Eq. (3) with Eq. (52). Simulation studies on seven distinct test functions show that the ACSO method achieves superior accuracy, stability, and computing efficiency compared to the CSO algorithm.

$$H_{i,j}^{t + 1} = wH_{i,j}^{t} + k_{1} * rand * \left( {R_{Hi}^{t} - M_{i,j}^{t} } \right) + k_{2} * rand * \left( {RH^{t} - H_{i,j}^{t} } \right)$$
(52)

wherein, in Eq. (52), w is the hen's influence factor, which is defined in Eq. (53).

$$w = a_{0} + a_{1} e^{{ - a_{2} \frac{h}{s}}}$$
(53)

wherein, in Eq. (53), a0, a1, a2 are random numbers distributed in the range (0,1]. h is the search rate influence factor ranging between (0,1], where a larger h corresponds to a slower search rate, and the algorithm converges when h equals 1. s is the search range influence factor ranging between (0,1], where a smaller s corresponds to a larger search range, and all individuals in the algorithm have the same fitness value when s equals 1.
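The following Python sketch illustrates Eqs. (52) and (53); the attraction terms are written in the usual CSO form of a hen moving toward its group rooster and a random rooster, and the names and sample values for a0, a1, a2 are assumptions:

```python
import numpy as np

def hen_influence(h, s, a0=0.3, a1=0.5, a2=0.7):
    # Eq. (53): influence factor w built from the search-rate factor h
    # and the search-range factor s; a0, a1, a2 lie in (0, 1].
    return a0 + a1 * np.exp(-a2 * h / s)

def acso_hen_update(hen, group_rooster, rand_rooster, k1, k2, h, s, rng=None):
    # Eq. (52): inertia-weighted hen update.
    rng = rng or np.random.default_rng()
    w = hen_influence(h, s)
    return (w * hen
            + k1 * rng.random() * (group_rooster - hen)
            + k2 * rng.random() * (rand_rooster - hen))
```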

3.3.5 Algorithm complexity analysis

Assuming that the population size of the algorithm is N, the dimension of the search space is D, and the maximum number of iterations is Tmax, the complexity of PSO-CSO comprises: the population initialization complexity O(ND), the fitness evaluation complexity O(ND), the position update complexity of the global and local search O(N²logN), the fitness sorting complexity O(N²), and the control parameter update complexity O(ND). The overall complexity of the PSO-CSO algorithm is then given in Eq. (54):

$$O(PSO - CSO) = O(ND) + O(T_{\max } )O(ND + N^{2} \log N + N^{2} + ND)$$
(54)

The algorithm time complexity of CSO is shown in Eq. (55):

$$O(CSO) = O(ND) + O(T_{\max } )O(N^{2} \log N + N + ND)$$
(55)

3.4 Analysis of different improved algorithms

The primary directions for CSO algorithm improvement can be split into two categories. The first concerns the field of optimization (FOO): adapting the algorithm to particular optimization objective functions, with CSOCA-GA as a representative example. The second concerns the process of optimization (POO): accelerating convergence and increasing precision, with ICSO-VI and CSO-ALO as representative examples.

Through MATLAB simulations, all the aforementioned enhanced algorithms have been shown to improve on the CSO algorithm to varying degrees. However, there is presently little comparative information on these enhanced algorithms, which can be examined in upcoming reviews and more in-depth studies. Tables 3, 4 and 5 compile the general improvement algorithms, hybrid improvement algorithms, and other improvement algorithms discussed in this section.

Table 3 General improvements of CSO
Table 4 Hybrid improvements of CSO
Table 5 Other improvements of CSO

4 Function testing and performance analysis

In order to study the effectiveness of the chicken swarm optimization algorithm, we tested six algorithms: the particle swarm optimization algorithm (PSO) (Nayak et al. 2023), the chicken swarm optimization algorithm (CSO) (Li et al. 2021b), the sine–cosine optimization algorithm (SCA) (Rizk-Allah and Hassanien 2023), the multiverse optimization algorithm (MVO) (Mishra et al. 2022), the tree species optimization algorithm (TSA) (Carreon-Ortiz and Valdez 2022), and the wind driven algorithm (WDO) (Ibrahim et al. 2020). This paper conducts comparative experiments using 9 standard benchmark functions, which simulate the different difficulties of actual search spaces; see Table 6 for function details. To make the results more convincing, 30 independent runs were performed for each test function in all cases, with a maximum of 500 iterations and a population size of 50.

Table 6 Test functions

The integrated development environment for all experiments is Matlab_R2020b, and the operating system is 64-bit Windows 11. In order to verify the stability and convergence of the six algorithms, the convergence comparison chart of each algorithm on the test functions is shown in Fig. 10.

Fig. 10 Comparison of six algorithms to find the optimal solution

The horizontal axis in Fig. 10 represents the number of iterations, and the vertical axis represents the optimal fitness function value. From the convergence curves and the final data table, it can be seen that for the F1 function, CSO has the fastest convergence speed and its convergence curve lies below those of the other algorithms, while the WDO algorithm obtains the best final result. The average fitness value obtained using CSO optimization is 18.35, which is close to the theoretical optimum; PSO stops converging prematurely and has the worst performance. This shows that CSO's optimization of the F1 function is very stable, with higher repeatability than the other algorithms. For function F2, SCA has the best performance and the fastest convergence speed. For F3, WDO delivers better results and faster convergence than the other algorithms. For F4, WDO gives the best results, followed by CSO. For F5, the WDO and CSO algorithms perform best. For F6, the CSO algorithm has the fastest convergence speed and the best results. For F7, the six algorithms are still approaching the optimal value and have not reached convergence. For F8 and F9, the CSO algorithm again has the fastest convergence speed and the best results.

Judging from the different test results, the performance of each algorithm needs to be analyzed in detail for a given optimization application to determine which algorithm is better suited to it. Generally speaking, the chicken swarm algorithm is the fastest and obtains the best results when solving the minimum value of most functions.

Under the same standard test function, the mean value represents the convergence accuracy of the algorithm, and the standard deviation represents its stability. Obviously, the smaller the mean value and standard deviation, the stronger the ability of the algorithm to avoid local solutions and locate the global optimum. It can be seen from Table 7 that the CSO and WDO algorithms successfully find excellent solutions on the 9 test functions, obtaining the global optimal value on 5 of them, while the PSO, SCA, MVO, and TSA algorithms succeed in finding an excellent solution (the global optimal value) on only some of the functions.

Table 7 Comparison results of 9 function data

5 Application of chicken swarm optimization

The CSO algorithm and its improved variants continue to be used by academics in the fields of data mining, wireless sensor networks for the Internet of Things, robotics engineering, electric power, feature extraction, and image processing to solve optimization problems (Mirbabaie et al. 2021). This chapter concentrates on these applications, which are grouped in Tables 8, 9, 10, 11 and 12.

Table 8 Application of CSO in data mining
Table 9 Application of CSO in wireless sensor network
Table 10 Application of CSO in robotics engineering
Table 11 Application of CSO in electrical engineering
Table 12 Application of CSO in feature extraction

5.1 Data mining

Data mining is the process of using clever algorithms to extract hidden, undiscovered, and valuable information from a particular set of data (Liang et al. 2020b). Many academics are investigating the use of chicken swarm optimization (CSO) in the field of data mining since the quick and high-precision search requirements match the traits of CSO algorithms.

5.1.1 Data of classification

For the categorization of violent films, the author of a paper (Abdullahi et al. 2020) suggested combining the CSO method with a deep neural network (DNN). A DNN is a neural network made up of input, hidden, and output layers, with numerous hidden layers (Gordan et al. 2022); Fig. 11 depicts the network's structure. The input layer, shown on the left, corresponds in this application to the input data of digital movies. The multi-layer structure hidden between input and output gives the model's features more expressive power, and the output layer produces the categorization outcomes for violent videos. The quality of DNN classification results depends on the expressive capability of the hidden layers; however, in real-world circumstances, relying solely on conventional techniques, such as different activation functions, frequently yields low accuracy and sluggish computing performance. Therefore, in the literature (Abdullahi et al. 2020), the input data is optimized using the CSO algorithm at the input layer, with the mean squared error (MSE) serving as the fitness function. This yields video data with improved precision before features are extracted by the hidden layers.

Fig. 11 Schematic diagram of DNN network

In the literature (Kumari et al. 2022a), the author proposed combining SA-CSO with K-nearest neighbors (KNN) and fuzzy convolutional neural networks (FCNN) for post-harvest grading of mangoes (Liu et al. 2022). The key to intelligent fruit grading is extracting fundamental information from fruit photos, such as the gray-level co-occurrence matrix (GLCM), local binary pattern (LBP), discrete Fourier transform (DFT), and shape characteristics. The data collected from the fruit comprises normal and abnormal segmentation characteristics. In the study, the feature vectors were optimized using the SA-CSO method, and KNN and CNN were used to model the classification of mango defects and levels of ripeness. Testing was done using the Kesar Mango dataset from the Mendeley repository. The findings demonstrated that, in comparison to PSO, WOA, CSO, and other algorithms, the combination of SA-CSO with KNN and FCNN increased the accuracy of optimal feature extraction.

5.1.2 Data clustering

In the literature (Harshavardhan et al. 2023), the author proposed the utilization of the CSO algorithm in data clustering. The classification of comparable items into homogeneous groups is the aim of data clustering (Khorshid and Abdulazeez 2021). The Euclidean distance, defined as stated in Eq. (56), determines how similar two objects are. The study used this similarity metric as the CSO algorithm's optimization objective function. The Iris, Ecoli, Ionosphere, and Cancer datasets were used to assess the algorithm's performance. The findings demonstrated that the CSO algorithm exhibited enhanced accuracy in data clustering compared to the GA, CS, and PSO algorithms.

$$dist\left( {o_{i} ,o_{j} } \right) = \left( {\sum\limits_{p = 1}^{m} {\left| {o_{ip} - o_{jp} } \right|^{2} } } \right)^{\frac{1}{2}}$$
(56)

wherein, in Eq. (56), m is the number of target attributes, and oip is the value of the p-th attribute of object oi.
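As a sketch, the distance of Eq. (56) and a plausible clustering objective built on it could be written as follows; the sum-of-nearest-centroid-distances form is an assumption for illustration, not necessarily the paper's exact fitness:

```python
import numpy as np

def dist(o_i, o_j):
    # Eq. (56): Euclidean distance between two objects over m attributes.
    return float(np.sqrt(np.sum(np.abs(o_i - o_j) ** 2)))

def clustering_fitness(data, centroids):
    # Illustrative CSO objective: total distance from each object to its
    # nearest centroid (smaller means tighter clusters).
    return sum(min(dist(x, c) for c in centroids) for x in data)
```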

In the literature (Yanto et al. 2020), the author used the enhanced ACSO algorithm to tackle the challenge of community discovery in intricate social networks. The main goal of community discovery is to redefine the topological structure of complex networks by grouping nodes with similar characteristics (Guo et al. 2022). The ACSO method was modified in the literature (Yanto et al. 2020) into a discrete swarm intelligence algorithm appropriate for clustering problems, with the modularity, NMI, and ground truth of each category employed as objective functions. Testing on datasets such as the Zachary Karate Club and American college football datasets demonstrates that ACSO considerably increases clustering accuracy. The research also emphasizes that the method can be further enhanced in the future to accommodate application scenarios with a wider scope.

A multi-objective chicken swarm optimization method (MOCSO), which addresses the multi-objective issues faced by various neural network algorithms (Su et al. 2024), was suggested by the author in the literature (Rabani and Soleimanian 2019). It is applied to four kinds of tri-objective functions and five kinds of bi-objective functions. Compared to PSO, GA, and EA on multi-objective tasks, MOCSO improves convergence and the optimum bounds according to the performance indicators Generational Distance (GD), Spacing, and Maximum Spread (MS).

In the literature (Wei et al. 2021), the author focused on the problem that unsupervised clustering of Alzheimer's disease datasets tends to fall into local optima in deep learning models (Garud et al. 2021). It is suggested to employ the search concept of the CCSO method to fine-tune the parameters of deep-learning clustering. Comparison with the FCM and IFCM clustering algorithms shows that initializing the parameters with CCSO can greatly increase the accuracy of the outcomes.

5.1.3 Prediction

The CSO method is used to initialize the weight values of a neural network in a prediction model of crude oil prices, according to the literature (Dhanusha et al. 2022). The neural network model is then given the optimal weights, and the mean square error (MSE) is used as the criterion to test whether the halting condition is met (Alashwal et al. 2019). If so, the predicted outcomes are output; otherwise, the weights continue to be optimized using the CSO method. Compared with ABCNN (Artificial Bee Colony Neural Network) and ABCBP (Artificial Bee Colony Back-Propagation), CSO optimization delivers greater accuracy and lower MSE.

The CSO algorithm is used in the feature extraction stage of predicting cervical cancer (Hodson 2022) in the literature (Khan et al. 2019) and (AbuKhalil et al. 2022). The feature values are encoded as the chickens of the CSO algorithm and updated iteratively. These values are then compared with random numbers between 0 and 1: a feature is recorded as 1 if its value exceeds the random number, and as 0 otherwise. Simulation trials show that CSO increases the accuracy of the findings when compared to prediction algorithms such as KNN, MLP, SVM, CART, and CNN.

In the literature (Rizk-Allah and Hassanien 2023), the author addressed the inability of traditional point-based prediction algorithms to accurately capture landslide displacement (Akter et al. 2021). The proposed approach uses two support vector machines (SVM) to produce upper and lower bounds for the displacement. By establishing a prediction interval of landslide displacement (Eq. 57), this optimization increases the accuracy of the point prediction algorithm. Given a dataset, the SVM output is assessed using the PICP and PINAW metrics (Eqs. 58 and 60), which are combined into the comprehensive index CWC (Eq. 61). The literature also suggests using Eq. (61) as the ACSO algorithm's objective function to optimize the prediction parameters. According to simulation trials, the ACSO method increases forecast accuracy compared with the CSO, SMO, and GWO algorithms.

$$\hat{I}^{\left( \alpha \right)} \left( {x_{i} } \right) = \left[ {\hat{L}^{\left( \alpha \right)} \left( {x_{i} } \right),\hat{U}^{\left( \alpha \right)} \left( {x_{i} } \right)} \right]$$
(57)

wherein, in Eq. (57), \({\widehat{L}}^{\left(\alpha \right)}\left({x}_{i}\right)\) is the lower limit of the prediction, and \({\widehat{U}}^{\left(\alpha \right)}\left({x}_{i}\right)\) is the upper limit of the prediction.

$$PICP = \frac{1}{N}\sum\limits_{i = 1}^{N} {\delta_{i} }$$
(58)

wherein, in Eq. (58), \(\delta\) is defined as shown in Eq. (59).

$$\delta_{i} = \left\{ {\begin{array}{*{20}c} {1,\quad if\;\hat{L}^{\left( \alpha \right)} \left( {x_{i} } \right) \le y_{i} \le \hat{U}^{\left( \alpha \right)} \left( {x_{i} } \right)} \\ {0,\quad \quad \quad \quad otherwise\quad \quad \quad } \\ \end{array} } \right.$$
(59)
$$PINAW = \frac{1}{NR}\sum\limits_{i = 1}^{N} {\left( {\hat{U}^{\left( \alpha \right)} \left( {x_{i} } \right) - \hat{L}^{\left( \alpha \right)} \left( {x_{i} } \right)} \right)}$$
(60)

wherein, in Eq. (60), R is the range of the target values, which normalizes the average width of the prediction intervals.

$$CWC=PINAW(1+\gamma (PICP){e}^{-\tau (PICP-\kappa )})$$
(61)

wherein, in Eq. (61), \(\tau\) is the control parameter of the Coverage Width Criterion (CWC), and \(\gamma\) is defined as shown in Eq. (62).

$$\gamma (PICP) = \left\{ \begin{gathered} 1,\quad if\;PICP\, < \kappa \hfill \\ 0,\quad {\text{otherwise}} \hfill \\ \end{gathered} \right.$$
(62)
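A compact Python sketch of these interval metrics might look as follows; the values chosen for \(\tau\) and \(\kappa\) here are illustrative assumptions:

```python
import numpy as np

def interval_metrics(y, lower, upper, tau=50.0, kappa=0.9):
    # Eqs. (58)-(62): prediction-interval quality metrics.
    covered = (y >= lower) & (y <= upper)              # delta_i, Eq. (59)
    picp = covered.mean()                              # Eq. (58)
    r = y.max() - y.min()                              # normalization range R
    pinaw = (upper - lower).mean() / r                 # Eq. (60)
    gamma = 1.0 if picp < kappa else 0.0               # Eq. (62)
    cwc = pinaw * (1.0 + gamma * np.exp(-tau * (picp - kappa)))  # Eq. (61)
    return picp, pinaw, cwc
```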

5.2 Wireless sensor network (WSN)

In the spatial domain, sensors are distributed and form a network to perceive the surrounding environmental data in a unified and autonomous manner. Wireless Sensor Networks (WSN) (Liu et al. 2020b) are the name given to this technology. WSN has been widely used in various fields such as national defense (Majid et al. 2022), military (Wang and Zhu 2021), industrial monitoring (Pragadeswaran et al. 2021), environmental detection (Aponte-Luis et al. 2018), and smart living (Safaldin et al. 2021), due to its low cost, high scalability, and stability (Li et al. 2022b). However, due to the wide coverage of WSN and the limited energy of nodes, it is difficult to provide precise positioning devices similar to GPS for each node or replace the batteries of each node regularly (Elsmany et al. 2019). Numerous academics have used intelligent optimization algorithms on WSN to solve these problems, with positive outcomes.

5.2.1 Positioning

A small number of GPS sensors (anchor nodes) are used in real-world deployments to get around GPS's limitations, while the remaining sensors rely on localization algorithms that make use of the precise locations supplied by the GPS nodes (Han et al. 2022). According to the literature (Tripathi et al. 2020), development of this technology should focus on nature-inspired algorithms, with the reduction of computing time and localization errors as the primary goals. Equation (63), which accounts for the influence of the environment on wireless sensor networks, gives the estimated distance between a node and an anchor node:

$$\hat{d} = d_{i} + n_{noise}$$
(63)

wherein, in Eq. (63), \(d_i\) represents the true distance between the node and the i-th anchor node, and \({n}_{noise}\) is Gaussian noise introduced by the environment.

The CSO optimization procedure, whose objective function is given in Eq. (64), is first described in the literature (Tripathi et al. 2020) to determine the ideal distance.

$$f = \frac{1}{M}\sum\limits_{i = 1}^{M} {(d_{i} - \hat{d}_{i} )^{2} }$$
(64)

wherein, in Eq. (64), M represents the number of anchor nodes within the specified area.

EL is set as the optimized error metric to evaluate the results of CSO optimization, and its definition is given in Eq. (65).

$$EL = \frac{{\sum\limits_{i = M + 1}^{N} {\sqrt {\left( {x - x_{i} } \right)^{2} + \left( {y - y_{i} } \right)^{2} } } }}{{N_{L} }}$$
(65)

wherein, in Eq. (65), NL is the number of unknown nodes. N is the total number of nodes. (xi, yi) is the final optimized positions of the target nodes, and (x, y) is the optimal position obtained by the optimization algorithm after each iteration.
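A sketch of the localization fitness of Eq. (64) in Python (the array names are assumptions) might be:

```python
import numpy as np

def localization_fitness(candidate_xy, anchors_xy, d_hat):
    # Eq. (64): mean squared error between the noisy measured distances
    # d_hat and the distances from a candidate position to the M anchors.
    d = np.linalg.norm(anchors_xy - candidate_xy, axis=1)
    return float(np.mean((d - d_hat) ** 2))
```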

The optimization results achieved using the CSO method enhanced the accuracy by 55% compared to the PSO and BPSO optimization algorithms, according to experiments and simulations reported in the literature (Tripathi et al. 2020). In terms of speed, the CSO algorithm decreased the computing time by 50%. The CSO optimization algorithm outperformed the BPSOA optimization algorithm by 10% in terms of accuracy while speeding up calculation by 30%.

The author of the literature (Al Shayokh and Shin 2017) largely concentrated on the usage of WSN in the field of deep mining. An extensive wireless sensor network is set up in deep mine shafts to monitor the environment (Gupta and Mahaur 2021). The received signal strength indication (RSSI) parameter (Muduli et al. 2018) determines the theoretical computed distance for the placement of each node in the network. The real placement distance is less than the estimated distance because of the complicated mining environment and the low connection distance between nodes. Within each node's communication radius, there must be a minimum of three other nodes. Clustering the nodes in a WSN is a frequent practice to increase accuracy and reduce energy usage during transmission. After clustering, an optimization technique must be created to improve overall communication by setting anchor nodes inside each cluster (Nagah Amr et al. 2021).

The literature (Al Shayokh and Shin 2017) made use of the chicken swarm optimization's unique properties, treating anchor nodes as individual chickens and the remaining m nodes as food in a 2D space. Equation (66) defines the ideal food positions that the anchor nodes should seek, and Eqs. (67) and (68) define the objective function.

$$X = \left[ {\left( {x_{1} ,y_{1} } \right), \cdots ,\left( {x_{m} ,y_{m} } \right)} \right] = \left[ {x_{1} ,x_{2} ,x_{3} ,x_{4} , \cdots ,x_{2m - 1} ,x_{2m} } \right]$$
(66)
$$\begin{gathered} f\left( x \right) = f\left( {x_{1} ,x_{2} , \cdots ,x_{2m} } \right) \hfill \\ = \frac{{\sum\limits_{k = 1}^{m} {\sum\limits_{l = 1}^{m} {\left( {\frac{{\left\| {\left( {x_{2k - 1} ,x_{2k} } \right) - \left( {x_{2l - 1} ,x_{2l} } \right)} \right\| - D\left( {k,l} \right)}}{{D\left( {k,l} \right)}}} \right)} } }}{{m^{2} /E\left( {k,l} \right)}} \hfill \\ \end{gathered}$$
(67)
$$E\left( {k,l} \right) = \left\{ {\begin{array}{*{20}c} {0,\quad D\left( {k,l} \right) = 0} \\ {1,\quad D\left( {k,l} \right) \ne 0} \\ \end{array} } \right.$$
(68)

wherein, in Eq. (67), D is a two-dimensional matrix containing the pairwise distances of all nodes, and E(k,l) in Eq. (68) excludes node pairs with zero distance from the objective.

The CSO algorithm is used to compute the objective function in order to produce the optimum parameters. The final optimal node locations are then produced using the Wheel graph technique utilizing these adjusted settings. The CSO-W localization optimization algorithm is the name given to this coupled method in the literature (Al Shayokh and Shin 2017). By comparing the accuracy performance of the CSO-W algorithm to that of the D3D-MAP and CSO algorithms, it is found that the CSO-W algorithm produces greater localization accuracy.

In the literature (Yu et al. 2019), the objective function of Eq. (64) is changed to Eq. (69), and the error target EL is changed to Eq. (70); on this basis, an improved CSO algorithm, ICSO-I, is presented. Simulation tests showed that the ICSO-I method outperforms the original CSO localization optimization and the PSO algorithm in terms of localization accuracy.

$$f\left( {x,y} \right) = \frac{1}{M}\sum\limits_{i = 1}^{M} {\left( {\sqrt {\left( {x - x_{i} } \right)^{2} + \left( {y - y_{i} } \right)^{2} } - d_{i} } \right)^{2} }$$
(69)
$$E_{L} = \frac{1}{{N_{L} }}\sum\limits_{i = 1}^{L} {\left( {\left( {x_{i} - \hat{x}_{i} } \right)^{2} + \left( {y_{i} - \hat{y}_{i} } \right)^{2} } \right)}$$
(70)

wherein, in Eqs. (69) and (70), (x, y) are the coordinates of the anchor nodes, (xi, yi) are the coordinates of the unknown nodes being optimized, L denotes the number of optimized nodes, and \((\hat{x}_i, \hat{y}_i)\) is the optimal position obtained after each iteration for node i.

5.2.2 Energy management

The lifespan of a wireless sensor network (WSN) depends on the energy consumption of the nodes in the system (Karim et al. 2021). In practice, the overall lifespan is determined by the node whose failure brings the system down; this could be a long-lifespan node that is infrequently used or a short-lifespan node that is used often (Abdulzahra and Al-Qurabat 2022). Optimization methods should therefore reduce the usage frequency of short-lifespan nodes in order to prolong the lifespan of the WSN (Dalal et al. 2022). With a view to minimizing energy consumption and extending the lifespan of sensor nodes, LEACH is a wireless sensor network routing protocol that tackles the crucial problem of evenly spreading network load across sensor nodes. LEACH divides the operation of the entire sensor network into periodic rounds, each consisting of a cluster formation phase and a steady data transmission phase (Angurala et al. 2022). A threshold random number determines how LEACH clusters form, and this random formation has downsides because it causes uneven node clustering and higher energy usage.

The CSO method is described in the literature (Sandeli et al. 2021) as a replacement for the LEACH routing protocol's random cluster generation procedure. The fitness function is the energy state of the nodes, and after a predetermined number of iterations, the CSO solution is used to decide the cluster node composition. Simulation studies show that the CSO algorithm outperforms the LEACH routing protocol, with fewer nodes experiencing energy exhaustion each round and a longer lifespan for the WSN system as a whole.

In the literature (Osamy et al. 2020), a WSN system with n nodes is constructed based on LEACH. Each node has the same initial energy and remains stationary. The energy consumption of nodes is defined as the transmission energy ETx and reception energy ERx, as specified in Eqs. (71) and (72):

$$E_{Tx} \left( {b,d} \right) = E_{elec} \times b + v \times b \times d^{p}$$
(71)
$$E_{Rx} \left( b \right) = E_{elec} \times b$$
(72)

wherein, in Eqs. (71) and (72), Eelec is the electronic energy. b is the packet size. d is the transmission distance. v is the energy amplification factor due to distance, and p is a loss parameter ranging from 2.0 to 4.0.

Based on the transmission and reception energy, within a cluster, assuming there are m nodes, the energy loss functions for the cluster head node (CH) and other nodes (CM) can be defined as in Eqs. (73) and (74). The overall energy loss of the cluster, E, is given by Eq. (75). The optimization objective of the chicken swarm algorithm is to minimize energy loss, which is defined as the fitness function in Eq. (76):

$$e_{CH} = \left( {m - 1} \right)E_{Rx} \left( b \right) + mbE_{DA} + E_{Tx} \left( {b,d_{toBS} } \right)$$
(73)
$$e_{CM} = E_{Tx} \left( {b,d_{toCH} } \right)$$
(74)
$$E = e_{CH} + e_{CM}$$
(75)
$$F = \frac{{\sum\nolimits_{i = 1}^{k} {E\left( i \right)} }}{{a + \sum\nolimits_{i = 1}^{k} {E\left( i \right)} }} + \left( {\frac{\beta }{a + \beta }} \right)$$
(76)

wherein, in Eqs. (73) to (76), dtoBS and dtoCH are the distances from nodes to the base station and to the CH node, respectively; k is the total number of CH nodes; β is the total number of selected CH nodes; and a is a positive constant.
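The radio-energy model of Eqs. (71) to (75) is easy to sketch; the numeric constants below are illustrative radio-model assumptions, not values taken from the paper:

```python
def e_tx(b, d, e_elec=50e-9, v=100e-12, p=2.0):
    # Eq. (71): energy to transmit b bits over distance d.
    return e_elec * b + v * b * d ** p

def e_rx(b, e_elec=50e-9):
    # Eq. (72): energy to receive b bits.
    return e_elec * b

def cluster_energy(m, b, d_to_bs, d_to_ch, e_da=5e-9):
    # Eqs. (73)-(75): per-round energy of a cluster with m nodes,
    # combining the cluster head (CH) and member (CM) costs.
    e_ch = (m - 1) * e_rx(b) + m * b * e_da + e_tx(b, d_to_bs)
    e_cm = e_tx(b, d_to_ch)
    return e_ch + e_cm
```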

The CSOCA-GA algorithm is suggested for optimization based on this fitness function in the literature (Osamy et al. 2020), and simulation tests compare it with the CSO, EDOC, GCDC, and LEACH-PSO (LEACH optimized using PSO) algorithms. The results show that CSOCA-GA reduces energy loss per round compared to the other algorithms.

In the literature (Gambhir et al. 2020), a multi-objective optimization algorithm called MWCSGA (Multi-Weight Chicken Swarm-based Genetic Algorithm) is proposed, building upon the CSOCA-GA algorithm, to account for the surrounding environment of the sensor network. The weight parameter ω is introduced in the objective function F, as defined in Eq. (77). Through experimental simulations, MWCSGA is shown to improve energy efficiency and reduce loss compared to CSOCA-GA and the optimized LEACH algorithm.

$$F = \sum\limits_{i} {\left( {\omega_{i} * f_{i} } \right)}$$
(77)

wherein, in Eq. (77), fi is the objective function based on energy loss, and ωi is the weight parameter of each node, which is related to the distance of the node.

In the literature (Kong et al. 2020), the CSO-BA algorithm is introduced to lower the peak side lobe level (PSL) in antenna arrays of sensor networks. The study also comprehensively compares the optimization results of the CSO-BA algorithm with the CSO, BA, PSO, and BBO algorithms. It concludes that CSO-BA improves convergence speed and accuracy compared to the other optimization algorithms, showing promising prospects for practical applications.

5.3 Robotics engineering (RE)

Robotics engineering is a multidisciplinary field that focuses on intelligent manufacturing and intelligent robots (Daanoune et al. 2021). It integrates various cutting-edge disciplines such as computer science, optoelectronics (Wang et al. 2021b), automatic control (Pham et al. 2022), sensors, and bionics (Berberich et al. 2022). With the in-depth research on optimization algorithms, corresponding research achievements have been made in the areas of robot path planning and artificial intelligence machine learning.

5.3.1 Trajectory

In the literature (Ajmi et al. 2021), the author applied the chicken swarm optimization algorithm to the optimization of robot motion trajectories. The objective function is the minimum travel time, and a third-order B-spline curve is used to build the motion trajectory. Experiments demonstrated that the chicken swarm optimization algorithm successfully decreased the travel time when a six-degree-of-freedom robotic arm was used to polish metal workpieces.

In the literature (Mu et al. 2016), the author studied the optimization of the ascent trajectory of a generic hypersonic vehicle (GHV). The analysis included constraints such as dynamic pressure, load factor, and aerodynamic heating to guarantee the flight safety and structural integrity of the GHV. The GHV's fuel consumption was selected as the objective function, and the chicken swarm technique was used for the optimization. The optimization effects of the CSO and PSO algorithms were compared through experimental simulations; the outcomes demonstrated that CSO considerably increased convergence speed and accuracy.

In the literature (Wei et al. 2021), the author addressed the shortcomings of the CSO algorithm in handling high-dimensional problems and proposed the ICSO-II algorithm based on the crossover operator. It was applied to the reentry trajectory problem of the GHV, which has high-dimensional optimization requirements. The least aerodynamic heating rate and the shortest flight time were chosen as the optimization process's objective functions. Through simulation studies, the optimization performance of five algorithms—ICSO-II, CSO, PSO, ABC, and ACO—was examined and compared. The outcomes showed that ICSO-II outperformed the other algorithms by a wide margin.

In the literature (Liang et al. 2018), the author addressed the issue of the CSO algorithm easily converging to local optima and proposed the ICSO-III algorithm for robot path planning. The objective function of the optimization process, given in Eq. (78), is the length of the robot's path. A comparison of the ICSO-III, CSO, and PSO results showed that the ICSO-III method improved convergence accuracy and stability relative to the other two algorithms.

$$f = \sum\nolimits_{i = 1}^{n - 1} {\sqrt {\left( {x_{i + 1} - x_{i} } \right)^{2} + \left( {y_{i + 1} - y_{i} } \right)^{2} } }$$
(78)

wherein, in Eq. (78), n is the number of nodes in the path, and (xi, yi) are the horizontal and vertical coordinates of the robot at the i-th path node.
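Equation (78) is simply the total Euclidean length of the waypoint sequence, which can be sketched in a few lines of Python:

```python
import math

def path_length(waypoints):
    # Eq. (78): sum of Euclidean distances between consecutive
    # (x, y) waypoints along the robot's path.
    return sum(math.dist(waypoints[i], waypoints[i + 1])
               for i in range(len(waypoints) - 1))

print(path_length([(0, 0), (3, 4), (6, 8)]))  # -> 10.0
```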

5.3.2 Machine learning

In the literature (Gawali and Gawali 2021), the author focused on reinforcement learning, a popular machine learning topic in recent years, and proposed the CSO-DH optimization algorithm, with the robot's arm motion trajectory as the objective function. Simulation experiments show that the CSO-DH-based reinforcement learning algorithm is more accurate than traditional reinforcement learning algorithms as well as WOA-based and DHOA-based reinforcement learning algorithms, further bridging the gap between humans and robots.

5.4 Electrical engineering (EE)

Electrical engineering plays a crucial role in the energy field today. In recent years, with the introduction of intelligent optimization algorithms, significant progress has been made in smart grids (Tan et al. 2022), new energy technologies (Lamnatou et al. 2022), and new energy vehicles (Pan and Dong 2023).

5.4.1 Smart grids

In the literature (Li et al. 2017), the author aimed to reduce electricity costs. The CSO algorithm is utilized for optimization based on the peak-shaving and valley-filling technology of smart grids, with the electrical load and Critical Peak Pricing (CPP) selected as the objective functions. Experimental models lead to the conclusion that CSO successfully lowers electricity costs.

In the literature (Sivanantham et al. 2022), to achieve balanced electricity consumption during peak periods, a method for regulating interruptible loads to reduce peak loads is proposed. A load shedding scheduling model is designed considering the users' subsidy rate. The model aims to minimize the overall power consumption, defined as the objective function in Eq. (79). Finally, an improved algorithm called ICSO-IV is designed to optimize the mixed nonlinear problem. Through simulation experiments, it is observed that ICSO-IV outperforms the GA and PSO optimization algorithms in terms of convergence speed and accuracy of the optimization model.

$$\min P = P_{1} + P_{2} + P_{3}$$
(79)

wherein, in Eq. (79), P is the overall power consumption, P1 is the power consumption of interruptible loads, P2 is the power consumption of compensating loads, and P3 is the power consumption of smart grid operation and maintenance.

In the literature (Awais et al. 2017), parallel capacitor banks are introduced into radial distribution systems. The system's bus voltage and power factor are both improved by an optimized arrangement of the capacitor banks. The CSO algorithm is applied to find the ideal arrangement and size of the parallel capacitor banks on 85-bus and 118-bus systems, with the goal of lowering the system's power consumption. The experimental findings show that the CSO-optimized capacitor bank arrangement successfully lowers costs while providing dependable power factor performance.

5.4.2 New energy technology

In the literature (Liu et al. 2020a), the author noted that photovoltaic (PV) power generation is influenced by weather conditions, and its impact on the power grid depends on the accuracy of short-term power prediction under different weather conditions. The paper first enhances the CSO algorithm, proposing the ICSO-V method to increase prediction accuracy. The ICSO-V technique is then used in conjunction with an extreme learning machine model to forecast PV power under various weather scenarios. The test results show that the combination of the ICSO-V algorithm and the extreme learning machine model reduces both the mean square error and the percentage error.

In the literature (Biswal and Shankar 2021), the author aimed to enhance the frequency stability of interconnected renewable energy power systems. It proposes an optimization control scheme that combines CSO with Adaptive Virtual Inertia Control (AVIC). The scheme utilizes CSO to find the optimal values of the gain of the adaptive PID controller and the parameters required by AVICs. Experimental simulations are conducted to compare the superiority of the CSO algorithm with differential evolution and PSO algorithms. The results show that the control scheme based on CSO-AVIC significantly improves the dynamic performance of the system.

In the literature (Mishra et al. 2022), the author addressed the efficiency of converting solar energy to electrical energy in PV systems, which is influenced by the parameters of the PV model. The PV model's parameter identification problem is seen as an optimization problem. To minimize the difference between experimental data and simulated data, error functions Ferror for single diode and double diode are defined. The objective function RMSE, as shown in Eq. (80), is then defined based on the errors. Finally, function optimization using the Spiral-based Chaotic Chicken Swarm Optimization (SCCSO) technique is used to determine the PV model's ideal parameters. Experimental simulations indicate that the SCCSO algorithm improves the convergence accuracy and stability compared to the CSO and PSO algorithms.

$$RMSE\left( X \right) = \sqrt {\frac{1}{{N_{M} }}\sum\limits_{i = 1}^{{N_{M} }} {F_{error}^{i} \left( {V_{L} ,I_{L} ,X} \right)^{2} } }$$
(80)

wherein, in Eq. (80), X represents the PV parameter solution vector to be solved, NM is the number of I–V data points obtained from measurements, VL is the output voltage at the port, and IL is the output current at the port.

In the literature (Othman and El-Fergany 2021), the author addressed the issue of long optimization time in solving high-dimensional optimization problems using the CSO algorithm. It proposes a modified version of the CSO algorithm called ICSO-IV, which incorporates initial sorting based on chaotic sequences and introduces adaptive weight optimization. The algorithm is then applied to the maximum power point optimization tracking problem in photovoltaic systems. Through simulation experiments, it is demonstrated that this optimization algorithm improves the convergence speed and accuracy compared to the CSO, PSO, and BA algorithms.

5.4.3 New-energy vehicle

In the literature (Deb and Gao 2021), the author focused on the optimal placement problem of electric vehicle (EV) chargers, treating it as a complex high-dimensional optimization problem. The study confirms the CSO algorithm's limits in tackling high-dimensional optimization issues and suggests CSO-ALO, a hybrid enhanced optimization method that combines the CSO and ALO algorithms. Through benchmark functions and tests on EV charger placement problems, it is demonstrated that CSO-ALO outperforms CSO, ALO, TLBO, and other optimization algorithms, exhibiting higher convergence accuracy in solving placement problems.

In the literature (Wu et al. 2018b), the maximum power point tracking (MPPT) problem for fuel cells in electric vehicles is addressed. Through simulation studies, it is demonstrated that a CSO-based MPPT algorithm may more accurately reach the ideal fuel cell power.

In the literature (Priyadarshi et al. 2021), the author addressed the need for intelligent energy management algorithms to meet the new requirements on the number of charging stations in parking lots and the capacity of the power grid arising from the large-scale production of new energy vehicles. The study proposes a charging station planning and operation solution based on the CSO algorithm, with reliability, voltage stability characteristics, and power consumption as the objective functions. According to experimental simulations, the CSO-based approach is superior to coordinated charging, non-coordinated charging, and vehicle-to-grid (V2G) solutions in the aforementioned metrics, addressing future issues in new energy vehicle charging well.

5.5 Feature extraction

With the development of artificial intelligence technology, feature extraction from given datasets and analysis and processing of given graphics are now common research topics.

5.5.1 Feature extraction

In the literature (Sachan et al. 2021), a CSO-based data packet pattern feature extraction method is proposed. This method aims to find the optimal point of the feature fitness function in a multidimensional feature space, minimizing the number of selected features while ensuring optimal classification performance. Practical simulations using 18 datasets from the UCI repository show that the CSO-based feature extraction method achieves better fitness values, and thus more accurate fitness function optimization, than methods based on PSO and GA.

In the literature (Li et al. 2021b), the author addressed the problem of the CSO algorithm easily getting trapped in local optima when performing dimensionality reduction in feature extraction. It proposes an improved clustering algorithm based on chaos called CCSO. The effectiveness of CCSO is compared to CSO, PSO, DOA, and BAT in feature dimensionality reduction using five datasets (spambase, wbdc, ionosphere, lung, sonar). Through experimental simulations, it is concluded that the CCSO algorithm significantly improves the accuracy of feature extraction.

5.5.2 Image processing

In the literature (Verma et al. 2023), a fast segmentation method for Synthetic Aperture Radar (SAR) images is proposed to address the poor segmentation performance and slow segmentation speed of traditional SAR image processing (Cai et al. 2021b). The method first narrows the search space of the swarm based on the characteristics of SAR images, then swiftly locates the optimal solution by using the gray entropy model as the algorithm's fitness function. Experimental simulations show that, compared with genetic algorithms and artificial fish swarm algorithms, the reduced-search-space chicken swarm optimization technique enhances the accuracy and speed of image segmentation.

In the literature (Bharanidharan and Rajaguru 2020), the author focused on the low convergence accuracy and slow convergence speed of the CSO algorithm when applied to medical image processing (Mondini et al. 2021). A controlled CSO algorithm called ICSO-VI is proposed, which improves the algorithm by setting control parameters. The research focuses on classifying MRI images into DEM and ND types, and both statistical and non-statistical features are used in all trials. Based on the examination of 65 ND and 52 DEM real brain MRI images, the ICSO-VI algorithm is found to greatly increase the accuracy of image classification compared to the CSO method.

In the literature (Cristin et al. 2021), the author combined the behavior patterns of the chicken swarm with derivative factors to enhance the accuracy of the swarm's hierarchical classification. The fitness function's optimal solution is discovered by repeatedly comparing the positions of roosters; this enhanced method is known as Fractional-CSO. By preprocessing brain images and using the accuracy, specificity, and sensitivity of cancer diagnosis as experimental indicators, it is determined that Fractional-CSO greatly enhances accuracy, specificity, and sensitivity in image processing compared to the CSO algorithm.

In the literature (Wu et al. 2018a), the author noted that prediction models for Alzheimer's disease often require preprocessing and important-feature extraction, yet models trained on preprocessed data often perform poorly in clinical scenarios. Therefore, in the reference (Radaideh and Shirvan 2021), an unsupervised deep learning model is introduced, and a chaos-theory-based improved chicken swarm optimization algorithm (CCSO) is employed to optimize the parameters of deep clustering in unsupervised learning (Li et al. 2023). This overcomes the problem of unsupervised clustering parameters easily getting trapped in local optima. Experimental simulations determine that the CCSO-enhanced clustering model improves accuracy compared with two other clustering models, FCM and IFCM.

In the literature (Liang et al. 2020a), the author applied the CSO algorithm to brain tumor image classification and proposed a hybrid classification algorithm combining neural networks and CSO, which is also used for feature selection and dimensionality reduction. The introduction of the CSO algorithm helps obtain more realistic classification results. Validation testing with existing brain tumor image data shows that the hybrid neural network and CSO classification method increases the accuracy of classification results compared to classic neural network algorithms, decision tree algorithms, and SVM algorithms.

In the literature (Wang et al. 2020), the author addressed the problem of reduced image processing quality in image enhancement techniques due to data loss. It proposes a CSO-based large-scale optimization algorithm for image enhancement, where the algorithm optimizes the entropy and peak signal-to-noise ratio of the enhanced image. Finally, through comparative experimental simulations, it is concluded that the CSO-based image enhancement algorithm improves the entropy value and peak signal-to-noise ratio of the enhanced image compared to the traditional histogram equalization (HE) method.

In the literature (Vamsidhar et al. 2022), the author addressed the problem of slow computation and large computational complexity in two-dimensional maximum entropy segmentation of images (Schmarje et al. 2021). It proposes an improved two-dimensional maximum entropy segmentation algorithm based on CSO optimization. In this algorithm, the maximum entropy of the two-dimensional image is used as the fitness function of the optimization algorithm. The chicken swarm algorithm quickly determines the ideal threshold of the objective function to get the ideal segmentation solution through the optimization process. The segmentation method based on CSO optimization, as opposed to particle swarm optimization (PSO) and artificial fish swarm algorithm, enhances the convergence performance and computing speed of image segmentation, according to experimental simulations.

In the literature (Kumari et al. 2022b), the author focused on the parameter setting problem of pulse-coupled neural networks (PCNN) in the field of image segmentation (Wunnava et al. 2022). It proposes an improved image segmentation algorithm called ICSO-ISPCNN, which combines an improved chicken swarm optimization algorithm with an improved PCNN model. In this algorithm, the threshold function in the PCNN model is modified, and the CSO algorithm is enhanced with a tournament selection mechanism. The pulse-coupled neural network approach for automated image segmentation iteratively optimizes the parameter values using the integrated cross-entropy as the fitness function. Through experimental simulations, it is discovered that ICSO-ISPCNN outperforms genetic algorithms and ant colony optimization algorithms in terms of convergence performance and segmentation accuracy.

6 Discussion and the key problems

This study systematically explores the CSO algorithm's past research findings in light of the history of optimization algorithms and its guiding principles. Figures 12 and 13 show the number of publications in the five core subjects and in the twelve distinct study topics. The CSO algorithm has been incorporated into a number of sectors and has proven effective in finding the best answers to the relevant engineering challenges; its convergence accuracy and speed have significantly improved compared with conventional approaches. Figure 14 depicts the number of scholarly publications in the various disciplines from 2016 to 2022. It can be concluded that since 2020 the application of the CSO algorithm has increased significantly, with the main research content concentrated in the fields of new energy and medical image processing. Since its introduction, the CSO algorithm has demonstrated broad applicability across application situations, and it may be extended to new application sectors as societal demands arise. Based on the statistical data, researchers in robotics have not yet established clear approaches for combining the CSO algorithm with that field. Technological progress has led to the emergence of industrial robots, medical robots, and autonomous driving robots, which will soon dominate research in their respective domains, yet the CSO algorithm has not been incorporated into these applications.

Fig. 12 Core application of CSO algorithm

Fig. 13 Specific application of CSO algorithm

Fig. 14 CSO algorithm research progress

Compared with most intelligent optimization algorithms, the CSO algorithm has been shown to offer high convergence accuracy and fast convergence speed in engineering optimization problems. Because the algorithm tends to settle on a local optimum in the later stages of a run, many enhanced CSO algorithms have been developed to increase its accuracy, including improvements to the trajectories of the hen, chick, and rooster operators as well as hybrid algorithms. There is no universal improvement method; from the perspective of optimization effect, no improved CSO algorithm applies to all scenarios, and the applicable conditions and scope of each improvement still need to be explored through simulation experiments.

Since the CSO algorithm was proposed, it has achieved results for different expected goals. Combining the results of different improved algorithms will make the improvements more targeted and their applicable objects more precise. Combining results across application fields allows the algorithm to spark progress in multiple cross-disciplinary areas, obtain better engineering optimization results, and promote the birth and development of new fields after cross-fusion. Relevant future work on CSO is as follows:

  • More theoretical calculation comparison: The CSO algorithm performs well in solving theoretical optimization problems with benchmark functions as the objective. However, newly proposed bionic algorithms will inevitably be contrasted with the CSO algorithm. For example, Agushaka proposed the Gazelle Optimization Algorithm (GOA) in 2023 (Minaee et al. 2021). This algorithm simulates the survival behavior of gazelles under the rule of predators and divides the algorithm into two stages: exploitation and exploration. The problem-solving ability and competitiveness of the GOA algorithm are demonstrated on benchmark functions and a number of engineering design problems. CSO and its improved algorithms can be compared with GOA and other algorithms in theoretical calculations under the same objective functions and target tasks (Agushaka et al. 2023), so as to determine their respective application advantages and the ranges of problems they suit.

  • More cross-blending improved contrast: Optimization algorithms obtained by cross-blending tend to achieve higher convergence accuracy, and a large number of improved algorithms can be obtained by cross-mixing the CSO algorithm with new optimization algorithms (Zhang et al. 2021). This line of work can be completed as a research review or through the creation of brand-new hybrid algorithms. The circumstances under which existing hybrid algorithms and newly developed ones are applicable can be determined by contrasting the benefits and drawbacks of the algorithms in various application scenarios.

  • Explore more application fields: This article describes the application of the CSO algorithm in the five major fields of data mining, WSN, robotics engineering, electrical engineering, and feature extraction. Future research fields for CSO may include brain-computer interfaces (Tang et al. 2023), low-power integrated circuit design (Ye et al. 2022), artificial intelligence chips (Choi et al. 2022), and driverless vehicles. The research content can target a breakthrough on a single problem, such as power consumption, signal acquisition integrity, or path tracking strategy, or comprehensive research on multiple engineering problems in the corresponding field.

Chicken swarm optimization algorithm is developing rapidly, and most of its research is on the improvement of the application level, mainly focusing on improving the population coding method, combining with other strategies and ideas to form a hybrid algorithm, and applying it to specific problems. But up to now, the theoretical basis of chicken swarm optimization algorithm is still not perfect. Theoretical analysis plays an important role in our in-depth understanding of the mechanism of the algorithm, and the convergence and stability analysis of the algorithm is helpful for its further development. Moreover, it is more convincing and academically valuable to evaluate the performance of the algorithm from the perspective of theoretical analysis.

7 Conclusions and future work

While the chicken swarm optimization method has several benefits over other bionic intelligence algorithms in addressing difficult optimization issues, it also reveals numerous drawbacks when solving problems in practice. Further study of the chicken swarm optimization method can therefore proceed along the following directions.

  (1) The performance of the chicken swarm optimization technique is easily influenced by the choice of parameter values. How to avoid unsatisfactory parameter settings, and how to set each parameter value appropriately or adaptively according to different problems, remain open questions.

  (2) The convergence speed drops significantly in later iterations, which is attributed to the low-efficiency search behavior of the flock. The search intensity reflects the local search ability of the flock. How to balance or strengthen the global and local search abilities of the algorithm, so as to further improve its search efficiency, will be one of the focuses of future research.

  (3) As a bionic intelligent algorithm, the chicken swarm optimization algorithm has clear biological and social traits but only a thin mathematical foundation, necessitating thorough theoretical investigation and mathematical justification.

  (4) The chicken swarm optimization algorithm is a product of multidisciplinary fusion and crossover. In future research, new improved algorithms can be designed in combination with theories from other disciplines.

  (5) When developing the chicken swarm optimization method in the future, it is essential to improve search speed and reduce time complexity so that the algorithm remains applicable to a wide range of problems.

As a relatively new nature-inspired swarm intelligence optimization algorithm, the chicken swarm optimization algorithm has the advantages of strong global and local search ability, high population diversity, and strong robustness, so it is widely used in engineering practice to solve real-life problems. This article first provides a thorough explanation of the algorithm's fundamental principles, then summarizes its benefits and drawbacks as well as its improvement strategies, and finally offers predictions about its future research. The chicken swarm optimization approach has been around for only a little over eight years, yet the algorithm has been continuously improved and developed; its theoretical research and engineering application have made great progress, and the theoretical research is gradually maturing.

Practical applications: (1) At present, the performance verification of most improved algorithms is based on benchmark functions. This verification method is too simple, and in the future the performance of these algorithms should be verified on specific engineering problems. (2) The chicken swarm optimization algorithm is widely used in combinatorial optimization problems, and its application in other areas, such as nonlinear, discrete, and large-scale integration problems, urgently needs to be expanded.